id stringlengths 40–40 | pid stringlengths 42–42 | input stringlengths 8.37k–169k | output stringlengths 1–1.63k
---|---|---|---|
e84e80067b3343d136fd75300691c8b3d3efbdac | e84e80067b3343d136fd75300691c8b3d3efbdac_0 | Q: How do they align the synthetic data?
Text: Introduction
Given the data-driven nature of neural machine translation (NMT), the limited availability of source-to-target bilingual sentence pairs has been one of the major obstacles to building competitive NMT systems. Recently, pseudo parallel data, i.e., synthetic bilingual sentence pairs automatically generated by existing translation models, have shown promising results in addressing data scarcity in NMT. Many studies have found that pseudo parallel data combined with a real bilingual parallel corpus significantly enhance the quality of NMT models BIBREF0, BIBREF1, BIBREF2. In addition, synthesized parallel data have played vital roles in many NMT problems such as domain adaptation BIBREF0, zero-resource NMT BIBREF3, and the rare word problem BIBREF4.
Inspired by their efficacy, we attempt to train NMT models using only synthetic parallel data. To the best of our knowledge, building NMT systems with only pseudo parallel data has yet to be studied. Through our research, we explore the viability of synthetic parallel data as an effective alternative to a real-world parallel corpus. The active use of synthetic data in NMT is particularly significant in low-resource environments where ground truth parallel corpora are very limited or not established. Even in recent approaches such as zero-shot NMT BIBREF5 and pivot-based NMT BIBREF6, where direct source-to-target bilingual data are not required, a direct parallel corpus brings substantial improvements in translation quality, and pseudo parallel data can be employed in its place.
Previously suggested synthetic data, however, have several drawbacks that prevent them from serving as a reliable alternative to a real parallel corpus. As illustrated in Figure 1, existing pseudo parallel corpora can be classified into two groups: source-originated and target-originated. The common property between them is that ground truth examples exist only on a single side (source or target) of the pseudo sentence pairs, while the other side is composed of synthetic sentences only. This bias of synthetic examples in sentence pairs may lead to an imbalance in the quality of the learned NMT models when the given pseudo parallel corpus is exploited in bidirectional translation tasks (e.g., French $\rightarrow$ German and German $\rightarrow$ French). In addition, the reliability of the synthetic parallel data is heavily influenced by the single translation model from which the synthetic examples originate. Low-quality synthetic sentences generated by that translation model would prevent NMT models from learning solid parameters.
To overcome these shortcomings, we propose a novel synthetic parallel corpus called PSEUDOmix. In contrast to previous works, PSEUDOmix includes both synthetic and real sentences on either side of sentence pairs. In practice, it can be readily built by mixing source- and target-originated pseudo parallel corpora for a given translation task. Experiments on several language pairs demonstrate that the proposed PSEUDOmix shows useful properties that make it a reliable candidate for real-world parallel data. In detail, we make the following contributions:
Neural Machine Translation
Given a source sentence $x = (x_1, \ldots, x_m)$ and its corresponding target sentence $y = (y_1, \ldots, y_n)$, NMT aims to model the conditional probability $p(y|x)$ with a single large neural network. To parameterize the conditional distribution, recent studies on NMT employ the encoder-decoder architecture BIBREF7, BIBREF8, BIBREF9. The attention mechanism BIBREF10, BIBREF11 was subsequently introduced and successfully addressed the quality degradation of NMT when dealing with long input sentences BIBREF12.
In this study, we use the attentional NMT architecture proposed by Bahdanau et al. bahdanau2014neural. In their work, the encoder, which is a bidirectional recurrent neural network, reads the source sentence and generates a sequence of source representations $\mathbf{h} = (\mathbf{h}_1, \ldots, \mathbf{h}_m)$. The decoder, which is another recurrent neural network, produces the target sentence one symbol at a time. The log conditional probability can thus be decomposed as follows:
$$\log p(y|x) = \sum _{t=1}^{n} \log p(y_t|y_{<t}, x)$$ (Eq. 3)
where $y_{<t} = (y_1, \ldots, y_{t-1})$. As described in Eq. (4), the conditional distribution $p(y_t|y_{<t}, x)$ is modeled as a function of the previously predicted output $y_{t-1}$, the hidden state of the decoder $s_t$, and the context vector $c_t$.
$$p(y_t|y_{<t}, x) \propto \exp \lbrace g(y_{t-1}, s_t, c_t)\rbrace $$ (Eq. 4)
The context vector $c_t$ is used to determine the relevant part of the source sentence when predicting $y_t$. It is computed as the weighted sum of the source representations $\mathbf{h}_1, \ldots, \mathbf{h}_m$. Each weight $\alpha_{ti}$ for $\mathbf{h}_i$ represents the probability that the target symbol $y_t$ is aligned to the source symbol $x_i$:
$$c_t = \sum_{i=1}^{m} \alpha_{ti} \mathbf{h}_i$$ (Eq. 5)
Given a sentence-aligned parallel corpus of size $N$, the entire parameter set $\theta$ of the NMT model is jointly trained to maximize the conditional probabilities of all sentence pairs $\lbrace (x^n, y^n)\rbrace_{n=1}^{N}$:
$$\theta ^* = \underset{\theta }{\arg \!\max } \sum _{n=1}^{N} \log p(y^{n}|x^{n})$$ (Eq. 6)
where $\theta ^*$ is the optimal parameter.
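To make the decomposition above concrete, the following NumPy sketch shows how an attention-weighted context vector and the summed per-step log-probabilities of Eqs. (3) and (5) fit together. It is an illustrative toy under simplified assumptions (a single scoring matrix `W_a` standing in for Bahdanau-style alignment scoring), not the authors' implementation.

```python
import numpy as np

def attention_context(h, s_t, W_a):
    # h: (m, d) source representations, s_t: (d,) decoder state,
    # W_a: (d, d) placeholder scoring matrix (an assumption, not from the paper).
    scores = h @ (W_a @ s_t)              # unnormalized alignment scores, shape (m,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                  # attention weights alpha_{t,i}, Eq. (5)
    return alpha @ h                      # context vector c_t = sum_i alpha_{t,i} h_i

def sentence_log_prob(step_probs):
    # Eq. (3): log p(y|x) is the sum of per-step log-probabilities of the reference tokens.
    return float(np.sum(np.log(step_probs)))

# Example: 4 source positions, 8-dimensional states.
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8)); s_t = rng.normal(size=8); W_a = rng.normal(size=(8, 8))
c_t = attention_context(h, s_t, W_a)
print(c_t.shape, sentence_log_prob([0.4, 0.6, 0.9]))
```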
Related Work
In statistical machine translation (SMT), synthetic bilingual data have been primarily proposed as a means to exploit monolingual corpora. By applying a self-training scheme, the pseudo parallel data were obtained by automatically translating the source-side monolingual corpora BIBREF13 , BIBREF14 . In a similar but reverse way, the target-side monolingual corpora were also employed to build the synthetic parallel data BIBREF15 , BIBREF16 . The primary goal of these works was to adapt trained SMT models to other domains using relatively abundant in-domain monolingual data.
Inspired by these successful applications in SMT, there have been efforts to exploit synthetic parallel data to improve NMT systems. Source-side BIBREF1, target-side BIBREF0, and both sides BIBREF2 of monolingual data have been used to build synthetic parallel corpora. In these works, the pseudo parallel data combined with a real training corpus significantly enhanced the translation quality of NMT. In Sennrich et al. sennrich2015improving, domain adaptation of NMT was achieved by fine-tuning trained NMT models using a synthetic parallel corpus. Firat et al. firat2016zero attempted to build NMT systems without any direct source-to-target parallel corpus. In their work, a pseudo parallel corpus was employed in fine-tuning the target-specific attention mechanism of trained multi-way multilingual NMT BIBREF17 models, which enabled zero-resource NMT between the source and target languages. Lastly, synthetic sentence pairs have been utilized to enrich the training examples having rare or unknown translation lexicons BIBREF4.
Motivation
As described in the previous section, synthetic parallel data have been widely used to boost the performance of NMT. In this work, we further extend their application by training NMT with only synthetic data. In certain language pairs or domains where source-to-target real parallel corpora are very scarce or even unavailable, a model trained with synthetic parallel data can function as an effective baseline model. Once an additional ground truth parallel corpus becomes available, the trained model can be improved by retraining or fine-tuning with the real parallel data.
Limits of the Previous Approaches
For a given translation task, we classify the existing pseudo parallel data into the following groups:
Source-originated: The source sentences are from a real corpus, and the associated target sentences are synthetic. The corpus can be formed by automatically translating a source-side monolingual corpus into the target language BIBREF4 , BIBREF1 . It can also be built from source-pivot bilingual data by introducing a pivot language. In this case, a pivot-to-target translation model is employed to translate the pivot language corpus into the target language. The generated target sentences paired with the original source sentences form a pseudo parallel corpus.
Target-originated: The target sentences are from a real corpus, and the associated source sentences are synthetic. The corpus can be formed by back-translating a target-side monolingual corpus into the source language BIBREF0 . Similar to the source-originated case, it can be built from a pivot-target bilingual corpus using a pivot-to-source translation model BIBREF3 .
The process of building each synthetic parallel corpus is illustrated in Figure 1. As shown in Figure 1, the previous studies on pseudo parallel data share a common property: synthetic and ground truth sentences are each confined to a single side of the sentence pairs. In cases where synthetic parallel data are the only or the major resource used to train NMT, this may severely limit the usefulness of the given pseudo parallel corpus. For instance, as will be demonstrated in our experiments, synthetic data showing relatively high quality in one translation task (e.g., French $\rightarrow$ German) can produce poor results in the translation task of the reverse direction (German $\rightarrow$ French).
Another drawback of employing synthetic parallel data in training NMT is that the capacity of the synthetic parallel corpus is inherently influenced by the mother translation model from which the synthetic sentences originate. Depending on the quality of the mother model, ill-formed or inaccurate synthetic examples could be generated, which would negatively affect the reliability of the resulting synthetic parallel data. In a previous study, Zhang and Zong zhang2016exploiting bypassed this issue by freezing the decoder parameters while training on minibatches of pseudo bilingual pairs made from a source language monolingual corpus. This scheme, however, cannot be applied to our scenario, as the decoder network would then remain untrained during the entire training process.
Proposed Mixing Approach
To overcome the limitations of the previously suggested pseudo parallel data, we propose a new type of synthetic parallel corpus called PSEUDOmix. Our approach is quite straightforward: For a given translation task, we first build both source-originated and target-originated pseudo parallel data. PSEUDOmix can then be readily built by mixing them together. The overall process of building PSEUDOmix for the French $\rightarrow $ German translation task is illustrated in Figure 1 .
By mixing source- and target-originated pseudo parallel data, the resultant corpus includes both real and synthetic examples on either side of sentence pairs, which is the most evident feature of PSEUDOmix. Through the mixing approach, we attempt to lower the overall discrepancy in the quality of the source and target examples of synthetic sentence pairs, thus enhancing the reliability as a parallel resource. In the following section, we evaluate the actual benefits of the mixed composition in the synthetic parallel data.
Experiments: Effects of Mixing Real and Synthetic Sentences
In this section, we analyze the effects of the mixed composition in the synthetic parallel data. Mixing pseudo parallel corpora derived from different sources, however, inevitably brings diversity, which affects the capacity of the resulting corpus. We isolate this factor by building both source- and target-originated synthetic corpora from the identical source-to-target real parallel corpus. Our experiments are performed on French (Fr) $\leftrightarrow $ German (De) translation tasks. Throughout the remaining paper, we use the notation * to denote the synthetic part of the pseudo sentence pairs.
Data Preparation
By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section. For automatic translation, we utilize a pre-trained and publicly released NMT model for En $\rightarrow $ De and train another NMT model for En $\rightarrow $ Fr using the WMT'15 En-Fr parallel corpus BIBREF19 . A beam of size 5 is used to generate synthetic sentences. Lastly, to match the size of the training data, PSEUDOmix is established by randomly sampling half of each Fr*-De and Fr-De* corpus and mixing them together.
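The pivot alignment and mixing steps described above can be sketched as follows. This is a hypothetical outline rather than the authors' code: `translate_en_fr` and `translate_en_de` stand in for the pre-trained En→Fr and En→De NMT models, and the corpora are assumed to be lists of sentence-pair tuples.

```python
import random

def pivot_align(fr_en_pairs, en_de_pairs):
    # Join Fr-En and En-De pairs on identical English segments to form Fr-En-De triples.
    en_to_de = dict(en_de_pairs)  # assumes (en, de) tuples
    return [(fr, en, en_to_de[en]) for fr, en in fr_en_pairs if en in en_to_de]

def build_pseudomix(triples, translate_en_fr, translate_en_de, seed=0):
    # Fr*-De: synthetic French paired with real German; Fr-De*: real French with synthetic German.
    fr_star_de = [(translate_en_fr(en), de) for fr, en, de in triples]
    fr_de_star = [(fr, translate_en_de(en)) for fr, en, de in triples]
    # Mix half of each so PSEUDOmix matches the size of a single corpus.
    random.seed(seed)
    half = len(triples) // 2
    return random.sample(fr_star_de, half) + random.sample(fr_de_star, half)
```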
We use the parallel corpora from the shared translation task of WMT'15 and WMT'16 BIBREF27 . Using the same pivot-based technique as the previous task, Cs-De* and Fr-De* corpora are built from the WMT'15 Cs-En and Fr-En parallel data respectively. For Cs*-De and Fr*-De, WMT'16 En-De parallel data are employed. We again use pre-trained NMT models for En $\rightarrow $ Cs, En $\rightarrow $ De, and En $\rightarrow $ Fr to generate synthetic sentences. A beam of size 1 is used for fast decoding.
For the Real Fine-tuning scenario, we use real parallel corpora from the Europarl and News Commentary11 dataset. These direct parallel corpora are obtained from OPUS BIBREF28 . The size of each set of ground truth and synthetic parallel data is presented in Table 5 . Given that the training corpus for widely studied language pairs amounts to several million lines, the Cs-De language pair (0.6M) reasonably represents a low-resource situation. On the other hand, the Fr-De language pair (1.8M) is considered to be relatively resource-rich in our experiments. The details of the preprocessing are identical to those in the previous case.
Data Preprocessing
Each training corpus is tokenized using the tokenization script in Moses BIBREF20 . We represent every sentence as a sequence of subword units learned from byte-pair encoding BIBREF21 . We remove empty lines and all the sentences of length over 50 subword units. For a fair comparison, all cleaned synthetic parallel data have equal sizes. The summary of the final parallel corpora is presented in Table 1 .
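A minimal sketch of the cleaning step described above, assuming both sides of the corpus have already been Moses-tokenized and BPE-segmented (e.g., with a tool such as subword-nmt); the function name and data layout are illustrative, not taken from the paper.

```python
def clean_parallel(src_bpe_lines, tgt_bpe_lines, max_len=50):
    # Keep a sentence pair only if both sides are non-empty and at most max_len subword units long.
    kept = []
    for src, tgt in zip(src_bpe_lines, tgt_bpe_lines):
        n_src, n_tgt = len(src.split()), len(tgt.split())
        if 0 < n_src <= max_len and 0 < n_tgt <= max_len:
            kept.append((src, tgt))
    return kept
```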
Training and Evaluation
All networks have 1024 hidden units and 500-dimensional embeddings. The vocabulary size is limited to 30K for each language. Each model is trained for 10 epochs using stochastic gradient descent with Adam BIBREF22. The minibatch size is 80, and the training set is reshuffled between every epoch. The norm of the gradient is clipped so as not to exceed 1.0 BIBREF23. The learning rate is $2 \cdot 10^{-4}$ in every case.
We use the newstest 2012 set as a development set and the newstest 2011 and newstest 2013 sets as test sets. At test time, beam search is used to approximately find the most likely translation. We use a beam of size 12 and normalize probabilities by the length of the candidate sentences. The evaluation metric is case-sensitive tokenized BLEU BIBREF24 computed with the multi-bleu.perl script from Moses. For each case, we report the average BLEU over three different models trained from scratch.
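The length normalization mentioned above can be illustrated by the following sketch of how a final hypothesis might be chosen among beam candidates; it is a simplification of a full beam-search decoder, with the candidate format assumed rather than taken from the paper.

```python
def pick_by_length_norm(candidates):
    # candidates: list of (token_list, total_log_prob) produced by beam search.
    # Choose the hypothesis with the highest per-token log-probability.
    return max(candidates, key=lambda c: c[1] / max(len(c[0]), 1))

# Example with two hypothetical beam candidates.
best = pick_by_length_norm([(["das", "Haus"], -2.0), (["das", "grosse", "Haus"], -2.7)])
print(best[0])
```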
We use the same experimental settings that we used for the previous case except for the Real Fine-tuning scenario. In the fine-tuning step, we use the learning rate of $2 \cdot 10^{-5}$ , which produced better results. Embeddings are fixed throughout the fine-tuning steps. For evaluation, we use the same development and test sets used in the previous task.
Results and Analysis
Before choosing the pivot language-based method for data synthesis, we conduct a preliminary experiment comparing pivot-based translation and direct back-translation. The model used for direct back-translation was trained with the ground truth Europarl Fr-De data made from the multi-parallel corpus presented in Table 2. On the newstest 2012/2013 sets, the synthetic corpus generated using the pivot approach showed higher BLEU (19.11 / 20.45) than the back-translation counterpart (18.23 / 19.81) when used to train a De $\rightarrow$ Fr NMT model. Although the back-translation method has been effective in many studies BIBREF0, BIBREF25, its applicability becomes restricted in low-resource cases, which are our major concern. This is due to the poor quality of the back-translation model built from the limited source-to-target parallel corpus. Instead, one can utilize abundant pivot-to-target parallel corpora by using a resource-rich language as the pivot language. This consequently improves the reliability of the baseline translation models used for generating synthetic corpora.
From Table 2, we find that the bias of the synthetic examples in pseudo parallel corpora leads to imbalanced quality in the bidirectional translation tasks. Given that the source- and target-originated classification of a specific synthetic corpus is reversed depending on the direction of the translation, the overall results imply that the target-originated corpus for each translation task outperforms the source-originated data. The preference for target-originated synthetic data over its source-originated counterpart was previously investigated in SMT by Lambert et al. lambert2011investigations. In NMT, it can be explained by the degradation in quality of the source-originated data owing to the erroneous target-side language model formed by the synthetic target sentences. In contrast, we observe that PSEUDOmix not only produces balanced results for both Fr $\rightarrow$ De and De $\rightarrow$ Fr translation tasks but also shows the best or competitive translation quality for each task.
We note that mixing two different synthetic corpora leads to improved BLEU, not merely their intermediate value. To investigate the cause of the improvement in PSEUDOmix, we build additional target-originated synthetic corpora for each Fr $\leftrightarrow$ De translation with a beam of size 3. As shown in Table 3, for the De $\rightarrow$ Fr task, the new target-originated corpus (c) shows higher BLEU than the source-originated corpus (b) by itself. The improvement in BLEU, however, occurs only when mixing the source- and target-originated synthetic parallel data (b+d), not when mixing two target-originated synthetic corpora (c+d). The same phenomenon is observed in the Fr $\rightarrow$ De case as well. The results suggest that real and synthetic sentences mixed on either side of sentence pairs enhance the capability of a synthetic parallel corpus. We conjecture that ground truth examples in both the encoder and decoder networks not only compensate for the erroneous language model learned from synthetic sentences but also reinforce patterns of use latent in the pseudo sentences.
We also evaluate the effects of the proposed mixing strategy in phrase-based statistical machine translation BIBREF26. We use Moses BIBREF20 and its baseline configuration for training. A 5-gram Kneser-Ney model is used as the language model. Table 4 shows the translation results of the phrase-based statistical machine translation (PBSMT) systems. In all experiments, NMT shows higher BLEU (by 2.44-3.38) than the PBSMT setting. We speculate that the deep architecture of NMT provides robustness to the noise in the synthetic examples. It is also notable that the proposed PSEUDOmix outperforms the other synthetic corpora in PBSMT. The results clearly show that the benefit of the mixed composition in synthetic sentence pairs is not tied to a specific machine translation framework.
Table 6 shows the results of the Pseudo Only scenario on Cs $\leftrightarrow$ De and Fr $\leftrightarrow$ De tasks. For the baseline comparison, we also present the translation quality of the NMT models trained with the ground truth Europarl+NC11 parallel corpora (a). In Cs $\leftrightarrow$ De, the Pseudo Only scenario outperforms the real parallel corpus by 3.86-4.43 BLEU on the newstest 2013 set. Even for the Fr $\leftrightarrow$ De case, where the size of the real parallel corpus is relatively large, the best BLEU of the pseudo parallel corpora is higher than that of the real parallel corpus by 1.3 (Fr $\rightarrow$ De) and 0.49 (De $\rightarrow$ Fr). We list the results on the newstest 2011 and newstest 2012 sets in the appendix. From the results, we conclude that large-scale synthetic parallel data can serve as an effective alternative to real parallel corpora, particularly in low-resource language pairs.
As shown in Table 6, the model learned from the Cs*-De corpus outperforms the model trained with the Cs-De* corpus in every case. This result is slightly different from the previous case, where the target-originated synthetic corpus for each translation task reported better results than the source-originated data. This arises from the diversity in the sources of the pseudo parallel corpora, which vary in their suitability for the given test set. Table 6 also shows that mixing the Cs*-De corpus with the lower-quality Cs-De* corpus brings improvements in the resulting PSEUDOmix, which shows the highest BLEU for the bidirectional Cs $\leftrightarrow$ De translation tasks. In addition, PSEUDOmix again shows much more balanced performance in Fr $\leftrightarrow$ De translations compared to the other synthetic parallel corpora.
While the mixing strategy compensates for most of the gap between the Fr-De* and the Fr*-De (3.01 $\rightarrow $ 0.17) in the De $\rightarrow $ Fr case, the resulting PSEUDOmix still shows lower BLEU than the target-originated Fr-De* corpus. We thus enhance the quality of the synthetic examples of the source-originated Fr*-De data by further training its mother translation model (En $\rightarrow $ Fr). As illustrated in Figure 2 , with the target-originated Fr-De* corpus being fixed, the quality of the models trained with the source-originated Fr*-De data and PSEUDOmix increases in proportion to the quality of the mother model for the Fr*-De corpus. Eventually, PSEUDOmix shows the highest BLEU, outperforming both Fr*-De and Fr-De* data. The results indicate that the benefit of the proposed mixing approach becomes much more evident when the quality gap between the source- and target-originated synthetic data is within a certain range.
As presented in Table 6, we observe that fine-tuning using ground truth parallel data brings substantial improvements in the translation quality of all NMT models. Among all fine-tuned models, PSEUDOmix shows the best performance in all experiments. This is particularly encouraging for the case of De $\rightarrow$ Fr, where PSEUDOmix reported lower BLEU than the Fr-De* data before it was fine-tuned. Even in cases where PSEUDOmix shows results comparable to the other synthetic corpora in the Pseudo Only scenario, it shows larger improvements in translation quality when fine-tuned with the real parallel data. These results clearly demonstrate the strengths of the proposed PSEUDOmix, which offers both competitive translation quality on its own and relatively larger potential for improvement after refinement with ground truth parallel corpora.
In Table 6 (b), we also present the performance of NMT models learned from the ground truth Europarl+NC11 data merged with the target-originated synthetic parallel corpus for each task. This is identical in spirit to the method in Sennrich et al. sennrich2015improving, which employs back-translation for data synthesis. Instead of direct back-translation, we used pivot-based back-translation, as we verified the strength of pivot-based data synthesis in low-resource environments. Although the ground truth data are used only for refinement, the Real Fine-tuning scheme applied to PSEUDOmix shows better translation quality than the models trained with the merged corpus (b). Even the Real Fine-tuning results on the target-originated corpus are comparable to training with the merged corpus from scratch. The overall results support the efficacy of the proposed two-step method in practical applications: the Pseudo Only step introduces a useful prior on the NMT parameters, and the Real Fine-tuning step reorganizes the pre-trained parameters using in-domain parallel data.
Experiments: Large-scale Application
The experiments shown in the previous section verify the potential of PSEUDOmix as an efficient alternative to the real parallel data. The condition in the previous case, however, is somewhat artificial, as we deliberately match the sources of all pseudo parallel corpora. In this section, we move on to more practical and large-scale applications of synthetic parallel data. Experiments are conducted on Czech (Cs) $\leftrightarrow $ German (De) and French (Fr) $\leftrightarrow $ German (De) translation tasks.
Application Scenarios
We analyze the efficacy of the proposed mixing approach in the following application scenarios:
Pseudo Only: This setting trains NMT models using only synthetic parallel data without any ground truth parallel corpus.
Real Fine-tuning: Once the training of an NMT model is completed in the Pseudo Only manner, the model is fine-tuned using only a ground truth parallel corpus.
The suggested scenarios reflect low-resource situations in building NMT systems. In the Real Fine-tuning, we fine-tune the best model of the Pseudo Only scenario evaluated on the development set.
Conclusion
In this work, we have constructed NMT systems using only synthetic parallel data. For this purpose, we propose a novel pseudo parallel corpus called PSEUDOmix, in which synthetic and ground truth examples are mixed on either side of the sentence pairs. Experiments show that the proposed PSEUDOmix not only yields enhanced results for bidirectional translation but also reports substantial improvement when fine-tuned with ground truth parallel data. Our work is significant in that it provides a thorough investigation of the use of synthetic parallel corpora in low-resource NMT environments. Without any adjustment, the proposed method can also be extended to other learning areas where parallel samples are employed. For future work, we plan to explore robust data sampling methods that would maximize the quality of the mixed synthetic parallel data. | By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18, constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section.
45bd22f2cfb62a5f79ec3c771c8324b963567cc0 | 45bd22f2cfb62a5f79ec3c771c8324b963567cc0_0 | Q: Where do they collect the synthetic data?
| Yes
936878cff0e6e327b2554ee5d46686797ee92cf2 | 936878cff0e6e327b2554ee5d46686797ee92cf2_0 | Q: Do they analyze what type of content Arabic bots spread in comparison to English?
Text: Introduction
The analysis of social media content to understand online human behavior has gained significant importance in recent years BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . However, a major limitation of the design of such analysis is that it often fails to account for content created by bots, which can significantly influence the messaging in social media. A social bot is an autonomous entity on social media that is typically engineered to pass as a human, often with the intent to manipulate online discourse BIBREF4 . Recent studies have shown that a significant majority of the social media content is generated by bots. For example, a six-week study by the Pew Research Center found that around two-thirds of all tweets with URL links were posted by likely bots BIBREF5 . As a result, the presence of bots can negatively impact the results of social media analysis and misinform our understanding of how humans interact within the online social space. In particular, any social media analysis that doesn't take into account the impact of bots is incomplete. While some bots can be beneficial (e.g., customer service chatbots), the focus in this work is on content-polluter bots that mimic human behavior online to spread falsified information BIBREF6 , create a false sense of public support BIBREF7 , and proliferate dangerous ideologies BIBREF8 , BIBREF9 .
Bots have vigorously invaded online social communities. A recent study estimated that bots constitute about 15% of all Twitter accounts BIBREF10 . With 321 million Twitter accounts BIBREF11 , the implication is that there are more than 48 million bot accounts on Twitter. Twitter reported that the number of bots and trolls suspended each week is on the rise reaching 9.9 million as of May 2018 BIBREF12 . While this number may seem promising, Twitter's fight against bots is far from over BIBREF13 .
Detecting bots in social media is a first step to account for the impact of bots in social media analysis. Our interest is in analysis of abuse in Arabic Twitter space, specifically the spread of religious hate, and thus to account for the impact of bots in our research, this paper focuses on detecting Arabic Twitter bots that are active in spreading hateful messages against various religious groups. Detecting bots in social media is challenging as bot designers are using sophisticated techniques to make a social bot look and behave as close to a human as possible BIBREF4 . Several researchers have looked at the problem of detecting bots in Twitter (See Section SECREF10 ), and several bot detection tools are freely available BIBREF14 , BIBREF10 , BIBREF15 providing fairly high detection accuracy. However, we show in this paper that these tools fail to perform as well on Arabic Twitter bots as they do on English Twitter bots. In fact, Arabic Twitter bot detection and analysis is a considerably under-researched area. A study by Abokhodair et al. BIBREF16 analyzed a Twitter botnet that was active during the Syrian civil war to understand how it might have influenced related discussions. El-Mawass et al. BIBREF17 estimated that around 75% of Saudi trending hashtags on Twitter contained spam content, some of which was created by automated spammers.
In our recent work on hate speech in Arabic social media BIBREF18 , BIBREF19 , we showed that Arabic Twitter is awash with religious hatred which we defined as “a speech that is insulting, offensive, or hurtful and is intended to incite hate, discrimination, or violence against an individual or a group of people on the basis of religious beliefs or lack thereof". Having such a large volume of hate speech and knowing that ISIS and other radical organizations have been using bots to push their extreme ideologies BIBREF8 , BIBREF9 , we hypothesize that bots may be to blame for a significant amount of this widespread hatred.
In this work, we build a novel regression model, based on linguistic, content, behavioral, and topic features, to detect Arabic Twitter bots and to understand their role in spreading religious hatred in the Arabic Twitter space. In particular, we quantitatively code and analyze a representative sample of 450 accounts disseminating hate speech from the dataset constructed in our previous work BIBREF18 , BIBREF19 for bot-like behavior. We compare our assigned bot-likelihood scores to those of Botometer BIBREF14 , a well-known machine-learning-based bot detection tool, and we show that Botometer performs only a little above average in detecting Arabic bots. Based on our analysis, we build a predictive regression model, train it on various sets of features, and show that it outperforms Botometer by a significant margin (31 points in Spearman's rho). Finally, we provide a large-scale analysis of the predictive features that distinguish bots from humans in terms of their characteristics and behaviors within the context of social media.
To facilitate Arabic bot detection research and Twitter automation policy enforcement, this paper provides the following findings and contributions.
Background and Related Work
In this section, we first discuss the main challenges encountered in analyzing Arabic language and social media content in general. We then survey prior research on online hate speech and bot detection and analysis.
Challenges of Arabic Language and User-generated Content
The Arabic language poses unique challenges to the process of analyzing and studying online content BIBREF20 , BIBREF21 . Arabic is a morphologically rich language with a substantial amount of syntactic and relation information encoded at the word level. Arabic is also a pluricentric language with varying dialects corresponding to various regions in the Arab world. Words can have entirely different meanings across dialects. Social media users tend to use Arabic dialects rather than Modern Standard Arabic (MSA). Unlike MSA, Arabic dialects are not standardized, and often fail to follow well-defined language rules and structures. Besides, Arabic is a greatly under-resourced language with few Natural Language Processing (NLP) tools supporting MSA, let alone Arabic dialects BIBREF22 .
Other challenges that are encountered while studying user-generated content include multilingual text, slangs, misspellings, abbreviations, and lengthening of words. Furthermore, microblogging platforms that impose a maximum length on posts such as Twitter can lead to text that lacks context, which in turn may lead to a flawed analysis. Moreover, some online users tend to mask abusive and hateful content by presenting it as a harmless joke or hiding it inside a comical image. Such behavior can lead to abusive and toxic content going undetected. We describe later in this paper how these aforementioned challenges have been addressed.
Online Hate Speech
Our previous work BIBREF18 , BIBREF19 appears to be the only one focusing on hate speech detection and analysis in Arabic social media. Our study revealed that religious hate speech is widespread on Arabic Twitter. We found that almost half of the tweets discussing religion preached hatred and violence against religious minorities, mostly targeting Jews, Atheists, and Shia (the second largest Islamic sect). In particular, we found that there was a 60% chance that a tweet would be hateful if it contained the Arabic equivalent of the word Jews.
To provide a sense of comparison between the volume of hate speech on Arabic Twitter and English Twitter, we report the results of a study conducted by Magdy et al. BIBREF1 , in which they analyzed a large volume of English tweets mentioning Islam while reacting to the 2015 Paris attacks. Their analysis suggested that only 17% of such tweets were directing hate toward Muslims, 61% were spreading positive messages about Islam, while 22% were neutral.
A growing body of hate speech research has been conducted on English social media content. Distinguishable among this work are studies related to the detection of online hateful content targeting race and gender using character n-grams BIBREF2 , word embeddings BIBREF23 , and document embeddings BIBREF24 . A measurement study conducted by Silva et al. BIBREF0 exploring the main targets of hate speech on Twitter and Whisper, an anonymous social media platform, showed that black people were the most targeted group on both networks, followed by white people on Twitter and fake people on Whisper. While race was the main targeted category on Twitter, behavior (e.g., sensitive people) was the main targeted category on Whisper.
Malicious Use of Bots in Social Media
While previous research has studied harmless bots on several collaborative and social platforms such as Wikidata BIBREF25 , Twitch BIBREF26 , and Reddit BIBREF27 , our focus is on malicious bots. Previous studies have thoroughly investigated the nefarious roles that can be played by bots, particularly in the English online social space. One such role is political astroturfing, wherein a flood of bot accounts (usually created by a single entity) creates the illusion of public support for a particular political candidate for the purpose of influencing public opinion. Bessi and Ferrara BIBREF7 suggested that social bots generated about one-fifth of the 2016 U.S. Presidential election discourse on Twitter. Twitter confirmed this in an official blog post BIBREF28 reporting that approximately 1.4 million accounts were notified about having some form of interaction with suspicious Russian-linked accounts (trolls and bots) that were spreading misinformation during the 2016 U.S. election. This nefarious use of bots is not new to social media; Ratkiewicz et al. BIBREF6 indicated that bots were used to amplify fake news and misinformation during the 2010 U.S. midterm elections through coordinated generation and liking of misguiding tweets. It has also been shown that bots are used by ISIS propagandists to inflate their influence on Twitter and popularize their extreme ideologies BIBREF8 , BIBREF9 .
Limited research has been conducted to study bot behavior on Arabic social media. The only relevant research we are aware of is the work by Abokhodair et al. BIBREF16 , in which they analyzed a Syrian botnet consisting of 130 bots that were active for 35 weeks before being suspended by Twitter. Their analysis suggested that the main task of such bots was to report news from a highly biased news source. A different but related research problem is the detection of spam content which sometimes involves bots. In BIBREF17 , El-Mawass et al. reported that about 74% of tweets in Saudi trending hashtags are spam. They suggested that bots are sometimes used to increase the reach of spam content by coordinated liking and retweeting of spam tweets.
Bot Detection
There are two main approaches to detecting social media bots in literature: supervised learning and unsupervised learning. An example of a supervised-based bot detection model is Botometer BIBREF14 , BIBREF10 , which is a freely available tool that employs supervised machine learning algorithms to predict a bot score, from 0 to 5, for public Twitter accounts. This score indicates the likelihood of an account being a bot based on 1,150 features distributed across six feature categories. Botometer also computes an individual bot score for each of the six feature categories, comprised of friend features (e.g., local time and popularity of retweeting and retweeted accounts), network features (e.g., network metrics that describe distribution and density of retweet and mention networks), user features (e.g., number of followers, number of friends, profile language, account age), temporal features (e.g., average time between two consecutive tweets, tweeting rate), content features (e.g., length of tweet, frequency of part-of-speech tags), and sentiment features (e.g., arousal, valence, and dominance scores). Figure FIGREF12 provides an example of Botometer's bot score interface. It is worth noting that although content and sentiment features are computed for non-English tweeting bots, they are only meaningful for English tweeting bots. Botometer conveniently provides a language-independent bot score, which we considered in our study.
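For reference, the language-independent score used in this study can be retrieved programmatically through Botometer's publicly available Python client. The sketch below is illustrative only; the credential parameters and response field names vary across versions of the Botometer API, and the account handle shown is hypothetical.

```python
import botometer

# Hypothetical credentials; Botometer is accessed through RapidAPI plus a Twitter app.
twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="...",
                          **twitter_app_auth)

result = bom.check_account("@example_account")
# Field names depend on the API version; the "universal" entry corresponds to the
# language-independent score considered in this study.
universal_score = result.get("display_scores", {}).get("universal")
print(universal_score)
```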
DeBot BIBREF29 , on the other hand, utilizes unsupervised techniques to detect Twitter bots based on synchronicity and activity correlation between accounts. The system has several services that can answer the following questions. Is a given account a bot? How long has it been active? Which bots are currently tweeting about a given topic? Which bots are participating in a given hashtag? They compared their system to Botometer and found that 59% of bots detected using their system had a Botometer bot score exceeding 50% (Botometer's previous scoring scheme ranged from 0% to 100%). Their analysis suggested that bots in a given botnet share the same tweets 87% of the time.
To our knowledge, no existing work has attempted to specifically detect Arabic bots. In BIBREF30 , Morstatter et al. created a dataset of 3,602 Arabic tweeting bots using a honeypot trap mechanism and a human dataset consisting of 3,107 users—a bot-to-human ratio that, we argue, does not reflect the actual share of bots on Twitter, which is estimated to be between 9% and 15% BIBREF10 . Our work differs from BIBREF30 in several important aspects. First, the main goal in BIBREF30 is to improve recall in detecting bots, while our goal is to detect Arabic bots with high precision in the context of religious hate. Second, Morstatter et al. created a binary classifier that labels an account as either a bot or not; because today's bots are very sophisticated, with many exhibiting both human and bot behaviors at the same time BIBREF10 , we argue that the problem cannot be reduced to binary classification. To address this issue of mixed behaviors, we adopt two techniques: to obtain ground truth, we rely on manual labeling of accounts—rather than an automated mechanism such as a honeypot trap—assigning each account a score from 0 to 5 that indicates the degree of bot-like behavior it exhibits; and we train a regression model on these manually labeled accounts to predict bot scores for new Twitter accounts. Finally, our work specifically focuses on the unique characteristics of Arabic bots, and thus provides deep insights into the predictive features that distinguish bots from humans and broadens the understanding of bot behavior in the Arabic social space.
Data Collection
To identify accounts disseminating hate speech, we started working from the hate speech dataset constructed in our previous work BIBREF18 , BIBREF19 , which consists of 6,000 Arabic tweets collected in November 2017 and annotated for religious hate speech by crowdsourced workers (see Table TABREF15 for general statistics of the dataset). The tweets were collected by querying Twitter's Standard search API using impartial terms that refer to one of the six most common religious groups across the Middle East and North Africa. Although we didn't use offensive terms or religious slurs in our data collection process, the number of returned hateful tweets was surprisingly large. More details on the construction and analysis of this dataset can be found in BIBREF18 , BIBREF19 .
In this dataset, we identified 4,410 unique Twitter accounts. Of these, 543 accounts were suspended, and thus we excluded them from our study. We then looked at the remaining 3,867 active accounts and classified them into accounts with hateful tweets or accounts with non-hateful tweets based on the number of hateful and non-hateful tweets they had authored. If they had authored more hateful tweets than non-hateful tweets, we classified them as accounts with hateful tweets. This resulted in having 1,750 accounts with hateful tweets and 2,117 accounts with non-hateful tweets. Since this study is focused on identifying the role of bots in spreading religious hatred, only accounts with hateful tweets were considered.
For each account with hateful tweets, we collected up to 3,200 of their recent tweets using the GET statuses/user_timeline method from Twitter's API. The total number of collected tweets was more than 4.2 million tweets. We also collected each account profile information (e.g., location, time zone, language) using the GET users/show API method.
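A minimal sketch of this collection step is shown below, assuming the Tweepy client for Twitter's v1.1 API; the account handle is hypothetical, and exact parameter names differ across Tweepy versions.

```python
import tweepy

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True)

screen_name = "example_account"                            # hypothetical handle
profile = api.get_user(screen_name=screen_name)            # GET users/show
timeline = [status._json for status in tweepy.Cursor(      # GET statuses/user_timeline
    api.user_timeline, screen_name=screen_name,
    count=200, tweet_mode="extended").items(3200)]
```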
Ground Truth
To evaluate the accuracy of Botometer scores in Arabic Twitter, we need to get a ground truth of which accounts are bots. Getting the ground truth for such an inherently difficult task is not straightforward. Some of the approaches proposed in literature are fully automatic approaches without any manual inspection, e.g. setting a honeypot trap BIBREF31 , BIBREF30 , identifying synchronicity and correlation in accounts' behavior BIBREF29 , and observing accounts getting suspended by Twitter BIBREF30 . Others have relied on manual labeling of accounts into bot-like or human-like BIBREF10 . The snowball mechanism has also been used in which researchers identify a seed list of highly suspicious accounts and then snowball their connections to collect a larger set of suspicious accounts BIBREF32 , BIBREF16 .
The common aspect among all earlier efforts in labeling of bots is that they assign binary labels to accounts, bot or human. Given that there is no simple list of rules that can decisively identify bots, we argue that it is more effective to assign labels on a scale to reflect the inherent uncertainty in recognizing bots. Additionally, since modern bots that attempt to hide themselves are becoming more sophisticated, we argue that any fully automatic approach without any manual inspection to get ground truth about bots is bound to suffer from high inaccuracies.
Thus, we turn to manual labeling approaches to get the ground truth. Although crowdsourced workers can be helpful in many labeling and classification tasks, we argue that our task of fine-grained scoring of accounts on the level of bot-like behavior they exhibit requires a high level of domain knowledge as well as extensive training that is hard to control in a crowdsourced setting. We argue that in order to get a reasonable set of ground truth data for identifying bots, manual labeling must be done by experts. Therefore, to ensure high-quality labeling, the labeling of the accounts was done by two members of the research team who are native Arabic speakers and who went through the following training steps to gain the expertise required to make a sound and informed judgment.
First, as a data exploration step, we applied Botometer on the 1,750 accounts with hateful tweets to discern the distribution of their bot scores (illustrated in Figure FIGREF19 ). Recall that bot scores from Botometer (we refer to this as Botometer scores) are on a scale from zero to five, with zero being “most likely human” and five being “most likely bot”. A score in the middle of the scale is an indication that Botometer is unsure about the classification. As shown in this figure, the distribution is skewed to the right with the majority of accounts being assigned a Botometer score from 0 to 1.
Second, in order to gain the required domain knowledge with respect to bot behaviors and characteristics, we carefully examined the top 50 accounts receiving the highest Botometer scores as well as highly suspicious propaganda bots flagged by Botcheck.me BIBREF15 , a free online tool that is trained to identify English propaganda bots. We noted every suspicious behavior exhibited by these highly-suspected bot accounts with respect to account profile information, friends, followers, interaction with other accounts, tweet content, and posting behavior. We also familiarized ourselves with bot characteristics and behaviors reported in previous studies BIBREF10 , BIBREF16 , BIBREF29 , BIBREF4 , BIBREF33 , BIBREF34 . Following this, we have created a list of bot characteristics described in Table TABREF20 .
Based on this list of bot criteria (Table TABREF20 ), we manually examined each account and assigned a bot-likelihood score (we refer to this as the true score) ranging from 0 to 5, with 0 being “very unlikely” and 5 being “very likely”, based on the extent to which an account exhibited a suspicious bot-like behavior from the list. We also added to the list other suspicious behaviors that we encountered while studying and labeling accounts in our dataset. It is important to note that even human accounts do exhibit one or more of these characteristics at different times (e.g., having a large number of followers). Furthermore, a bot may exhibit only a subset of these characteristics in addition to some human-like characteristics. Therefore, in our manual labeling of bot-like scores, the more characteristics an account exhibited, the higher the bot score it was assigned.
Since manual labeling is time- and effort-consuming, we considered a sample subset of the accounts with hateful tweets. Using a 95% confidence level and a 4% margin of error, a representative sample of these accounts would be of size 450. To eliminate sampling bias and to ensure that the sample preserves the statistical proportions of the original dataset (see Figure FIGREF19 ), we applied proportionate stratified random sampling, wherein a simple random sampling technique is employed to select training examples proportionally from each stratum (i.e., subgroup). This sampling method ensures that accounts with unusually high Botometer scores are still present in our sample and in a proportion similar to that in our original dataset. The final sample consisted of 239 accounts from the 0-1 stratum, 95 accounts from the 1-2 stratum, 47 accounts from the 2-3 stratum, 32 accounts from the 3-4 stratum, and 37 accounts from the 4-5 stratum.
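A sketch of this proportionate stratified sampling with pandas is shown below; the data frame and column names are hypothetical, and exact per-stratum counts depend on rounding.

```python
import pandas as pd

# accounts: one row per account with hateful tweets, including its Botometer score
accounts["stratum"] = pd.cut(accounts["botometer_score"],
                             bins=[0, 1, 2, 3, 4, 5],
                             labels=["0-1", "1-2", "2-3", "3-4", "4-5"],
                             include_lowest=True)
frac = 450 / len(accounts)
sample = (accounts.groupby("stratum", group_keys=False)
                  .apply(lambda g: g.sample(frac=frac, random_state=42)))
```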
Finally, to validate the robustness of our labeling process, we calculated the inter-rater agreement score between the two labelers on a subset of 30 independently-labeled accounts. A weighted kappa BIBREF35 score of 0.86 was reported, which indicates an almost perfect agreement BIBREF36 . Given such a high inter-rater agreement score, a well-defined bot criteria (Table TABREF20 ), and a highly time-expensive task (each account required on average a 15-min examination before a score was given), we decided to split the 450 accounts equally between the two labelers.
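The weighted kappa computation can be reproduced with scikit-learn; whether linear or quadratic weighting was used is an implementation detail, and quadratic weighting is assumed in this sketch, along with hypothetical variable names for the two labelers' scores.

```python
from sklearn.metrics import cohen_kappa_score

# Scores (0-5) assigned independently by the two labelers to the same 30 accounts.
kappa = cohen_kappa_score(labeler_a_scores, labeler_b_scores, weights="quadratic")
print(f"weighted kappa: {kappa:.2f}")
```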
Quantifying Hate Speech Sent by Bots
The results of our manual labeling of the 450 accounts can provide a preliminary indication of how many hateful tweets were sent by bots vs. humans. Assuming that accounts with a true score of 3 or higher were bots, we found that there were 77 (17%) bots and 373 (83%) humans. Bots authored 109 hateful tweets (a per-bot average rate of 1.4 tweets), and human accounts authored 446 hateful tweets (a per-human average rate of 1.2 tweets). The ratio of tweets sent by bots to those sent by humans is 1:4. In other words, bots were responsible for 22.6% of hateful tweets, while humans were responsible for 77.4% of hateful tweets.
The relatively low per-bot average rate of tweets could be attributed to the fact that we are only considering their tweets in the hate speech dataset. Considering their whole timeline (tweets) and finding how many of those contain an instance of religious hatred is worth investigating in the future. We will extend this analysis in Section SECREF50 to include all 1750 accounts with hateful tweets.
Methods
Our manual scoring of accounts as well as Botometer scoring is done on a scale of 0-5 with a higher score implying a higher likelihood of the account being a bot. However, the absolute scores assigned by the two scoring methods may differ. In order to evaluate the accuracy of Botometer, we need to investigate if there is a monotonic relationship between how we score accounts (true scores) and how Botometer scores accounts (Botometer scores). To do this, we applied two rank correlation tests that measure the strength and direction of the association between the two scorings.
The first rank correlation test is Spearman's rho BIBREF37 , which is a well-known nonparametric statistical test that measures the strength of correlation between two independent variables. As Spearman's rho calculation is based on squaring differences between rankings, it penalizes more for large discrepancies between rankings. In case of tied ranks, the mean of the ranks is assigned. The second evaluation metric is Kendall's tau BIBREF38 , which is also a non-parametric test that is used to measure the strength and direction of correlation between two independent sets of scores. The Tau-b version of Kendall's tau was used to handle tied ranks. While both Spearman's rho and Kendall's tau are based on the ranks of data and not the actual scores, Kendall's tau appears to be less sensitive to wide discrepancies between rankings. The value of both Spearman's rho and Kendall's tau ranges from -1 (perfect negative correlation) to +1 (perfect positive correlation). A value of zero indicates the absence of correlation between the two sets of scores. Finally, we applied mean absolute error (MAE) to measure the average absolute differences (errors) between the true scores and Botometer scores.
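All three evaluation metrics are available in SciPy and scikit-learn; a minimal sketch with hypothetical variable names follows.

```python
from scipy.stats import spearmanr, kendalltau
from sklearn.metrics import mean_absolute_error

rho, _ = spearmanr(true_scores, botometer_scores)
tau, _ = kendalltau(true_scores, botometer_scores)   # SciPy computes tau-b, which handles tied ranks
mae = mean_absolute_error(true_scores, botometer_scores)
```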
We now present a regression predictive modeling task in which we train a random forest regression model to predict the bot score, from zero to 5, of an account based on various hand-crafted features. Random forest is a tree-based ensemble method that has been successfully used in other bot detection tasks BIBREF14 , BIBREF33 . A great property of random forest is that it inherently computes feature importance, and thus it can give insights into which feature is most informative. Random forest can also control over-fitting by training each tree on randomly selected training examples and features BIBREF39 . Note that we also experimented with other regression algorithms such as logistic regression and gradient boosting regression trees. However, their performance was poorer compared to random forest, and thus we only report the results achieved by random forest.
We implemented random forest with scikit-learn BIBREF40 , a Python machine learning library. To understand the impact of each individual feature in detecting bots, we trained our regression model on successive combinations of content, tweet, topic, and account features. We tuned the regression model by performing a hyperparameter grid search with 10-fold cross-validation and optimized for Spearman's rho. The regression model was trained on 70% of the accounts and tested on the remaining 30%. Three evaluation metrics were used to compare the performance of our regression model to that of Botometer: Spearman's rho, Kendall's tau, and MAE, as discussed in Section SECREF23 .
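A simplified sketch of this training setup with scikit-learn is shown below; the hyperparameter grid is hypothetical, as the exact values searched are not listed here, and `features`/`true_scores` stand in for the assembled feature matrix and manual labels.

```python
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV, train_test_split

def spearman_score(y_true, y_pred):
    return spearmanr(y_true, y_pred)[0]

X_train, X_test, y_train, y_test = train_test_split(
    features, true_scores, test_size=0.3, random_state=42)

param_grid = {                      # hypothetical grid
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 10, 20],
    "min_samples_leaf": [1, 3, 5],
}
search = GridSearchCV(RandomForestRegressor(random_state=42), param_grid,
                      scoring=make_scorer(spearman_score), cv=10)
search.fit(X_train, y_train)
predicted = search.best_estimator_.predict(X_test)
```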
Results
The results of Spearman's rho and Kendall's tau were 0.43 and 0.33, respectively. The results suggest that there is a moderate positive monotonic relationship between the two sets of scores. MAE was 1.14, which indicates that Botometer scores on average are off by 1.14 points. These results indicate that while Botometer performs better than average in detecting Arabic bots, there is a need for developing social bot detection models that can work more effectively on Arabic Twitter accounts.
To better understand the limitations of Botometer in detecting Arabic bots, we further analyze the results in Figures FIGREF25 and FIGREF26 . Figure FIGREF25 presents a scatter graph plotting the true score for an account against its Botometer score, along with a regression line that best fits the data. The regression line indicates that Botometer tends to assign a higher score to accounts with true scores of 1.5 or less. On the other hand, Botometer tends to assign lower scores to accounts with a true score higher than 1.5. The margin increases as the true score rises. In other words, Botometer appears to be compressing the range of scores by assigning obvious human accounts a higher score than zero and highly suspected bot accounts a lower score than 5. Figure FIGREF26 shows a joint histogram for the two sets of scores using hexagonal binning. The figure represents a heatmap where darker colored hexagons represent higher density. As shown in the graph, the highest agreement is when both the true score and Botometer score are between 0 and 1. We can also see from the two histograms that Botometer scores have higher frequencies in the middle bins than the true scores.
Finally, in order to gain some insights into the reasons for Botometer's weakness in identifying Arabic bots, we manually inspected accounts with wide discrepancies in their true scores and Botometer scores and have identified the following possible reasons for this. Note that while some of these reasons may be applicable to English bots as well, we verified them to some extent only for Arabic bots. It is also important to note that a larger, more structured investigation would be required in order to fully validate those reasons. We leave that as part of our future work.
Botometer appears to assign a high bot score to human accounts that have used their Twitter account for only a short period of time, and thus have few tweets in their timeline. This could be due to a restriction in Botometer's data collection process, which considers only accounts with at least 200 tweets BIBREF10 . We had 17 accounts in our dataset with at most 100 tweets (i.e., inactive), and 71% of them were given Botometer scores larger than 2.5 while their true scores were less than 2.5. Given that Botometer generally assigned 13% of all accounts in our dataset scores in the upper range while their true scores were in the lower range, we found this 71% misclassification rate to be significant (χ² = 46.9, df = 1, p-value < 0.001).
Having an unusually small number of followers or friends (followings) appears to trigger Botometer to assign a high bot score without taking into consideration other human-like behaviors an account is exhibiting. We had 29 accounts in our dataset with fewer than 5 followers or friends, and 35% of them were misclassified by Botometer. This was significantly different from what is expected (χ² = 11.4, df = 1, p-value < 0.001).
As Botometer doesn't support the Arabic language, it may miss linguistic and content cues that can give away bots, e.g., posting unrelated tweets to a hashtag. We show in Section SECREF35 that linguistic features such as the use of numerics and emojis can be significant distinguishing features. This could be a reason for Botometer assigning a lower score to bot-like accounts with higher true scores.
Sometimes Arabic Twitter accounts use third-party Islamic applications that post Islamic supplications and/or Quranic verses on their behalf. There were 18 unique Islamic applications that were used by accounts in our dataset. Such behavior may result in Botometer assuming that these accounts are bots although some of them are in fact humans. This could be a reason for Botometer assigning a higher score to obvious human accounts with true scores of 1.5 or less.
We also considered other reasons that we believed to be causing wide discrepancies between true scores and Botometer scores. For example, including a hashtag in every tweet appeared to trigger Botometer to assign a high bot score even when the account exhibited more human-like behavior. We had 41 accounts in our dataset with an average of one or more hashtags per tweet, and Botometer assigned higher scores to 15% of them. However, we found this proportion to be statistically insignificant. Another case where we noticed Botometer assigning higher scores is when human accounts appeared to be followed by fake (probably purchased) followers. However, we couldn't verify this claim, as such a feature (whether followers are fake or not) was not part of our collected metadata.
We trained regression models on successive sets of features and assessed their generalization ability on a holdout testing dataset. Although we collected up to 3,200 tweets for each account, training the regression model using up to 200 tweets from each account provided faster training with similar results. Therefore, the results reported here are the ones using features extracted from up to 200 tweets per account, resulting in a total of 86,346 tweets.
Table TABREF36 compares the performances of these regression models in terms of Spearman's rho, Kendall's tau, and MAE. Highest scores are shown in bold. We have included the performance of Botometer as well in this table as a baseline. As shown in the table, our regression model trained on only simple content features outperformed Botometer which uses user, friend, network, and temporal features. The most informative content features reported by this regression model were the average numbers of account mentions, URL links, numerics, and punctuation marks, respectively. This shows that linguistic cues conveyed in tweets are highly effective in detecting Arabic bots. We will further discuss the importance and direction of contribution of these features in Section SECREF37 .
By including the tweet features in addition to the content features in training the regression model, the Spearman's coefficient improved by five points. Among the content and tweet features, the most distinguishing features were reply tweet proportion, original tweet proportion, and average number of account mentions, respectively. Adding topic and sentiment features in training further improved the performance of our regression model. These topic features were extracted using bow as opposed to tf-idf, as bow delivered better performance. We found that topic features extracted from lemmatized text achieved superior results to those extracted from stemmed text. However, we also found that not using stemming or lemmatization led to the best performance.
The best Spearman's rho and Kendall's tau were achieved after adding account features. The 0.74 in Spearman's rho indicates a strong positive correlation between scores predicted by our regression model and the true scores. The most informative features for this regression model were still reply tweet proportion, average number of mentions, and original tweet proportion, respectively.
The least informative features were mostly from the account feature category such as whether the account has an empty profile description, location, and URL link. Also, whether or not the account has the default profile image or their geotagging feature enabled didn't contribute much to the predicted bot score. This suggests that there wasn't a significant difference between the distribution of humans and bots across those features.
Features
Based on our analysis in Section SECREF24 , we identify four sets of features that we anticipate to be informative for Arabic bot detection. These are content, tweet, topic & sentiment, and account features. Table TABREF34 provides a description of each of these features. For content features, we used average lengths of words and tweets, and average numbers of emojis, punctuation marks, numerics, hashtags, account mentions, URL links and elongated words per tweet. For tweet features, we used the proportions of original tweets, reply tweets and retweet tweets, as well as the number of times an original/reply tweet was retweeted or favorited. For account features, we considered features such as the total number of tweets posted and favorited, numbers of followers and friends as well as features related to the account profile (e.g., account age).
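For illustration, the content features can be computed with simple pattern matching over each account's tweets. The sketch below uses approximate patterns (in particular, the emoji range is a simplification) and assumes tweet text is available as plain strings; it is not the exact extraction code used in this study.

```python
import re
import statistics
import string

URL_RE     = re.compile(r"https?://\S+")
MENTION_RE = re.compile(r"@\w+")
NUMERIC_RE = re.compile(r"\d+")
EMOJI_RE   = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")  # approximate emoji blocks
PUNCT      = set(string.punctuation + "،؛؟")                      # ASCII plus common Arabic punctuation

def content_features(texts):
    """Average per-tweet counts for one account's timeline (texts = list of tweet strings)."""
    avg = lambda xs: statistics.mean(xs) if xs else 0.0
    return {
        "avg_mentions":  avg([len(MENTION_RE.findall(t)) for t in texts]),
        "avg_urls":      avg([len(URL_RE.findall(t)) for t in texts]),
        "avg_numerics":  avg([len(NUMERIC_RE.findall(t)) for t in texts]),
        "avg_emojis":    avg([len(EMOJI_RE.findall(t)) for t in texts]),
        "avg_punct":     avg([sum(ch in PUNCT for ch in t) for t in texts]),
        "avg_tweet_len": avg([len(t) for t in texts]),
    }
```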
Sentiment features were obtained by using TextBlob NLP Python library BIBREF41 which offers support for the Arabic language. Topic modeling was implemented using Latent Dirichlet Allocation (LDA) model provided by Gensim BIBREF42 , an unsupervised topic modeling python library. Before extracting topic features, tweets were preprocessed by removing diacritics (i.e., accents), tatweel (i.e., kashida), elongation, two-character words, non-Arabic characters, URL links, punctuation marks and numbers. We also removed Arabic stop words and normalized Arabic letters and hashtags as described in BIBREF18 , and then filtered out very rare words that appeared in less than 10 accounts' timelines and too common words that appeared in more than 50% of accounts' timelines. From the remaining list, we considered the 10K most frequent words. We experimented with both bag-of-words (bow) and term frequency-inverse document frequency (tf-idf) text representation techniques. We also experimented with stemming words using ARLSTem Arabic stemmer from NLTK Python library BIBREF43 and lemmatization using StanfordNLP Python library BIBREF44 . Results of these experiments are provided in the next subsection.
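A sketch of the topic-feature step with Gensim is shown below, mirroring the vocabulary filtering described above; the number of training passes is an assumption, and seven topics matches the topic analysis reported later in the paper.

```python
from gensim import corpora, models

# docs: one list of preprocessed tokens per account timeline
dictionary = corpora.Dictionary(docs)
dictionary.filter_extremes(no_below=10,    # drop words appearing in fewer than 10 accounts' timelines
                           no_above=0.5,   # drop words appearing in more than 50% of timelines
                           keep_n=10000)   # keep the 10K most frequent words
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = models.LdaModel(bow_corpus, num_topics=7, id2word=dictionary,
                      passes=10, random_state=42)   # passes value is an assumption
topic_features = [[prob for _, prob in lda.get_document_topics(bow, minimum_probability=0.0)]
                  for bow in bow_corpus]
```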
Feature Importance & Contribution
Random forest computes a feature importance score for each feature using the mean decrease impurity feature selection method which reflects how much each feature reduces variance BIBREF45 . The top most-important features, along with their importance scores for the best performing regression model are shown in Table TABREF38 .
Random forest feature importance score doesn't offer insights into feature contribution (i.e., the direction of feature importance). To understand how much positively or negatively each feature contributed to the final predicted bot score, we used TreeInterpreter Python package BIBREF46 , which breaks down each prediction made by the regression model into the sum of bias (i.e., the average bot score in the training data) and each features' contribution. We selected some of the top informative features and plotted their contribution against their corresponding feature value (see Figure FIGREF39 ). Figure FIGREF39 shows feature contribution for the reply tweet proportion which is the most bot-distinguishing feature. It shows that the more reply tweets an account has, the less likely that account is a bot. This suggests that these bots are not yet smart enough to engage in conversations and interact with other accounts as humans would normally do. The same feature contribution pattern was found for the average number of mentions as illustrated in Figure FIGREF39 . Mentioning other accounts usually implies interacting and communicating with them, and thus the more social an account is, the less likely that account is a bot.
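To make this decomposition concrete, a minimal sketch using the TreeInterpreter package referenced above is given below; the fitted regressor, held-out feature matrix, and feature names are assumed to come from the model trained earlier.

```python
from treeinterpreter import treeinterpreter as ti

# rf_model and X_test are assumed to be the fitted regressor and held-out feature matrix.
prediction, bias, contributions = ti.predict(rf_model, X_test)
# Each predicted bot score decomposes into the training-set mean (bias) plus one signed
# contribution per feature; contributions has shape (n_accounts, n_features).
for name, value in sorted(zip(feature_names, contributions[0]),
                          key=lambda p: -abs(p[1]))[:5]:
    print(f"{name}: {value:+.3f}")
```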
Figure FIGREF39 shows how the proportion of original tweets (not retweets or replies) contributes to the predicted bot score. If original tweets constitute more than 60% of an account's overall tweets, then the predicted bot-likelihood score increases as the original tweet percentage increases. Accounts that neither reply nor retweet might be using third-party applications to post tweets on their behalf, or their “masters” programmed them so that they only disseminate prespecified text. This also suggests that human accounts on Twitter usually exhibit a variety of behaviors such as replying, retweeting, and tweeting original text. As for retweeting, we can see from Figure FIGREF39 that there are two retweeting behaviors that would result in an increase in the predicted bot score: never retweeting any tweet (x ≈ 0) and retweeting extensively (x ≈ 1). Again, such black-and-white behavior is more of a bot-like behavior than a human-like behavior.
We also found a clear distinction between English bots and Arabic bots in terms of retweet, reply, and original tweet proportions. It has been claimed that English bots tend to retweet more than posting original tweets BIBREF7 . This was not found to be true in our dataset, i.e., Arabic bots in our dataset were posting original tweets more often than retweeting tweets. In particular, the average retweet, original, and reply proportions for bots were 17%, 76%, and 7%, respectively.
The average number of emojis per tweet was also one of the highly informative features. This feature was not considered by Botometer because it wasn't trained for Arabic bots. Figure FIGREF39 illustrates that bots tend to not use emojis in their tweets. We believe that this could be due to the fact that people use emojis instinctively to convey different kinds of emotions. Having a URL link in more than 50% of an account's tweets would contribute positively to the predicted bot score as shown in Figure FIGREF39 . This makes sense as many automatically generated tweets contain links to books, news articles, posts from a linked Facebook account, etc.
Another feature that might not be considered by Botometer as it doesn't extract Arabic-specific features is the average number of numerics. At first, we were surprised to find that the more numbers accounts use in their tweets, the more likely they are bots (see Figure FIGREF39 ). Upon closer look at accounts with a high use of numbers in their tweets and a positive average number of numerics feature contribution, we found that some accounts had random English letters and numbers in their tweets, which suggests that such tweets were generated automatically by computers. Such behavior was previously shown in Figures FIGREF21 and FIGREF21 .
Topic, Source, and Network Analysis
Here we further investigate the topics that are most discussed by bots (true scores ≥ 2.5) and humans (true scores < 2.5). Although LDA doesn't assign names to topics, we inferred those names from the list of terms that are most associated with each topic. To give more insight into what these topics represent, we list in Table TABREF42 the most relevant terms for each topic.
We then considered the most dominant topic for each account, i.e., the topic with the highest probability distribution. Figure FIGREF43 shows the percentages of dominant topics for humans' and bots' tweets. We found that the distributions of humans and bots differ significantly among the seven topics (χ² = 25.9, df = 6, p-value < 0.001). The topic distribution for bots is lopsided, i.e., the majority of the posts from bots were concentrated on a small number of topics, while humans' tweets covered a wider range of topics. While the top three discussed topics for both humans and bots were identical, the percentages were different. About 44% of suspected bots were mainly posting Islamic supplications and prayers, while 21% of humans were tweeting about the same topic. Suspected bots were least interested in sports (3.9%); however, they showed somewhat similar interest to that of humans in political topics related to Jerusalem, Jews, and Houthi.
Twitter provides a source label along with tweet metadata, which indicates which source (i.e., client) was used to post the tweet to its service. The accounts in our dataset were posting tweets using various official and/or third-party sources. We considered the dominant (i.e., main) source used by each account and grouped these sources into three categories: official Twitter sources, Islamic supplications, and other third-party sources. Official Twitter sources include Twitter Web Client, Twitter Lite, Twitter for iPhone, Twitter for Android, and Twitter for Windows. Islamic supplications include third-party applications mainly for automatically posting Islamic supplications on accounts' behalf. Other third-party sources include Facebook, Instagram, Google, If This Then That (IFTTT), Tweetbot, and Alameednews.com. In total, bots were mainly tweeting using 17 unique sources, while humans were tweeting using 14 unique sources.
Figure FIGREF46 illustrates the percentages of dominant sources for bot and human accounts. We found that the distributions of humans and bots differ significantly among the three categories of sources (χ² = 78.6, df = 2, p-value < 0.001). About 92% of humans were mainly posting tweets using official Twitter sources, whereas 53% of bots were mostly using official Twitter sources to post tweets. Posting mainly using third-party sources (including Islamic ones) was a more common behavior of bots (47%) than humans (8%).
It has been shown that bot network characteristics differ significantly from those of humans BIBREF33 . Here we investigate if this holds for Arabic bots as well. Since in our dataset we have more human accounts (373) than bot accounts (77) and to ensure a fair comparison, we randomly selected 77 human accounts to match the set of the 77 bot accounts. We constructed two types of networks: retweet network (see Figure FIGREF48 ) and mention network (see Figure FIGREF49 ). Nodes in these graphs represent accounts in our dataset as well as all other accounts that got retweeted/mentioned by the accounts in our dataset. Edges represent a retweeting/mentioning activity.
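A sketch of how the two networks can be built from the collected tweet objects with NetworkX is shown below; the field names follow Twitter's v1.1 tweet JSON, and the input mapping is hypothetical.

```python
import networkx as nx

def build_networks(accounts_tweets):
    """accounts_tweets maps each account's screen name to its collected tweet objects (dicts)."""
    retweet_net, mention_net = nx.DiGraph(), nx.DiGraph()
    for user, tweets in accounts_tweets.items():
        for tweet in tweets:
            rt = tweet.get("retweeted_status")
            if rt:
                retweet_net.add_edge(user, rt["user"]["screen_name"])
            for mention in tweet.get("entities", {}).get("user_mentions", []):
                mention_net.add_edge(user, mention["screen_name"])
    return retweet_net, mention_net

retweets, mentions = build_networks(bot_accounts_tweets)   # hypothetical input
print(retweets.number_of_nodes(), retweets.number_of_edges())
```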
In the retweet network, humans have 2,561 nodes and 2,831 edges, while bots have 1,018 nodes and 1,054 edges, i.e., the human retweet network was more than twice as large as the bot retweet network. This gap was even larger for the mention network; the human mention network (4,978 nodes and 6,514 edges) was more than three times as large as the bot mention network (1,585 nodes and 1,666 edges). We can see that bot networks are loosely connected with many singleton nodes, while human networks are highly connected with very few singleton nodes. These network results are in line with what has been found for English bot networks BIBREF33 .
The Role of Bots in Spreading Hate Speech
To answer the question of how many hateful tweets were sent by bots rather than humans, we extend our analysis presented in Section SECREF22 to include all 1,750 accounts with hateful tweets. Three accounts were suspended before we were able to collect their data, and thus they were excluded from this analysis. As we already have true scores for 450 accounts, we applied our regression model to the remaining 1,297 accounts with hateful tweets. Of the 1,747 accounts with hateful tweets, we found that 185 (10.6%) accounts were more likely to be bots (predicted/true scores ≥ 2.5), and 1,562 (89.4%) accounts were more likely to be humans (predicted/true scores < 2.5). Bots authored 238 hateful tweets (a per-bot average rate of 1.29 tweets), whereas humans authored 1,974 tweets (a per-human average rate of 1.26 tweets). The ratio of hateful tweets sent by bots to those sent by humans is 1:8. In particular, humans were responsible for 89.24% of all hateful tweets, while bots were responsible for 10.76% of all hateful tweets.
At the time of this writing (March 2019), we checked to see if the bots identified in our study were still active or not. We found that only 11% of them were suspended by Twitter. This indicates that the remaining 89% of the bots have lived for at least 1.4 years. In a recent study by Chavoshi et al. BIBREF29 , Twitter suspended 45% of the bots detected in their study within a three-month period. This shows that Arabic bots can go undetected for a long period of time.
DISCUSSION
Our analysis suggests that Arabic Twitter bots do have a role in the spread of religious hate on Arabic Twitter. In particular, bots were responsible for about 11% of all hateful tweets in the hate speech dataset. Our topic analysis showed that bots participate in highly controversial political discussions related to Israel/Palestine and Yemen. This builds on prior work that showed participation of Arabic bots, especially through the dissemination of highly polarizing tweets, during the Syrian civil war BIBREF16 .
Such political use of bots (i.e., disseminating hate speech and highly biased news) has been shown to be true for English bots as well. Bots on English Twitter have been used to promote jihadist propaganda BIBREF8 , BIBREF9 , spread fake news BIBREF6 , and infiltrate political discussions BIBREF7 . Bots have also been used for spamming in both Arabic BIBREF17 and English BIBREF47 Twitter networks. Other nefarious roles of bots that have been explored on English Twitter include manipulating the stock market and stealing personal data BIBREF4 . Unfortunately, there is a significant lack of Arabic-focused research to investigate other roles that can be played by bots in Arabic social networks. Our study serves as a starting point for understanding and detecting Arabic bots, demanding additional research to explore this understudied area.
While the social roles played by Arabic and English bots can be to some extent similar, our analysis showed that some characteristics of Arabic bots are unique and different from those of English bots. As discussed in Section SECREF37 , Arabic bots in our dataset were posting original tweets more often than retweeting tweets. This is in contrast to English bots, which tend to retweet more than post original tweets BIBREF7 . We also showed that Arabic bots can live longer than English bots. Further, it has been shown that English bots tend to have fewer followers than humans BIBREF34 , BIBREF7 . This was not the case for Arabic bots. In our dataset, bots on average have 81K followers (std = 588K), while humans on average have 7.5K (std = 25.5K). While manually studying accounts, we noticed that suspected bots tend to have a large number of fake followers to amplify their influence and reach. This use of bots (i.e., inflating popularity) has been found to be used by pro-ISIS Twitter accounts BIBREF8 , BIBREF9 . Another special consideration that must be taken into account when analyzing Arabic bots is that some Arabic users use third-party Islamic applications to post Quranic verses automatically on their behalf. This implies that even if some form of automation exists in an account, it doesn't necessarily mean that such an account is a bot.
The result of our regression model shows that Arabic bots can be identified with a high level of accuracy. Our feature analysis showed that bots in our dataset exhibit distinct behaviors and features. Unlike humans, bots tend to not communicate and engage in conversations with other accounts. This characteristic has been found to be true for English bots as well BIBREF33 . Significant differences appeared in the distribution of sources used by bots and humans, where we found that bots tend to use third-party applications more often than humans to keep their accounts flowing and active. We also found a significant difference in the distribution of topics discussed by bots and humans. Unlike bots, humans tend to discuss a wider range of topics.
We found linguistic features to be highly discriminatory in detecting Arabic bots. We showed that training the regression model on simple content and linguistic features outperformed Botometer by 20 points in Spearman's rho. This result emphasizes the importance of considering language-specific features in bot detection tasks. Important informative linguistic features include the use of numerics and emojis. We found that bots tend to include in their tweets less emojis and more numbers than humans. Other informative linguistic features include the average length of words and the average number of punctuations marks. Linguistic features especially deceptive language cues have been found to be highly discriminatory for distinguishing English bots as well BIBREF48 .
The topic of understanding online human behavior has been of a great interest to CSCW/HCI researchers in various contexts such as mental health BIBREF49 , BIBREF50 , political polarization BIBREF51 , BIBREF1 , and abusive social behaviors BIBREF52 , BIBREF53 . Our findings challenge the assumption often made by such studies that online social media content is always created by humans. We showed that the presence of bots can bias analysis results and disrupt people's online social experience. Platform designers should increase their efforts in combating malicious bots that compromise online democracy. Data scientists should also account for bots in their studies. In particular, Arabic social media studies that are focused on understanding the differences in behaviors and language use between humans and bots can benefit greatly from our bot detection model. For example, a recent study on English Twitter showed how trolls/bots, unlike humans, had been relying on the use of a deceptive/persuasive language in an effort to manipulate the 2016 U.S. elections BIBREF48 . Having a bot detection tool fitted for Arabic such as the one presented in this paper would make such studies possible in Arabic online social spaces.
While our results mark an important step toward detecting and understanding Arabic bots, our work has potential limitations. First, although our model provides promising performance in detecting current bots, it needs to be updated regularly with new bot examples in order to capture the continuous and inevitable changes in bot behaviors and characteristics. Second, bots in our study were limited to those that had a role in spreading religious hatred. It would be worth studying Arabic Twitter bots with a wider range of malicious activities and investigating common features among them. Additionally, it may be useful in future work to investigate a larger set of features (e.g., temporal features and features extracted from followers and friends). It will also be important to investigate the efficacy of combining supervised and unsupervised methods to reduce the high cost of manual labeling without sacrificing much of the accuracy.
Another important future direction is to investigate the impact of bots on human behavior. In particular, it would be valuable to investigate whether bot-disseminated hateful tweets influence/encourage humans to participate in such discourse either through liking, retweeting, or even authoring new hateful tweets. In a political context, this kind of influence has been shown to exist; Twitter reported that nearly 1.4 million human accounts have made some sort of interaction with content created by bots/trolls during the 2016 U.S. election BIBREF28 . If this bot impact on humans can be shown to be effective in the context of hate speech, a more important question would be, can bots be used to decrease online hate speech? In other words, would creating “good" bots that promote tolerance, acceptance, and diversity values in Arabic social media make an impact on humans? The effect of social norms on prejudice is strongly supported in social psychological literature BIBREF54 , BIBREF55 . Studies have also shown that people conform to perceived cultural norm of prejudice and that norms can be influenced BIBREF56 . Thus, a more focused question would be, can we leverage bots in online social space to positively influence perceived social norms, which would then make people less prejudiced toward other religious groups? A body of CSCW/HCI research has explored the impact of perceived norms on shaping behavior BIBREF57 , BIBREF58 , BIBREF59 , and thus the potential of bots for positive behavior change is certainly worth investigating in future studies.
CONCLUSION
In this paper, we have investigated the role of bots in spreading hateful messages on Arabic Twitter. We found that bots were responsible for 11% of the hateful tweets in the hate speech dataset. We further showed that English-trained bot detection models deliver only a moderate performance in detecting Arabic bots. Therefore, we developed a more accurate bot detection model trained on various sets of features extracted from 86,346 tweets disseminated by 450 manually-labeled accounts. Finally, we presented a thorough analysis of characteristics and behaviors that distinguish Arabic bots from English bots and from humans in general. Our results facilitate future Arabic bot detection research in contexts beyond the spread of religious hate.
58c1b162a4491d4a5ae0ff86cc8bd64e98739620 | 58c1b162a4491d4a5ae0ff86cc8bd64e98739620_0 | Q: Do they propose a new model to better detect Arabic bots specifically?
Text: Introduction
The analysis of social media content to understand online human behavior has gained significant importance in recent years BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . However, a major limitation of the design of such analysis is that it often fails to account for content created by bots, which can significantly influence the messaging in social media. A social bot is an autonomous entity on social media that is typically engineered to pass as a human, often with the intent to manipulate online discourse BIBREF4 . Recent studies have shown that a significant majority of the social media content is generated by bots. For example, a six-week study by the Pew Research Center found that around two-thirds of all tweets with URL links were posted by likely bots BIBREF5 . As a result, the presence of bots can negatively impact the results of social media analysis and misinform our understanding of how humans interact within the online social space. In particular, any social media analysis that doesn't take into account the impact of bots is incomplete. While some bots can be beneficial (e.g., customer service chatbots), the focus in this work is on content-polluter bots that mimic human behavior online to spread falsified information BIBREF6 , create a false sense of public support BIBREF7 , and proliferate dangerous ideologies BIBREF8 , BIBREF9 .
Bots have vigorously invaded online social communities. A recent study estimated that bots constitute about 15% of all Twitter accounts BIBREF10 . With 321 million Twitter accounts BIBREF11 , the implication is that there are more than 48 million bot accounts on Twitter. Twitter reported that the number of bots and trolls suspended each week is on the rise reaching 9.9 million as of May 2018 BIBREF12 . While this number may seem promising, Twitter's fight against bots is far from over BIBREF13 .
Detecting bots in social media is a first step to account for the impact of bots in social media analysis. Our interest is in analysis of abuse in Arabic Twitter space, specifically the spread of religious hate, and thus to account for the impact of bots in our research, this paper focuses on detecting Arabic Twitter bots that are active in spreading hateful messages against various religious groups. Detecting bots in social media is challenging as bot designers are using sophisticated techniques to make a social bot look and behave as close to a human as possible BIBREF4 . Several researchers have looked at the problem of detecting bots in Twitter (See Section SECREF10 ), and several bot detection tools are freely available BIBREF14 , BIBREF10 , BIBREF15 providing fairly high detection accuracy. However, we show in this paper that these tools fail to perform as well on Arabic Twitter bots as they do on English Twitter bots. In fact, Arabic Twitter bot detection and analysis is a considerably under-researched area. A study by Abokhodair et al. BIBREF16 analyzed a Twitter botnet that was active during the Syrian civil war to understand how it might have influenced related discussions. El-Mawass et al. BIBREF17 estimated that around 75% of Saudi trending hashtags on Twitter contained spam content, some of which was created by automated spammers.
In our recent work on hate speech in Arabic social media BIBREF18 , BIBREF19 , we showed that Arabic Twitter is awash with religious hatred which we defined as “a speech that is insulting, offensive, or hurtful and is intended to incite hate, discrimination, or violence against an individual or a group of people on the basis of religious beliefs or lack thereof". Having such a large volume of hate speech and knowing that ISIS and other radical organizations have been using bots to push their extreme ideologies BIBREF8 , BIBREF9 , we hypothesize that bots may be to blame for a significant amount of this widespread hatred.
In this work, we build a novel regression model, based on linguistic, content, behavioral and topic features to detect Arabic Twitter bots to understand the impact of bots in spreading religious hatred in Arabic Twitter space. In particular, we quantitatively code and analyze a representative sample of 450 accounts disseminating hate speech from the dataset constructed in our previous work BIBREF18 , BIBREF19 for bot-like behavior. We compare our assigned bot-likelihood scores to those of Botometer BIBREF14 , a well-known machine-learning-based bot detection tool, and we show that Botometer performs a little above average in detecting Arabic bots. Based on our analysis, we build a predictive regression model and train it on various sets of features and show that our regression model outperforms Botometer's by a significant margin (31 points in Spearman's rho). Finally, we provide a large-scale analysis of predictive features that distinguish bots from humans in terms of characteristics and behaviors within the context of social media.
To facilitate Arabic bot detection research and Twitter automation policy enforcement, this paper provides a number of findings and contributions.
Background and Related Work
In this section, we first discuss the main challenges encountered in analyzing Arabic language and social media content in general. We then survey prior research on online hate speech and bot detection and analysis.
Challenges of Arabic Language and User-generated Content
The Arabic language poses unique challenges to the process of analyzing and studying online content BIBREF20 , BIBREF21 . Arabic is a morphologically rich language with a substantial amount of syntactic and relational information encoded at the word level. Arabic is also a pluricentric language with varying dialects corresponding to various regions in the Arab world. Words can have entirely different meanings across dialects. Social media users tend to use Arabic dialects rather than Modern Standard Arabic (MSA). Unlike MSA, Arabic dialects are not standardized and often fail to follow well-defined language rules and structures. In addition, Arabic is a severely under-resourced language, with few Natural Language Processing (NLP) tools supporting MSA, let alone Arabic dialects BIBREF22 .
Other challenges that are encountered while studying user-generated content include multilingual text, slang, misspellings, abbreviations, and lengthening of words. Furthermore, microblogging platforms that impose a maximum length on posts, such as Twitter, can lead to text that lacks context, which in turn may lead to a flawed analysis. Moreover, some online users tend to mask abusive and hateful content by presenting it as a harmless joke or hiding it inside a comical image. Such behavior can lead to abusive and toxic content going undetected. We describe later in this paper how these challenges have been addressed.
Online Hate Speech
Our previous work BIBREF18 , BIBREF19 appears to be the only one focusing on hate speech detection and analysis in Arabic social media. Our study revealed that religious hate speech is widespread on Arabic Twitter. We found that almost half of the tweets discussing religion preached hatred and violence against religious minorities, mostly targeting Jews, Atheists, and Shia (the second largest Islamic sect). In particular, we found that there was a 60% chance that a tweet would be hateful if it contained the Arabic equivalent of the word Jews.
To provide a sense of comparison between the volume of hate speech on Arabic Twitter and English Twitter, we report the results of a study conducted by Magdy et al. BIBREF1 , in which they analyzed a large volume of English tweets mentioning Islam while reacting to the 2015 Paris attacks. Their analysis suggested that only 17% of such tweets were directing hate toward Muslims, 61% were spreading positive messages about Islam, while 22% were neutral.
A growing body of hate speech research has been conducted on English social media content. Distinguishable among this work are studies related to the detection of online hateful content targeting race and gender using character $n$-grams BIBREF2 , word embeddings BIBREF23 , and document embeddings BIBREF24 . A measurement study conducted by Silva et al. BIBREF0 exploring the main targets of hate speech on Twitter and Whisper, an anonymous social media platform, showed that black people were the most targeted group on both networks, followed by white people on Twitter and fake people on Whisper. While race was the main targeted category on Twitter, behavior (e.g., sensitive people) was the main targeted category on Whisper.
Malicious Use of Bots in Social Media
While previous research has studied harmless bots on several collaborative and social platforms such as Wikidata BIBREF25 , Twitch BIBREF26 , and Reddit BIBREF27 , our focus is on malicious bots. Previous studies have thoroughly investigated the nefarious roles that can be played by bots, particularly in the English online social space. One such role is political astroturfing, wherein a flood of bot accounts (usually created by a single entity) creates the illusion of public support for a particular political candidate for the purpose of influencing public opinion. Bessi and Ferrara BIBREF7 suggested that social bots generated about one-fifth of the 2016 U.S. Presidential election discourse on Twitter. Twitter confirmed this in an official blog post BIBREF28 reporting that approximately 1.4 million accounts were notified about having some form of interaction with suspicious Russian-linked accounts (trolls and bots) who were spreading misinformation during the 2016 U.S. election. This nefarious use of bots is not new to social media; Ratkiewicz et al. BIBREF6 indicated that bots were used to amplify fake news and misinformation during the 2010 U.S. midterm elections through the coordinated generation and liking of misleading tweets. It has also been shown that bots are used by ISIS propagandists to inflate their influence on Twitter and popularize their extreme ideologies BIBREF8 , BIBREF9 .
Limited research has been conducted to study bot behavior on Arabic social media. The only relevant research we are aware of is the work by Abokhodair et al. BIBREF16 , in which they analyzed a Syrian botnet consisting of 130 bots that were active for 35 weeks before being suspended by Twitter. Their analysis suggested that the main task of such bots was to report news from a highly biased news source. A different but related research problem is the detection of spam content which sometimes involves bots. In BIBREF17 , El-Mawass et al. reported that about 74% of tweets in Saudi trending hashtags are spam. They suggested that bots are sometimes used to increase the reach of spam content by coordinated liking and retweeting of spam tweets.
Bot Detection
There are two main approaches to detecting social media bots in the literature: supervised learning and unsupervised learning. An example of a supervised bot detection model is Botometer BIBREF14 , BIBREF10 , a freely available tool that employs supervised machine learning algorithms to predict a bot score, from 0 to 5, for public Twitter accounts. This score indicates the likelihood of an account being a bot based on 1,150 features distributed across six feature categories. Botometer also computes an individual bot score for each of the six feature categories, comprising friend features (e.g., local time and popularity of retweeting and retweeted accounts), network features (e.g., network metrics that describe distribution and density of retweet and mention networks), user features (e.g., number of followers, number of friends, profile language, account age), temporal features (e.g., average time between two consecutive tweets, tweeting rate), content features (e.g., length of tweet, frequency of part-of-speech tags), and sentiment features (e.g., arousal, valence, and dominance scores). Figure FIGREF12 provides an example of Botometer's bot score interface. It is worth noting that although content and sentiment features are computed for non-English tweeting bots, they are only meaningful for English tweeting bots. Botometer conveniently provides a language-independent bot score, which we considered in our study.
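As a concrete illustration, the snippet below shows how such a score could be retrieved programmatically with the botometer Python client. The credential placeholders and the exact keys of the returned dictionary (e.g., where the language-independent “universal” score lives) are assumptions that may differ across Botometer API versions; this is a minimal sketch rather than the exact setup used in this study.

```python
import botometer

# Placeholder credentials (assumptions); real keys come from RapidAPI and Twitter.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

# Query a single public account; '@example_handle' is a hypothetical screen name.
result = bom.check_account("@example_handle")

# The language-independent ("universal") score is part of the returned dictionary;
# .get() is used defensively because field names vary across API versions.
print(result.get("display_scores", {}).get("universal"))
```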
DeBot BIBREF29 , on the other hand, utilizes unsupervised techniques to detect Twitter bots based on synchronicity and activity correlation between accounts. The system has several services that can answer the following questions. Is a given account a bot? How long has it been active? Which bots are currently tweeting about a given topic? Which bots are participating in a given hashtag? They compared their system to Botometer and found that 59% of bots detected using their system had a Botometer bot score exceeding 50% (Botometer's previous scoring scheme ranged from 0% to 100%). Their analysis suggested that bots in a given botnet share the same tweets 87% of the time.
To our knowledge, no existing work has attempted to specifically detect Arabic bots. In BIBREF30 , Morstatter et al. created a dataset of 3,602 Arabic tweeting bots using a honeypot trap mechanism and a human dataset consisting of 3,107 users, a bot-to-human ratio that, we argue, does not reflect the actual bot percentage on Twitter, which is estimated to be between 9% and 15% BIBREF10 . Our work is different from BIBREF30 in several important aspects. First, the main goal in BIBREF30 is to improve recall in detecting bots, while our goal is to detect Arabic bots with high precision in the context of religious hate. Second, Morstatter et al. created a binary classifier to decide whether or not an account is a bot; as bots nowadays are very sophisticated, with many of them exhibiting both human and bot behaviors at the same time BIBREF10 , we argue that the problem can't be simplified into a binary classification problem. To address this issue of mixed behaviors, we adopt two techniques: first, instead of using an automated mechanism such as a honeypot trap to obtain ground truth, we rely on manual labeling of accounts, assigning each a score from 0 to 5 that indicates the degree of bot-like behavior the account exhibits; second, we create a regression model trained on our manually labeled accounts to predict bot scores for new Twitter accounts. Finally, our work specifically focuses on the unique characteristics of Arabic bots, and thus provides deep insights into the predictive features that distinguish bots from humans and broadens the understanding of bot behavior in the Arabic social space.
Data Collection
To identify accounts disseminating hate speech, we started working from the hate speech dataset constructed in our previous work BIBREF18 , BIBREF19 , which consists of 6,000 Arabic tweets collected in November 2017 and annotated for religious hate speech by crowdsourced workers (see Table TABREF15 for general statistics of the dataset). The tweets were collected by querying Twitter's Standard search API using impartial terms that refer to one of the six most common religious groups across the Middle East and North Africa. Although we didn't use offensive terms or religious slurs in our data collection process, the number of returned hateful tweets was surprisingly large. More details on the construction and analysis of this dataset can be found in BIBREF18 , BIBREF19 .
In this dataset, we identified 4,410 unique Twitter accounts. Of these, 543 accounts were suspended, and thus we excluded them from our study. We then looked at the remaining 3,867 active accounts and classified them into accounts with hateful tweets or accounts with non-hateful tweets based on the number of hateful and non-hateful tweets they had authored. If they had authored more hateful tweets than non-hateful tweets, we classified them as accounts with hateful tweets. This resulted in having 1,750 accounts with hateful tweets and 2,117 accounts with non-hateful tweets. Since this study is focused on identifying the role of bots in spreading religious hatred, only accounts with hateful tweets were considered.
For each account with hateful tweets, we collected up to 3,200 of their recent tweets using the GET statuses/user_timeline method from Twitter's API. The total number of collected tweets was more than 4.2 million tweets. We also collected each account profile information (e.g., location, time zone, language) using the GET users/show API method.
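The timeline and profile collection described above could be scripted with the tweepy client roughly as follows. This is a sketch assuming tweepy's v3-style interface and placeholder credentials; the pagination limits and field names are the standard Twitter API v1.1 ones rather than anything specific to our pipeline.

```python
import tweepy

# Placeholder credentials (assumptions).
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

def collect_account(screen_name, max_tweets=3200):
    """Collect up to `max_tweets` recent tweets and the profile of one account."""
    tweets = [
        status._json
        for status in tweepy.Cursor(api.user_timeline,
                                    screen_name=screen_name,
                                    count=200,             # maximum page size
                                    tweet_mode="extended").items(max_tweets)
    ]
    profile = api.get_user(screen_name=screen_name)._json  # location, time zone, language, ...
    return tweets, profile

tweets, profile = collect_account("example_handle")  # hypothetical account
```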
Ground Truth
To evaluate the accuracy of Botometer scores on Arabic Twitter, we need to get a ground truth of which accounts are bots. Getting the ground truth for such an inherently difficult task is not straightforward. Some of the approaches proposed in the literature are fully automatic approaches without any manual inspection, e.g., setting a honeypot trap BIBREF31 , BIBREF30 , identifying synchronicity and correlation in accounts' behavior BIBREF29 , and observing accounts getting suspended by Twitter BIBREF30 . Others have relied on manual labeling of accounts as bot-like or human-like BIBREF10 . The snowball mechanism has also been used, in which researchers identify a seed list of highly suspicious accounts and then snowball their connections to collect a larger set of suspicious accounts BIBREF32 , BIBREF16 .
The common aspect among all earlier efforts in labeling bots is that they assign binary labels to accounts, bot or human. Given that there is no simple list of rules that can decisively identify bots, we argue that it is more effective to assign labels on a scale to reflect the inherent uncertainty in recognizing bots. Additionally, since modern bots that attempt to hide themselves are becoming more sophisticated, we argue that any fully automatic approach without manual inspection for obtaining ground truth about bots is bound to suffer from high inaccuracies.
Thus, we turn to manual labeling approaches to get the ground truth. Although crowdsourced workers can be helpful in many labeling and classification tasks, we argue that our task of fine-grained scoring of accounts on the level of bot-like behavior they exhibit requires a high level of domain knowledge as well as extensive training that is hard to control in a crowdsourced setting. We argue that in order to get a reasonable set of ground truth data for identifying bots, manual labeling must be done by experts. Therefore, in order to ensure high-quality labeling, the labeling of the accounts was done by two members of the research team who are native Arabic speakers and who went through the following training steps to gain the expertise required to make a sound and informed judgment.
First, as a data exploration step, we applied Botometer to the 1,750 accounts with hateful tweets to discern the distribution of their bot scores (illustrated in Figure FIGREF19 ). Recall that bot scores from Botometer (we refer to these as Botometer scores) are on a scale from zero to five, with zero being “most likely human” and five being “most likely bot”. A score in the middle of the scale is an indication that Botometer is unsure about the classification. As shown in this figure, the distribution is skewed to the right, with the majority of accounts being assigned a Botometer score from 0 to 1.
Second, in order to gain the required domain knowledge with respect to bot behaviors and characteristics, we carefully examined the top 50 accounts receiving the highest Botometer scores as well as highly suspicious propaganda bots flagged by Botcheck.me BIBREF15 , a free online tool that is trained to identify English propaganda bots. We noted every suspicious behavior exhibited by these highly-suspected bot accounts with respect to account profile information, friends, followers, interaction with other accounts, tweet content, and posting behavior. We also familiarized ourselves with bot characteristics and behaviors reported in previous studies BIBREF10 , BIBREF16 , BIBREF29 , BIBREF4 , BIBREF33 , BIBREF34 . Following this, we have created a list of bot characteristics described in Table TABREF20 .
Based on this list of bot criteria (Table TABREF20 ), we manually examined each account and assigned a bot-likelihood score (we refer to this as the true score) ranging from 0 to 5, with 0 being “very unlikely” and 5 being “very likely”, based on the extent to which an account exhibited suspicious bot-like behavior from the list. We also added to the list other suspicious behaviors that we encountered while studying and labeling accounts in our dataset. It is important to note that even human accounts exhibit one or more of these characteristics at different times (e.g., having a large number of followers). Furthermore, a bot may exhibit only a subset of these characteristics in addition to some human-like characteristics. Therefore, in our manual labeling of bot-likelihood scores, the more characteristics an account exhibited, the higher the bot score it was assigned.
Since manual labeling is time- and effort-consuming, we considered a sample subset of accounts with hateful tweets. Using a 95% confidence level and a 4% margin of error, a representative sample of these accounts would be of size 450. To eliminate sampling bias and to ensure that the sample preserves the statistical proportions of the original dataset (see Figure FIGREF19 ), we applied proportionate stratified random sampling, wherein simple random sampling is employed to select training examples proportionally from each stratum (i.e., subgroup). This sampling method ensures that accounts with unusually high Botometer scores are still present in our sample, and in a proportion similar to that in our original dataset. The final sample consisted of 239 accounts from the 0-1 stratum, 95 accounts from the 1-2 stratum, 47 accounts from the 2-3 stratum, 32 accounts from the 3-4 stratum, and 37 accounts from the 4-5 stratum.
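Proportionate stratified sampling of this kind can be expressed compactly with pandas. The column names and the Botometer-score binning below are assumptions made for illustration; only the overall sample size of 450 comes from the text.

```python
import pandas as pd

# accounts: one row per account, with a 'botometer_score' column (assumed name).
accounts = pd.read_csv("hateful_accounts.csv")           # hypothetical file
accounts["stratum"] = pd.cut(accounts["botometer_score"],
                             bins=[0, 1, 2, 3, 4, 5],
                             include_lowest=True)

TARGET = 450
frac = TARGET / len(accounts)

# Draw the same fraction from every stratum so the sample mirrors the original proportions.
sample = (accounts.groupby("stratum", group_keys=False)
                  .apply(lambda g: g.sample(frac=frac, random_state=42)))
print(sample["stratum"].value_counts())
```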
Finally, to validate the robustness of our labeling process, we calculated the inter-rater agreement score between the two labelers on a subset of 30 independently labeled accounts. A weighted kappa BIBREF35 score of 0.86 was reported, which indicates an almost perfect agreement BIBREF36 . Given such a high inter-rater agreement score, well-defined bot criteria (Table TABREF20 ), and a highly time-expensive task (each account required on average a 15-minute examination before a score was given), we decided to split the 450 accounts equally between the two labelers.
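The agreement check could be reproduced with scikit-learn's weighted Cohen's kappa, as in the sketch below. The choice of linear weights and the example scores are assumptions, since the text does not state which weighting scheme was used.

```python
from sklearn.metrics import cohen_kappa_score

# Scores (0-5) assigned independently by the two labelers to the same accounts
# (hypothetical values shown here for illustration).
labeler_a = [0, 1, 4, 5, 2, 0, 3, 1, 0, 5]
labeler_b = [0, 1, 5, 5, 2, 0, 3, 2, 0, 4]

kappa = cohen_kappa_score(labeler_a, labeler_b, weights="linear")
print(f"Weighted kappa: {kappa:.2f}")
```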
Quantifying Hate Speech Sent by Bots
The results of our manual labeling of the 450 accounts can provide a preliminary indication of how many hateful tweets were sent by bots vs. humans. Assuming that accounts with a true score of 3 or higher were bots, we found that there were 77 (17%) bots and 373 (83%) humans. Bots authored 109 hateful tweets (a per-bot average rate of 1.4 tweets), and human accounts authored 446 hateful tweets (a per-human average rate of 1.2 tweets). The ratio of tweets sent by bots to those sent by humans is 1:4. In other words, bots were responsible for 22.6% of hateful tweets, while humans were responsible for 77.4% of hateful tweets.
The relatively low per-bot average rate of tweets could be attributed to the fact that we are only considering their tweets in the hate speech dataset. Considering their whole timeline (tweets) and finding how many of those contain an instance of religious hatred is worth investigating in the future. We will extend this analysis in Section SECREF50 to include all 1750 accounts with hateful tweets.
Methods
Our manual scoring of accounts as well as Botometer scoring is done on a scale of 0-5 with a higher score implying a higher likelihood of the account being a bot. However, the absolute scores assigned by the two scoring methods may differ. In order to evaluate the accuracy of Botometer, we need to investigate if there is a monotonic relationship between how we score accounts (true scores) and how Botometer scores accounts (Botometer scores). To do this, we applied two rank correlation tests that measure the strength and direction of the association between the two scorings.
The first rank correlation test is Spearman's rho BIBREF37 , which is a well-known nonparametric statistical test that measures the strength of correlation between two independent variables. As Spearman's rho calculation is based on squaring differences between rankings, it penalizes more for large discrepancies between rankings. In case of tied ranks, the mean of the ranks is assigned. The second evaluation metric is Kendall's tau BIBREF38 , which is also a non-parametric test that is used to measure the strength and direction of correlation between two independent sets of scores. The Tau-b version of Kendall's tau was used to handle tied ranks. While both Spearman's rho and Kendall's tau are based on the ranks of data and not the actual scores, Kendall's tau appears to be less sensitive to wide discrepancies between rankings. The value of both Spearman's rho and Kendall's tau ranges from -1 (perfect negative correlation) to +1 (perfect positive correlation). A value of zero indicates the absence of correlation between the two sets of scores. Finally, we applied mean absolute error (MAE) to measure the average absolute differences (errors) between the true scores and Botometer scores.
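The three evaluation metrics can be computed directly with SciPy and scikit-learn, as sketched below; the variable names are placeholders for the two lists of scores.

```python
from scipy.stats import spearmanr, kendalltau
from sklearn.metrics import mean_absolute_error

# true_scores: manually assigned bot-likelihood scores (0-5)
# botometer_scores: scores returned by Botometer for the same accounts
rho, rho_p = spearmanr(true_scores, botometer_scores)
tau, tau_p = kendalltau(true_scores, botometer_scores)   # SciPy computes the tau-b variant by default
mae = mean_absolute_error(true_scores, botometer_scores)

print(f"Spearman's rho = {rho:.2f}, Kendall's tau-b = {tau:.2f}, MAE = {mae:.2f}")
```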
We now present a regression predictive modeling task in which we train a random forest regression model to predict the bot score, from zero to 5, of an account based on various hand-crafted features. Random forest is a tree-based ensemble method that has been successfully used in other bot detection tasks BIBREF14 , BIBREF33 . A great property of random forest is that it inherently computes feature importance, and thus it can give insights into which feature is most informative. Random forest can also control over-fitting by training each tree on randomly selected training examples and features BIBREF39 . Note that we also experimented with other regression algorithms such as logistic regression and gradient boosting regression trees. However, their performance was poorer compared to random forest, and thus we only report the results achieved by random forest.
We implemented random forest with scikit-learn BIBREF40 , a Python machine learning library. To understand the impact of each individual feature in detecting bots, we trained our regression model on successive combinations of content, tweet, topic, and account features. We tuned the regression model by performing a hyperparameter grid search with 10-fold cross-validation and optimized for Spearman's rho. The regression model was trained on 70% of the accounts and tested on the remaining 30%. Three evaluation metrics were used to compare the performance of our regression model to that of Botometer: Spearman's rho, Kendall's tau, and MAE, as discussed in Section SECREF23 .
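A minimal version of this training setup, with a custom Spearman scorer for the grid search, might look like the following. The specific hyperparameter grid is a placeholder; the text does not state which parameters were tuned.

```python
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV, train_test_split

def spearman_score(y_true, y_pred):
    return spearmanr(y_true, y_pred)[0]

# X: feature matrix for the 450 labeled accounts; y: true bot scores (0-5).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

param_grid = {                      # hypothetical grid for illustration
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 10, 20],
    "min_samples_leaf": [1, 3, 5],
}
search = GridSearchCV(RandomForestRegressor(random_state=42),
                      param_grid,
                      cv=10,
                      scoring=make_scorer(spearman_score))
search.fit(X_train, y_train)

model = search.best_estimator_
print("Held-out Spearman's rho:", spearman_score(y_test, model.predict(X_test)))
```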
Results
The results of Spearman's rho and Kendall's tau were 0.43 and 0.33, respectively. The results suggest that there is a moderate positive monotonic relationship between the two sets of scores. MAE was 1.14, which indicates that Botometer scores on average are off by 1.14 points. These results indicate that while Botometer performs better than average in detecting Arabic bots, there is a need for developing social bot detection models that can work more effectively on Arabic Twitter accounts.
To better understand the limitations of Botometer in detecting Arabic bots, we further analyze the results in Figures FIGREF25 and FIGREF26 . Figure FIGREF25 presents a scatter graph plotting the true score for an account against its Botometer score, along with a regression line that best fits the data. The regression line indicates that Botometer tends to assign a higher score to accounts with true scores of 1.5 or less. On the other hand, Botometer tends to assign lower scores to accounts with a true score higher than 1.5. The margin increases as the true score rises. In other words, Botometer appears to be compressing the range of scores by assigning obvious human accounts a higher score than zero and highly suspected bot accounts a lower score than 5. Figure FIGREF26 shows a joint histogram for the two sets of scores using hexagonal binning. The figure represents a heatmap where darker colored hexagons represent higher density. As shown in the graph, the highest agreement is when both the true score and Botometer score are between 0 and 1. We can also see from the two histograms that Botometer scores have higher frequencies in the middle bins than the true scores.
Finally, in order to gain some insights into the reasons for Botometer's weakness in identifying Arabic bots, we manually inspected accounts with wide discrepancies in their true scores and Botometer scores and have identified the following possible reasons for this. Note that while some of these reasons may be applicable to English bots as well, we verified them to some extent only for Arabic bots. It is also important to note that a larger, more structured investigation would be required in order to fully validate those reasons. We leave that as part of our future work.
Botometer appears to assign a high bot score to human accounts that have used their Twitter account for a short period of time and thus have fewer tweets in their timeline. This could be due to a restriction Botometer enforced in its data collection process, namely considering only accounts with at least 200 tweets BIBREF10 . We had 17 accounts in our dataset with at most 100 tweets (i.e., inactive), and 71% of them were given Botometer scores larger than 2.5 while their true scores were less than 2.5. Given that Botometer generally assigned 13% of all accounts in our dataset scores in the upper range while their true scores were in the lower range, we found this 71% misclassification rate to be significant ($\chi ^2$ = 46.9, df = 1, $p$-value < 0.001).
Having an unusually small number of followers or friends (followings) appears to trigger Botometer to assign a high bot score without taking into consideration other human-like behavior an account is exhibiting. We had 29 accounts in our dataset with fewer than 5 followers or friends, and 35% of them were misclassified by Botometer. This was significantly different from what is expected ($\chi ^2$ = 11.4, df = 1, $p$-value < 0.001).
As Botometer doesn't support the Arabic language, it may miss linguistic and content cues that can give away bots, e.g., posting tweets unrelated to a hashtag. We show in Section SECREF35 that linguistic features such as the use of numerics and emojis can be significant distinguishing features. This could be a reason for Botometer assigning a lower score to bot-like accounts with higher true scores.
Sometimes Arabic Twitter accounts use third-party Islamic applications that post Islamic supplications and/or Quranic verses on their behalf. There were 18 unique Islamic applications that were used by accounts in our dataset. Such behavior may result in Botometer assuming that these accounts are bots although some of them are in fact humans. This could be a reason for Botometer assigning a higher score to obvious human accounts with true scores of 1.5 or less.
We also considered other reasons that we believed to be causing wide discrepancies between true scores and Botometer scores. For example, including a hashtag in every tweet appeared to trigger Botometer to assign a high bot score even when the account exhibited more human-like behavior. We had 41 accounts in our dataset with an average of one or more hashtags per tweet, and Botometer assigned higher scores to 15% of them. However, we found this proportion to be statistically insignificant. Another case where we noticed Botometer giving accounts higher scores is when human accounts appeared to be followed by fake (probably purchased) followers. However, we couldn't verify this claim, as such a feature (whether followers are fake or not) was not part of our collected metadata.
We trained regression models on successive sets of features and assessed their generalization ability on a holdout testing dataset. Although we collected up to 3,200 tweets for each account, training the regression model using up to 200 tweets from each account provided faster training with similar results. Therefore, the results reported here are the ones using features extracted from up to 200 tweets per account, resulting in a total of 86,346 tweets.
Table TABREF36 compares the performances of these regression models in terms of Spearman's rho, Kendall's tau, and MAE. The highest scores are shown in bold. We have included the performance of Botometer in this table as a baseline. As shown in the table, our regression model trained on only simple content features outperformed Botometer, which uses user, friend, network, and temporal features. The most informative content features reported by this regression model were the average numbers of account mentions, URL links, numerics, and punctuation marks, in that order. This shows that linguistic cues conveyed in tweets are highly effective in detecting Arabic bots. We will further discuss the importance and direction of contribution of these features in Section SECREF37 .
By including the tweet features in addition to the content features in training the regression model, the Spearman's coefficient improved by five points. Among the content and tweet features, the most distinguishing features were reply tweet proportion, original tweet proportion, and average number of account mentions, in that order. Adding topic and sentiment features in training further improved the performance of our regression model. These topic features were extracted using bag-of-words (bow) as opposed to tf-idf, as bow delivered better performance. We found that topic features extracted from lemmatized text achieved superior results to those extracted from stemmed text. However, we also found that not using stemming or lemmatization led to the best performance.
The best Spearman's rho and Kendall's tau were achieved after adding account features. The 0.74 in Spearman's rho indicates a strong positive correlation between scores predicted by our regression model and the true scores. The most informative features for this regression model were still reply tweet proportion, average number of mentions, and original tweet proportion, respectively.
The least informative features were mostly from the account feature category such as whether the account has an empty profile description, location, and URL link. Also, whether or not the account has the default profile image or their geotagging feature enabled didn't contribute much to the predicted bot score. This suggests that there wasn't a significant difference between the distribution of humans and bots across those features.
Features
Based on our analysis in Section SECREF24 , we identify four sets of features that we anticipate to be informative for Arabic bot detection. These are content, tweet, topic & sentiment, and account features. Table TABREF34 provides a description of each of these features. For content features, we used average lengths of words and tweets, and average numbers of emojis, punctuation marks, numerics, hashtags, account mentions, URL links and elongated words per tweet. For tweet features, we used the proportions of original tweets, reply tweets and retweet tweets, as well as the number of times an original/reply tweet was retweeted or favorited. For account features, we considered features such as the total number of tweets posted and favorited, numbers of followers and friends as well as features related to the account profile (e.g., account age).
Sentiment features were obtained by using TextBlob NLP Python library BIBREF41 which offers support for the Arabic language. Topic modeling was implemented using Latent Dirichlet Allocation (LDA) model provided by Gensim BIBREF42 , an unsupervised topic modeling python library. Before extracting topic features, tweets were preprocessed by removing diacritics (i.e., accents), tatweel (i.e., kashida), elongation, two-character words, non-Arabic characters, URL links, punctuation marks and numbers. We also removed Arabic stop words and normalized Arabic letters and hashtags as described in BIBREF18 , and then filtered out very rare words that appeared in less than 10 accounts' timelines and too common words that appeared in more than 50% of accounts' timelines. From the remaining list, we considered the 10K most frequent words. We experimented with both bag-of-words (bow) and term frequency-inverse document frequency (tf-idf) text representation techniques. We also experimented with stemming words using ARLSTem Arabic stemmer from NLTK Python library BIBREF43 and lemmatization using StanfordNLP Python library BIBREF44 . Results of these experiments are provided in the next subsection.
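The Gensim-based topic modeling step could be set up roughly as follows. The frequency thresholds mirror those stated above (words in fewer than 10 timelines or in more than 50% of timelines are dropped, keeping the 10K most frequent), the number of topics is set to seven to match the topics analyzed later, and the rest of the preprocessing is assumed to have been applied already.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# docs: one list of preprocessed Arabic tokens per account timeline
# (diacritics, tatweel, stop words, URLs, etc. already removed as described above).
dictionary = Dictionary(docs)
dictionary.filter_extremes(no_below=10,    # drop words appearing in < 10 timelines
                           no_above=0.5,   # drop words appearing in > 50% of timelines
                           keep_n=10000)   # keep the 10K most frequent words

bow_corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(corpus=bow_corpus,
               id2word=dictionary,
               num_topics=7,               # seven topics, matching the later analysis
               random_state=42,
               passes=10)

# Per-account topic distribution, used later to pick each account's dominant topic.
topic_dist = lda.get_document_topics(bow_corpus[0])
```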
Feature Importance & Contribution
Random forest computes a feature importance score for each feature using the mean decrease impurity feature selection method which reflects how much each feature reduces variance BIBREF45 . The top most-important features, along with their importance scores for the best performing regression model are shown in Table TABREF38 .
Random forest feature importance score doesn't offer insights into feature contribution (i.e., the direction of feature importance). To understand how much positively or negatively each feature contributed to the final predicted bot score, we used TreeInterpreter Python package BIBREF46 , which breaks down each prediction made by the regression model into the sum of bias (i.e., the average bot score in the training data) and each features' contribution. We selected some of the top informative features and plotted their contribution against their corresponding feature value (see Figure FIGREF39 ). Figure FIGREF39 shows feature contribution for the reply tweet proportion which is the most bot-distinguishing feature. It shows that the more reply tweets an account has, the less likely that account is a bot. This suggests that these bots are not yet smart enough to engage in conversations and interact with other accounts as humans would normally do. The same feature contribution pattern was found for the average number of mentions as illustrated in Figure FIGREF39 . Mentioning other accounts usually implies interacting and communicating with them, and thus the more social an account is, the less likely that account is a bot.
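Both the mean-decrease-impurity importances and the per-prediction contribution breakdown can be obtained as sketched below; `model` is the fitted random forest and `X_test` the held-out feature matrix from the earlier sketch, and `feature_names` is an assumed list of column names.

```python
import numpy as np
from treeinterpreter import treeinterpreter as ti

# Mean-decrease-impurity feature importance, as reported in Table TABREF38.
for idx in np.argsort(model.feature_importances_)[::-1][:10]:
    print(feature_names[idx], round(model.feature_importances_[idx], 3))

# Decompose each prediction into bias + per-feature contributions.
prediction, bias, contributions = ti.predict(model, X_test)
# contributions[i, j]: signed contribution of feature j to the predicted bot score of
# account i, which is what the feature-contribution plots are based on.
```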
Figure FIGREF39 shows how the proportion of original tweets (not retweet or reply tweets) contributes to the predicted bot score. If original tweets constitute more than 60% of an account's overall tweets, then the predicted bot-likelihood score increases as the original tweet percentage increases. Such accounts that neither reply nor retweet might be using third-party applications to post tweets on their behalf, or their “masters” programmed them so that they only disseminate prespecified text. This also suggests that human accounts on Twitter usually exhibit a variety of behaviors such as replying, retweeting, and tweeting original text. As for retweeting, we can see from Figure FIGREF39 that there are two retweeting behaviors that result in an increase in the predicted bot score: never retweeting any tweet (a retweet proportion near 0) and retweeting extensively (a retweet proportion near 1). Again, such black-and-white behavior is more of a bot-like behavior than a human-like behavior.
We also found a clear distinction between English bots and Arabic bots in terms of retweet, reply, and original tweet proportions. It has been claimed that English bots tend to retweet more than posting original tweets BIBREF7 . This was not found to be true in our dataset, i.e., Arabic bots in our dataset were posting original tweets more often than retweeting tweets. In particular, the average retweet, original, and reply proportions for bots were 17%, 76%, and 7%, respectively.
The average number of emojis per tweet was also one of the highly informative features. This feature was not considered by Botometer because it wasn't trained for Arabic bots. Figure FIGREF39 illustrates that bots tend to not use emojis in their tweets. We believe that this could be due to the fact that people use emojis instinctively to convey different kinds of emotions. Having a URL link in more than 50% of an account's tweets would contribute positively to the predicted bot score as shown in Figure FIGREF39 . This makes sense as many automatically generated tweets contain links to books, news articles, posts from a linked Facebook account, etc.
Another feature that might not be considered by Botometer as it doesn't extract Arabic-specific features is the average number of numerics. At first, we were surprised to find that the more numbers accounts use in their tweets, the more likely they are bots (see Figure FIGREF39 ). Upon closer look at accounts with a high use of numbers in their tweets and a positive average number of numerics feature contribution, we found that some accounts had random English letters and numbers in their tweets, which suggests that such tweets were generated automatically by computers. Such behavior was previously shown in Figures FIGREF21 and FIGREF21 .
Topic, Source, and Network Analysis
Here we further investigate the topics that are most discussed by bots (true scores $\ge $ 2.5) and humans (true scores $<$ 2.5). Although LDA doesn't assign names to topics, we inferred those names from the list of terms that are most associated with each topic. To give more insight into what these topics represent, we list in Table TABREF42 the most relevant terms for each topic.
We then considered the most dominant topic for each account, i.e., the topic with the highest probability distribution. Figure FIGREF43 shows the percentages of dominant topics of humans' and bots' tweets. We found that the distributions of humans and bots differ significantly among the seven topics ($\chi ^2$ = 25.9, df = 6, $p$-value < 0.001). The topic distribution for bots is lopsided, i.e., the majority of the posts from bots were concentrated on a small number of topics, while humans' tweets covered a wider range of topics. While the top three discussed topics for both humans and bots were identical, the percentages were different. About 44% of suspected bots were mainly posting Islamic supplications and prayers, while 21% of humans were tweeting about the same topic. Suspected bots were least interested in sports (3.9%); however, they showed somewhat similar interest to that of humans in political topics related to Jerusalem, Jews, and Houthi.
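The significance test reported above is a standard chi-square test of independence over the human/bot dominant-topic counts; a sketch with SciPy is shown below, with hypothetical count values in the contingency table.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: humans, bots; columns: counts of accounts whose dominant topic is each of the 7 topics.
# The counts below are placeholders for illustration only.
table = np.array([
    [78, 60, 55, 50, 45, 45, 40],   # humans
    [34, 12, 10,  8,  6,  4,  3],   # bots
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.4f}")
```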
Twitter provides source label along with tweet metadata which indicates which source (i.e., client) was used to post the tweet to its service. The accounts in our dataset were posting tweets using various official and/or third-party sources. We considered the dominant (i.e., main) used source for each account and grouped these sources into three categories: official Twitter sources, Islamic supplications, and other third-party sources. Official Twitter sources include Twitter Web Client, Twitter Lite, Twitter for IPhone, Twitter for Android, and Twitter for Windows. Islamic supplications include third-party applications mainly for automatically posting Islamic supplications on accounts' behalf. Other third-party sources include Facebook, Instagram, Google, If This Then That (IFTTT), Tweetbot, and Alameednews.com. In total, bots were mainly tweeting using 17 unique sources, while humans were tweeting using 14 unique sources.
Figure FIGREF46 illustrates the percentages of dominant sources for bot and human accounts. We found that the distributions of humans and bots differ significantly among the three categories of sources ($\chi ^2$ = 78.6, df = 2, $p$-value < 0.001). About 92% of humans were mainly posting tweets using official Twitter sources, whereas 53% of bots were mostly using official Twitter sources to post tweets. Posting mainly using third-party sources (including Islamic ones) was more common for bots (47%) than for humans (8%).
It has been shown that bot network characteristics differ significantly from those of humans BIBREF33 . Here we investigate if this holds for Arabic bots as well. Since in our dataset we have more human accounts (373) than bot accounts (77) and to ensure a fair comparison, we randomly selected 77 human accounts to match the set of the 77 bot accounts. We constructed two types of networks: retweet network (see Figure FIGREF48 ) and mention network (see Figure FIGREF49 ). Nodes in these graphs represent accounts in our dataset as well as all other accounts that got retweeted/mentioned by the accounts in our dataset. Edges represent a retweeting/mentioning activity.
In the retweet network, humans have 2,561 nodes and 2,831 edges, while bots have 1,018 nodes and 1,054 edges, i.e., the human retweet network was more than twice as large as the bot retweet network. This gap was even larger for the mention network; the human mention network (4,978 nodes and 6,514 edges) was more than three times as large as the bot mention network (1,585 nodes and 1,666 edges). We can see that bot networks are loosely connected with many singleton nodes, while human networks are highly connected with very few singleton nodes. These network results are in line with what has been found for English bot networks BIBREF33 .
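The retweet and mention graphs can be assembled from the collected tweet objects with networkx; the sketch below assumes the standard Twitter API v1.1 tweet JSON fields and a hypothetical `timelines` dictionary mapping each account to its collected tweets.

```python
import networkx as nx

retweet_net = nx.DiGraph()
mention_net = nx.DiGraph()

for account, tweets in timelines.items():          # timelines: {screen_name: [tweet dicts]}
    retweet_net.add_node(account)
    mention_net.add_node(account)
    for tweet in tweets:
        if "retweeted_status" in tweet:             # edge: account -> retweeted author
            source = tweet["retweeted_status"]["user"]["screen_name"]
            retweet_net.add_edge(account, source)
        for mention in tweet.get("entities", {}).get("user_mentions", []):
            mention_net.add_edge(account, mention["screen_name"])

for name, g in [("retweet", retweet_net), ("mention", mention_net)]:
    singletons = sum(1 for n in g if g.degree(n) == 0)
    print(name, g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges,",
          singletons, "singletons")
```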
The Role of Bots in Spreading Hate Speech
To answer the question on how many hateful tweets were sent by bots rather than humans, we extend our analysis presented in Section SECREF22 to include all 1,750 accounts with hateful tweets. Three accounts were suspended before we were able to collect their data, and thus they were excluded from this analysis. As we already have true scores for 450 accounts, we applied our regression model to the remaining 1,297 accounts with hateful tweets. Of the 1,747 accounts with hateful tweets, we found that 185 (10.6%) accounts were more likely to be bots (predicted/true scores $\ge $ 2.5), and 1,562 (89.4%) accounts were more likely to be humans (predicted/true scores $<$ 2.5). Bots authored 238 hateful tweets (1.29 per-bot average rate), whereas humans authored 1,974 tweets (1.26 per-human average rate). The ratio of hateful tweets sent by bots to those sent by humans is 1:8. In particular, humans were responsible for 89.24% of all hateful tweets, while bots were responsible for 10.76% of all hateful tweets.
At the time of this writing (March 2019), we checked to see if the bots identified in our study were still active or not. We found that only 11% of them were suspended by Twitter. This indicates that the remaining 89% of the bots have lived for at least 1.4 years. In a recent study by Chavoshi et al. BIBREF29 , Twitter suspended 45% of the bots detected in their study within a three-month period. This shows that Arabic bots can go undetected for a long period of time.
DISCUSSION
Our analysis suggests that Arabic Twitter bots do have a role in the spread of religious hate on Arabic Twitter. In particular, bots were responsible for about 11% of all hateful tweets in the hate speech dataset. Our topic analysis showed that bots participate in highly controversial political discussions related to Israel/Palestine and Yemen. This builds on prior work that showed participation of Arabic bots, especially through the dissemination of highly polarizing tweets, during the Syrian civil war BIBREF16 .
Such political use of bots (i.e., disseminating hate speech and highly biased news) has been shown to be true for English bots as well. Bots on English Twitter have been used to promote jihadist propaganda BIBREF8 , BIBREF9 , spread fake news BIBREF6 , and infiltrate political discussions BIBREF7 . Bots have also been used for spamming in both Arabic BIBREF17 and English BIBREF47 Twitter networks. Other nefarious roles of bots that have been explored on English Twitter include manipulating the stock market and stealing personal data BIBREF4 . Unfortunately, there is a significant lack of Arabic-focused research to investigate other roles that can be played by bots in Arabic social networks. Our study serves as a starting point for understanding and detecting Arabic bots, demanding additional research to explore this understudied area.
While the social roles played by Arabic and English bots can be to some extent similar, our analysis showed that some Arabic bot characteristics are unique and different from those of English bots. As discussed in Section SECREF37 , Arabic bots in our dataset were posting original tweets more often than retweeting tweets. This is in contrast to English bots, which tend to retweet more than posting original tweets BIBREF7 . We also showed that Arabic bots can live longer than English bots. Further, it has been shown that English bots tend to have fewer followers than humans BIBREF34 , BIBREF7 . This was not the case for Arabic bots. In our dataset, bots on average have 81K followers (std = 588K), while humans on average have 7.5K (std = 25.5K). While manually studying accounts, we noticed that suspected bots tend to have a large number of fake followers to amplify their influence and reach. This use of bots (i.e., inflating popularity) has been found to be used by pro-ISIS Twitter accounts BIBREF8 , BIBREF9 . Another special consideration that must be taken into account when analyzing Arabic bots is that some Arabic users use third-party Islamic applications to post Quranic verses automatically on their behalf. This implies that even if some form of automation exists in an account, it doesn't necessarily mean that such an account is a bot.
The result of our regression model shows that Arabic bots can be identified with a high level of accuracy. Our feature analysis showed that bots in our dataset exhibit distinct behaviors and features. Unlike humans, bots tend to not communicate and engage in conversations with other accounts. This characteristic has been found to be true for English bots as well BIBREF33 . Significant differences appeared in the distribution of sources used by bots and humans, where we found that bots tend to use third-party applications more often than humans to keep their accounts flowing and active. We also found a significant difference in the distribution of topics discussed by bots and humans. Unlike bots, humans tend to discuss a wider range of topics.
We found linguistic features to be highly discriminatory in detecting Arabic bots. We showed that training the regression model on simple content and linguistic features outperformed Botometer by 20 points in Spearman's rho. This result emphasizes the importance of considering language-specific features in bot detection tasks. Important informative linguistic features include the use of numerics and emojis. We found that bots tend to include fewer emojis and more numbers in their tweets than humans. Other informative linguistic features include the average length of words and the average number of punctuation marks. Linguistic features, especially deceptive language cues, have been found to be highly discriminatory for distinguishing English bots as well BIBREF48 .
The topic of understanding online human behavior has been of a great interest to CSCW/HCI researchers in various contexts such as mental health BIBREF49 , BIBREF50 , political polarization BIBREF51 , BIBREF1 , and abusive social behaviors BIBREF52 , BIBREF53 . Our findings challenge the assumption often made by such studies that online social media content is always created by humans. We showed that the presence of bots can bias analysis results and disrupt people's online social experience. Platform designers should increase their efforts in combating malicious bots that compromise online democracy. Data scientists should also account for bots in their studies. In particular, Arabic social media studies that are focused on understanding the differences in behaviors and language use between humans and bots can benefit greatly from our bot detection model. For example, a recent study on English Twitter showed how trolls/bots, unlike humans, had been relying on the use of a deceptive/persuasive language in an effort to manipulate the 2016 U.S. elections BIBREF48 . Having a bot detection tool fitted for Arabic such as the one presented in this paper would make such studies possible in Arabic online social spaces.
While our results mark an important step toward detecting and understanding Arabic bots, our work has potential limitations. First, although our model provides promising performance in detecting current bots, it needs to be updated regularly with new bot examples in order to capture the continuous and inevitable changes in bot behaviors and characteristics. Second, bots in our study were limited to bots that had a role in spreading religious hatred. It will be worth studying Arabic Twitter bots with a wider range of malicious activities and investigating common features among them. Additionally, it may be useful in future work to investigate a larger set of features (e.g., temporal features and features extracted from followers and friends). It will also be important to investigate the efficacy of combining supervised and unsupervised methods to reduce the high cost of manual labeling without sacrificing much accuracy.
Another important future direction is to investigate the impact of bots on human behavior. In particular, it would be valuable to investigate whether bot-disseminated hateful tweets influence/encourage humans to participate in such discourse either through liking, retweeting, or even authoring new hateful tweets. In a political context, this kind of influence has been shown to exist; Twitter reported that nearly 1.4 million human accounts have made some sort of interaction with content created by bots/trolls during the 2016 U.S. election BIBREF28 . If this bot impact on humans can be shown to be effective in the context of hate speech, a more important question would be, can bots be used to decrease online hate speech? In other words, would creating “good” bots that promote tolerance, acceptance, and diversity values in Arabic social media make an impact on humans? The effect of social norms on prejudice is strongly supported in social psychological literature BIBREF54 , BIBREF55 . Studies have also shown that people conform to perceived cultural norm of prejudice and that norms can be influenced BIBREF56 . Thus, a more focused question would be, can we leverage bots in online social space to positively influence perceived social norms, which would then make people less prejudiced toward other religious groups? A body of CSCW/HCI research has explored the impact of perceived norms on shaping behavior BIBREF57 , BIBREF58 , BIBREF59 , and thus the potential of bots for positive behavior change is certainly worth investigating in future studies.
CONCLUSION
In this paper, we have investigated the role of bots in spreading hateful messages on Arabic Twitter. We found that bots were responsible for 11% of the hateful tweets in the hate speech dataset. We further showed that English-trained bot detection models deliver only moderate performance in detecting Arabic bots. Therefore, we developed a more accurate bot detection model trained on various sets of features extracted from 86,346 tweets disseminated by 450 manually labeled accounts. Finally, we presented a thorough analysis of characteristics and behaviors that distinguish Arabic bots from English bots and from humans in general. Our results facilitate future Arabic bot detection research in contexts beyond the spread of religious hate.
4dad15fee1fe01c3eadce8f0914781ca0a6e3f23 | 4dad15fee1fe01c3eadce8f0914781ca0a6e3f23_0 | Q: How do they prevent the model complexity increasing with the increased number of slots?
Text: Introduction
With the rapid development in deep learning, there is a recent boom of task-oriented dialogue systems in terms of both algorithms and datasets. The goal of task-oriented dialogue is to fulfill a user's requests such as booking hotels via communication in natural language. Due to the complexity and ambiguity of human language, previous systems have included semantic decoding BIBREF0 to project natural language input into pre-defined dialogue states. These states are typically represented by slots and values: slots indicate the category of information and values specify the content of information. For instance, the user utterance “can you help me find the address of any hotel in the south side of the city” can be decoded as $inform(area, south)$ and $request(address)$, meaning that the user has specified the value south for slot area and requested another slot address.
Numerous methods have been put forward to decode a user's utterance into slot values. Some use hand-crafted features and domain-specific delexicalization methods to achieve strong performance BIBREF1, BIBREF2. BIBREF0 employs CNN and pretrained embeddings to further improve the state tracking accuracy. BIBREF3 extends this work by using two additional statistical update mechanisms. BIBREF4 uses human teaching and feedback to boost the state tracking performance. BIBREF5 utilizes both global and local attention mechanisms in the proposed GLAD model, which obtains state-of-the-art results on the WoZ and DSTC2 datasets. However, most of these methods require slot-specific neural structures for accurate prediction. For example, BIBREF5 defines a parametrized local attention matrix for each slot. Slot-specific mechanisms become unwieldy when the dialogue task involves many topics and slots, as is typical in a complex conversational setting like product troubleshooting. Furthermore, due to the sparsity of labels, there may not be enough data to thoroughly train each slot-specific network structure. BIBREF6, BIBREF7 both propose to remove the model's dependency on dialogue slots, but neither modifies the representation part, which could be crucial to textual understanding, as we will show later.
To solve this problem, we need a state tracking model that is independent of dialogue slots. In other words, the network should depend on the semantic similarity between slots and the utterance instead of slot-specific modules. To this end, we propose the Slot-Independent Model (SIM). Our model complexity does not increase when the number of slots in the dialogue task goes up. Thus, SIM has many fewer parameters than existing dialogue state tracking models. To compensate for the exclusion of slot-specific parameters, we incorporate better feature representations of the user utterance and dialogue states using syntactic information and convolutional neural networks (CNN). The refined representation, in addition to cross and self-attention mechanisms, makes our model achieve even better performance than slot-specific models. For instance, on the Wizard-of-Oz (WOZ) 2.0 dataset BIBREF8, the SIM model obtains a joint-accuracy score of 89.5%, 1.4% higher than the previous best model, GLAD, with only 22% of the number of parameters. On the DSTC2 dataset, SIM achieves comparable performance to previous best models with only 19% of the model size.
Problem Formulation
As outlined in BIBREF9, the dialogue state tracking task is formulated as follows: at each turn of dialogue, the user's utterance is semantically decoded into a set of slot-value pairs. There are two types of slots. Goal slots indicate the category, e.g. area, food, and the values specify the constraint given by users for the category, e.g. South, Mediterranean. Request slots refer to requests, and the value is the category that the user demands, e.g. phone, area. Each user's turn is thus decoded into turn goals and turn requests. Furthermore, to summarize the user's goals so far, the union of all previous turn goals up to the current turn is defined as joint goals.
Similarly, the dialogue system's reply from the previous round is labeled with a set of slot-value pairs denoted as system actions. The dialogue state tracking task requires models to predict turn goal and turn request given user's utterance and system actions from previous turns.
Formally, the ontology of dialogue, $O$, consists of all possible slots $S$ and the set of values for each slot, $V(s), \forall s \in S$. Specifically, req is the name for request slot and its values include all the requestable category information. The dialogue state tracking task is that, given the user's utterance in the $i$-th turn, $U$, and system actions from the $(i-1)$-th turn, $A=\lbrace (s_1, v_1), ..., (s_q, v_q)\rbrace $, where $s_j \in S, v_j \in V(s_j)$, the model should predict:
Turn goals: $\lbrace (s_1, v_1), ..., (s_b, v_b)\rbrace $, where $s_j \in S, v_j \in V(s_j)$,
Turn requests: $\lbrace (req, v_1), ..., (req, v_t)\rbrace $, where $v_j \in V(req)$.
The joint goals at turn $i$ are then computed by taking the union of all the predicted turn goals from turn 1 to turn $i$.
Usually this prediction task is cast as a binary classification problem: for each slot-value pair $(s, v)$, determine whether it should be included in the predicted turn goals/requests. Namely, the model is to learn a mapping function $f(U, A, (s, v))\rightarrow \lbrace 0,1\rbrace $.
Slot-Independent Model
To predict whether a slot-value pair should be included in the turn goals/requests, previous models BIBREF0, BIBREF5 usually define network components for each slot $s\in S$. This can be cumbersome when the ontology is large, and it suffers from the insufficient data problem: the labelled data for a single slot may not suffice to effectively train the parameters for the slot-specific neural networks structure.
Therefore, we propose that in the classification process, the model should rely on the semantic similarity between the user's utterance and the slot-value pair, together with system action information. In other words, the model should have only a single global neural structure that is independent of slots. We hereafter refer to this model as the Slot-Independent Model (SIM) for dialogue state tracking.
Slot-Independent Model ::: Input Representation
Suppose the user's utterance in the $i$-th turn contains $m$ words, $U=(w_1, w_2, ..., w_m)$. For each word $w_i$, we use GloVe word embedding $e_i$, character-CNN embedding $c_i$, Part-Of-Speech (POS) embedding $\operatorname{POS}_i$, Named-Entity-Recognition (NER) embedding $\operatorname{NER}_i$ and exact match feature $\operatorname{EM}_i$. The POS and NER tags are extracted by spaCy and then mapped into a fixed-length vector. The exact matching feature has two bits, indicating whether a word and its lemma can be found in the slot-value pair representation, respectively. This is the first step to establish a semantic relationship between user utterance and slots. To summarize, we represent the user utterance as $X^U=\lbrace {u}_1, {u}_2, ..., {u}_m\rbrace \in \mathbb {R}^{m\times d_u}, {u}_i=[e_i; c_i; \operatorname{POS}_i; \operatorname{NER}_i; \operatorname{EM}_i]$.
For each slot-value pair $(s, v)$, either in the system actions or in the ontology, we get its text representation by concatenating the contents of the slot and the value. We use GloVe to embed each word in the text. Therefore, each slot-value pair in the system actions is represented as $X^A\in \mathbb {R}^{a\times d}$ and each slot-value pair in the ontology is represented as $X^O\in \mathbb {R}^{o\times d}$, where $a$ and $o$ are the numbers of words in the corresponding texts.
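As a rough illustration of this input representation, the sketch below assembles $X^U$ by concatenating the five per-token features; the embedding callables (glove, char_cnn, pos_emb, ner_emb) are placeholders for the actual modules, and only the exact-match bits are spelled out.

```python
import numpy as np

def exact_match_bits(token, lemma, slot_value_text):
    """Two bits: does the surface form / the lemma occur in the slot-value text?"""
    words = set(slot_value_text.lower().split())
    return np.array([float(token.lower() in words), float(lemma.lower() in words)])

def utterance_matrix(tokens, lemmas, pos_tags, ner_tags, slot_value_text,
                     glove, char_cnn, pos_emb, ner_emb):
    """Build X^U with rows u_i = [e_i; c_i; POS_i; NER_i; EM_i]."""
    rows = []
    for tok, lem, pos, ner in zip(tokens, lemmas, pos_tags, ner_tags):
        u_i = np.concatenate([
            glove(tok),                                   # GloVe word embedding e_i
            char_cnn(tok),                                # character-CNN embedding c_i
            pos_emb(pos),                                 # POS-tag embedding
            ner_emb(ner),                                 # NER-tag embedding
            exact_match_bits(tok, lem, slot_value_text),  # two-bit exact-match feature
        ])
        rows.append(u_i)
    return np.stack(rows)                                 # shape (m, d_u)
```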
Slot-Independent Model ::: Contextual Representation
To incorporate contextual information, we employ a bi-directional RNN layer on the input representation. For instance, for user utterance,
We apply variational dropout BIBREF10 for RNN inputs, i.e. the dropout mask is shared over different timesteps.
After RNN, we use linear self-attention to get a single summarization vector for user utterance, using weight vector $w\in \mathbb {R}^{d_{rnn}}$ and bias scalar $b$:
For each slot-value pair in the system actions and ontology, we conduct RNN and linear self-attention summarization in a similar way. As the slot-value pair input is not a sentence, we only keep the summarization vector $s^A \in \mathbb {R}^{d_{rnn}}$ and $s^O \in \mathbb {R}^{d_{rnn}}$ for each slot-value pair in system actions and ontology respectively.
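The linear self-attention summarization above can be sketched in a few lines; R holds the bi-RNN outputs (one row per token), and w and b are the weight vector and bias scalar from the text.

```python
import numpy as np

def linear_self_attention(R, w, b):
    """Summarize R (m x d_rnn) into a single d_rnn vector."""
    scores = R @ w + b                        # one score per token
    scores = np.exp(scores - scores.max())    # softmax, numerically stabilized
    alpha = scores / scores.sum()             # attention weights
    return alpha @ R                          # weighted average of RNN states
```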
Slot-Independent Model ::: Inter-Attention
To determine whether the current user utterance refers to a slot-value pair $(s, v)$ in the ontology, the model employs inter-attention between user utterance, system action and ontology. Similar to the framework in BIBREF5, we employ two sources of interactions.
The first is the semantic similarity between the user utterance, represented by embedding $R^U$ and each slot-value pair from ontology $(s, v)$, represented by embedding $s^O$. We linearly combine vectors in $R^U$ via the normalized inner product with $s^O$, which is then employed to compute the similarity score $y_1$:
The second source involves the system actions. The reason is that if the system requested certain information in the previous round, it is very likely that the user will give an answer in this round, and the answer may refer back to the question, e.g. “yes” or “no”. Thus, we first attend to the system actions from the user utterance and then combine the result with the ontology to get a similarity score. Suppose there are $L$ slot-value pairs in the system actions from the previous round, represented by $s_1^A, ..., s_L^A$:
The final similarity score between the user utterance and a slot-value pair $(s, v)$ from the ontology is a linear combination of $y_1$ and $y_2$, normalized using a sigmoid function.
where $\beta $ is a learned coefficient. The loss function is the sum of binary cross entropy over all slot-value pairs in the ontology:
where $y_{(s, v)}\in \lbrace 0, 1\rbrace $ is the ground truth. We illustrate the model structure of SIM in fig:model.
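Since the equations themselves are not reproduced here, the following is only a hedged sketch of the two interaction sources and the loss: plain dot products stand in for the paper's linear layers, R_U and r_U are the contextual utterance vectors and their summary, s_O the ontology-pair summary, S_A the system-action summaries, and beta the learned coefficient.

```python
import numpy as np

def _softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def _sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score_pair(R_U, r_U, s_O, S_A, beta):
    # Source 1: attend over utterance tokens with the ontology pair, then score.
    a1 = _softmax(R_U @ s_O)
    y1 = (a1 @ R_U) @ s_O
    # Source 2: attend over system actions from the utterance summary,
    # then score the attended action vector against the ontology pair.
    a2 = _softmax(S_A @ r_U)
    y2 = (a2 @ S_A) @ s_O
    return _sigmoid(y1 + beta * y2)            # final probability for (s, v)

def bce_loss(p, y):
    """Binary cross-entropy for one pair; training sums this over the ontology."""
    eps = 1e-9
    return -(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
```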
Experiment ::: Dataset
We evaluated our model on Wizard of Oz (WoZ) BIBREF8 and the second Dialogue System Technology Challenge (DSTC2) BIBREF11. Both tasks are for restaurant reservation and have slot-value pairs of both goal and request types. WoZ has 4 kinds of slots (area, food, price range, request) and 94 values in total. DSTC2 has an additional slot, name, and 220 values in total. WoZ has 800 dialogues in the training and development set and 400 dialogues in the test set, while the DSTC2 dataset consists of 2118 dialogues in the training and development set, and 1117 dialogues in the test set.
Experiment ::: Metrics
We use accuracy on the joint goal and turn request as the evaluation metrics. Both are sets of slot-value pairs, so the predicted set must exactly match the answer to be judged as correct. For joint goals, if a later turn generates a slot-value pair where the slot has been specified in previous rounds, we replace the value with the latest content.
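The evaluation conventions just described reduce to a dictionary update and an exact set match, sketched below.

```python
def update_joint_goals(joint_goals, turn_goals):
    """joint_goals: dict slot -> value; a later turn's value overwrites an earlier one."""
    for slot, value in turn_goals:
        joint_goals[slot] = value
    return joint_goals

def exact_match_accuracy(predictions, references):
    """A prediction is correct only if the whole set of slot-value pairs matches."""
    correct = sum(1 for p, g in zip(predictions, references) if p == g)
    return correct / max(len(references), 1)
```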
Experiment ::: Training Details
We fix GloVe BIBREF12 as the word embedding matrix. The models are trained using the ADAM optimizer BIBREF13 with an initial learning rate of 1e-3. The dimensions of the POS and NER embeddings are 12 and 8, respectively. In the character-CNN, each character is embedded into a vector of length 50. The CNN window size is 3 and the hidden size is 50. We apply a dropout rate of 0.1 to the input of each module. The hidden size of the RNN is 125.
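For reference, the reported hyperparameters are collected below; the dictionary and its key names are only an illustrative convention, not part of a released configuration.

```python
SIM_HPARAMS = {
    "word_embedding": "GloVe (fixed)",
    "optimizer": "ADAM",
    "initial_learning_rate": 1e-3,
    "pos_embedding_dim": 12,
    "ner_embedding_dim": 8,
    "char_embedding_dim": 50,
    "char_cnn_window_size": 3,
    "char_cnn_hidden_size": 50,
    "input_dropout_rate": 0.1,
    "rnn_hidden_size": 125,
}
```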
During training, we pick the model with the highest joint goal score on the development set and report its result on the test set.
For DSTC2, we adhere to the standard procedure of using the N-best list from the noisy ASR results for testing. Since the ASR results are very noisy, we experimented with several strategies and ended up using only the top result from the N-best list. Training and validation on DSTC2 are based on noise-free user utterances. The WoZ task does not have ASR results available, so we directly use noise-free user utterances.
Experiment ::: Baseline models and result
We compare our model SIM with a number of baseline systems: delexicalization model BIBREF8, BIBREF1, the neural belief tracker model (NBT) BIBREF0, global-locally self-attentive model GLAD BIBREF5, large-scale belief tracking model LSBT BIBREF7 and scalable multi-domain dialogue state tracking model SMDST BIBREF6.
Table TABREF17 shows that, on WoZ dataset, SIM achieves a new state-of-the-art joint goal accuracy of 89.5%, a significant improvement of 1.4% over GLAD, and turn request accuracy of 97.3%, 0.2% above GLAD. On DSTC2 dataset, where noisy ASR results are used as user utterance during test, SIM obtains comparable results with GLAD. Furthermore, the better representation in SIM makes it significantly outperform previous slot-independent models LSBT and SMDST.
Furthermore, as SIM has no slot-specific neural network structures, its model size is much smaller than that of previous models. Table TABREF20 shows that the SIM model has the same number of parameters on the WoZ and DSTC2 datasets, which is only 23% and 19% of the number of parameters in the GLAD model, respectively.
Ablation Study. We conduct an ablation study of SIM on the WoZ dataset. As shown in Table TABREF21, the additional utterance word features (POS, NER and exact-match embeddings) boost the performance by 2.4% in joint goal accuracy. This indicates that syntactic information and text matching are very useful for the dialogue state tracking task. The character-CNN captures sub-word-level information and is effective in handling spelling errors, contributing a further 1.2% in joint goal accuracy. Variational dropout is also beneficial, contributing 0.9% to the joint goal accuracy, which shows the importance of sharing the dropout mask across timesteps.
Conclusion
In this paper, we propose a slot-independent neural model, SIM, to tackle the dialogue state tracking problem. Via incorporating better feature representations, SIM can effectively reduce the model complexity while still achieving superior or comparable results on various datasets, compared with previous models.
For future work, we plan to design general slot-free dialogue state tracking models which can be adapted to different domains during inference time, given domain-specific ontology information. This will make the model more agile in real applications.
Acknowledgement
We thank the anonymous reviewers for the insightful comments. We thank William Hinthorn for proof-reading our paper. | They exclude slot-specific parameters and incorporate better feature representation of user utterance and dialogue states using syntactic information and convolutional neural networks (CNN). |
892c346617a3391c7dafc9da1b65e5ea3890294d | 892c346617a3391c7dafc9da1b65e5ea3890294d_0 | Q: What network architecture do they use for SIM?
Text: Introduction
With the rapid development in deep learning, there is a recent boom of task-oriented dialogue systems in terms of both algorithms and datasets. The goal of task-oriented dialogue is to fulfill a user's requests such as booking hotels via communication in natural language. Due to the complexity and ambiguity of human language, previous systems have included semantic decoding BIBREF0 to project natural language input into pre-defined dialogue states. These states are typically represented by slots and values: slots indicate the category of information and values specify the content of information. For instance, the user utterance “can you help me find the address of any hotel in the south side of the city” can be decoded as $inform(area, south)$ and $request(address)$, meaning that the user has specified the value south for slot area and requested another slot address.
Numerous methods have been put forward to decode a user's utterance into slot values. Some use hand-crafted features and domain-specific delexicalization methods to achieve strong performance BIBREF1, BIBREF2. BIBREF0 employs CNN and pretrained embeddings to further improve the state tracking accuracy. BIBREF3 extends this work by using two additional statistical update mechanisms. BIBREF4 uses human teaching and feedback to boost the state tracking performance. BIBREF5 utilizes both global and local attention mechanism in the proposed GLAD model which obtains state-of-the-art results on WoZ and DSTC2 datasets. However, most of these methods require slot-specific neural structures for accurate prediction. For example, BIBREF5 defines a parametrized local attention matrix for each slot. Slot-specific mechanisms become unwieldy when the dialogue task involves many topics and slots, as is typical in a complex conversational setting like product troubleshooting. Furthermore, due to the sparsity of labels, there may not be enough data to thoroughly train each slot-specific network structure. BIBREF6, BIBREF7 both propose to remove the model's dependency on dialogue slots but there's no modification to the representation part, which could be crucial to textual understanding as we will show later.
To solve this problem, we need a state tracking model that is independent of dialogue slots. In other words, the network should depend on the semantic similarity between slots and the utterance instead of on slot-specific modules. To this end, we propose the Slot-Independent Model (SIM). Our model complexity does not increase when the number of slots in dialogue tasks goes up. Thus, SIM has many fewer parameters than existing dialogue state tracking models. To compensate for the exclusion of slot-specific parameters, we incorporate better feature representations of the user utterance and dialogue states using syntactic information and convolutional neural networks (CNN). The refined representation, in addition to cross- and self-attention mechanisms, makes our model achieve even better performance than slot-specific models. For instance, on the Wizard-of-Oz (WOZ) 2.0 dataset BIBREF8, the SIM model obtains a joint-accuracy score of 89.5%, 1.4% higher than the previous best model, GLAD, with only 22% of the number of parameters. On the DSTC2 dataset, SIM achieves comparable performance with previous best models with only 19% of the model size.
Problem Formulation
As outlined in BIBREF9, the dialogue state tracking task is formulated as follows: at each turn of dialogue, the user's utterance is semantically decoded into a set of slot-value pairs. There are two types of slots. Goal slots indicate the category, e.g. area, food, and the values specify the constraint given by users for the category, e.g. South, Mediterranean. Request slots refer to requests, and the value is the category that the user demands, e.g. phone, area. Each user's turn is thus decoded into turn goals and turn requests. Furthermore, to summarize the user's goals so far, the union of all previous turn goals up to the current turn is defined as joint goals.
Similarly, the dialogue system's reply from the previous round is labeled with a set of slot-value pairs denoted as system actions. The dialogue state tracking task requires models to predict turn goal and turn request given user's utterance and system actions from previous turns.
Formally, the ontology of dialogue, $O$, consists of all possible slots $S$ and the set of values for each slot, $V(s), \forall s \in S$. Specifically, req is the name for request slot and its values include all the requestable category information. The dialogue state tracking task is that, given the user's utterance in the $i$-th turn, $U$, and system actions from the $(i-1)$-th turn, $A=\lbrace (s_1, v_1), ..., (s_q, v_q)\rbrace $, where $s_j \in S, v_j \in V(s_j)$, the model should predict:
Turn goals: $\lbrace (s_1, v_1), ..., (s_b, v_b)\rbrace $, where $s_j \in S, v_j \in V(s_j)$,
Turn requests: $\lbrace (req, v_1), ..., (req, v_t)\rbrace $, where $v_j \in V(req)$.
The joint goals at turn $i$ are then computed by taking the union of all the predicted turn goals from turn 1 to turn $i$.
Usually this prediction task is cast as a binary classification problem: for each slot-value pair $(s, v)$, determine whether it should be included in the predicted turn goals/requests. Namely, the model is to learn a mapping function $f(U, A, (s, v))\rightarrow \lbrace 0,1\rbrace $.
Slot-Independent Model
To predict whether a slot-value pair should be included in the turn goals/requests, previous models BIBREF0, BIBREF5 usually define network components for each slot $s\in S$. This can be cumbersome when the ontology is large, and it suffers from the insufficient data problem: the labelled data for a single slot may not suffice to effectively train the parameters of the slot-specific neural network structure.
Therefore, we propose that in the classification process, the model should rely on the semantic similarity between the user's utterance and the slot-value pair, together with system action information. In other words, the model should have only a single global neural structure that is independent of slots. We hereafter refer to this model as the Slot-Independent Model (SIM) for dialogue state tracking.
Slot-Independent Model ::: Input Representation
Suppose the user's utterance in the $i$-th turn contains $m$ words, $U=(w_1, w_2, ..., w_m)$. For each word $w_i$, we use GloVe word embedding $e_i$, character-CNN embedding $c_i$, Part-Of-Speech (POS) embedding $\operatorname{POS}_i$, Named-Entity-Recognition (NER) embedding $\operatorname{NER}_i$ and exact match feature $\operatorname{EM}_i$. The POS and NER tags are extracted by spaCy and then mapped into a fixed-length vector. The exact matching feature has two bits, indicating whether a word and its lemma can be found in the slot-value pair representation, respectively. This is the first step to establish a semantic relationship between user utterance and slots. To summarize, we represent the user utterance as $X^U=\lbrace {u}_1, {u}_2, ..., {u}_m\rbrace \in \mathbb {R}^{m\times d_u}, {u}_i=[e_i; c_i; \operatorname{POS}_i; \operatorname{NER}_i; \operatorname{EM}_i]$.
For each slot-value pair $(s, v)$, either in the system actions or in the ontology, we get its text representation by concatenating the contents of the slot and the value. We use GloVe to embed each word in the text. Therefore, each slot-value pair in the system actions is represented as $X^A\in \mathbb {R}^{a\times d}$ and each slot-value pair in the ontology is represented as $X^O\in \mathbb {R}^{o\times d}$, where $a$ and $o$ are the numbers of words in the corresponding texts.
Slot-Independent Model ::: Contextual Representation
To incorporate contextual information, we employ a bi-directional RNN layer on the input representation. For instance, for user utterance,
We apply variational dropout BIBREF10 for RNN inputs, i.e. the dropout mask is shared over different timesteps.
After RNN, we use linear self-attention to get a single summarization vector for user utterance, using weight vector $w\in \mathbb {R}^{d_{rnn}}$ and bias scalar $b$:
For each slot-value pair in the system actions and ontology, we conduct RNN and linear self-attention summarization in a similar way. As the slot-value pair input is not a sentence, we only keep the summarization vector $s^A \in \mathbb {R}^{d_{rnn}}$ and $s^O \in \mathbb {R}^{d_{rnn}}$ for each slot-value pair in system actions and ontology respectively.
Slot-Independent Model ::: Inter-Attention
To determine whether the current user utterance refers to a slot-value pair $(s, v)$ in the ontology, the model employs inter-attention between user utterance, system action and ontology. Similar to the framework in BIBREF5, we employ two sources of interactions.
The first is the semantic similarity between the user utterance, represented by embedding $R^U$ and each slot-value pair from ontology $(s, v)$, represented by embedding $s^O$. We linearly combine vectors in $R^U$ via the normalized inner product with $s^O$, which is then employed to compute the similarity score $y_1$:
The second source involves the system actions. The reason is that if the system requested certain information in the previous round, it is very likely that the user will give an answer in this round, and the answer may refer back to the question, e.g. “yes” or “no”. Thus, we first attend to the system actions from the user utterance and then combine the result with the ontology to get a similarity score. Suppose there are $L$ slot-value pairs in the system actions from the previous round, represented by $s_1^A, ..., s_L^A$:
The final similarity score between the user utterance and a slot-value pair $(s, v)$ from the ontology is a linear combination of $y_1$ and $y_2$, normalized using a sigmoid function.
where $\beta $ is a learned coefficient. The loss function is the sum of binary cross entropy over all slot-value pairs in the ontology:
where $y_{(s, v)}\in \lbrace 0, 1\rbrace $ is the ground truth. We illustrate the model structure of SIM in fig:model.
Experiment ::: Dataset
We evaluated our model on Wizard of Oz (WoZ) BIBREF8 and the second Dialogue System Technology Challenge (DSTC2) BIBREF11. Both tasks are for restaurant reservation and have slot-value pairs of both goal and request types. WoZ has 4 kinds of slots (area, food, price range, request) and 94 values in total. DSTC2 has an additional slot, name, and 220 values in total. WoZ has 800 dialogues in the training and development set and 400 dialogues in the test set, while the DSTC2 dataset consists of 2118 dialogues in the training and development set, and 1117 dialogues in the test set.
Experiment ::: Metrics
We use accuracy on the joint goal and turn request as the evaluation metrics. Both are sets of slot-value pairs, so the predicted set must exactly match the answer to be judged as correct. For joint goals, if a later turn generates a slot-value pair where the slot has been specified in previous rounds, we replace the value with the latest content.
Experiment ::: Training Details
We fix GloVe BIBREF12 as the word embedding matrix. The models are trained using the ADAM optimizer BIBREF13 with an initial learning rate of 1e-3. The dimensions of the POS and NER embeddings are 12 and 8, respectively. In the character-CNN, each character is embedded into a vector of length 50. The CNN window size is 3 and the hidden size is 50. We apply a dropout rate of 0.1 to the input of each module. The hidden size of the RNN is 125.
During training, we pick the model with the highest joint goal score on the development set and report its result on the test set.
For DSTC2, we adhere to the standard procedure of using the N-best list from the noisy ASR results for testing. Since the ASR results are very noisy, we experimented with several strategies and ended up using only the top result from the N-best list. Training and validation on DSTC2 are based on noise-free user utterances. The WoZ task does not have ASR results available, so we directly use noise-free user utterances.
Experiment ::: Baseline models and result
We compare our model SIM with a number of baseline systems: delexicalization model BIBREF8, BIBREF1, the neural belief tracker model (NBT) BIBREF0, global-locally self-attentive model GLAD BIBREF5, large-scale belief tracking model LSBT BIBREF7 and scalable multi-domain dialogue state tracking model SMDST BIBREF6.
Table TABREF17 shows that, on WoZ dataset, SIM achieves a new state-of-the-art joint goal accuracy of 89.5%, a significant improvement of 1.4% over GLAD, and turn request accuracy of 97.3%, 0.2% above GLAD. On DSTC2 dataset, where noisy ASR results are used as user utterance during test, SIM obtains comparable results with GLAD. Furthermore, the better representation in SIM makes it significantly outperform previous slot-independent models LSBT and SMDST.
Furthermore, as SIM has no slot-specific neural network structures, its model size is much smaller than that of previous models. Table TABREF20 shows that the SIM model has the same number of parameters on the WoZ and DSTC2 datasets, which is only 23% and 19% of the number of parameters in the GLAD model, respectively.
Ablation Study. We conduct an ablation study of SIM on the WoZ dataset. As shown in Table TABREF21, the additional utterance word features (POS, NER and exact-match embeddings) boost the performance by 2.4% in joint goal accuracy. This indicates that syntactic information and text matching are very useful for the dialogue state tracking task. The character-CNN captures sub-word-level information and is effective in handling spelling errors, contributing a further 1.2% in joint goal accuracy. Variational dropout is also beneficial, contributing 0.9% to the joint goal accuracy, which shows the importance of sharing the dropout mask across timesteps.
Conclusion
In this paper, we propose a slot-independent neural model, SIM, to tackle the dialogue state tracking problem. Via incorporating better feature representations, SIM can effectively reduce the model complexity while still achieving superior or comparable results on various datasets, compared with previous models.
For future work, we plan to design general slot-free dialogue state tracking models which can be adapted to different domains during inference time, given domain-specific ontology information. This will make the model more agile in real applications.
Acknowledgement
We thank the anonymous reviewers for the insightful comments. We thank William Hinthorn for proof-reading our paper. | convolutional neural networks (CNN) |
36feaac9d9dee5ae09aaebc2019b014e57f61fbf | 36feaac9d9dee5ae09aaebc2019b014e57f61fbf_0 | Q: How do they measure model size?
Text: Introduction
With the rapid development in deep learning, there is a recent boom of task-oriented dialogue systems in terms of both algorithms and datasets. The goal of task-oriented dialogue is to fulfill a user's requests such as booking hotels via communication in natural language. Due to the complexity and ambiguity of human language, previous systems have included semantic decoding BIBREF0 to project natural language input into pre-defined dialogue states. These states are typically represented by slots and values: slots indicate the category of information and values specify the content of information. For instance, the user utterance “can you help me find the address of any hotel in the south side of the city” can be decoded as $inform(area, south)$ and $request(address)$, meaning that the user has specified the value south for slot area and requested another slot address.
Numerous methods have been put forward to decode a user's utterance into slot values. Some use hand-crafted features and domain-specific delexicalization methods to achieve strong performance BIBREF1, BIBREF2. BIBREF0 employs CNN and pretrained embeddings to further improve the state tracking accuracy. BIBREF3 extends this work by using two additional statistical update mechanisms. BIBREF4 uses human teaching and feedback to boost the state tracking performance. BIBREF5 utilizes both global and local attention mechanism in the proposed GLAD model which obtains state-of-the-art results on WoZ and DSTC2 datasets. However, most of these methods require slot-specific neural structures for accurate prediction. For example, BIBREF5 defines a parametrized local attention matrix for each slot. Slot-specific mechanisms become unwieldy when the dialogue task involves many topics and slots, as is typical in a complex conversational setting like product troubleshooting. Furthermore, due to the sparsity of labels, there may not be enough data to thoroughly train each slot-specific network structure. BIBREF6, BIBREF7 both propose to remove the model's dependency on dialogue slots but there's no modification to the representation part, which could be crucial to textual understanding as we will show later.
To solve this problem, we need a state tracking model that is independent of dialogue slots. In other words, the network should depend on the semantic similarity between slots and the utterance instead of on slot-specific modules. To this end, we propose the Slot-Independent Model (SIM). Our model complexity does not increase when the number of slots in dialogue tasks goes up. Thus, SIM has many fewer parameters than existing dialogue state tracking models. To compensate for the exclusion of slot-specific parameters, we incorporate better feature representations of the user utterance and dialogue states using syntactic information and convolutional neural networks (CNN). The refined representation, in addition to cross- and self-attention mechanisms, makes our model achieve even better performance than slot-specific models. For instance, on the Wizard-of-Oz (WOZ) 2.0 dataset BIBREF8, the SIM model obtains a joint-accuracy score of 89.5%, 1.4% higher than the previous best model, GLAD, with only 22% of the number of parameters. On the DSTC2 dataset, SIM achieves comparable performance with previous best models with only 19% of the model size.
Problem Formulation
As outlined in BIBREF9, the dialogue state tracking task is formulated as follows: at each turn of dialogue, the user's utterance is semantically decoded into a set of slot-value pairs. There are two types of slots. Goal slots indicate the category, e.g. area, food, and the values specify the constraint given by users for the category, e.g. South, Mediterranean. Request slots refer to requests, and the value is the category that the user demands, e.g. phone, area. Each user's turn is thus decoded into turn goals and turn requests. Furthermore, to summarize the user's goals so far, the union of all previous turn goals up to the current turn is defined as joint goals.
Similarly, the dialogue system's reply from the previous round is labeled with a set of slot-value pairs denoted as system actions. The dialogue state tracking task requires models to predict turn goal and turn request given user's utterance and system actions from previous turns.
Formally, the ontology of dialogue, $O$, consists of all possible slots $S$ and the set of values for each slot, $V(s), \forall s \in S$. Specifically, req is the name for request slot and its values include all the requestable category information. The dialogue state tracking task is that, given the user's utterance in the $i$-th turn, $U$, and system actions from the $(i-1)$-th turn, $A=\lbrace (s_1, v_1), ..., (s_q, v_q)\rbrace $, where $s_j \in S, v_j \in V(s_j)$, the model should predict:
Turn goals: $\lbrace (s_1, v_1), ..., (s_b, v_b)\rbrace $, where $s_j \in S, v_j \in V(s_j)$,
Turn requests: $\lbrace (req, v_1), ..., (req, v_t)\rbrace $, where $v_j \in V(req)$.
The joint goals at turn $i$ are then computed by taking the union of all the predicted turn goals from turn 1 to turn $i$.
Usually this prediction task is cast as a binary classification problem: for each slot-value pair $(s, v)$, determine whether it should be included in the predicted turn goals/requests. Namely, the model is to learn a mapping function $f(U, A, (s, v))\rightarrow \lbrace 0,1\rbrace $.
Slot-Independent Model
To predict whether a slot-value pair should be included in the turn goals/requests, previous models BIBREF0, BIBREF5 usually define network components for each slot $s\in S$. This can be cumbersome when the ontology is large, and it suffers from the insufficient data problem: the labelled data for a single slot may not suffice to effectively train the parameters of the slot-specific neural network structure.
Therefore, we propose that in the classification process, the model should rely on the semantic similarity between the user's utterance and the slot-value pair, together with system action information. In other words, the model should have only a single global neural structure that is independent of slots. We hereafter refer to this model as the Slot-Independent Model (SIM) for dialogue state tracking.
Slot-Independent Model ::: Input Representation
Suppose the user's utterance in the $i$-th turn contains $m$ words, $U=(w_1, w_2, ..., w_m)$. For each word $w_i$, we use GloVe word embedding $e_i$, character-CNN embedding $c_i$, Part-Of-Speech (POS) embedding $\operatorname{POS}_i$, Named-Entity-Recognition (NER) embedding $\operatorname{NER}_i$ and exact match feature $\operatorname{EM}_i$. The POS and NER tags are extracted by spaCy and then mapped into a fixed-length vector. The exact matching feature has two bits, indicating whether a word and its lemma can be found in the slot-value pair representation, respectively. This is the first step to establish a semantic relationship between user utterance and slots. To summarize, we represent the user utterance as $X^U=\lbrace {u}_1, {u}_2, ..., {u}_m\rbrace \in \mathbb {R}^{m\times d_u}, {u}_i=[e_i; c_i; \operatorname{POS}_i; \operatorname{NER}_i; \operatorname{EM}_i]$.
For each slot-value pair $(s, v)$, either in the system actions or in the ontology, we get its text representation by concatenating the contents of the slot and the value. We use GloVe to embed each word in the text. Therefore, each slot-value pair in the system actions is represented as $X^A\in \mathbb {R}^{a\times d}$ and each slot-value pair in the ontology is represented as $X^O\in \mathbb {R}^{o\times d}$, where $a$ and $o$ are the numbers of words in the corresponding texts.
Slot-Independent Model ::: Contextual Representation
To incorporate contextual information, we employ a bi-directional RNN layer on the input representation. For instance, for user utterance,
We apply variational dropout BIBREF10 for RNN inputs, i.e. the dropout mask is shared over different timesteps.
After RNN, we use linear self-attention to get a single summarization vector for user utterance, using weight vector $w\in \mathbb {R}^{d_{rnn}}$ and bias scalar $b$:
For each slot-value pair in the system actions and ontology, we conduct RNN and linear self-attention summarization in a similar way. As the slot-value pair input is not a sentence, we only keep the summarization vector $s^A \in \mathbb {R}^{d_{rnn}}$ and $s^O \in \mathbb {R}^{d_{rnn}}$ for each slot-value pair in system actions and ontology respectively.
Slot-Independent Model ::: Inter-Attention
To determine whether the current user utterance refers to a slot-value pair $(s, v)$ in the ontology, the model employs inter-attention between user utterance, system action and ontology. Similar to the framework in BIBREF5, we employ two sources of interactions.
The first is the semantic similarity between the user utterance, represented by embedding $R^U$ and each slot-value pair from ontology $(s, v)$, represented by embedding $s^O$. We linearly combine vectors in $R^U$ via the normalized inner product with $s^O$, which is then employed to compute the similarity score $y_1$:
The second source involves the system actions. The reason is that if the system requested certain information in the previous round, it is very likely that the user will give an answer in this round, and the answer may refer back to the question, e.g. “yes” or “no”. Thus, we first attend to the system actions from the user utterance and then combine the result with the ontology to get a similarity score. Suppose there are $L$ slot-value pairs in the system actions from the previous round, represented by $s_1^A, ..., s_L^A$:
The final similarity score between the user utterance and a slot-value pair $(s, v)$ from the ontology is a linear combination of $y_1$ and $y_2$, normalized using a sigmoid function.
where $\beta $ is a learned coefficient. The loss function is the sum of binary cross entropy over all slot-value pairs in the ontology:
where $y_{(s, v)}\in \lbrace 0, 1\rbrace $ is the ground truth. We illustrate the model structure of SIM in fig:model.
Experiment ::: Dataset
We evaluated our model on Wizard of Oz (WoZ) BIBREF8 and the second Dialogue System Technology Challenge (DSTC2) BIBREF11. Both tasks are for restaurant reservation and have slot-value pairs of both goal and request types. WoZ has 4 kinds of slots (area, food, price range, request) and 94 values in total. DSTC2 has an additional slot, name, and 220 values in total. WoZ has 800 dialogues in the training and development set and 400 dialogues in the test set, while the DSTC2 dataset consists of 2118 dialogues in the training and development set, and 1117 dialogues in the test set.
Experiment ::: Metrics
We use accuracy on the joint goal and turn request as the evaluation metrics. Both are sets of slot-value pairs, so the predicted set must exactly match the answer to be judged as correct. For joint goals, if a later turn generates a slot-value pair where the slot has been specified in previous rounds, we replace the value with the latest content.
Experiment ::: Training Details
We fix GloVe BIBREF12 as the word embedding matrix. The models are trained using the ADAM optimizer BIBREF13 with an initial learning rate of 1e-3. The dimensions of the POS and NER embeddings are 12 and 8, respectively. In the character-CNN, each character is embedded into a vector of length 50. The CNN window size is 3 and the hidden size is 50. We apply a dropout rate of 0.1 to the input of each module. The hidden size of the RNN is 125.
During training, we pick the model with the highest joint goal score on the development set and report its result on the test set.
For DSTC2, we adhere to the standard procedure of using the N-best list from the noisy ASR results for testing. Since the ASR results are very noisy, we experimented with several strategies and ended up using only the top result from the N-best list. Training and validation on DSTC2 are based on noise-free user utterances. The WoZ task does not have ASR results available, so we directly use noise-free user utterances.
Experiment ::: Baseline models and result
We compare our model SIM with a number of baseline systems: delexicalization model BIBREF8, BIBREF1, the neural belief tracker model (NBT) BIBREF0, global-locally self-attentive model GLAD BIBREF5, large-scale belief tracking model LSBT BIBREF7 and scalable multi-domain dialogue state tracking model SMDST BIBREF6.
Table TABREF17 shows that, on WoZ dataset, SIM achieves a new state-of-the-art joint goal accuracy of 89.5%, a significant improvement of 1.4% over GLAD, and turn request accuracy of 97.3%, 0.2% above GLAD. On DSTC2 dataset, where noisy ASR results are used as user utterance during test, SIM obtains comparable results with GLAD. Furthermore, the better representation in SIM makes it significantly outperform previous slot-independent models LSBT and SMDST.
Furthermore, as SIM has no slot-specific neural network structures, its model size is much smaller than that of previous models. Table TABREF20 shows that the SIM model has the same number of parameters on the WoZ and DSTC2 datasets, which is only 23% and 19% of the number of parameters in the GLAD model, respectively.
Ablation Study. We conduct an ablation study of SIM on the WoZ dataset. As shown in Table TABREF21, the additional utterance word features (POS, NER and exact-match embeddings) boost the performance by 2.4% in joint goal accuracy. This indicates that syntactic information and text matching are very useful for the dialogue state tracking task. The character-CNN captures sub-word-level information and is effective in handling spelling errors, contributing a further 1.2% in joint goal accuracy. Variational dropout is also beneficial, contributing 0.9% to the joint goal accuracy, which shows the importance of sharing the dropout mask across timesteps.
Conclusion
In this paper, we propose a slot-independent neural model, SIM, to tackle the dialogue state tracking problem. Via incorporating better feature representations, SIM can effectively reduce the model complexity while still achieving superior or comparable results on various datasets, compared with previous models.
For future work, we plan to design general slot-free dialogue state tracking models which can be adapted to different domains during inference time, given domain-specific ontology information. This will make the model more agile in real applications.
Acknowledgement
We thank the anonymous reviewers for the insightful comments. We thank William Hinthorn for proof-reading our paper. | By the number of parameters. |
df25dd9004a3b367202d7731ee912a8052a35780 | df25dd9004a3b367202d7731ee912a8052a35780_0 | Q: Does the model use pretrained Transformer encoders?
Text: Introduction
In the past few years, models employing self-attention BIBREF0 have achieved state-of-art results for many tasks, such as machine translation, language modeling, and language understanding BIBREF0, BIBREF1. In particular, large Transformer-based language models have brought gains in speech recognition tasks when used for second-pass re-scoring and in first-pass shallow fusion BIBREF2. As typically used in sequence-to-sequence transduction tasks BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, Transformer-based models attend over encoder features using decoder features, implying that the decoding has to be done in a label-synchronous way, thereby posing a challenge for streaming speech recognition applications. An additional challenge for streaming speech recognition with these models is that the number of computations for self-attention increases quadratically with input sequence size. For streaming to be computationally practical, it is highly desirable that the time it takes to process each frame remains constant relative to the length of the input. Transformer-based alternatives to RNNs have recently been explored for use in ASR BIBREF8, BIBREF9, BIBREF10, BIBREF11.
For streaming speech recognition models, recurrent neural networks (RNNs) have been the de facto choice since they can model the temporal dependencies in the audio features effectively BIBREF12 while maintaining a constant computational requirement for each frame. Streamable end-to-end modeling architectures such as the Recurrent Neural Network Transducer (RNN-T) BIBREF13, BIBREF14, BIBREF15, Recurrent Neural Aligner (RNA) BIBREF16, and Neural Transducer BIBREF17 utilize an encoder-decoder based framework where both encoder and decoder are layers of RNNs that generate features from audio and labels respectively. In particular, the RNN-T and RNA models are trained to learn alignments between the acoustic encoder features and the label encoder features, and so lend themselves naturally to frame-synchronous decoding.
Several optimization techniques have been evaluated to enable running RNN-T on device BIBREF15. In addition, extensive architecture and modeling unit exploration has been done for RNN-T BIBREF14. In this paper, we explore the possibility of replacing RNN-based audio and label encoders in the conventional RNN-T architecture with Transformer encoders. With a view to preserving model streamability, we show that Transformer-based models can be trained with self-attention on a fixed number of past input frames and previous labels. This results in a degradation of performance (compared to attending to all past input frames and labels), but then the model satisfies a constant computational requirement for processing each frame, making it suitable for streaming. Given the simple architecture and parallelizable nature of self-attention computations, we observe large improvements in training time and training resource utilization compared to RNN-T models that employ RNNs.
The RNN-T architecture (as depicted in Figure FIGREF1) is a neural network architecture that can be trained end-to-end with the RNN-T loss to map input sequences (e.g. audio feature vectors) to target sequences (e.g. phonemes, graphemes). Given an input sequence of real-valued vectors of length $T$, ${\mathbf {x}}= (x_1, x_2, ..., x_T)$, the RNN-T model tries to predict the target sequence of labels ${\mathbf {y}}= (y_1, y_2, ..., y_U)$ of length $U$.
Unlike a typical attention-based sequence-to-sequence model, which attends over the entire input for every prediction in the output sequence, the RNN-T model gives a probability distribution over the label space at every time step, and the output label space includes an additional null label to indicate the lack of output for that time step — similar to the Connectionist Temporal Classification (CTC) framework BIBREF18. But unlike CTC, this label distribution is also conditioned on the previous label history.
The RNN-T model defines a conditional distribution $P({\mathbf {z}}|{\mathbf {x}})$ over all the possible alignments, where
is a sequence of $(z_i, t_i)$ pairs of length $\overline{U}$, and $(z_i, t_i)$ represents an alignment between output label $z_i$ and the encoded feature at time $t_i$. The labels $z_i$ can optionally be blank labels (null predictions). Removing the blank labels gives the actual output label sequence ${\mathbf {y}}$, of length $U$.
We can marginalize $P({\mathbf {z}}|{\mathbf {x}})$ over all possible alignments ${\mathbf {z}}$ to obtain the probability of the target label sequence ${\mathbf {y}}$ given the input sequence ${\mathbf {x}}$,
where ${\cal Z}({\mathbf {y}},T)$ is the set of valid alignments of length $T$ for the label sequence.
Transformer Transducer ::: RNN-T Architecture and Loss
In this paper, we present all experimental results with the RNN-T loss BIBREF13 for consistency, which performs similarly to the monotonic RNN-T loss BIBREF19 in our experiments.
The probability of an alignment $P({\mathbf {z}}|{\mathbf {x}})$ can be factorized as
where $\mathrm {Labels}(z_{1:(i-1)})$ is the sequence of non-blank labels in $z_{1:(i-1)}$. The RNN-T architecture parameterizes $P({\mathbf {z}}|{\mathbf {x}})$ with an audio encoder, a label encoder, and a joint network. The encoders are two neural networks that encode the input sequence and the target output sequence, respectively. Previous work BIBREF13 has employed Long Short-term Memory models (LSTMs) as the encoders, giving the RNN-T its name. However, this framework is not restricted to RNNs. In this paper, we are particularly interested in replacing the LSTM encoders with Transformers BIBREF0, BIBREF1. In the following, we refer to this new architecture as the Transformer Transducer (T-T). As in the original RNN-T model, the joint network combines the audio encoder output at $t_i$ and the label encoder output given the previous non-blank output label sequence $\mathrm {Labels}(z_{1:(i-1)})$ using a feed-forward neural network with a softmax layer, inducing a distribution over the labels. The model defines $P(z_i|{\mathbf {x}}, t_i, \mathrm {Labels}(z_{1:(i-1)}))$ as follows:
where each $\mathrm {Linear}$ function is a different single-layer feed-forward neural network, $\mathrm {AudioEncoder}_{t_{i}}({\mathbf {x}})$ is the audio encoder output at time $t_i$, and $\mathrm {LabelEncoder}(\mathrm {Labels}(z_{1:(i-1)}))$ is the label encoder output given the previous non-blank label sequence.
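A hedged sketch of this joint network is given below: the two encoder outputs are projected by separate linear layers, combined, and fed to a softmax over the labels plus blank. The additive combination and the tanh nonlinearity are assumptions made for illustration; the text only specifies separate Linear layers feeding a feed-forward network with a softmax.

```python
import numpy as np

def _softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def joint_network(audio_t, label_u, W_a, W_l, W_out, b_out):
    """P(z_i | x, t_i, Labels(z_{1:i-1})) over the vocabulary including blank."""
    h = np.tanh(W_a @ audio_t + W_l @ label_u)   # combine the two encoder outputs
    return _softmax(W_out @ h + b_out)
```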
To compute Eq. (DISPLAY_FORM3) by summing all valid alignments naively is computationally intractable. Therefore, we define the forward variable $\alpha (t,u)$ as the sum of probabilities for all paths ending at time-frame $t$ and label position $u$. We then use the forward algorithm BIBREF13, BIBREF20 to compute the last alpha variable $\alpha ({T, U})$, which corresponds to $P({\mathbf {y}}|{\mathbf {x}})$ defined in Eq. (DISPLAY_FORM3). Efficient computation of $P({\mathbf {y}}|{\mathbf {x}})$ using the forward algorithm is enabled by the fact that the local probability estimate (Eq. (DISPLAY_FORM7)) at any given label position and any given time-frame is not dependent on the alignment BIBREF13. The training loss for the model is then the sum of the negative log probabilities defined in Eq. (DISPLAY_FORM3) over all the training examples,
where $T_i$ and $U_i$ are the lengths of the input sequence and the output target label sequence of the $i$-th training example, respectively.
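The forward recursion can be sketched as the dynamic program below; logp_blank[t, u] and logp_label[t, u] are the local log-probabilities of emitting blank or the next target label at frame t with u labels already emitted, and the indexing conventions are an assumption for illustration.

```python
import numpy as np

def rnnt_log_likelihood(logp_blank, logp_label):
    """logp_blank: (T, U+1) array; logp_label: (T, U) array; returns log P(y | x)."""
    T, U_plus_1 = logp_blank.shape
    U = U_plus_1 - 1
    alpha = np.full((T, U + 1), -np.inf)
    alpha[0, 0] = 0.0
    for t in range(T):
        for u in range(U + 1):
            if t == 0 and u == 0:
                continue
            terms = []
            if t > 0:   # consume a frame via blank, label count stays at u
                terms.append(alpha[t - 1, u] + logp_blank[t - 1, u])
            if u > 0:   # emit label u at the same frame
                terms.append(alpha[t, u - 1] + logp_label[t, u - 1])
            alpha[t, u] = np.logaddexp.reduce(terms)
    return alpha[T - 1, U] + logp_blank[T - 1, U]   # final blank terminates the path
```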
Transformer Transducer ::: Transformer
The Transformer BIBREF0 is composed of a stack of multiple identical layers. Each layer has two sub-layers, a multi-headed attention layer and a feed-forward layer. Our multi-headed attention layer first applies $\mathrm {LayerNorm}$, then projects the input to $\mathrm {Query}$, $\mathrm {Key}$, and $\mathrm {Value}$ for all the heads BIBREF1. The attention mechanism is applied separately for different attention heads. The attention mechanism provides a flexible way to control the context that the model uses. For example, we can mask the attention score to the left of the current frame to produce output conditioned only on the previous state history. The weight-averaged $\mathrm {Value}$s for all heads are concatenated and passed to a dense layer. We then employ a residual connection on the normalized input and the output of the dense layer to form the final output of the multi-headed attention sub-layer (i.e. $\mathrm {LayerNorm}(x) + \mathrm {AttentionLayer}(\mathrm {LayerNorm}(x))$, where $x$ is the input to the multi-headed attention sub-layer). We also apply dropout on the output of the dense layer to prevent overfitting. Our feed-forward sub-layer applies $\mathrm {LayerNorm}$ on the input first, then applies two dense layers. We use $\mathrm {ReLu}$ as the activation for the first dense layer. Again, dropout to both dense layers for regularization, and a residual connection of normalized input and the output of the second dense layer (i.e. $\mathrm {LayerNorm}(x) + \mathrm {FeedForwardLayer}(\mathrm {LayerNorm}(x))$, where $x$ is the input to the feed-forward sub-layer) are applied. See Figure FIGREF10 for more details.
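A minimal single-head sketch of these two pre-norm sub-layers is shown below; multi-head projections, dropout and the relative positional encoding are omitted for brevity, so this is an illustration of the residual structure rather than the full layer.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_sublayer(x, Wq, Wk, Wv, Wo, mask=None):
    """y = x + Attention(LayerNorm(x)); mask limits which positions may attend to which."""
    h = layer_norm(x)
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if mask is not None:
        scores = np.where(mask, scores, -1e9)
    return x + softmax(scores) @ v @ Wo

def feedforward_sublayer(x, W1, b1, W2, b2):
    """z = x + FeedForward(LayerNorm(x)) with a ReLU after the first dense layer."""
    h = np.maximum(0.0, layer_norm(x) @ W1 + b1)
    return x + h @ W2 + b2
```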
Note that $\mathrm {LabelEncoder}$ states do not attend to $\mathrm {AudioEncoder}$ states, in contrast to the architecture in BIBREF0. As discussed in the Introduction, doing so poses a challenge for streaming applications. Instead, we implement $\mathrm {AudioEncoder}$ and $\mathrm {LabelEncoder}$ in Eq. (DISPLAY_FORM6), which are LSTMs in conventional RNN-T architectures BIBREF13, BIBREF15, BIBREF14, using the Transformers described above. In tandem with the RNN-T architecture described in the previous section, the attention mechanism here only operates within $\mathrm {AudioEncoder}$ or $\mathrm {LabelEncoder}$, contrary to the standard practice for Transformer-based systems. In addition, so as to model sequential order, we use the relative positional encoding proposed in BIBREF1. With relative positional encoding, the encoding only affects the attention score instead of the $\mathrm {Value}$s being summed. This allows us to reuse previously computed states rather than recomputing all previous states and getting the last state in an overlapping inference manner when the number of frames or labels that $\mathrm {AudioEncoder}$ or $\mathrm {LabelEncoder}$ processed is larger than the maximum length used during training (which would again be intractable for streaming applications). More specifically, the complexity of running one-step inference to get activations at time $t$ is $\mathrm {O}(t)$, which is the computation cost of attending to $t$ states and of the feed-forward process for the current step when using relative positional encoding. On the other hand, with absolute positional encoding, the encoding added to the input should be shifted by one when $t$ is larger than the maximum length used during training, which precludes re-use of the states, and makes the complexity $\mathrm {O}(t^2)$. However, even if we can reduce the complexity from $\mathrm {O}(t^2)$ to $\mathrm {O}(t)$ with relative positional encoding, there is still the issue of latency growing over time. One intuitive solution is to limit the model to attend to a moving window $W$ of states, making the one-step inference complexity constant. Note that training or inference with attention to limited context is not possible for Transformer-based models that have attention from $\mathrm {Decoder}$ to $\mathrm {Encoder}$, as such a setup is itself trying to learn the alignment. In contrast, the separation of $\mathrm {AudioEncoder}$ and $\mathrm {LabelEncoder}$, and the fact that the alignment is handled by a separate forward-backward process, within the RNN-T architecture, makes it possible to train with attention over an explicitly specified, limited context.
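The limited-context masking described above can be expressed as a boolean matrix and passed to an attention layer such as the sketch in the previous block; with right = 0 the encoder is strictly causal and streamable, while right > 0 buys accuracy at the cost of look-ahead latency.

```python
import numpy as np

def limited_context_mask(T, left, right):
    """mask[i, j] is True iff position i may attend to position j, with j in [i-left, i+right]."""
    idx = np.arange(T)
    rel = idx[None, :] - idx[:, None]     # j - i
    return (rel >= -left) & (rel <= right)
```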
Experiments and Results ::: Data
We evaluated the proposed model using the publicly available LibriSpeech ASR corpus BIBREF23. The LibriSpeech dataset consists of 970 hours of audio data with corresponding text transcripts (around 10M word tokens) and an additional 800M word token text only dataset. The paired audio/transcript dataset was used to train T-T models and an LSTM-based baseline. The full 810M word tokens text dataset was used for standalone language model (LM) training. We extracted 128-channel logmel energy values from a 32 ms window, stacked every 4 frames, and sub-sampled every 3 frames, to produce a 512-dimensional acoustic feature vector with a stride of 30 ms. Feature augmentation BIBREF22 was applied during model training to prevent overfitting and to improve generalization, with only frequency masking ($\mathrm {F}=50$, $\mathrm {mF}=2$) and time masking ($\mathrm {T}=30$, $\mathrm {mT}=10$).
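One way to realize the frame stacking and subsampling described above is sketched below; the exact framing conventions (overlap, padding, and the assumed 10 ms underlying frame shift) are assumptions for illustration.

```python
import numpy as np

def stack_and_subsample(logmel, stack=4, subsample=3):
    """Stack `stack` consecutive frames (e.g. 4 x 128 = 512 dims), keep every `subsample`-th."""
    T, _ = logmel.shape
    usable = T - stack + 1
    stacked = np.stack([logmel[t:t + stack].reshape(-1) for t in range(usable)])
    return stacked[::subsample]           # 30 ms stride for a 10 ms base frame shift
```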
Experiments and Results ::: Transformer Transducer
Our Transformer Transducer model architecture has 18 audio and 2 label encoder layers. Every layer is identical for both the audio and label encoders. The details of the computations in a layer are shown in Figure FIGREF10 and Table TABREF11. All the models for the experiments presented in this paper are trained on an 8x8 TPU with a per-core batch size of 16 (effective batch size of 2048). The learning rate schedule is ramped up linearly from 0 to $2.5\mathrm {e}{-4}$ during the first 4K steps, held constant until 30K steps, and then decays exponentially to $2.5\mathrm {e}{-6}$ by 200K steps. During training we also added Gaussian noise ($\mu =0,\sigma =0.01$) to the model weights BIBREF24 starting at 10K steps. We train this model to output grapheme units in all our experiments. We found that the Transformer Transducer models trained much faster ($\approx 1$ day) compared to an LSTM-based RNN-T model ($\approx 3.5$ days), with a similar number of parameters.
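The learning-rate schedule just described can be written as a single function of the step count; the exact parameterization of the exponential decay is an assumption, chosen so that the rate reaches 2.5e-6 at 200K steps.

```python
def learning_rate(step, peak=2.5e-4, floor=2.5e-6,
                  warmup=4_000, hold=30_000, total=200_000):
    if step <= warmup:                      # linear ramp-up from 0
        return peak * step / warmup
    if step <= hold:                        # constant phase
        return peak
    frac = min((step - hold) / (total - hold), 1.0)
    return peak * (floor / peak) ** frac    # exponential decay down to `floor`
```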
Experiments and Results ::: Results
We first compared the performance of Transformer Transducer (T-T) models with full attention on audio to an RNN-T model using a bidirectional LSTM audio encoder. As shown in Table TABREF12, the T-T model significantly outperforms the LSTM-based RNN-T baseline. We also observed that T-T models can achieve competitive recognition accuracy with existing wordpiece-based end-to-end models with similar model size. To compare with systems using shallow fusion BIBREF18, BIBREF25 with separately trained LMs, we also trained a Transformer-based LM with the same architecture as the label encoder used in T-T, using the full 810M word token dataset. This Transformer LM (6 layers; 57M parameters) had a perplexity of $2.49$ on the dev-clean set; the use of dropout, and of larger models, did not improve either perplexity or WER. Shallow fusion was then performed using that LM and both the trained T-T system and the trained bidirectional LSTM-based RNN-T baseline, with scaling factors on the LM output and on the non-blank symbol sequence length tuned on the LibriSpeech dev sets. The results are shown in Table TABREF12 in the “With LM” column. The shallow fusion result for the T-T system is competitive with corresponding results for top-performing existing systems.
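During beam search, the shallow-fusion score can be combined as sketched below, with lambda_lm and lambda_len playing the role of the tuned scaling factors on the LM output and on the non-blank symbol count; the exact combination form is an assumption.

```python
def fused_score(logp_am, logp_lm, num_nonblank, lambda_lm, lambda_len):
    """Combine acoustic and LM log-scores with a length reward on non-blank symbols."""
    return logp_am + lambda_lm * logp_lm + lambda_len * num_nonblank
```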
Next, we ran training and decoding experiments using T-T models with limited attention windows over audio and text, with a view to building online streaming speech recognition systems with low latency. Similarly to the use of unidirectional RNN audio encoders in online models, where activations for time $t$ are computed with conditioning only on audio frames before $t$, here we constrain the $\mathrm {AudioEncoder}$ to attend to the left of the current frame by masking the attention scores to the right of the current frame. In order to make one-step inference for the $\mathrm {AudioEncoder}$ tractable (i.e. to have constant time complexity), we further limit the attention for the $\mathrm {AudioEncoder}$ to a fixed window of previous states by again masking the attention score. Due to limited computation resources, we used the same mask for different Transformer layers, but the use of different contexts (masks) for different layers is worth exploring. The results are shown in Table TABREF15, where N in the first two columns indicates the number of states that the model uses to the left or right of the current frame. As we can see, using more audio history gives a lower WER, but to keep the model streamable with reasonable inference time complexity, we experimented with a left context of up to 10 frames per layer.
Similarly, we explored the use of limited right context to allow the model to see some future audio frames, in the hope of bridging the gap between a streamable T-T model (left = 10, right = 0) and a full-attention T-T model (left = 512, right = 512). Since we apply the same mask for every layer, the latency introduced by using right context is aggregated over all the layers. For example, in Figure FIGREF17, to produce $y_7$ from a 3-layer Transformer with one frame of right context, the model actually needs to wait for $x_{10}$ to arrive, which amounts to 90 ms of latency in our case. To explore the impact of right context on modeling, we ran comparisons with a fixed left context of 512 frames per layer against the full-attention T-T model. As we can see from Table TABREF18, with a right context of 6 frames per layer (around 3.2 secs of latency), the performance is around 16% worse than that of the full-attention model. Compared with the streamable T-T model, a right context of 2 frames per layer (around 1 sec of latency) brings around a 30% improvement.
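The worked example generalizes: with the same mask in every layer, the look-ahead grows linearly in the number of layers and the per-layer right context.

```python
def lookahead_latency_ms(num_layers, right_context_frames, frame_stride_ms=30):
    return num_layers * right_context_frames * frame_stride_ms

# lookahead_latency_ms(3, 1) == 90, matching the 3-layer example above;
# for the 18-layer audio encoder, lookahead_latency_ms(18, 6) == 3240 (~3.2 s)
# and lookahead_latency_ms(18, 2) == 1080 (~1 s), matching the settings discussed.
```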
In addition, we evaluated how the left context used in the T-T $\mathrm {LabelEncoder}$ affects performance. In Table TABREF19, we show that constraining each layer to use only three previous label states yields similar accuracy to the model using 20 states per layer. This shows that a very limited left context for the label encoder is good enough for the T-T model. We see a similar trend when limiting the left label states while using a full-attention T-T audio encoder.
Finally, Table TABREF20 reports the results when using a limited left context of 10 frames, which reduces the time complexity for one-step inference to a constant, with look-ahead to future frames, as a way of bridging the gap between the performance of left-only attention and full attention models.
Conclusions
In this paper, we presented the Transformer Transducer model, embedding Transformer based self-attention for audio and label encoding within the RNN-T architecture, resulting in an end-to-end model that can be optimized using a loss function that efficiently marginalizes over all possible alignments and that is well-suited to time-synchronous decoding. This model achieves a new state-of-the-art accuracy on the LibriSpeech benchmark, and can easily be used for streaming speech recognition by limiting the audio and label context used in self-attention. Transformer Transducer models train significantly faster than LSTM based RNN-T models, and they allow us to trade recognition accuracy and latency in a flexible manner. | No |
5328cc2588b2bf7b91f4e0f342e8cbfc6dc8ac00 | 5328cc2588b2bf7b91f4e0f342e8cbfc6dc8ac00_0 | Q: What was previous state of the art model?
Text: Introduction
In the past few years, models employing self-attention BIBREF0 have achieved state-of-art results for many tasks, such as machine translation, language modeling, and language understanding BIBREF0, BIBREF1. In particular, large Transformer-based language models have brought gains in speech recognition tasks when used for second-pass re-scoring and in first-pass shallow fusion BIBREF2. As typically used in sequence-to-sequence transduction tasks BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, Transformer-based models attend over encoder features using decoder features, implying that the decoding has to be done in a label-synchronous way, thereby posing a challenge for streaming speech recognition applications. An additional challenge for streaming speech recognition with these models is that the number of computations for self-attention increases quadratically with input sequence size. For streaming to be computationally practical, it is highly desirable that the time it takes to process each frame remains constant relative to the length of the input. Transformer-based alternatives to RNNs have recently been explored for use in ASR BIBREF8, BIBREF9, BIBREF10, BIBREF11.
For streaming speech recognition models, recurrent neural networks (RNNs) have been the de facto choice since they can model the temporal dependencies in the audio features effectively BIBREF12 while maintaining a constant computational requirement for each frame. Streamable end-to-end modeling architectures such as the Recurrent Neural Network Transducer (RNN-T) BIBREF13, BIBREF14, BIBREF15, Recurrent Neural Aligner (RNA) BIBREF16, and Neural Transducer BIBREF17 utilize an encoder-decoder based framework where both encoder and decoder are layers of RNNs that generate features from audio and labels respectively. In particular, the RNN-T and RNA models are trained to learn alignments between the acoustic encoder features and the label encoder features, and so lend themselves naturally to frame-synchronous decoding.
Several optimization techniques have been evaluated to enable running RNN-T on device BIBREF15. In addition, extensive architecture and modeling unit exploration has been done for RNN-T BIBREF14. In this paper, we explore the possibility of replacing RNN-based audio and label encoders in the conventional RNN-T architecture with Transformer encoders. With a view to preserving model streamability, we show that Transformer-based models can be trained with self-attention on a fixed number of past input frames and previous labels. This results in a degradation of performance (compared to attending to all past input frames and labels), but then the model satisfies a constant computational requirement for processing each frame, making it suitable for streaming. Given the simple architecture and parallelizable nature of self-attention computations, we observe large improvements in training time and training resource utilization compared to RNN-T models that employ RNNs.
The RNN-T architecture (as depicted in Figure FIGREF1) is a neural network architecture that can be trained end-to-end with the RNN-T loss to map input sequences (e.g. audio feature vectors) to target sequences (e.g. phonemes, graphemes). Given an input sequence of real-valued vectors of length $T$, ${\mathbf {x}}= (x_1, x_2, ..., x_T)$, the RNN-T model tries to predict the target sequence of labels ${\mathbf {y}}= (y_1, y_2, ..., y_U)$ of length $U$.
Unlike a typical attention-based sequence-to-sequence model, which attends over the entire input for every prediction in the output sequence, the RNN-T model gives a probability distribution over the label space at every time step, and the output label space includes an additional null label to indicate the lack of output for that time step — similar to the Connectionist Temporal Classification (CTC) framework BIBREF18. But unlike CTC, this label distribution is also conditioned on the previous label history.
The RNN-T model defines a conditional distribution $P({\mathbf {z}}|{\mathbf {x}})$ over all the possible alignments, where
is a sequence of $(z_i, t_i)$ pairs of length $\overline{U}$, and $(z_i, t_i)$ represents an alignment between output label $z_i$ and the encoded feature at time $t_i$. The labels $z_i$ can optionally be blank labels (null predictions). Removing the blank labels gives the actual output label sequence ${\mathbf {y}}$, of length $U$.
We can marginalize $P({\mathbf {z}}|{\mathbf {x}})$ over all possible alignments ${\mathbf {z}}$ to obtain the probability of the target label sequence ${\mathbf {y}}$ given the input sequence ${\mathbf {x}}$,
where ${\cal Z}({\mathbf {y}},T)$ is the set of valid alignments of length $T$ for the label sequence.
Transformer Transducer ::: RNN-T Architecture and Loss
In this paper, we present all experimental results with the RNN-T loss BIBREF13 for consistency, which performs similarly to the monotonic RNN-T loss BIBREF19 in our experiments.
The probability of an alignment $P({\mathbf {z}}|{\mathbf {x}})$ can be factorized as
where $\mathrm {Labels}(z_{1:(i-1)})$ is the sequence of non-blank labels in $z_{1:(i-1)}$. The RNN-T architecture parameterizes $P({\mathbf {z}}|{\mathbf {x}})$ with an audio encoder, a label encoder, and a joint network. The encoders are two neural networks that encode the input sequence and the target output sequence, respectively. Previous work BIBREF13 has employed Long Short-term Memory models (LSTMs) as the encoders, giving the RNN-T its name. However, this framework is not restricted to RNNs. In this paper, we are particularly interested in replacing the LSTM encoders with Transformers BIBREF0, BIBREF1. In the following, we refer to this new architecture as the Transformer Transducer (T-T). As in the original RNN-T model, the joint network combines the audio encoder output at $t_i$ and the label encoder output given the previous non-blank output label sequence $\mathrm {Labels}(z_{1:(i-1)})$ using a feed-forward neural network with a softmax layer, inducing a distribution over the labels. The model defines $P(z_i|{\mathbf {x}}, t_i, \mathrm {Labels}(z_{1:(i-1)}))$ as follows:
where each $\mathrm {Linear}$ function is a different single-layer feed-forward neural network, $\mathrm {AudioEncoder}_{t_{i}}({\mathbf {x}})$ is the audio encoder output at time $t_i$, and $\mathrm {LabelEncoder}(\mathrm {Labels}(z_{1:(i-1)}))$ is the label encoder output given the previous non-blank label sequence.
To compute Eq. (DISPLAY_FORM3) by summing all valid alignments naively is computationally intractable. Therefore, we define the forward variable $\alpha (t,u)$ as the sum of probabilities for all paths ending at time-frame $t$ and label position $u$. We then use the forward algorithm BIBREF13, BIBREF20 to compute the last alpha variable $\alpha ({T, U})$, which corresponds to $P({\mathbf {y}}|{\mathbf {x}})$ defined in Eq. (DISPLAY_FORM3). Efficient computation of $P({\mathbf {y}}|{\mathbf {x}})$ using the forward algorithm is enabled by the fact that the local probability estimate (Eq. (DISPLAY_FORM7)) at any given label position and any given time-frame is not dependent on the alignment BIBREF13. The training loss for the model is then the sum of the negative log probabilities defined in Eq. (DISPLAY_FORM3) over all the training examples,
where $T_i$ and $U_i$ are the lengths of the input sequence and the output target label sequence of the $i$-th training example, respectively.
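For concreteness, a minimal sketch of the forward recursion is given below; it assumes precomputed log-probabilities log_p_blank[t, u] and log_p_label[t, u] (the log-probability of emitting blank, or the next correct target label, at frame t with u labels already emitted). This is the standard RNN-T dynamic program, written by us for illustration rather than taken from the paper's implementation.

import numpy as np

def rnnt_neg_log_likelihood(log_p_blank, log_p_label):
    # log_p_blank: shape [T, U + 1]; log_p_label needs at least U columns
    T, U1 = log_p_blank.shape
    alpha = np.full((T, U1), -np.inf)
    alpha[0, 0] = 0.0
    for t in range(T):
        for u in range(U1):
            if t > 0:   # reach (t, u) by emitting blank at (t - 1, u)
                alpha[t, u] = np.logaddexp(alpha[t, u], alpha[t - 1, u] + log_p_blank[t - 1, u])
            if u > 0:   # reach (t, u) by emitting the next label at (t, u - 1)
                alpha[t, u] = np.logaddexp(alpha[t, u], alpha[t, u - 1] + log_p_label[t, u - 1])
    # terminate by emitting a final blank from the last cell
    return -(alpha[T - 1, U1 - 1] + log_p_blank[T - 1, U1 - 1])

Because the local probability estimates do not depend on the alignment, each cell is filled once, giving O(TU) time instead of a sum over exponentially many alignments.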
Transformer Transducer ::: Transformer
The Transformer BIBREF0 is composed of a stack of multiple identical layers. Each layer has two sub-layers, a multi-headed attention layer and a feed-forward layer. Our multi-headed attention layer first applies $\mathrm {LayerNorm}$, then projects the input to $\mathrm {Query}$, $\mathrm {Key}$, and $\mathrm {Value}$ for all the heads BIBREF1. The attention mechanism is applied separately for different attention heads. The attention mechanism provides a flexible way to control the context that the model uses. For example, we can mask the attention score to the left of the current frame to produce output conditioned only on the previous state history. The weight-averaged $\mathrm {Value}$s for all heads are concatenated and passed to a dense layer. We then employ a residual connection on the normalized input and the output of the dense layer to form the final output of the multi-headed attention sub-layer (i.e. $\mathrm {LayerNorm}(x) + \mathrm {AttentionLayer}(\mathrm {LayerNorm}(x))$, where $x$ is the input to the multi-headed attention sub-layer). We also apply dropout on the output of the dense layer to prevent overfitting. Our feed-forward sub-layer applies $\mathrm {LayerNorm}$ on the input first, then applies two dense layers. We use $\mathrm {ReLu}$ as the activation for the first dense layer. Again, dropout to both dense layers for regularization, and a residual connection of normalized input and the output of the second dense layer (i.e. $\mathrm {LayerNorm}(x) + \mathrm {FeedForwardLayer}(\mathrm {LayerNorm}(x))$, where $x$ is the input to the feed-forward sub-layer) are applied. See Figure FIGREF10 for more details.
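A compact, runnable sketch of one such layer is shown below. It is ours, not the paper's code: it uses a single attention head, omits dropout, and passes parameters as a dictionary of weight matrices (keys 'wq', 'wk', 'wv', 'wo', 'w1', 'w2' are illustrative), but it follows the pre-norm residual structure described above.

import numpy as np

def layer_norm(x, eps=1e-6):
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def attention(q, k, v, mask):
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)              # masked positions get (near) zero weight
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def transformer_layer(x, p, mask):
    # attention sub-layer: LayerNorm(x) + AttentionLayer(LayerNorm(x))
    n = layer_norm(x)
    a = attention(n @ p['wq'], n @ p['wk'], n @ p['wv'], mask) @ p['wo']
    x = n + a
    # feed-forward sub-layer: LayerNorm(x) + FeedForwardLayer(LayerNorm(x))
    n = layer_norm(x)
    f = np.maximum(0.0, n @ p['w1']) @ p['w2']          # ReLU after the first dense layer
    return n + f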
Note that $\mathrm {LabelEncoder}$ states do not attend to $\mathrm {AudioEncoder}$ states, in contrast to the architecture in BIBREF0. As discussed in the Introduction, doing so poses a challenge for streaming applications. Instead, we implement $\mathrm {AudioEncoder}$ and $\mathrm {LabelEncoder}$ in Eq. (DISPLAY_FORM6), which are LSTMs in conventional RNN-T architectures BIBREF13, BIBREF15, BIBREF14, using the Transformers described above. In tandem with the RNN-T architecture described in the previous section, the attention mechanism here only operates within $\mathrm {AudioEncoder}$ or $\mathrm {LabelEncoder}$, contrary to the standard practice for Transformer-based systems. In addition, so as to model sequential order, we use the relative positional encoding proposed in BIBREF1. With relative positional encoding, the encoding only affects the attention score instead of the $\mathrm {Value}$s being summed. This allows us to reuse previously computed states rather than recomputing all previous states and getting the last state in an overlapping inference manner when the number of frames or labels that $\mathrm {AudioEncoder}$ or $\mathrm {LabelEncoder}$ processed is larger than the maximum length used during training (which would again be intractable for streaming applications). More specifically, the complexity of running one-step inference to get activations at time $t$ is $\mathrm {O}(t)$, which is the computation cost of attending to $t$ states and of the feed-forward process for the current step when using relative positional encoding. On the other hand, with absolute positional encoding, the encoding added to the input should be shifted by one when $t$ is larger than the maximum length used during training, which precludes re-use of the states, and makes the complexity $\mathrm {O}(t^2)$. However, even if we can reduce the complexity from $\mathrm {O}(t^2)$ to $\mathrm {O}(t)$ with relative positional encoding, there is still the issue of latency growing over time. One intuitive solution is to limit the model to attend to a moving window $W$ of states, making the one-step inference complexity constant. Note that training or inference with attention to limited context is not possible for Transformer-based models that have attention from $\mathrm {Decoder}$ to $\mathrm {Encoder}$, as such a setup is itself trying to learn the alignment. In contrast, the separation of $\mathrm {AudioEncoder}$ and $\mathrm {LabelEncoder}$, and the fact that the alignment is handled by a separate forward-backward process, within the RNN-T architecture, makes it possible to train with attention over an explicitly specified, limited context.
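To make the constant-time one-step inference concrete, the following sketch (ours; the per-layer attention callables, the cache layout and the window handling are illustrative placeholders, not the paper's code) processes one new frame while each layer attends only to a bounded window of cached states:

def audio_encoder_step(new_frame, caches, layer_fns, window):
    # caches[i] holds recent inputs to layer i; states are reusable thanks to relative positional encoding
    h = new_frame
    for attend, cache in zip(layer_fns, caches):
        cache.append(h)
        if len(cache) > window + 1:          # keep the current state plus `window` previous states
            cache.pop(0)
        h = attend(h, list(cache))           # attend(query, states) -> this layer's activation for the new frame
    return h, caches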
Experiments and Results ::: Data
We evaluated the proposed model using the publicly available LibriSpeech ASR corpus BIBREF23. The LibriSpeech dataset consists of 970 hours of audio data with corresponding text transcripts (around 10M word tokens) and an additional 800M word token text only dataset. The paired audio/transcript dataset was used to train T-T models and an LSTM-based baseline. The full 810M word tokens text dataset was used for standalone language model (LM) training. We extracted 128-channel logmel energy values from a 32 ms window, stacked every 4 frames, and sub-sampled every 3 frames, to produce a 512-dimensional acoustic feature vector with a stride of 30 ms. Feature augmentation BIBREF22 was applied during model training to prevent overfitting and to improve generalization, with only frequency masking ($\mathrm {F}=50$, $\mathrm {mF}=2$) and time masking ($\mathrm {T}=30$, $\mathrm {mT}=10$).
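The frame arithmetic above can be made explicit with a short sketch (ours), assuming 128-dimensional logmel frames computed at a 10 ms hop before stacking, which is consistent with the stated 30 ms output stride:

import numpy as np

def stack_and_subsample(logmel, stack=4, subsample=3):
    # logmel: [num_frames, 128] -> [ceil(num_frames / 3), 512] with a 30 ms stride
    num_frames, dim = logmel.shape
    padded = np.concatenate([logmel, np.zeros((stack - 1, dim))], axis=0)
    stacked = np.concatenate([padded[k:k + num_frames] for k in range(stack)], axis=1)
    return stacked[::subsample]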
Experiments and Results ::: Transformer Transducer
Our Transformer Transducer model architecture has 18 audio and 2 label encoder layers. Every layer is identical for both audio and label encoders. The details of the computations in a layer are shown in Figure FIGREF10 and Table TABREF11. All the models for the experiments presented in this paper are trained on an 8x8 TPU with a per-core batch size of 16 (effective batch size of 2048). The learning rate schedule is ramped up linearly from 0 to $2.5\mathrm {e}{-4}$ during the first 4K steps; it is then held constant until 30K steps and then decays exponentially to $2.5\mathrm {e}{-6}$ by 200K steps. During training we also added Gaussian noise ($\mu =0,\sigma =0.01$) to the model weights BIBREF24 starting at 10K steps. We train this model to output grapheme units in all our experiments. We found that the Transformer Transducer models trained much faster ($\approx 1$ day) compared to an LSTM-based RNN-T model ($\approx 3.5$ days), with a similar number of parameters.
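The learning rate schedule above can be written down directly from the numbers given; the following sketch is ours, and the exact parameterization of the exponential decay is our assumption:

def learning_rate(step, peak=2.5e-4, final=2.5e-6, warmup=4000, hold=30000, total=200000):
    if step < warmup:                 # linear ramp from 0 to the peak rate over the first 4K steps
        return peak * step / warmup
    if step < hold:                   # held constant until 30K steps
        return peak
    fraction = (step - hold) / (total - hold)
    return peak * (final / peak) ** fraction   # exponential decay reaching 2.5e-6 at 200K steps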
Experiments and Results ::: Results
We first compared the performance of Transformer Transducer (T-T) models with full attention on audio to an RNN-T model using a bidirectional LSTM audio encoder. As shown in Table TABREF12, the T-T model significantly outperforms the LSTM-based RNN-T baseline. We also observed that T-T models can achieve competitive recognition accuracy with existing wordpiece-based end-to-end models with similar model size. To compare with systems using shallow fusion BIBREF18, BIBREF25 with separately trained LMs, we also trained a Transformer-based LM with the same architecture as the label encoder used in T-T, using the full 810M word token dataset. This Transformer LM (6 layers; 57M parameters) had a perplexity of $2.49$ on the dev-clean set; the use of dropout, and of larger models, did not improve either perplexity or WER. Shallow fusion was then performed using that LM and both the trained T-T system and the trained bidirectional LSTM-based RNN-T baseline, with scaling factors on the LM output and on the non-blank symbol sequence length tuned on the LibriSpeech dev sets. The results are shown in Table TABREF12 in the “With LM” column. The shallow fusion result for the T-T system is competitive with corresponding results for top-performing existing systems.
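During beam search, shallow fusion amounts to interpolating scores per hypothesis. A minimal sketch of the combined score is shown below, with lm_weight and length_weight standing for the two tuned scaling factors; the functional form is the conventional one and is our assumption, not a quote from the paper.

def fused_score(tt_log_prob, lm_log_prob, num_non_blank, lm_weight, length_weight):
    # hypothesis score used to rank beams when combining the T-T model with the external LM
    return tt_log_prob + lm_weight * lm_log_prob + length_weight * num_non_blank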
Next, we ran training and decoding experiments using T-T models with limited attention windows over audio and text, with a view to building online streaming speech recognition systems with low latency. Similarly to the use of unidirectional RNN audio encoders in online models, where activations for time $t$ are computed with conditioning only on audio frames before $t$, here we constrain the $\mathrm {AudioEncoder}$ to attend to the left of the current frame by masking the attention scores to the right of the current frame. In order to make one-step inference for $\mathrm {AudioEncoder}$ tractable (i.e. to have constant time complexity), we further limit the attention for $\mathrm {AudioEncoder}$ to a fixed window of previous states by again masking the attention score. Due to limited computation resources, we used the same mask for different Transformer layers, but the use of different contexts (masks) for different layers is worth exploring. The results are shown in Table TABREF15, where N in the first two columns indicates the number of states that the model uses to the left or right of the current frame. As we can see, using more audio history gives a lower WER, but to keep the model streamable with reasonable time complexity for inference, we experimented with a left context of up to 10 frames per layer.
Similarly, we explored the use of limited right context to allow the model to see some future audio frames, in the hope of bridging the gap between a streamable T-T model (left = 10, right = 0) and a full attention T-T model (left = 512, right = 512). Since we apply the same mask for every layer, the latency introduced by using right context is aggregated over all the layers. For example, in Figure FIGREF17, to produce $y_7$ from a 3-layer Transformer with one frame of right context, the model actually needs to wait for $x_{10}$ to arrive, which corresponds to 90 ms of latency in our case. To explore the impact of right context on modeling, we ran comparisons against the full attention T-T model using a fixed left context of 512 frames per layer. As we can see from Table TABREF18, with a right context of 6 frames per layer (around 3.2 secs of latency), performance is around 16% worse than that of the full attention model. Compared with the streamable T-T model, a right context of 2 frames per layer (around 1 sec of latency) brings around a 30% improvement.
In addition, we evaluated how the left context used in the T-T $\mathrm {LabelEncoder}$ affects performance. In Table TABREF19, we show that constraining each layer to use only three previous label states yields accuracy similar to that of the model using 20 states per layer. This shows that a very limited left context for the label encoder is good enough for the T-T model. We see a similar trend when limiting left label states while using a full attention T-T audio encoder.
Finally, Table TABREF20 reports the results when using a limited left context of 10 frames, which reduces the time complexity for one-step inference to a constant, with look-ahead to future frames, as a way of bridging the gap between the performance of left-only attention and full attention models.
Conclusions
In this paper, we presented the Transformer Transducer model, embedding Transformer based self-attention for audio and label encoding within the RNN-T architecture, resulting in an end-to-end model that can be optimized using a loss function that efficiently marginalizes over all possible alignments and that is well-suited to time-synchronous decoding. This model achieves a new state-of-the-art accuracy on the LibriSpeech benchmark, and can easily be used for streaming speech recognition by limiting the audio and label context used in self-attention. Transformer Transducer models train significantly faster than LSTM based RNN-T models, and they allow us to trade recognition accuracy and latency in a flexible manner. | LSTM-based RNN-T |
2ebd7a59baad1f935fe83f90526557bfa9df4047 | 2ebd7a59baad1f935fe83f90526557bfa9df4047_0 | Q: What was previous state of the art accuracy on LibriSpeech benchmark?
Text: Introduction
In the past few years, models employing self-attention BIBREF0 have achieved state-of-art results for many tasks, such as machine translation, language modeling, and language understanding BIBREF0, BIBREF1. In particular, large Transformer-based language models have brought gains in speech recognition tasks when used for second-pass re-scoring and in first-pass shallow fusion BIBREF2. As typically used in sequence-to-sequence transduction tasks BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, Transformer-based models attend over encoder features using decoder features, implying that the decoding has to be done in a label-synchronous way, thereby posing a challenge for streaming speech recognition applications. An additional challenge for streaming speech recognition with these models is that the number of computations for self-attention increases quadratically with input sequence size. For streaming to be computationally practical, it is highly desirable that the time it takes to process each frame remains constant relative to the length of the input. Transformer-based alternatives to RNNs have recently been explored for use in ASR BIBREF8, BIBREF9, BIBREF10, BIBREF11.
For streaming speech recognition models, recurrent neural networks (RNNs) have been the de facto choice since they can model the temporal dependencies in the audio features effectively BIBREF12 while maintaining a constant computational requirement for each frame. Streamable end-to-end modeling architectures such as the Recurrent Neural Network Transducer (RNN-T) BIBREF13, BIBREF14, BIBREF15, Recurrent Neural Aligner (RNA) BIBREF16, and Neural Transducer BIBREF17 utilize an encoder-decoder based framework where both encoder and decoder are layers of RNNs that generate features from audio and labels respectively. In particular, the RNN-T and RNA models are trained to learn alignments between the acoustic encoder features and the label encoder features, and so lend themselves naturally to frame-synchronous decoding.
Several optimization techniques have been evaluated to enable running RNN-T on device BIBREF15. In addition, extensive architecture and modeling unit exploration has been done for RNN-T BIBREF14. In this paper, we explore the possibility of replacing RNN-based audio and label encoders in the conventional RNN-T architecture with Transformer encoders. With a view to preserving model streamability, we show that Transformer-based models can be trained with self-attention on a fixed number of past input frames and previous labels. This results in a degradation of performance (compared to attending to all past input frames and labels), but then the model satisfies a constant computational requirement for processing each frame, making it suitable for streaming. Given the simple architecture and parallelizable nature of self-attention computations, we observe large improvements in training time and training resource utilization compared to RNN-T models that employ RNNs.
The RNN-T architecture (as depicted in Figure FIGREF1) is a neural network architecture that can be trained end-to-end with the RNN-T loss to map input sequences (e.g. audio feature vectors) to target sequences (e.g. phonemes, graphemes). Given an input sequence of real-valued vectors of length $T$, ${\mathbf {x}}= (x_1, x_2, ..., x_T)$, the RNN-T model tries to predict the target sequence of labels ${\mathbf {y}}= (y_1, y_2, ..., y_U)$ of length $U$.
Unlike a typical attention-based sequence-to-sequence model, which attends over the entire input for every prediction in the output sequence, the RNN-T model gives a probability distribution over the label space at every time step, and the output label space includes an additional null label to indicate the lack of output for that time step — similar to the Connectionist Temporal Classification (CTC) framework BIBREF18. But unlike CTC, this label distribution is also conditioned on the previous label history.
The RNN-T model defines a conditional distribution $P({\mathbf {z}}|{\mathbf {x}})$ over all the possible alignments, where
is a sequence of $(z_i, t_i)$ pairs of length $\overline{U}$, and $(z_i, t_i)$ represents an alignment between output label $z_i$ and the encoded feature at time $t_i$. The labels $z_i$ can optionally be blank labels (null predictions). Removing the blank labels gives the actual output label sequence ${\mathbf {y}}$, of length $U$.
We can marginalize $P({\mathbf {z}}|{\mathbf {x}})$ over all possible alignments ${\mathbf {z}}$ to obtain the probability of the target label sequence ${\mathbf {y}}$ given the input sequence ${\mathbf {x}}$,
where ${\cal Z}({\mathbf {y}},T)$ is the set of valid alignments of length $T$ for the label sequence.
Transformer Transducer ::: RNN-T Architecture and Loss
In this paper, we present all experimental results with the RNN-T loss BIBREF13 for consistency, which performs similarly to the monotonic RNN-T loss BIBREF19 in our experiments.
The probability of an alignment $P({\mathbf {z}}|{\mathbf {x}})$ can be factorized as
where $\mathrm {Labels}(z_{1:(i-1)})$ is the sequence of non-blank labels in $z_{1:(i-1)}$. The RNN-T architecture parameterizes $P({\mathbf {z}}|{\mathbf {x}})$ with an audio encoder, a label encoder, and a joint network. The encoders are two neural networks that encode the input sequence and the target output sequence, respectively. Previous work BIBREF13 has employed Long Short-term Memory models (LSTMs) as the encoders, giving the RNN-T its name. However, this framework is not restricted to RNNs. In this paper, we are particularly interested in replacing the LSTM encoders with Transformers BIBREF0, BIBREF1. In the following, we refer to this new architecture as the Transformer Transducer (T-T). As in the original RNN-T model, the joint network combines the audio encoder output at $t_i$ and the label encoder output given the previous non-blank output label sequence $\mathrm {Labels}(z_{1:(i-1)})$ using a feed-forward neural network with a softmax layer, inducing a distribution over the labels. The model defines $P(z_i|{\mathbf {x}}, t_i, \mathrm {Labels}(z_{1:(i-1)}))$ as follows:
where each $\mathrm {Linear}$ function is a different single-layer feed-forward neural network, $\mathrm {AudioEncoder}_{t_{i}}({\mathbf {x}})$ is the audio encoder output at time $t_i$, and $\mathrm {LabelEncoder}(\mathrm {Labels}(z_{1:(i-1)}))$ is the label encoder output given the previous non-blank label sequence.
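The display equation referenced here is not reproduced in this text, but a common joint-network parameterization consistent with the description is sketched below; the tanh combination, the softmax helper and the weight names are our assumptions for illustration, not the paper's exact definition.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def joint_network(audio_enc_t, label_enc_u, w_audio, w_label, w_out):
    # combine the two encoder outputs into a distribution over output labels plus blank
    h = np.tanh(audio_enc_t @ w_audio + label_enc_u @ w_label)   # Linear(...) + Linear(...), then a nonlinearity
    return softmax(h @ w_out)   # P(z_i | x, t_i, previous non-blank labels)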
To compute Eq. (DISPLAY_FORM3) by summing all valid alignments naively is computationally intractable. Therefore, we define the forward variable $\alpha (t,u)$ as the sum of probabilities for all paths ending at time-frame $t$ and label position $u$. We then use the forward algorithm BIBREF13, BIBREF20 to compute the last alpha variable $\alpha ({T, U})$, which corresponds to $P({\mathbf {y}}|{\mathbf {x}})$ defined in Eq. (DISPLAY_FORM3). Efficient computation of $P({\mathbf {y}}|{\mathbf {x}})$ using the forward algorithm is enabled by the fact that the local probability estimate (Eq. (DISPLAY_FORM7)) at any given label position and any given time-frame is not dependent on the alignment BIBREF13. The training loss for the model is then the sum of the negative log probabilities defined in Eq. (DISPLAY_FORM3) over all the training examples,
where $T_i$ and $U_i$ are the lengths of the input sequence and the output target label sequence of the $i$-th training example, respectively.
Transformer Transducer ::: Transformer
The Transformer BIBREF0 is composed of a stack of multiple identical layers. Each layer has two sub-layers, a multi-headed attention layer and a feed-forward layer. Our multi-headed attention layer first applies $\mathrm {LayerNorm}$, then projects the input to $\mathrm {Query}$, $\mathrm {Key}$, and $\mathrm {Value}$ for all the heads BIBREF1. The attention mechanism is applied separately for different attention heads. The attention mechanism provides a flexible way to control the context that the model uses. For example, we can mask the attention score to the left of the current frame to produce output conditioned only on the previous state history. The weight-averaged $\mathrm {Value}$s for all heads are concatenated and passed to a dense layer. We then employ a residual connection on the normalized input and the output of the dense layer to form the final output of the multi-headed attention sub-layer (i.e. $\mathrm {LayerNorm}(x) + \mathrm {AttentionLayer}(\mathrm {LayerNorm}(x))$, where $x$ is the input to the multi-headed attention sub-layer). We also apply dropout on the output of the dense layer to prevent overfitting. Our feed-forward sub-layer applies $\mathrm {LayerNorm}$ on the input first, then applies two dense layers. We use $\mathrm {ReLu}$ as the activation for the first dense layer. Again, dropout to both dense layers for regularization, and a residual connection of normalized input and the output of the second dense layer (i.e. $\mathrm {LayerNorm}(x) + \mathrm {FeedForwardLayer}(\mathrm {LayerNorm}(x))$, where $x$ is the input to the feed-forward sub-layer) are applied. See Figure FIGREF10 for more details.
Note that $\mathrm {LabelEncoder}$ states do not attend to $\mathrm {AudioEncoder}$ states, in contrast to the architecture in BIBREF0. As discussed in the Introduction, doing so poses a challenge for streaming applications. Instead, we implement $\mathrm {AudioEncoder}$ and $\mathrm {LabelEncoder}$ in Eq. (DISPLAY_FORM6), which are LSTMs in conventional RNN-T architectures BIBREF13, BIBREF15, BIBREF14, using the Transformers described above. In tandem with the RNN-T architecture described in the previous section, the attention mechanism here only operates within $\mathrm {AudioEncoder}$ or $\mathrm {LabelEncoder}$, contrary to the standard practice for Transformer-based systems. In addition, so as to model sequential order, we use the relative positional encoding proposed in BIBREF1. With relative positional encoding, the encoding only affects the attention score instead of the $\mathrm {Value}$s being summed. This allows us to reuse previously computed states rather than recomputing all previous states and getting the last state in an overlapping inference manner when the number of frames or labels that $\mathrm {AudioEncoder}$ or $\mathrm {LabelEncoder}$ processed is larger than the maximum length used during training (which would again be intractable for streaming applications). More specifically, the complexity of running one-step inference to get activations at time $t$ is $\mathrm {O}(t)$, which is the computation cost of attending to $t$ states and of the feed-forward process for the current step when using relative positional encoding. On the other hand, with absolute positional encoding, the encoding added to the input should be shifted by one when $t$ is larger than the maximum length used during training, which precludes re-use of the states, and makes the complexity $\mathrm {O}(t^2)$. However, even if we can reduce the complexity from $\mathrm {O}(t^2)$ to $\mathrm {O}(t)$ with relative positional encoding, there is still the issue of latency growing over time. One intuitive solution is to limit the model to attend to a moving window $W$ of states, making the one-step inference complexity constant. Note that training or inference with attention to limited context is not possible for Transformer-based models that have attention from $\mathrm {Decoder}$ to $\mathrm {Encoder}$, as such a setup is itself trying to learn the alignment. In contrast, the separation of $\mathrm {AudioEncoder}$ and $\mathrm {LabelEncoder}$, and the fact that the alignment is handled by a separate forward-backward process, within the RNN-T architecture, makes it possible to train with attention over an explicitly specified, limited context.
Experiments and Results ::: Data
We evaluated the proposed model using the publicly available LibriSpeech ASR corpus BIBREF23. The LibriSpeech dataset consists of 970 hours of audio data with corresponding text transcripts (around 10M word tokens) and an additional 800M word token text only dataset. The paired audio/transcript dataset was used to train T-T models and an LSTM-based baseline. The full 810M word tokens text dataset was used for standalone language model (LM) training. We extracted 128-channel logmel energy values from a 32 ms window, stacked every 4 frames, and sub-sampled every 3 frames, to produce a 512-dimensional acoustic feature vector with a stride of 30 ms. Feature augmentation BIBREF22 was applied during model training to prevent overfitting and to improve generalization, with only frequency masking ($\mathrm {F}=50$, $\mathrm {mF}=2$) and time masking ($\mathrm {T}=30$, $\mathrm {mT}=10$).
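The masking parameters quoted above correspond to zeroing out random frequency bands and time spans; a simplified sketch (ours, not the reference implementation of BIBREF22) is:

import numpy as np

def spec_augment(features, F=50, mF=2, T=30, mT=10):
    # features: [num_frames, num_channels] logmel; masked regions are set to zero
    x = features.copy()
    num_frames, num_channels = x.shape
    for _ in range(mF):                                   # frequency masking
        f = np.random.randint(0, F + 1)
        f0 = np.random.randint(0, max(1, num_channels - f + 1))
        x[:, f0:f0 + f] = 0.0
    for _ in range(mT):                                   # time masking
        t = np.random.randint(0, T + 1)
        t0 = np.random.randint(0, max(1, num_frames - t + 1))
        x[t0:t0 + t, :] = 0.0
    return x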
Experiments and Results ::: Transformer Transducer
Our Transformer Transducer model architecture has 18 audio and 2 label encoder layers. Every layer is identical for both audio and label encoders. The details of the computations in a layer are shown in Figure FIGREF10 and Table TABREF11. All the models for the experiments presented in this paper are trained on an 8x8 TPU with a per-core batch size of 16 (effective batch size of 2048). The learning rate schedule is ramped up linearly from 0 to $2.5\mathrm {e}{-4}$ during the first 4K steps; it is then held constant until 30K steps and then decays exponentially to $2.5\mathrm {e}{-6}$ by 200K steps. During training we also added Gaussian noise ($\mu =0,\sigma =0.01$) to the model weights BIBREF24 starting at 10K steps. We train this model to output grapheme units in all our experiments. We found that the Transformer Transducer models trained much faster ($\approx 1$ day) compared to an LSTM-based RNN-T model ($\approx 3.5$ days), with a similar number of parameters.
Experiments and Results ::: Results
We first compared the performance of Transformer Transducer (T-T) models with full attention on audio to an RNN-T model using a bidirectional LSTM audio encoder. As shown in Table TABREF12, the T-T model significantly outperforms the LSTM-based RNN-T baseline. We also observed that T-T models can achieve competitive recognition accuracy with existing wordpiece-based end-to-end models with similar model size. To compare with systems using shallow fusion BIBREF18, BIBREF25 with separately trained LMs, we also trained a Transformer-based LM with the same architecture as the label encoder used in T-T, using the full 810M word token dataset. This Transformer LM (6 layers; 57M parameters) had a perplexity of $2.49$ on the dev-clean set; the use of dropout, and of larger models, did not improve either perplexity or WER. Shallow fusion was then performed using that LM and both the trained T-T system and the trained bidirectional LSTM-based RNN-T baseline, with scaling factors on the LM output and on the non-blank symbol sequence length tuned on the LibriSpeech dev sets. The results are shown in Table TABREF12 in the “With LM” column. The shallow fusion result for the T-T system is competitive with corresponding results for top-performing existing systems.
Next, we ran training and decoding experiments using T-T models with limited attention windows over audio and text, with a view to building online streaming speech recognition systems with low latency. Similarly to the use of unidirectional RNN audio encoders in online models, where activations for time $t$ are computed with conditioning only on audio frames before $t$, here we constrain the $\mathrm {AudioEncoder}$ to attend to the left of the current frame by masking the attention scores to the right of the current frame. In order to make one-step inference for $\mathrm {AudioEncoder}$ tractable (i.e. to have constant time complexity), we further limit the attention for $\mathrm {AudioEncoder}$ to a fixed window of previous states by again masking the attention score. Due to limited computation resources, we used the same mask for different Transformer layers, but the use of different contexts (masks) for different layers is worth exploring. The results are shown in Table TABREF15, where N in the first two columns indicates the number of states that the model uses to the left or right of the current frame. As we can see, using more audio history gives a lower WER, but to keep the model streamable with reasonable time complexity for inference, we experimented with a left context of up to 10 frames per layer.
Similarly, we explored the use of limited right context to allow the model to see some future audio frames, in the hope of bridging the gap between a streamable T-T model (left = 10, right = 0) and a full attention T-T model (left = 512, right = 512). Since we apply the same mask for every layer, the latency introduced by using right context is aggregated over all the layers. For example, in Figure FIGREF17, to produce $y_7$ from a 3-layer Transformer with one frame of right context, the model actually needs to wait for $x_{10}$ to arrive, which corresponds to 90 ms of latency in our case. To explore the impact of right context on modeling, we ran comparisons against the full attention T-T model using a fixed left context of 512 frames per layer. As we can see from Table TABREF18, with a right context of 6 frames per layer (around 3.2 secs of latency), performance is around 16% worse than that of the full attention model. Compared with the streamable T-T model, a right context of 2 frames per layer (around 1 sec of latency) brings around a 30% improvement.
In addition, we evaluated how the left context used in the T-T $\mathrm {LabelEncoder}$ affects performance. In Table TABREF19, we show that constraining each layer to use only three previous label states yields accuracy similar to that of the model using 20 states per layer. This shows that a very limited left context for the label encoder is good enough for the T-T model. We see a similar trend when limiting left label states while using a full attention T-T audio encoder.
Finally, Table TABREF20 reports the results when using a limited left context of 10 frames, which reduces the time complexity for one-step inference to a constant, with look-ahead to future frames, as a way of bridging the gap between the performance of left-only attention and full attention models.
Conclusions
In this paper, we presented the Transformer Transducer model, embedding Transformer based self-attention for audio and label encoding within the RNN-T architecture, resulting in an end-to-end model that can be optimized using a loss function that efficiently marginalizes over all possible alignments and that is well-suited to time-synchronous decoding. This model achieves a new state-of-the-art accuracy on the LibriSpeech benchmark, and can easily be used for streaming speech recognition by limiting the audio and label context used in self-attention. Transformer Transducer models train significantly faster than LSTM based RNN-T models, and they allow us to trade recognition accuracy and latency in a flexible manner. | Unanswerable |
766e2e35968ef7434b56330aa41957c5d5f8d0ee | 766e2e35968ef7434b56330aa41957c5d5f8d0ee_0 | Q: How big is LibriSpeech dataset?
Text: Introduction
In the past few years, models employing self-attention BIBREF0 have achieved state-of-art results for many tasks, such as machine translation, language modeling, and language understanding BIBREF0, BIBREF1. In particular, large Transformer-based language models have brought gains in speech recognition tasks when used for second-pass re-scoring and in first-pass shallow fusion BIBREF2. As typically used in sequence-to-sequence transduction tasks BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, Transformer-based models attend over encoder features using decoder features, implying that the decoding has to be done in a label-synchronous way, thereby posing a challenge for streaming speech recognition applications. An additional challenge for streaming speech recognition with these models is that the number of computations for self-attention increases quadratically with input sequence size. For streaming to be computationally practical, it is highly desirable that the time it takes to process each frame remains constant relative to the length of the input. Transformer-based alternatives to RNNs have recently been explored for use in ASR BIBREF8, BIBREF9, BIBREF10, BIBREF11.
For streaming speech recognition models, recurrent neural networks (RNNs) have been the de facto choice since they can model the temporal dependencies in the audio features effectively BIBREF12 while maintaining a constant computational requirement for each frame. Streamable end-to-end modeling architectures such as the Recurrent Neural Network Transducer (RNN-T) BIBREF13, BIBREF14, BIBREF15, Recurrent Neural Aligner (RNA) BIBREF16, and Neural Transducer BIBREF17 utilize an encoder-decoder based framework where both encoder and decoder are layers of RNNs that generate features from audio and labels respectively. In particular, the RNN-T and RNA models are trained to learn alignments between the acoustic encoder features and the label encoder features, and so lend themselves naturally to frame-synchronous decoding.
Several optimization techniques have been evaluated to enable running RNN-T on device BIBREF15. In addition, extensive architecture and modeling unit exploration has been done for RNN-T BIBREF14. In this paper, we explore the possibility of replacing RNN-based audio and label encoders in the conventional RNN-T architecture with Transformer encoders. With a view to preserving model streamability, we show that Transformer-based models can be trained with self-attention on a fixed number of past input frames and previous labels. This results in a degradation of performance (compared to attending to all past input frames and labels), but then the model satisfies a constant computational requirement for processing each frame, making it suitable for streaming. Given the simple architecture and parallelizable nature of self-attention computations, we observe large improvements in training time and training resource utilization compared to RNN-T models that employ RNNs.
The RNN-T architecture (as depicted in Figure FIGREF1) is a neural network architecture that can be trained end-to-end with the RNN-T loss to map input sequences (e.g. audio feature vectors) to target sequences (e.g. phonemes, graphemes). Given an input sequence of real-valued vectors of length $T$, ${\mathbf {x}}= (x_1, x_2, ..., x_T)$, the RNN-T model tries to predict the target sequence of labels ${\mathbf {y}}= (y_1, y_2, ..., y_U)$ of length $U$.
Unlike a typical attention-based sequence-to-sequence model, which attends over the entire input for every prediction in the output sequence, the RNN-T model gives a probability distribution over the label space at every time step, and the output label space includes an additional null label to indicate the lack of output for that time step — similar to the Connectionist Temporal Classification (CTC) framework BIBREF18. But unlike CTC, this label distribution is also conditioned on the previous label history.
The RNN-T model defines a conditional distribution $P({\mathbf {z}}|{\mathbf {x}})$ over all the possible alignments, where
is a sequence of $(z_i, t_i)$ pairs of length $\overline{U}$, and $(z_i, t_i)$ represents an alignment between output label $z_i$ and the encoded feature at time $t_i$. The labels $z_i$ can optionally be blank labels (null predictions). Removing the blank labels gives the actual output label sequence ${\mathbf {y}}$, of length $U$.
We can marginalize $P({\mathbf {z}}|{\mathbf {x}})$ over all possible alignments ${\mathbf {z}}$ to obtain the probability of the target label sequence ${\mathbf {y}}$ given the input sequence ${\mathbf {x}}$,
where ${\cal Z}({\mathbf {y}},T)$ is the set of valid alignments of length $T$ for the label sequence.
Transformer Transducer ::: RNN-T Architecture and Loss
In this paper, we present all experimental results with the RNN-T loss BIBREF13 for consistency, which performs similarly to the monotonic RNN-T loss BIBREF19 in our experiments.
The probability of an alignment $P({\mathbf {z}}|{\mathbf {x}})$ can be factorized as
where $\mathrm {Labels}(z_{1:(i-1)})$ is the sequence of non-blank labels in $z_{1:(i-1)}$. The RNN-T architecture parameterizes $P({\mathbf {z}}|{\mathbf {x}})$ with an audio encoder, a label encoder, and a joint network. The encoders are two neural networks that encode the input sequence and the target output sequence, respectively. Previous work BIBREF13 has employed Long Short-term Memory models (LSTMs) as the encoders, giving the RNN-T its name. However, this framework is not restricted to RNNs. In this paper, we are particularly interested in replacing the LSTM encoders with Transformers BIBREF0, BIBREF1. In the following, we refer to this new architecture as the Transformer Transducer (T-T). As in the original RNN-T model, the joint network combines the audio encoder output at $t_i$ and the label encoder output given the previous non-blank output label sequence $\mathrm {Labels}(z_{1:(i-1)})$ using a feed-forward neural network with a softmax layer, inducing a distribution over the labels. The model defines $P(z_i|{\mathbf {x}}, t_i, \mathrm {Labels}(z_{1:(i-1)}))$ as follows:
where each $\mathrm {Linear}$ function is a different single-layer feed-forward neural network, $\mathrm {AudioEncoder}_{t_{i}}({\mathbf {x}})$ is the audio encoder output at time $t_i$, and $\mathrm {LabelEncoder}(\mathrm {Labels}(z_{1:(i-1)}))$ is the label encoder output given the previous non-blank label sequence.
To compute Eq. (DISPLAY_FORM3) by summing all valid alignments naively is computationally intractable. Therefore, we define the forward variable $\alpha (t,u)$ as the sum of probabilities for all paths ending at time-frame $t$ and label position $u$. We then use the forward algorithm BIBREF13, BIBREF20 to compute the last alpha variable $\alpha ({T, U})$, which corresponds to $P({\mathbf {y}}|{\mathbf {x}})$ defined in Eq. (DISPLAY_FORM3). Efficient computation of $P({\mathbf {y}}|{\mathbf {x}})$ using the forward algorithm is enabled by the fact that the local probability estimate (Eq. (DISPLAY_FORM7)) at any given label position and any given time-frame is not dependent on the alignment BIBREF13. The training loss for the model is then the sum of the negative log probabilities defined in Eq. (DISPLAY_FORM3) over all the training examples,
where $T_i$ and $U_i$ are the lengths of the input sequence and the output target label sequence of the $i$-th training example, respectively.
Transformer Transducer ::: Transformer
The Transformer BIBREF0 is composed of a stack of multiple identical layers. Each layer has two sub-layers, a multi-headed attention layer and a feed-forward layer. Our multi-headed attention layer first applies $\mathrm {LayerNorm}$, then projects the input to $\mathrm {Query}$, $\mathrm {Key}$, and $\mathrm {Value}$ for all the heads BIBREF1. The attention mechanism is applied separately for different attention heads. The attention mechanism provides a flexible way to control the context that the model uses. For example, we can mask the attention score to the left of the current frame to produce output conditioned only on the previous state history. The weight-averaged $\mathrm {Value}$s for all heads are concatenated and passed to a dense layer. We then employ a residual connection on the normalized input and the output of the dense layer to form the final output of the multi-headed attention sub-layer (i.e. $\mathrm {LayerNorm}(x) + \mathrm {AttentionLayer}(\mathrm {LayerNorm}(x))$, where $x$ is the input to the multi-headed attention sub-layer). We also apply dropout on the output of the dense layer to prevent overfitting. Our feed-forward sub-layer applies $\mathrm {LayerNorm}$ on the input first, then applies two dense layers. We use $\mathrm {ReLu}$ as the activation for the first dense layer. Again, dropout to both dense layers for regularization, and a residual connection of normalized input and the output of the second dense layer (i.e. $\mathrm {LayerNorm}(x) + \mathrm {FeedForwardLayer}(\mathrm {LayerNorm}(x))$, where $x$ is the input to the feed-forward sub-layer) are applied. See Figure FIGREF10 for more details.
Note that $\mathrm {LabelEncoder}$ states do not attend to $\mathrm {AudioEncoder}$ states, in contrast to the architecture in BIBREF0. As discussed in the Introduction, doing so poses a challenge for streaming applications. Instead, we implement $\mathrm {AudioEncoder}$ and $\mathrm {LabelEncoder}$ in Eq. (DISPLAY_FORM6), which are LSTMs in conventional RNN-T architectures BIBREF13, BIBREF15, BIBREF14, using the Transformers described above. In tandem with the RNN-T architecture described in the previous section, the attention mechanism here only operates within $\mathrm {AudioEncoder}$ or $\mathrm {LabelEncoder}$, contrary to the standard practice for Transformer-based systems. In addition, so as to model sequential order, we use the relative positional encoding proposed in BIBREF1. With relative positional encoding, the encoding only affects the attention score instead of the $\mathrm {Value}$s being summed. This allows us to reuse previously computed states rather than recomputing all previous states and getting the last state in an overlapping inference manner when the number of frames or labels that $\mathrm {AudioEncoder}$ or $\mathrm {LabelEncoder}$ processed is larger than the maximum length used during training (which would again be intractable for streaming applications). More specifically, the complexity of running one-step inference to get activations at time $t$ is $\mathrm {O}(t)$, which is the computation cost of attending to $t$ states and of the feed-forward process for the current step when using relative positional encoding. On the other hand, with absolute positional encoding, the encoding added to the input should be shifted by one when $t$ is larger than the maximum length used during training, which precludes re-use of the states, and makes the complexity $\mathrm {O}(t^2)$. However, even if we can reduce the complexity from $\mathrm {O}(t^2)$ to $\mathrm {O}(t)$ with relative positional encoding, there is still the issue of latency growing over time. One intuitive solution is to limit the model to attend to a moving window $W$ of states, making the one-step inference complexity constant. Note that training or inference with attention to limited context is not possible for Transformer-based models that have attention from $\mathrm {Decoder}$ to $\mathrm {Encoder}$, as such a setup is itself trying to learn the alignment. In contrast, the separation of $\mathrm {AudioEncoder}$ and $\mathrm {LabelEncoder}$, and the fact that the alignment is handled by a separate forward-backward process, within the RNN-T architecture, makes it possible to train with attention over an explicitly specified, limited context.
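As a rough illustration of the key property (relative positions affect only the attention scores, never the $\mathrm {Value}$s), consider the simplified sketch below. The actual mechanism of BIBREF1 uses additional projections and bias terms, so this conveys the idea and the state-reuse argument rather than the exact computation; rel_emb is an assumed table of learned embeddings indexed by signed offset.

import numpy as np

def relative_attention_scores(q, k, rel_emb):
    # q, k: [T, d]; rel_emb: [2T - 1, d], one embedding per relative offset in [-(T - 1), T - 1]
    T = q.shape[0]
    content = q @ k.T                                          # content-based term
    offsets = np.arange(T)[None, :] - np.arange(T)[:, None]    # key index minus query index
    position = np.einsum('td,tsd->ts', q, rel_emb[offsets + T - 1])
    return content + position                                  # the Values themselves are untouched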
Experiments and Results ::: Data
We evaluated the proposed model using the publicly available LibriSpeech ASR corpus BIBREF23. The LibriSpeech dataset consists of 970 hours of audio data with corresponding text transcripts (around 10M word tokens) and an additional 800M word token text only dataset. The paired audio/transcript dataset was used to train T-T models and an LSTM-based baseline. The full 810M word tokens text dataset was used for standalone language model (LM) training. We extracted 128-channel logmel energy values from a 32 ms window, stacked every 4 frames, and sub-sampled every 3 frames, to produce a 512-dimensional acoustic feature vector with a stride of 30 ms. Feature augmentation BIBREF22 was applied during model training to prevent overfitting and to improve generalization, with only frequency masking ($\mathrm {F}=50$, $\mathrm {mF}=2$) and time masking ($\mathrm {T}=30$, $\mathrm {mT}=10$).
Experiments and Results ::: Transformer Transducer
Our Transformer Transducer model architecture has 18 audio and 2 label encoder layers. Every layer is identical for both audio and label encoders. The details of the computations in a layer are shown in Figure FIGREF10 and Table TABREF11. All the models for the experiments presented in this paper are trained on an 8x8 TPU with a per-core batch size of 16 (effective batch size of 2048). The learning rate schedule is ramped up linearly from 0 to $2.5\mathrm {e}{-4}$ during the first 4K steps; it is then held constant until 30K steps and then decays exponentially to $2.5\mathrm {e}{-6}$ by 200K steps. During training we also added Gaussian noise ($\mu =0,\sigma =0.01$) to the model weights BIBREF24 starting at 10K steps. We train this model to output grapheme units in all our experiments. We found that the Transformer Transducer models trained much faster ($\approx 1$ day) compared to an LSTM-based RNN-T model ($\approx 3.5$ days), with a similar number of parameters.
Experiments and Results ::: Results
We first compared the performance of Transformer Transducer (T-T) models with full attention on audio to an RNN-T model using a bidirectional LSTM audio encoder. As shown in Table TABREF12, the T-T model significantly outperforms the LSTM-based RNN-T baseline. We also observed that T-T models can achieve competitive recognition accuracy with existing wordpiece-based end-to-end models with similar model size. To compare with systems using shallow fusion BIBREF18, BIBREF25 with separately trained LMs, we also trained a Transformer-based LM with the same architecture as the label encoder used in T-T, using the full 810M word token dataset. This Transformer LM (6 layers; 57M parameters) had a perplexity of $2.49$ on the dev-clean set; the use of dropout, and of larger models, did not improve either perplexity or WER. Shallow fusion was then performed using that LM and both the trained T-T system and the trained bidirectional LSTM-based RNN-T baseline, with scaling factors on the LM output and on the non-blank symbol sequence length tuned on the LibriSpeech dev sets. The results are shown in Table TABREF12 in the “With LM” column. The shallow fusion result for the T-T system is competitive with corresponding results for top-performing existing systems.
Next, we ran training and decoding experiments using T-T models with limited attention windows over audio and text, with a view to building online streaming speech recognition systems with low latency. Similarly to the use of unidirectional RNN audio encoders in online models, where activations for time $t$ are computed with conditioning only on audio frames before $t$, here we constrain the $\mathrm {AudioEncoder}$ to attend to the left of the current frame by masking the attention scores to the right of the current frame. In order to make one-step inference for $\mathrm {AudioEncoder}$ tractable (i.e. to have constant time complexity), we further limit the attention for $\mathrm {AudioEncoder}$ to a fixed window of previous states by again masking the attention score. Due to limited computation resources, we used the same mask for different Transformer layers, but the use of different contexts (masks) for different layers is worth exploring. The results are shown in Table TABREF15, where N in the first two columns indicates the number of states that the model uses to the left or right of the current frame. As we can see, using more audio history gives a lower WER, but to keep the model streamable with reasonable time complexity for inference, we experimented with a left context of up to 10 frames per layer.
Similarly, we explored the use of limited right context to allow the model to see some future audio frames, in the hope of bridging the gap between a streamable T-T model (left = 10, right = 0) and a full attention T-T model (left = 512, right = 512). Since we apply the same mask for every layer, the latency introduced by using right context is aggregated over all the layers. For example, in Figure FIGREF17, to produce $y_7$ from a 3-layer Transformer with one frame of right context, the model actually needs to wait for $x_{10}$ to arrive, which corresponds to 90 ms of latency in our case. To explore the impact of right context on modeling, we ran comparisons against the full attention T-T model using a fixed left context of 512 frames per layer. As we can see from Table TABREF18, with a right context of 6 frames per layer (around 3.2 secs of latency), performance is around 16% worse than that of the full attention model. Compared with the streamable T-T model, a right context of 2 frames per layer (around 1 sec of latency) brings around a 30% improvement.
In addition, we evaluated how the left context used in the T-T $\mathrm {LabelEncoder}$ affects performance. In Table TABREF19, we show that constraining each layer to use only three previous label states yields accuracy similar to that of the model using 20 states per layer. This shows that a very limited left context for the label encoder is good enough for the T-T model. We see a similar trend when limiting left label states while using a full attention T-T audio encoder.
Finally, Table TABREF20 reports the results when using a limited left context of 10 frames, which reduces the time complexity for one-step inference to a constant, with look-ahead to future frames, as a way of bridging the gap between the performance of left-only attention and full attention models.
Conclusions
In this paper, we presented the Transformer Transducer model, embedding Transformer based self-attention for audio and label encoding within the RNN-T architecture, resulting in an end-to-end model that can be optimized using a loss function that efficiently marginalizes over all possible alignments and that is well-suited to time-synchronous decoding. This model achieves a new state-of-the-art accuracy on the LibriSpeech benchmark, and can easily be used for streaming speech recognition by limiting the audio and label context used in self-attention. Transformer Transducer models train significantly faster than LSTM based RNN-T models, and they allow us to trade recognition accuracy and latency in a flexible manner. | 970 hours of audio data with corresponding text transcripts (around 10M word tokens) and an additional 800M word token text only dataset |
63a77d2640df8315bf0bc3925fdd7e27132b1244 | 63a77d2640df8315bf0bc3925fdd7e27132b1244_0 | Q: Which language(s) do they work with?
Text: Introduction
Transfer learning has driven a number of recent successes in computer vision and NLP. Computer vision tasks like image captioning BIBREF0 and visual question answering typically use CNNs pretrained on ImageNet BIBREF1 , BIBREF2 to extract representations of the image, while several natural language tasks such as reading comprehension and sequence labeling BIBREF3 have benefited from pretrained word embeddings BIBREF4 , BIBREF5 that are either fine-tuned for a specific task or held fixed.
Many neural NLP systems are initialized with pretrained word embeddings but learn their representations of words in context from scratch, in a task-specific manner from supervised learning signals. However, learning these representations reliably from scratch is not always feasible, especially in low-resource settings, where we believe that using general purpose sentence representations will be beneficial.
Some recent work has addressed this by learning general-purpose sentence representations BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . However, there exists no clear consensus yet on what training objective or methodology is best suited to this goal.
Understanding the inductive biases of distinct neural models is important for guiding progress in representation learning. BIBREF14 and BIBREF15 demonstrate that neural machine translation (NMT) systems appear to capture morphology and some syntactic properties. BIBREF14 also present evidence that sequence-to-sequence parsers BIBREF16 more strongly encode source language syntax. Similarly, BIBREF17 probe representations extracted by sequence autoencoders, word embedding averages, and skip-thought vectors with a multi-layer perceptron (MLP) classifier to study whether sentence characteristics such as length, word content and word order are encoded.
To generalize across a diverse set of tasks, it is important to build representations that encode several aspects of a sentence. Neural approaches to tasks such as skip-thoughts, machine translation, natural language inference, and constituency parsing likely have different inductive biases. Our work exploits this in the context of a simple one-to-many multi-task learning (MTL) framework, wherein a single recurrent sentence encoder is shared across multiple tasks. We hypothesize that sentence representations learned by training on a reasonably large number of weakly related tasks will generalize better to novel tasks unseen during training, since this process encodes the inductive biases of multiple models. This hypothesis is based on the theoretical work of BIBREF18 . While our work aims at learning fixed-length distributed sentence representations, it is not always practical to assume that the entire “meaning” of a sentence can be encoded into a fixed-length vector. We merely hope to capture some of its characteristics that could be of use in a variety of tasks.
The primary contribution of our work is to combine the benefits of diverse sentence-representation learning objectives into a single multi-task framework. To the best of our knowledge, this is the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors. We demonstrate through extensive experimentation that representations learned in this way lead to improved performance across a diverse set of novel tasks not used in the learning of our representations. Such representations facilitate low-resource learning as exhibited by significant improvements to model performance for new tasks in the low labelled data regime - achieving comparable performance to a few models trained from scratch using only 6% of the available training set on the Quora duplicate question dataset.
Related Work
The problem of learning distributed representations of phrases and sentences dates back over a decade. For example, BIBREF19 present an additive and multiplicative linear composition function of the distributed representations of individual words. BIBREF20 combine symbolic and distributed representations of words using tensor products. Advances in learning better distributed representations of words BIBREF4 , BIBREF5 combined with deep learning have made it possible to learn complex non-linear composition functions of an arbitrary number of word embeddings using convolutional or recurrent neural networks (RNNs). A network's representation of the last element in a sequence, which is a non-linear composition of all inputs, is typically assumed to contain a squashed “summary” of the sentence. Most work in supervised learning for NLP builds task-specific representations of sentences rather than general-purpose ones.
Notably, skip-thought vectors BIBREF6 , an extension of the skip-gram model for word embeddings BIBREF4 to sentences, learn re-usable sentence representations from weakly labeled data. Unfortunately, these models take weeks or often months to train. BIBREF8 address this by considering faster alternatives such as sequential denoising autoencoders and shallow log-linear models. BIBREF21 , however, demonstrate that simple word embedding averages are comparable to more complicated models like skip-thoughts. More recently, BIBREF9 show that a completely supervised approach to learning sentence representations from natural language inference data outperforms all previous approaches on transfer learning benchmarks. Here we use the terms “transfer learning performance" on “transfer tasks” to mean the performance of sentence representations evaluated on tasks unseen during training. BIBREF10 demonstrated that representations learned by state-of-the-art large-scale NMT systems also generalize well to other tasks. However, their use of an attention mechanism prevents the learning of a single fixed-length vector representation of a sentence. As a result, they present a bi-attentive classification network that composes information present in all of the model's hidden states to achieve improvements over a corresponding model trained from scratch. BIBREF11 and BIBREF12 demonstrate that discourse-based objectives can also be leveraged to learn good sentence representations.
Our work is most similar to that of BIBREF22 , who train a many-to-many sequence-to-sequence model on a diverse set of weakly related tasks that includes machine translation, constituency parsing, image captioning, sequence autoencoding, and intra-sentence skip-thoughts. There are two key differences between that work and our own. First, like BIBREF10 , their use of an attention mechanism prevents learning a fixed-length vector representation for a sentence. Second, their work aims for improvements on the same tasks on which the model is trained, as opposed to learning re-usable sentence representations that transfer elsewhere.
We further present a fine-grained analysis of how different tasks contribute to the encoding of different information signals in our representations following work by BIBREF14 and BIBREF17 .
BIBREF23 similarly present a multi-task framework for textual entailment with task supervision at different levels of learning. “Universal" multi-task models have also been successfully explored in the context of computer vision problems BIBREF24 , BIBREF25 .
Sequence-to-Sequence Learning
Five out of the six tasks that we consider for multi-task learning are formulated as sequence-to-sequence problems BIBREF26 , BIBREF27 . Briefly, sequence-to-sequence models are a specific case of encoder-decoder models where the inputs and outputs are sequential. They directly model the conditional distribution of outputs given inputs $p(y|x)$. The input $x$ and output $y$ are sequences $(x_1, \ldots , x_m)$ and $(y_1, \ldots , y_n)$. The encoder produces a fixed length vector representation $h_x$ of the input, which the decoder then conditions on to generate an output. The decoder is auto-regressive and breaks down the joint probability of outputs into a product of conditional probabilities via the chain rule: $p(y|x) = \prod _{t=1}^{n} p(y_t | y_{<t}, h_x)$.
BIBREF26 and BIBREF27 use encoders and decoders parameterized as RNN variants such as Long Short-term Memory (LSTMs) BIBREF28 or Gated Recurrent Units (GRUs) BIBREF29 . The hidden representation $h_x$ is typically the last hidden state of the encoder RNN.
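As an illustration of this factorization, the following sketch accumulates the chain-rule log-probability given any auto-regressive decoder step function; `decoder_step` and the toy vocabulary are stand-ins introduced here, not details taken from the paper.

```python
import numpy as np

def sequence_log_prob(decoder_step, h_x, y_tokens):
    """log p(y|x) = sum_t log p(y_t | y_<t, h_x) under the chain-rule factorization.

    `decoder_step(prefix, h_x)` is a stand-in for any auto-regressive decoder that
    returns a probability distribution over the vocabulary for the next token.
    """
    log_prob = 0.0
    for t in range(len(y_tokens)):
        next_token_probs = decoder_step(y_tokens[:t], h_x)
        log_prob += np.log(next_token_probs[y_tokens[t]])
    return log_prob

# Toy usage with a uniform "decoder" over a 5-token vocabulary.
uniform_decoder = lambda prefix, h_x: np.full(5, 0.2)
print(sequence_log_prob(uniform_decoder, h_x=None, y_tokens=[1, 3, 2]))  # 3 * log(0.2)
```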
BIBREF30 alleviate the gradient bottleneck between the encoder and the decoder by introducing an attention mechanism that allows the decoder to condition on every hidden state of the encoder RNN instead of only the last one. In this work, as in BIBREF6 , BIBREF8 , we do not employ an attention mechanism. This enables us to obtain a single, fixed-length, distributed sentence representation. To diminish the effects of vanishing gradient, we condition every decoding step on the encoder hidden representation INLINEFORM0 . We use a GRU for the encoder and decoder in the interest of computational speed. The encoder is a bidirectional GRU while the decoder is a unidirectional conditional GRU whose parameterization is as follows: DISPLAYFORM0
The encoder representation INLINEFORM0 is provided as conditioning information to the reset gate, update gate and hidden state computation in the GRU via the parameters INLINEFORM1 , INLINEFORM2 and INLINEFORM3 to avoid attenuation of information from the encoder.
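A minimal NumPy sketch of one common conditional-GRU parameterization is given below; the exact equations and parameter names in the paper's formulation may differ, so the gate-level details should be read as assumptions of this sketch.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class ConditionalGRUCell:
    """GRU cell whose reset gate, update gate and candidate state are all
    conditioned on a fixed encoder summary h_x."""

    def __init__(self, input_dim, hidden_dim, cond_dim, rng=np.random.default_rng(0)):
        def mat(rows, cols):
            return rng.normal(0, 0.1, size=(rows, cols))
        self.Wr, self.Ur, self.Cr = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim), mat(hidden_dim, cond_dim)
        self.Wz, self.Uz, self.Cz = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim), mat(hidden_dim, cond_dim)
        self.Wh, self.Uh, self.Ch = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim), mat(hidden_dim, cond_dim)

    def step(self, x, h_prev, h_x):
        r = sigmoid(self.Wr @ x + self.Ur @ h_prev + self.Cr @ h_x)   # reset gate sees the encoder summary
        z = sigmoid(self.Wz @ x + self.Uz @ h_prev + self.Cz @ h_x)   # update gate sees the encoder summary
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h_prev) + self.Ch @ h_x)
        return (1.0 - z) * h_prev + z * h_tilde

cell = ConditionalGRUCell(input_dim=4, hidden_dim=8, cond_dim=8)
h = cell.step(x=np.ones(4), h_prev=np.zeros(8), h_x=np.ones(8))
print(h.shape)  # (8,)
```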
Multi-task Sequence-to-sequence Learning
BIBREF31 present a simple one-to-many multi-task sequence-to-sequence learning model for NMT that uses a shared encoder for English and task-specific decoders for multiple target languages. BIBREF22 extend this by also considering many-to-one (many encoders, one decoder) and many-to-many architectures. In this work, we consider a one-to-many model since it lends itself naturally to the idea of combining inductive biases from different training objectives. The same bidirectional GRU encodes the input sentences from different tasks into a compressed summary INLINEFORM0 which is then used to condition a task-specific GRU to produce the output sentence.
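The sketch below illustrates the one-to-many setup with a shared bidirectional GRU encoder and task-specific decoders in PyTorch; it is a simplification (for example, it shares a single embedding table, whereas the paper gives each decoder its own lookup), and the dimensions and task names are illustrative.

```python
import torch
import torch.nn as nn

class OneToManyMultiTaskModel(nn.Module):
    """Shared bidirectional GRU encoder with one GRU decoder per task."""

    def __init__(self, vocab_size, emb_dim, hid_dim, task_names):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.decoders = nn.ModuleDict({
            task: nn.GRU(emb_dim, 2 * hid_dim, batch_first=True) for task in task_names
        })
        self.out = nn.ModuleDict({
            task: nn.Linear(2 * hid_dim, vocab_size) for task in task_names
        })

    def encode(self, src_tokens):
        _, h_n = self.encoder(self.embed(src_tokens))   # h_n: (2, batch, hid_dim)
        return torch.cat([h_n[0], h_n[1]], dim=-1)      # fixed-length sentence vector

    def forward(self, task, src_tokens, tgt_tokens):
        h_x = self.encode(src_tokens).unsqueeze(0)      # used as the decoder's initial state
        dec_out, _ = self.decoders[task](self.embed(tgt_tokens), h_x)
        return self.out[task](dec_out)                  # logits over the target vocabulary

model = OneToManyMultiTaskModel(vocab_size=1000, emb_dim=32, hid_dim=64,
                                task_names=["fr_nmt", "de_nmt", "skip_thought_next"])
logits = model("fr_nmt", torch.randint(0, 1000, (2, 7)), torch.randint(0, 1000, (2, 9)))
print(logits.shape)  # torch.Size([2, 9, 1000])
```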
Training Objectives & Evaluation
Our motivation for multi-task training stems from theoretical insights presented in BIBREF18 . We refer readers to that work for a detailed discussion of results, but the conclusions most relevant to this discussion are (i) that learning multiple related tasks jointly results in good generalization as measured by the number of training examples required per task; and (ii) that inductive biases learned on sufficiently many training tasks are likely to be good for learning novel tasks drawn from the same environment.
We select the following training objectives to learn general-purpose sentence embeddings. Our desiderata for the task collection were: sufficient diversity, existence of fairly large datasets for training, and success as standalone training objectives for sentence representations.
Multi-task training setup
Multi-task training with different data sources for each task still poses open questions. For example: When does one switch to training on a different task? Should the switching be periodic? Do we weight each task equally? If not, what training ratios do we use?
BIBREF31 use periodic task alternations with equal training ratios for every task. In contrast, BIBREF22 alter the training ratios for each task based on the size of their respective training sets. Specifically, the training ratio for a particular task, INLINEFORM0 , is the fraction of the number of training examples in that task to the total number of training samples across all tasks. The authors then perform INLINEFORM1 parameter updates on task INLINEFORM2 before selecting a new task at random proportional to the training ratios, where N is a predetermined constant.
We take a simpler approach and pick a new sequence-to-sequence task to train on after every parameter update sampled uniformly. An NLI minibatch is interspersed after every ten parameter updates on sequence-to-sequence tasks (this was chosen so as to complete roughly 6 epochs of the dataset after 7 days of training). Our approach is described formally in the Algorithm below.
Model details can be found in section SECREF7 in the Appendix.
Given: a set of $k$ tasks with a common source language, a shared encoder $E$ across all tasks, and a set of $k$ task-specific decoders $D_1, \ldots , D_k$. Let $\theta$ denote the model parameters, $p = (p_1, \ldots , p_k)$ a probability vector denoting the probability of sampling each task such that $\sum _i p_i = 1$, $\mathcal {D}_1, \ldots , \mathcal {D}_k$ the datasets for each task, and $\mathcal {L}$ a loss function.
While $\theta$ has not converged: (1) sample a task $i \sim p$; (2) sample an input, output pair $(x, y)$ from $\mathcal {D}_i$; (3) compute the input representation $h_x = E(x)$; (4) compute the prediction $\hat{y} = D_i(h_x)$; (5) update $\theta \leftarrow \mathrm {Adam}(\nabla _{\theta } \mathcal {L}(\hat{y}, y))$.
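A small Python sketch of the update schedule implied by this procedure, with sequence-to-sequence tasks sampled uniformly and an NLI minibatch interspersed after every ten updates, is given below; the task names are placeholders.

```python
import random

def training_schedule(seq2seq_tasks, num_updates, nli_every=10, seed=0):
    """Yield the task to train on at each parameter update: a sequence-to-sequence
    task sampled uniformly at random, with an NLI minibatch interspersed after
    every `nli_every` sequence-to-sequence updates."""
    rng = random.Random(seed)
    schedule, s2s_updates = [], 0
    for _ in range(num_updates):
        schedule.append(rng.choice(seq2seq_tasks))
        s2s_updates += 1
        if s2s_updates % nli_every == 0:
            schedule.append("nli")
    return schedule

tasks = ["skip_thought_next", "skip_thought_prev", "fr_nmt", "de_nmt", "parsing"]
print(training_schedule(tasks, num_updates=25)[:13])
```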
Evaluation Strategies, Experimental Results & Discussion
In this section, we describe our approach to evaluate the quality of our learned representations, present the results of our evaluation and discuss our findings.
Evaluation Strategy
We follow a similar evaluation protocol to those presented in BIBREF6 , BIBREF8 , BIBREF9 which is to use our learned representations as features for a low complexity classifier (typically linear) on a novel supervised task/domain unseen during training without updating the parameters of our sentence representation model. We also consider such a transfer learning evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks BIBREF36 , BIBREF37 .
The choice of transfer tasks and evaluation framework are borrowed largely from BIBREF9 . We provide a condensed summary of the tasks in section SECREF10 in the Appendix but refer readers to their paper for a more detailed description.
https://github.com/kudkudak/word-embeddings-benchmarks/wiki
Experimental Results & Discussion
Table 2 presents the results of training logistic regression on 10 different supervised transfer tasks using different fixed-length sentence representations. Supervised approaches trained from scratch on some of these tasks are also presented for comparison. We present performance ablations when adding more tasks and increasing the number of hidden units in our GRU (+L). Ablation specifics are presented in section SECREF9 of the Appendix.
It is evident from Table 2 that adding more tasks improves the transfer performance of our model. Increasing the capacity of our sentence encoder with more hidden units (+L) as well as an additional layer (+2L) also leads to improved transfer performance. We observe gains of 1.1-2.0% on the sentiment classification tasks (MR, CR, SUBJ & MPQA) over Infersent. We demonstrate substantial gains on TREC (6% over Infersent and roughly 2% over the CNN-LSTM), outperforming even a competitive supervised baseline. We see similar gains (2.3%) on paraphrase identification (MRPC), closing the gap on supervised approaches trained from scratch. The addition of constituency parsing improves performance on sentence relatedness (SICK-R) and entailment (SICK-E), consistent with observations made by BIBREF48 .
In Table TABREF19 , we show that simply training an MLP on top of our fixed sentence representations outperforms several strong & complex supervised approaches that use attention mechanisms, even on this fairly large dataset. For example, we observe a 0.2-0.5% improvement over the decomposable attention model BIBREF49 . When using only a small fraction of the training data, indicated by the columns 1k-25k, we are able to outperform the Siamese and Multi-Perspective CNN using roughly 6% of the available training set. We also outperform the Deconv LVM model proposed by BIBREF47 in this low-resource setting.
Unlike BIBREF9 , who use pretrained GloVe word embeddings, we learn our word embeddings from scratch. Somewhat surprisingly, in Table TABREF18 we observe that the learned word embeddings are competitive with popular methods such as GloVe, word2vec, and fasttext BIBREF50 on the benchmarks presented by BIBREF36 and BIBREF37 .
In Table TABREF20 , we probe our sentence representations to determine if certain sentence characteristics and syntactic properties can be inferred following work by BIBREF17 and BIBREF14 . We observe that syntactic properties are better encoded with the addition of multi-lingual NMT and parsing. Representations learned solely from NLI do appear to encode syntax but incorporation into our multi-task framework does not amplify this signal. Similarly, we observe that sentence characteristics such as length and word order are better encoded with the addition of parsing.
In Appendix Table TABREF30 , we note that our sentence representations outperform skip-thoughts and are on par with Infersent for image-caption retrieval. We also observe that comparing sentences using cosine similarities correlates reasonably well with their relatedness on semantic textual similarity benchmarks (Appendix Table TABREF31 ).
We also present qualitative analysis of our learned representations by visualizations using dimensionality reduction techniques (Figure FIGREF11 ) and nearest neighbor exploration (Appendix Table TABREF32 ). Figure FIGREF11 shows t-sne plots of our sentence representations on three different datasets - SUBJ, TREC and DBpedia. DBpedia is a large corpus of sentences from Wikipedia labeled by category and used by BIBREF51 . Sentences appear to cluster reasonably well according to their labels. The clustering also appears better than that demonstrated in Figure 2 of BIBREF6 on TREC and SUBJ. Appendix Table TABREF32 contains sentences from the BookCorpus and their nearest neighbors. Sentences with some lexical overlap and similar discourse structure appear to be clustered together.
Conclusion & Future Work
We present a multi-task framework for learning general-purpose fixed-length sentence representations. Our primary motivation is to encapsulate the inductive biases of several diverse training signals used to learn sentence representations into a single model. Our multi-task framework includes a combination of sequence-to-sequence tasks such as multi-lingual NMT, constituency parsing and skip-thought vectors as well as a classification task - natural language inference. We demonstrate that the learned representations yield competitive or superior results to previous general-purpose sentence representation methods. We also observe that this approach produces good word embeddings.
In future work, we would like to understand and interpret the inductive biases that our model learns and observe how they change with the addition of different tasks beyond just our simple analysis of sentence characteristics and syntax. Having a rich, continuous sentence representation space could allow the application of state-of-the-art generative models of images such as that of BIBREF52 to language. One could also consider controllable text generation by directly manipulating the sentence representations and realizing it by decoding with a conditional language model.
Acknowledgements
The authors would like to thank Chinnadhurai Sankar, Sebastian Ruder, Eric Yuan, Tong Wang, Alessandro Sordoni, Guillaume Lample and Varsha Embar for useful discussions. We are also grateful to the PyTorch development team BIBREF53 . We thank NVIDIA for donating a DGX-1 computer used in this work and Fonds de recherche du Québec - Nature et technologies for funding.
Model Training
We present some architectural specifics and training details of our multi-task framework. Our shared encoder uses a common word embedding lookup table and GRU. We experiment with unidirectional, bidirectional and 2 layer bidirectional GRUs (details in Appendix section SECREF9 ). For each task, every decoder has its separate word embedding lookups, conditional GRUs and fully connected layers that project the GRU hidden states to the target vocabularies. The last hidden state of the encoder is used as the initial hidden state of the decoder and is also presented as input to all the gates of the GRU at every time step. For natural language inference, the same encoder is used to encode both the premise and hypothesis and a concatenation of their representations along with the absolute difference and hadamard product (as described in BIBREF9 ) are given to a single layer MLP with a dropout BIBREF55 rate of 0.3. All models use word embeddings of 512 dimensions and GRUs with either 1500 or 2048 hidden units. We used minibatches of 48 examples and the Adam BIBREF54 optimizer with a learning rate of 0.002. Models were trained for 7 days on an Nvidia Tesla P100-SXM2-16GB GPU. While BIBREF6 report close to a month of training, we only train for 7 days, made possible by advancements in GPU hardware and software (cuDNN RNNs).
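The premise-hypothesis feature combination and classifier described above can be sketched as follows; the hidden width, number of classes and exact dropout placement are assumptions of this sketch rather than details taken from the paper.

```python
import torch
import torch.nn as nn

def nli_features(u, v):
    """Combine premise and hypothesis sentence vectors as described in the text:
    concatenation, absolute difference and element-wise (Hadamard) product."""
    return torch.cat([u, v, torch.abs(u - v), u * v], dim=-1)

class NLIHead(nn.Module):
    def __init__(self, sent_dim, num_classes=3, hidden=512, dropout=0.3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Dropout(dropout),
            nn.Linear(4 * sent_dim, hidden),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, u, v):
        return self.mlp(nli_features(u, v))

u, v = torch.randn(8, 2048), torch.randn(8, 2048)
print(NLIHead(sent_dim=2048)(u, v).shape)  # torch.Size([8, 3])
```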
We did not tune any of the architectural details and hyperparameters owing to the fact that we were unable to identify any clear criterion on which to tune them. Gains in performance on a specific task do not often translate to better transfer performance.
Vocabulary Expansion & Representation Pooling
In addition to performing 10-fold cross-validation to determine the L2 regularization penalty on the logistic regression models, we also tune the way in which our sentence representations are generated from the hidden states corresponding to words in a sentence. For example, BIBREF6 use the last hidden state while BIBREF9 perform max-pooling across all of the hidden states. We consider both of these approaches and pick the one with better performance on the validation set. We note that max-pooling works best on sentiment tasks such as MR, CR, SUBJ and MPQA, while the last hidden state works better on all other tasks.
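The pooling selection described here amounts to a small model-selection loop, sketched below with a dummy scoring function standing in for the downstream validation accuracy.

```python
import numpy as np

def pool_hidden_states(hidden_states, strategy):
    """hidden_states: (seq_len, dim) array of encoder states for one sentence."""
    if strategy == "last":
        return hidden_states[-1]
    if strategy == "max":
        return hidden_states.max(axis=0)
    raise ValueError(strategy)

def pick_pooling(validation_score, hidden_states_per_sentence):
    """Choose the pooling strategy with the better downstream validation score."""
    scores = {s: validation_score([pool_hidden_states(h, s) for h in hidden_states_per_sentence])
              for s in ("last", "max")}
    return max(scores, key=scores.get)

# Toy usage with a dummy scoring function.
dummy = [np.random.rand(5, 16) for _ in range(3)]
print(pick_pooling(lambda feats: float(np.mean([f.sum() for f in feats])), dummy))
```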
We also employ vocabulary expansion on all tasks as in BIBREF6 by training a linear regression to map from the space of pre-trained word embeddings (GloVe) to our model's word embeddings.
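A minimal version of this vocabulary expansion step, using an ordinary least-squares map fit on the words shared between the GloVe vocabulary and the model vocabulary, might look as follows; the dimensionalities and random data are illustrative.

```python
import numpy as np

def fit_vocab_expansion(glove_vectors, model_vectors):
    """Least-squares linear map W from the GloVe space to the model's embedding
    space, fit on words present in both vocabularies."""
    W, *_ = np.linalg.lstsq(glove_vectors, model_vectors, rcond=None)
    return W

def expand(glove_vector, W):
    """Embed an out-of-vocabulary word by projecting its GloVe vector."""
    return glove_vector @ W

shared_glove = np.random.rand(1000, 300)   # GloVe vectors of shared words
shared_model = np.random.rand(1000, 512)   # model embeddings of the same words
W = fit_vocab_expansion(shared_glove, shared_model)
print(expand(np.random.rand(300), W).shape)  # (512,)
```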
Multi-task model details
This section describes the specifics of our multi-task ablations in the experiments section. These definitions hold for all tables except for TABREF18 and TABREF20 . We refer to skip-thought next as STN, French and German NMT as Fr and De, natural language inference as NLI, skip-thought previous as STP and parsing as Par.
+STN +Fr +De : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a forward GRU with 1500-dimensional hidden vectors and a bidirectional GRU, also with 1500-dimensional hidden vectors.
+STN +Fr +De +NLI : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 1500-dimensional hidden vectors and another bidirectional GRU with 1500-dimensional hidden vectors trained without NLI.
+STN +Fr +De +NLI +L : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without NLI.
+STN +Fr +De +NLI +L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without STP.
+STN +Fr +De +NLI +2L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a 2-layer bidirectional GRU with 2048-dimensional hidden vectors and a 1-layer bidirectional GRU with 2048-dimensional hidden vectors trained without STP.
+STN +Fr +De +NLI +L +STP +Par : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without Par.
In tables TABREF18 and TABREF20 we do not concatenate the representations of multiple models.
Description of evaluation tasks
BIBREF6 and BIBREF9 provide a detailed description of tasks that are typically used to evaluate sentence representations. We provide a condensed summary and refer readers to their work for a more thorough description.
Text Classification
We evaluate on text classification benchmarks - sentiment classification on movie reviews (MR), product reviews (CR) and Stanford sentiment (SST), question type classification (TREC), subjectivity/objectivity classification (SUBJ) and opinion polarity (MPQA). Representations are used to train a logistic regression classifier with 10-fold cross validation to tune the L2 weight penalty. The evaluation metric for all these tasks is classification accuracy.
Paraphrase Identification
We also evaluate on pairwise text classification tasks such as paraphrase identification on the Microsoft Research Paraphrase Corpus (MRPC). This is a binary classification problem to identify if two sentences are paraphrases of each other. The evaluation metrics are classification accuracy and F1.
Entailment and Semantic Relatedness
To test if similar sentences share similar representations, we evaluate on the SICK relatedness (SICK-R) task where a linear model is trained to output a score from 1 to 5 indicating the relatedness of two sentences. We also evaluate using the entailment labels in the same dataset (SICK-E) which is a binary classification problem. The evaluation metric for SICK-R is Pearson correlation and classification accuracy for SICK-E.
Semantic Textual Similarity
In this evaluation, we measure the relatedness of two sentences using only the cosine similarity between their representations. We use the semantic textual similarity (STS) benchmark tasks from 2012-2016 (STS12, STS13, STS14, STS15, STS16, STSB). The STS dataset contains sentences from a diverse set of data sources. The evaluation criterion is Pearson correlation.
Image-caption retrieval
Image-caption retrieval is typically formulated as a ranking task wherein images are retrieved and ranked based on a textual description and vice-versa. We use 113k training images from MSCOCO with 5k images for validation and 5k for testing. Image features are extracted using a pre-trained 110 layer ResNet. The evaluation criterion is Recall@K and the median K across 5 different splits of the data.
Quora Duplicate Question Classification
In addition to the above tasks which were considered by BIBREF9 , we also evaluate on the recently published Quora duplicate question dataset since it is an order of magnitude larger than the others (approximately 400,000 question pairs). The task is to correctly identify question pairs that are duplicates of one another, which we formulate as a binary classification problem. We use the same data splits as in BIBREF45 . Given the size of this data, we consider a more expressive classifier on top of the representations of both questions. Specifically, we train a 4-layer MLP with 1024 hidden units, with a dropout rate of 0.5 after every hidden layer. The evaluation criterion is classification accuracy. We also artificially create a low-resource setting by reducing the number of training examples to between 1,000 and 25,000 using the same splits as BIBREF47 .
Sentence Characteristics & Syntax
In an attempt to understand what information is encoded in sentence representations, we consider six different classification tasks where the objective is to predict sentence characteristics such as length, word content and word order BIBREF17 or syntactic properties such as active/passive, tense and the top syntactic sequence (TSS) from the parse tree of a sentence BIBREF14 .
The sentence characteristic tasks are set up in the same way as described in BIBREF17 . The length task is an 8-way classification problem where sentence lengths are binned into 8 ranges. The content task is formulated as a binary classification problem that takes a concatenation of a sentence representation INLINEFORM0 and a word representation INLINEFORM1 to determine if the word is contained in the sentence. The order task is an extension of the content task where a concatenation of the sentence representation and word representations of two words in the sentence is used to determine if the first word occurs before or after the second. We use a random subset of the 1-billion-word dataset for these experiments that was not used to train our multi-task representations.
The syntactic property tasks are set up in the same way as described in BIBREF14 . The passive and tense tasks are characterized as binary classification problems given a sentence's representation. The former's objective is to determine if a sentence is written in active/passive voice while the latter's objective is to determine if the sentence is in the past tense or not. The top syntactic sequence (TSS) task is a 20-way classification problem with the 19 most frequent top syntactic sequences and 1 miscellaneous class. We use the same dataset as the authors but different training, validation and test splits. | Unanswerable |
50be9e6203c40ed3db48ed37103f967ef0ea946c | 50be9e6203c40ed3db48ed37103f967ef0ea946c_0 | Q: How do they evaluate their sentence representations?
Text: Introduction
Transfer learning has driven a number of recent successes in computer vision and NLP. Computer vision tasks like image captioning BIBREF0 and visual question answering typically use CNNs pretrained on ImageNet BIBREF1 , BIBREF2 to extract representations of the image, while several natural language tasks such as reading comprehension and sequence labeling BIBREF3 have benefited from pretrained word embeddings BIBREF4 , BIBREF5 that are either fine-tuned for a specific task or held fixed.
Many neural NLP systems are initialized with pretrained word embeddings but learn their representations of words in context from scratch, in a task-specific manner from supervised learning signals. However, learning these representations reliably from scratch is not always feasible, especially in low-resource settings, where we believe that using general purpose sentence representations will be beneficial.
Some recent work has addressed this by learning general-purpose sentence representations BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . However, there exists no clear consensus yet on what training objective or methodology is best suited to this goal.
Understanding the inductive biases of distinct neural models is important for guiding progress in representation learning. BIBREF14 and BIBREF15 demonstrate that neural machine translation (NMT) systems appear to capture morphology and some syntactic properties. BIBREF14 also present evidence that sequence-to-sequence parsers BIBREF16 more strongly encode source language syntax. Similarly, BIBREF17 probe representations extracted by sequence autoencoders, word embedding averages, and skip-thought vectors with a multi-layer perceptron (MLP) classifier to study whether sentence characteristics such as length, word content and word order are encoded.
To generalize across a diverse set of tasks, it is important to build representations that encode several aspects of a sentence. Neural approaches to tasks such as skip-thoughts, machine translation, natural language inference, and constituency parsing likely have different inductive biases. Our work exploits this in the context of a simple one-to-many multi-task learning (MTL) framework, wherein a single recurrent sentence encoder is shared across multiple tasks. We hypothesize that sentence representations learned by training on a reasonably large number of weakly related tasks will generalize better to novel tasks unseen during training, since this process encodes the inductive biases of multiple models. This hypothesis is based on the theoretical work of BIBREF18 . While our work aims at learning fixed-length distributed sentence representations, it is not always practical to assume that the entire “meaning” of a sentence can be encoded into a fixed-length vector. We merely hope to capture some of its characteristics that could be of use in a variety of tasks.
The primary contribution of our work is to combine the benefits of diverse sentence-representation learning objectives into a single multi-task framework. To the best of our knowledge, this is the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors. We demonstrate through extensive experimentation that representations learned in this way lead to improved performance across a diverse set of novel tasks not used in the learning of our representations. Such representations facilitate low-resource learning as exhibited by significant improvements to model performance for new tasks in the low labelled data regime - achieving comparable performance to a few models trained from scratch using only 6% of the available training set on the Quora duplicate question dataset.
Related Work
The problem of learning distributed representations of phrases and sentences dates back over a decade. For example, BIBREF19 present an additive and multiplicative linear composition function of the distributed representations of individual words. BIBREF20 combine symbolic and distributed representations of words using tensor products. Advances in learning better distributed representations of words BIBREF4 , BIBREF5 combined with deep learning have made it possible to learn complex non-linear composition functions of an arbitrary number of word embeddings using convolutional or recurrent neural networks (RNNs). A network's representation of the last element in a sequence, which is a non-linear composition of all inputs, is typically assumed to contain a squashed “summary” of the sentence. Most work in supervised learning for NLP builds task-specific representations of sentences rather than general-purpose ones.
Notably, skip-thought vectors BIBREF6 , an extension of the skip-gram model for word embeddings BIBREF4 to sentences, learn re-usable sentence representations from weakly labeled data. Unfortunately, these models take weeks or often months to train. BIBREF8 address this by considering faster alternatives such as sequential denoising autoencoders and shallow log-linear models. BIBREF21 , however, demonstrate that simple word embedding averages are comparable to more complicated models like skip-thoughts. More recently, BIBREF9 show that a completely supervised approach to learning sentence representations from natural language inference data outperforms all previous approaches on transfer learning benchmarks. Here we use the terms “transfer learning performance" on “transfer tasks” to mean the performance of sentence representations evaluated on tasks unseen during training. BIBREF10 demonstrated that representations learned by state-of-the-art large-scale NMT systems also generalize well to other tasks. However, their use of an attention mechanism prevents the learning of a single fixed-length vector representation of a sentence. As a result, they present a bi-attentive classification network that composes information present in all of the model's hidden states to achieve improvements over a corresponding model trained from scratch. BIBREF11 and BIBREF12 demonstrate that discourse-based objectives can also be leveraged to learn good sentence representations.
Our work is most similar to that of BIBREF22 , who train a many-to-many sequence-to-sequence model on a diverse set of weakly related tasks that includes machine translation, constituency parsing, image captioning, sequence autoencoding, and intra-sentence skip-thoughts. There are two key differences between that work and our own. First, like BIBREF10 , their use of an attention mechanism prevents learning a fixed-length vector representation for a sentence. Second, their work aims for improvements on the same tasks on which the model is trained, as opposed to learning re-usable sentence representations that transfer elsewhere.
We further present a fine-grained analysis of how different tasks contribute to the encoding of different information signals in our representations following work by BIBREF14 and BIBREF17 .
BIBREF23 similarly present a multi-task framework for textual entailment with task supervision at different levels of learning. “Universal" multi-task models have also been successfully explored in the context of computer vision problems BIBREF24 , BIBREF25 .
Sequence-to-Sequence Learning
Five out of the six tasks that we consider for multi-task learning are formulated as sequence-to-sequence problems BIBREF26 , BIBREF27 . Briefly, sequence-to-sequence models are a specific case of encoder-decoder models where the inputs and outputs are sequential. They directly model the conditional distribution of outputs given inputs $p(y|x)$. The input $x$ and output $y$ are sequences $(x_1, \ldots , x_m)$ and $(y_1, \ldots , y_n)$. The encoder produces a fixed length vector representation $h_x$ of the input, which the decoder then conditions on to generate an output. The decoder is auto-regressive and breaks down the joint probability of outputs into a product of conditional probabilities via the chain rule: $p(y|x) = \prod _{t=1}^{n} p(y_t | y_{<t}, h_x)$.
BIBREF26 and BIBREF27 use encoders and decoders parameterized as RNN variants such as Long Short-term Memory (LSTMs) BIBREF28 or Gated Recurrent Units (GRUs) BIBREF29 . The hidden representation $h_x$ is typically the last hidden state of the encoder RNN.
BIBREF30 alleviate the gradient bottleneck between the encoder and the decoder by introducing an attention mechanism that allows the decoder to condition on every hidden state of the encoder RNN instead of only the last one. In this work, as in BIBREF6 , BIBREF8 , we do not employ an attention mechanism. This enables us to obtain a single, fixed-length, distributed sentence representation. To diminish the effects of vanishing gradient, we condition every decoding step on the encoder hidden representation INLINEFORM0 . We use a GRU for the encoder and decoder in the interest of computational speed. The encoder is a bidirectional GRU while the decoder is a unidirectional conditional GRU whose parameterization is as follows: DISPLAYFORM0
The encoder representation INLINEFORM0 is provided as conditioning information to the reset gate, update gate and hidden state computation in the GRU via the parameters INLINEFORM1 , INLINEFORM2 and INLINEFORM3 to avoid attenuation of information from the encoder.
Multi-task Sequence-to-sequence Learning
BIBREF31 present a simple one-to-many multi-task sequence-to-sequence learning model for NMT that uses a shared encoder for English and task-specific decoders for multiple target languages. BIBREF22 extend this by also considering many-to-one (many encoders, one decoder) and many-to-many architectures. In this work, we consider a one-to-many model since it lends itself naturally to the idea of combining inductive biases from different training objectives. The same bidirectional GRU encodes the input sentences from different tasks into a compressed summary INLINEFORM0 which is then used to condition a task-specific GRU to produce the output sentence.
Training Objectives & Evaluation
Our motivation for multi-task training stems from theoretical insights presented in BIBREF18 . We refer readers to that work for a detailed discussion of results, but the conclusions most relevant to this discussion are (i) that learning multiple related tasks jointly results in good generalization as measured by the number of training examples required per task; and (ii) that inductive biases learned on sufficiently many training tasks are likely to be good for learning novel tasks drawn from the same environment.
We select the following training objectives to learn general-purpose sentence embeddings. Our desiderata for the task collection were: sufficient diversity, existence of fairly large datasets for training, and success as standalone training objectives for sentence representations.
Multi-task training setup
Multi-task training with different data sources for each task still poses open questions. For example: When does one switch to training on a different task? Should the switching be periodic? Do we weight each task equally? If not, what training ratios do we use?
BIBREF31 use periodic task alternations with equal training ratios for every task. In contrast, BIBREF22 alter the training ratios for each task based on the size of their respective training sets. Specifically, the training ratio for a particular task, INLINEFORM0 , is the fraction of the number of training examples in that task to the total number of training samples across all tasks. The authors then perform INLINEFORM1 parameter updates on task INLINEFORM2 before selecting a new task at random proportional to the training ratios, where N is a predetermined constant.
We take a simpler approach and pick a new sequence-to-sequence task to train on after every parameter update sampled uniformly. An NLI minibatch is interspersed after every ten parameter updates on sequence-to-sequence tasks (this was chosen so as to complete roughly 6 epochs of the dataset after 7 days of training). Our approach is described formally in the Algorithm below.
Model details can be found in section SECREF7 in the Appendix.
Given: a set of $k$ tasks with a common source language, a shared encoder $E$ across all tasks, and a set of $k$ task-specific decoders $D_1, \ldots , D_k$. Let $\theta$ denote the model parameters, $p = (p_1, \ldots , p_k)$ a probability vector denoting the probability of sampling each task such that $\sum _i p_i = 1$, $\mathcal {D}_1, \ldots , \mathcal {D}_k$ the datasets for each task, and $\mathcal {L}$ a loss function.
While $\theta$ has not converged: (1) sample a task $i \sim p$; (2) sample an input, output pair $(x, y)$ from $\mathcal {D}_i$; (3) compute the input representation $h_x = E(x)$; (4) compute the prediction $\hat{y} = D_i(h_x)$; (5) update $\theta \leftarrow \mathrm {Adam}(\nabla _{\theta } \mathcal {L}(\hat{y}, y))$.
Evaluation Strategies, Experimental Results & Discussion
In this section, we describe our approach to evaluate the quality of our learned representations, present the results of our evaluation and discuss our findings.
Evaluation Strategy
We follow a similar evaluation protocol to those presented in BIBREF6 , BIBREF8 , BIBREF9 which is to use our learned representations as features for a low complexity classifier (typically linear) on a novel supervised task/domain unseen during training without updating the parameters of our sentence representation model. We also consider such a transfer learning evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks BIBREF36 , BIBREF37 .
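Concretely, the transfer protocol can be sketched as follows, with the frozen encoder represented by an arbitrary `encode` function and scikit-learn's cross-validated logistic regression standing in for the low-complexity classifier; the toy encoder and data at the bottom are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

def evaluate_transfer(encode, train_sents, train_labels, test_sents, test_labels):
    """Freeze the sentence encoder, train an L2-regularized logistic regression
    (penalty tuned by cross-validation) on the representations, report accuracy."""
    clf = LogisticRegressionCV(cv=10, max_iter=1000)
    clf.fit(np.stack([encode(s) for s in train_sents]), train_labels)
    preds = clf.predict(np.stack([encode(s) for s in test_sents]))
    return float(np.mean(preds == np.array(test_labels)))

# Toy usage with a stand-in "encoder" (noisy projection of sentence length).
rng = np.random.default_rng(0)
toy_encode = lambda s: rng.standard_normal(16) + len(s.split())
acc = evaluate_transfer(toy_encode, ["a b", "a b c d"] * 20, [0, 1] * 20,
                        ["x y", "x y z w"] * 5, [0, 1] * 5)
print(acc)
```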
The choice of transfer tasks and evaluation framework are borrowed largely from BIBREF9 . We provide a condensed summary of the tasks in section SECREF10 in the Appendix but refer readers to their paper for a more detailed description.
https://github.com/kudkudak/word-embeddings-benchmarks/wiki
Experimental Results & Discussion
Table 2 presents the results of training logistic regression on 10 different supervised transfer tasks using different fixed-length sentence representations. Supervised approaches trained from scratch on some of these tasks are also presented for comparison. We present performance ablations when adding more tasks and increasing the number of hidden units in our GRU (+L). Ablation specifics are presented in section SECREF9 of the Appendix.
It is evident from Table 2 that adding more tasks improves the transfer performance of our model. Increasing the capacity of our sentence encoder with more hidden units (+L) as well as an additional layer (+2L) also leads to improved transfer performance. We observe gains of 1.1-2.0% on the sentiment classification tasks (MR, CR, SUBJ & MPQA) over Infersent. We demonstrate substantial gains on TREC (6% over Infersent and roughly 2% over the CNN-LSTM), outperforming even a competitive supervised baseline. We see similar gains (2.3%) on paraphrase identification (MRPC), closing the gap on supervised approaches trained from scratch. The addition of constituency parsing improves performance on sentence relatedness (SICK-R) and entailment (SICK-E), consistent with observations made by BIBREF48 .
In Table TABREF19 , we show that simply training an MLP on top of our fixed sentence representations outperforms several strong & complex supervised approaches that use attention mechanisms, even on this fairly large dataset. For example, we observe a 0.2-0.5% improvement over the decomposable attention model BIBREF49 . When using only a small fraction of the training data, indicated by the columns 1k-25k, we are able to outperform the Siamese and Multi-Perspective CNN using roughly 6% of the available training set. We also outperform the Deconv LVM model proposed by BIBREF47 in this low-resource setting.
Unlike BIBREF9 , who use pretrained GloVe word embeddings, we learn our word embeddings from scratch. Somewhat surprisingly, in Table TABREF18 we observe that the learned word embeddings are competitive with popular methods such as GloVe, word2vec, and fasttext BIBREF50 on the benchmarks presented by BIBREF36 and BIBREF37 .
In Table TABREF20 , we probe our sentence representations to determine if certain sentence characteristics and syntactic properties can be inferred following work by BIBREF17 and BIBREF14 . We observe that syntactic properties are better encoded with the addition of multi-lingual NMT and parsing. Representations learned solely from NLI do appear to encode syntax but incorporation into our multi-task framework does not amplify this signal. Similarly, we observe that sentence characteristics such as length and word order are better encoded with the addition of parsing.
In Appendix Table TABREF30 , we note that our sentence representations outperform skip-thoughts and are on par with Infersent for image-caption retrieval. We also observe that comparing sentences using cosine similarities correlates reasonably well with their relatedness on semantic textual similarity benchmarks (Appendix Table TABREF31 ).
We also present qualitative analysis of our learned representations by visualizations using dimensionality reduction techniques (Figure FIGREF11 ) and nearest neighbor exploration (Appendix Table TABREF32 ). Figure FIGREF11 shows t-sne plots of our sentence representations on three different datasets - SUBJ, TREC and DBpedia. DBpedia is a large corpus of sentences from Wikipedia labeled by category and used by BIBREF51 . Sentences appear to cluster reasonably well according to their labels. The clustering also appears better than that demonstrated in Figure 2 of BIBREF6 on TREC and SUBJ. Appendix Table TABREF32 contains sentences from the BookCorpus and their nearest neighbors. Sentences with some lexical overlap and similar discourse structure appear to be clustered together.
Conclusion & Future Work
We present a multi-task framework for learning general-purpose fixed-length sentence representations. Our primary motivation is to encapsulate the inductive biases of several diverse training signals used to learn sentence representations into a single model. Our multi-task framework includes a combination of sequence-to-sequence tasks such as multi-lingual NMT, constituency parsing and skip-thought vectors as well as a classification task - natural language inference. We demonstrate that the learned representations yield competitive or superior results to previous general-purpose sentence representation methods. We also observe that this approach produces good word embeddings.
In future work, we would like to understand and interpret the inductive biases that our model learns and observe how they change with the addition of different tasks beyond just our simple analysis of sentence characteristics and syntax. Having a rich, continuous sentence representation space could allow the application of state-of-the-art generative models of images such as that of BIBREF52 to language. One could also consider controllable text generation by directly manipulating the sentence representations and realizing it by decoding with a conditional language model.
Acknowledgements
The authors would like to thank Chinnadhurai Sankar, Sebastian Ruder, Eric Yuan, Tong Wang, Alessandro Sordoni, Guillaume Lample and Varsha Embar for useful discussions. We are also grateful to the PyTorch development team BIBREF53 . We thank NVIDIA for donating a DGX-1 computer used in this work and Fonds de recherche du Québec - Nature et technologies for funding.
Model Training
We present some architectural specifics and training details of our multi-task framework. Our shared encoder uses a common word embedding lookup table and GRU. We experiment with unidirectional, bidirectional and 2 layer bidirectional GRUs (details in Appendix section SECREF9 ). For each task, every decoder has its separate word embedding lookups, conditional GRUs and fully connected layers that project the GRU hidden states to the target vocabularies. The last hidden state of the encoder is used as the initial hidden state of the decoder and is also presented as input to all the gates of the GRU at every time step. For natural language inference, the same encoder is used to encode both the premise and hypothesis and a concatenation of their representations along with the absolute difference and hadamard product (as described in BIBREF9 ) are given to a single layer MLP with a dropout BIBREF55 rate of 0.3. All models use word embeddings of 512 dimensions and GRUs with either 1500 or 2048 hidden units. We used minibatches of 48 examples and the Adam BIBREF54 optimizer with a learning rate of 0.002. Models were trained for 7 days on an Nvidia Tesla P100-SXM2-16GB GPU. While BIBREF6 report close to a month of training, we only train for 7 days, made possible by advancements in GPU hardware and software (cuDNN RNNs).
We did not tune any of the architectural details and hyperparameters owing to the fact that we were unable to identify any clear criterion on which to tune them. Gains in performance on a specific task do not often translate to better transfer performance.
Vocabulary Expansion & Representation Pooling
In addition to performing 10-fold cross-validation to determine the L2 regularization penalty on the logistic regression models, we also tune the way in which our sentence representations are generated from the hidden states corresponding to words in a sentence. For example, BIBREF6 use the last hidden state while BIBREF9 perform max-pooling across all of the hidden states. We consider both of these approaches and pick the one with better performance on the validation set. We note that max-pooling works best on sentiment tasks such as MR, CR, SUBJ and MPQA, while the last hidden state works better on all other tasks.
We also employ vocabulary expansion on all tasks as in BIBREF6 by training a linear regression to map from the space of pre-trained word embeddings (GloVe) to our model's word embeddings.
Multi-task model details
This section describes the specifics of our multi-task ablations in the experiments section. These definitions hold for all tables except for TABREF18 and TABREF20 . We refer to skip-thought next as STN, French and German NMT as Fr and De, natural language inference as NLI, skip-thought previous as STP and parsing as Par.
+STN +Fr +De : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a forward GRU with 1500-dimensional hidden vectors and a bidirectional GRU, also with 1500-dimensional hidden vectors.
+STN +Fr +De +NLI : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 1500-dimensional hidden vectors and another bidirectional GRU with 1500-dimensional hidden vectors trained without NLI.
+STN +Fr +De +NLI +L : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without NLI.
+STN +Fr +De +NLI +L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without STP.
+STN +Fr +De +NLI +2L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a 2-layer bidirectional GRU with 2048-dimensional hidden vectors and a 1-layer bidirectional GRU with 2048-dimensional hidden vectors trained without STP.
+STN +Fr +De +NLI +L +STP +Par : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without Par.
In tables TABREF18 and TABREF20 we do not concatenate the representations of multiple models.
Description of evaluation tasks
BIBREF6 and BIBREF9 provide a detailed description of tasks that are typically used to evaluate sentence representations. We provide a condensed summary and refer readers to their work for a more thorough description.
Text Classification
We evaluate on text classification benchmarks - sentiment classification on movie reviews (MR), product reviews (CR) and Stanford sentiment (SST), question type classification (TREC), subjectivity/objectivity classification (SUBJ) and opinion polarity (MPQA). Representations are used to train a logistic regression classifier with 10-fold cross validation to tune the L2 weight penalty. The evaluation metric for all these tasks is classification accuracy.
Paraphrase Identification
We also evaluate on pairwise text classification tasks such as paraphrase identification on the Microsoft Research Paraphrase Corpus (MRPC). This is a binary classification problem to identify if two sentences are paraphrases of each other. The evaluation metrics are classification accuracy and F1.
Entailment and Semantic Relatedness
To test if similar sentences share similar representations, we evaluate on the SICK relatedness (SICK-R) task where a linear model is trained to output a score from 1 to 5 indicating the relatedness of two sentences. We also evaluate using the entailment labels in the same dataset (SICK-E) which is a binary classification problem. The evaluation metric for SICK-R is Pearson correlation and classification accuracy for SICK-E.
Semantic Textual Similarity
In this evaluation, we measure the relatedness of two sentences using only the cosine similarity between their representations. We use the semantic textual similarity (STS) benchmark tasks from 2012-2016 (STS12, STS13, STS14, STS15, STS16, STSB). The STS dataset contains sentences from a diverse set of data sources. The evaluation criterion is Pearson correlation.
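This unsupervised evaluation reduces to cosine similarity plus a Pearson correlation, as in the sketch below; the bag-of-characters encoder and the two toy pairs are illustrative stand-ins.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sts_pearson(encode, sentence_pairs, gold_scores):
    """Score each pair by cosine similarity of its sentence vectors and report
    the Pearson correlation with the gold similarity scores."""
    sims = [cosine(encode(a), encode(b)) for a, b in sentence_pairs]
    return float(np.corrcoef(sims, gold_scores)[0, 1])

# Toy usage with a bag-of-characters "encoder".
toy_encode = lambda s: np.array([s.count(c) for c in "abcdefghijklmnopqrstuvwxyz "], dtype=float)
pairs = [("a man is walking", "a man walks"), ("a dog runs", "stock markets fell")]
print(sts_pearson(toy_encode, pairs, gold_scores=[4.8, 0.5]))
```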
Image-caption retrieval
Image-caption retrieval is typically formulated as a ranking task wherein images are retrieved and ranked based on a textual description and vice-versa. We use 113k training images from MSCOCO with 5k images for validation and 5k for testing. Image features are extracted using a pre-trained 110 layer ResNet. The evaluation criterion is Recall@K and the median K across 5 different splits of the data.
Quora Duplicate Question Classification
In addition to the above tasks which were considered by BIBREF9 , we also evaluate on the recently published Quora duplicate question dataset since it is an order of magnitude larger than the others (approximately 400,000 question pairs). The task is to correctly identify question pairs that are duplicates of one another, which we formulate as a binary classification problem. We use the same data splits as in BIBREF45 . Given the size of this data, we consider a more expressive classifier on top of the representations of both questions. Specifically, we train a 4-layer MLP with 1024 hidden units, with a dropout rate of 0.5 after every hidden layer. The evaluation criterion is classification accuracy. We also artificially create a low-resource setting by reducing the number of training examples to between 1,000 and 25,000 using the same splits as BIBREF47 .
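The classifier described here can be sketched as a 4-layer MLP with 1024 hidden units and dropout 0.5 after every hidden layer; how the two question vectors are combined at the input is an assumption of this sketch rather than a detail stated in the text.

```python
import torch
import torch.nn as nn

class QuoraDuplicateClassifier(nn.Module):
    """4-layer MLP over a simple combination of the two question vectors."""

    def __init__(self, sent_dim, hidden=1024, dropout=0.5):
        super().__init__()
        layers, in_dim = [], 4 * sent_dim
        for _ in range(4):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(dropout)]
            in_dim = hidden
        layers.append(nn.Linear(hidden, 2))
        self.net = nn.Sequential(*layers)

    def forward(self, q1, q2):
        feats = torch.cat([q1, q2, torch.abs(q1 - q2), q1 * q2], dim=-1)
        return self.net(feats)

clf = QuoraDuplicateClassifier(sent_dim=2048)
print(clf(torch.randn(4, 2048), torch.randn(4, 2048)).shape)  # torch.Size([4, 2])
```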
Sentence Characteristics & Syntax
In an attempt to understand what information is encoded in sentence representations, we consider six different classification tasks where the objective is to predict sentence characteristics such as length, word content and word order BIBREF17 or syntactic properties such as active/passive, tense and the top syntactic sequence (TSS) from the parse tree of a sentence BIBREF14 .
The sentence characteristic tasks are set up in the same way as described in BIBREF17 . The length task is an 8-way classification problem where sentence lengths are binned into 8 ranges. The content task is formulated as a binary classification problem that takes a concatenation of a sentence representation INLINEFORM0 and a word representation INLINEFORM1 to determine if the word is contained in the sentence. The order task is an extension of the content task where a concatenation of the sentence representation and the word representations of two words in the sentence is used to determine if the first word occurs before or after the second. We use a random subset of the 1-billion-word dataset that was not used to train our multi-task representations for these experiments.
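For illustration, the probe inputs could be assembled as below (a sketch; `sent_vec` and `word_vec` stand for vectors taken from the frozen sentence encoder and its word embedding table):

```python
import numpy as np

def content_features(sent_vec, word_vec):
    # Binary label: is the word contained in the sentence?
    return np.concatenate([sent_vec, word_vec])

def order_features(sent_vec, word_vec_a, word_vec_b):
    # Binary label: does the first word occur before the second in the sentence?
    return np.concatenate([sent_vec, word_vec_a, word_vec_b])
```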
The syntactic properties tasks are set up in the same way as described in BIBREF14 . The passive and tense tasks are characterized as binary classification problems given a sentence's representation. The former's objective is to determine if a sentence is written in active/passive voice while the latter's objective is to determine if the sentence is in the past tense or not. The top syntactic sequence (TSS) is a 20-way classification problem with the 19 most frequent top syntactic sequences and 1 miscellaneous class. We use the same dataset as the authors but different training, validation and test splits. | standard benchmarks BIBREF36 , BIBREF37, to use our learned representations as features for a low complexity classifier (typically linear) on a novel supervised task/domain unseen during training without updating the parameters, transfer learning evaluation in an artificially constructed low-resource setting
36a9230fadf997d3b0c5fc8af8d89bd48bf04f12 | 36a9230fadf997d3b0c5fc8af8d89bd48bf04f12_0 | Q: Which model architecture do they use for sentence encoding?
Text: Introduction
Transfer learning has driven a number of recent successes in computer vision and NLP. Computer vision tasks like image captioning BIBREF0 and visual question answering typically use CNNs pretrained on ImageNet BIBREF1 , BIBREF2 to extract representations of the image, while several natural language tasks such as reading comprehension and sequence labeling BIBREF3 have benefited from pretrained word embeddings BIBREF4 , BIBREF5 that are either fine-tuned for a specific task or held fixed.
Many neural NLP systems are initialized with pretrained word embeddings but learn their representations of words in context from scratch, in a task-specific manner from supervised learning signals. However, learning these representations reliably from scratch is not always feasible, especially in low-resource settings, where we believe that using general purpose sentence representations will be beneficial.
Some recent work has addressed this by learning general-purpose sentence representations BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . However, there exists no clear consensus yet on what training objective or methodology is best suited to this goal.
Understanding the inductive biases of distinct neural models is important for guiding progress in representation learning. BIBREF14 and BIBREF15 demonstrate that neural machine translation (NMT) systems appear to capture morphology and some syntactic properties. BIBREF14 also present evidence that sequence-to-sequence parsers BIBREF16 more strongly encode source language syntax. Similarly, BIBREF17 probe representations extracted by sequence autoencoders, word embedding averages, and skip-thought vectors with a multi-layer perceptron (MLP) classifier to study whether sentence characteristics such as length, word content and word order are encoded.
To generalize across a diverse set of tasks, it is important to build representations that encode several aspects of a sentence. Neural approaches to tasks such as skip-thoughts, machine translation, natural language inference, and constituency parsing likely have different inductive biases. Our work exploits this in the context of a simple one-to-many multi-task learning (MTL) framework, wherein a single recurrent sentence encoder is shared across multiple tasks. We hypothesize that sentence representations learned by training on a reasonably large number of weakly related tasks will generalize better to novel tasks unseen during training, since this process encodes the inductive biases of multiple models. This hypothesis is based on the theoretical work of BIBREF18 . While our work aims at learning fixed-length distributed sentence representations, it is not always practical to assume that the entire “meaning” of a sentence can be encoded into a fixed-length vector. We merely hope to capture some of its characteristics that could be of use in a variety of tasks.
The primary contribution of our work is to combine the benefits of diverse sentence-representation learning objectives into a single multi-task framework. To the best of our knowledge, this is the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors. We demonstrate through extensive experimentation that representations learned in this way lead to improved performance across a diverse set of novel tasks not used in the learning of our representations. Such representations facilitate low-resource learning as exhibited by significant improvements to model performance for new tasks in the low labelled data regime - achieving comparable performance to a few models trained from scratch using only 6% of the available training set on the Quora duplicate question dataset.
Related Work
The problem of learning distributed representations of phrases and sentences dates back over a decade. For example, BIBREF19 present an additive and multiplicative linear composition function of the distributed representations of individual words. BIBREF20 combine symbolic and distributed representations of words using tensor products. Advances in learning better distributed representations of words BIBREF4 , BIBREF5 combined with deep learning have made it possible to learn complex non-linear composition functions of an arbitrary number of word embeddings using convolutional or recurrent neural networks (RNNs). A network's representation of the last element in a sequence, which is a non-linear composition of all inputs, is typically assumed to contain a squashed “summary” of the sentence. Most work in supervised learning for NLP builds task-specific representations of sentences rather than general-purpose ones.
Notably, skip-thought vectors BIBREF6 , an extension of the skip-gram model for word embeddings BIBREF4 to sentences, learn re-usable sentence representations from weakly labeled data. Unfortunately, these models take weeks or often months to train. BIBREF8 address this by considering faster alternatives such as sequential denoising autoencoders and shallow log-linear models. BIBREF21 , however, demonstrate that simple word embedding averages are comparable to more complicated models like skip-thoughts. More recently, BIBREF9 show that a completely supervised approach to learning sentence representations from natural language inference data outperforms all previous approaches on transfer learning benchmarks. Here we use the terms “transfer learning performance" on “transfer tasks” to mean the performance of sentence representations evaluated on tasks unseen during training. BIBREF10 demonstrated that representations learned by state-of-the-art large-scale NMT systems also generalize well to other tasks. However, their use of an attention mechanism prevents the learning of a single fixed-length vector representation of a sentence. As a result, they present a bi-attentive classification network that composes information present in all of the model's hidden states to achieve improvements over a corresponding model trained from scratch. BIBREF11 and BIBREF12 demonstrate that discourse-based objectives can also be leveraged to learn good sentence representations.
Our work is most similar to that of BIBREF22 , who train a many-to-many sequence-to-sequence model on a diverse set of weakly related tasks that includes machine translation, constituency parsing, image captioning, sequence autoencoding, and intra-sentence skip-thoughts. There are two key differences between that work and our own. First, like BIBREF10 , their use of an attention mechanism prevents learning a fixed-length vector representation for a sentence. Second, their work aims for improvements on the same tasks on which the model is trained, as opposed to learning re-usable sentence representations that transfer elsewhere.
We further present a fine-grained analysis of how different tasks contribute to the encoding of different information signals in our representations following work by BIBREF14 and BIBREF17 .
BIBREF23 similarly present a multi-task framework for textual entailment with task supervision at different levels of learning. “Universal" multi-task models have also been successfully explored in the context of computer vision problems BIBREF24 , BIBREF25 .
Sequence-to-Sequence Learning
Five out of the six tasks that we consider for multi-task learning are formulated as sequence-to-sequence problems BIBREF26 , BIBREF27 . Briefly, sequence-to-sequence models are a specific case of encoder-decoder models where the inputs and outputs are sequential. They directly model the conditional distribution of outputs given inputs INLINEFORM0 . The input INLINEFORM1 and output INLINEFORM2 are sequences INLINEFORM3 and INLINEFORM4 . The encoder produces a fixed length vector representation INLINEFORM5 of the input, which the decoder then conditions on to generate an output. The decoder is auto-regressive and breaks down the joint probability of outputs into a product of conditional probabilities via the chain rule: INLINEFORM6
BIBREF26 and BIBREF27 use encoders and decoders parameterized as RNN variants such as Long Short-term Memory (LSTMs) BIBREF28 or Gated Recurrent Units (GRUs) BIBREF29 . The hidden representation INLINEFORM0 is typically the last hidden state of the encoder RNN.
BIBREF30 alleviate the gradient bottleneck between the encoder and the decoder by introducing an attention mechanism that allows the decoder to condition on every hidden state of the encoder RNN instead of only the last one. In this work, as in BIBREF6 , BIBREF8 , we do not employ an attention mechanism. This enables us to obtain a single, fixed-length, distributed sentence representation. To diminish the effects of vanishing gradient, we condition every decoding step on the encoder hidden representation INLINEFORM0 . We use a GRU for the encoder and decoder in the interest of computational speed. The encoder is a bidirectional GRU while the decoder is a unidirectional conditional GRU whose parameterization is as follows: DISPLAYFORM0
The encoder representation INLINEFORM0 is provided as conditioning information to the reset gate, update gate and hidden state computation in the GRU via the parameters INLINEFORM1 , INLINEFORM2 and INLINEFORM3 to avoid attenuation of information from the encoder.
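Since the exact parameterization behind DISPLAYFORM0 is not reproduced in this text, the following PyTorch cell is only an illustrative conditional-GRU sketch in which the encoder summary is added to the reset gate, update gate and candidate-state computations:

```python
import torch
import torch.nn as nn

class ConditionalGRUCell(nn.Module):
    """One decoder step conditioned on the fixed encoder summary h_enc."""
    def __init__(self, inp_dim, hid_dim, enc_dim):
        super().__init__()
        self.W = nn.Linear(inp_dim, 3 * hid_dim)   # input contributions
        self.U = nn.Linear(hid_dim, 3 * hid_dim)   # recurrent contributions
        self.C = nn.Linear(enc_dim, 3 * hid_dim)   # encoder-summary contributions

    def forward(self, x_t, h_prev, h_enc):
        xz, xr, xh = self.W(x_t).chunk(3, dim=-1)
        hz, hr, hh = self.U(h_prev).chunk(3, dim=-1)
        cz, cr, ch = self.C(h_enc).chunk(3, dim=-1)
        z = torch.sigmoid(xz + hz + cz)             # update gate
        r = torch.sigmoid(xr + hr + cr)             # reset gate
        h_tilde = torch.tanh(xh + r * hh + ch)      # candidate state
        return (1.0 - z) * h_prev + z * h_tilde
```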
Multi-task Sequence-to-sequence Learning
BIBREF31 present a simple one-to-many multi-task sequence-to-sequence learning model for NMT that uses a shared encoder for English and task-specific decoders for multiple target languages. BIBREF22 extend this by also considering many-to-one (many encoders, one decoder) and many-to-many architectures. In this work, we consider a one-to-many model since it lends itself naturally to the idea of combining inductive biases from different training objectives. The same bidirectional GRU encodes the input sentences from different tasks into a compressed summary INLINEFORM0 which is then used to condition a task-specific GRU to produce the output sentence.
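A structural sketch of this one-to-many setup; the task decoders are left abstract and their interface is an assumption:

```python
import torch
import torch.nn as nn

class OneToManyModel(nn.Module):
    def __init__(self, vocab_size, emb_dim, hid_dim, task_decoders):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)    # shared source-side lookup
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.decoders = nn.ModuleDict(task_decoders)      # one decoder per task

    def encode(self, src):
        _, h_n = self.encoder(self.embed(src))
        # concatenate the final forward and backward states into one summary vector
        return torch.cat([h_n[0], h_n[1]], dim=-1)

    def forward(self, task, src, tgt):
        return self.decoders[task](self.encode(src), tgt)
```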
Training Objectives & Evaluation
Our motivation for multi-task training stems from theoretical insights presented in BIBREF18 . We refer readers to that work for a detailed discussion of results, but the conclusions most relevant to this discussion are (i) that learning multiple related tasks jointly results in good generalization as measured by the number of training examples required per task; and (ii) that inductive biases learned on sufficiently many training tasks are likely to be good for learning novel tasks drawn from the same environment.
We select the following training objectives to learn general-purpose sentence embeddings. Our desiderata for the task collection were: sufficient diversity, existence of fairly large datasets for training, and success as standalone training objectives for sentence representations.
Multi-task training setup
Multi-task training with different data sources for each task still poses open questions. For example: When does one switch to training on a different task? Should the switching be periodic? Do we weight each task equally? If not, what training ratios do we use?
BIBREF31 use periodic task alternations with equal training ratios for every task. In contrast, BIBREF22 alter the training ratios for each task based on the size of their respective training sets. Specifically, the training ratio for a particular task, INLINEFORM0 , is the fraction of the number of training examples in that task to the total number of training samples across all tasks. The authors then perform INLINEFORM1 parameter updates on task INLINEFORM2 before selecting a new task at random proportional to the training ratios, where N is a predetermined constant.
We take a simpler approach and pick a new sequence-to-sequence task to train on after every parameter update sampled uniformly. An NLI minibatch is interspersed after every ten parameter updates on sequence-to-sequence tasks (this was chosen so as to complete roughly 6 epochs of the dataset after 7 days of training). Our approach is described formally in the Algorithm below.
Model details can be found in section SECREF7 in the Appendix.
A set of INLINEFORM0 tasks with a common source language, a shared encoder INLINEFORM1 across all tasks and a set of INLINEFORM2 task specific decoders INLINEFORM3 . Let INLINEFORM4 denote each model's parameters, INLINEFORM5 a probability vector ( INLINEFORM6 ) denoting the probability of sampling a task such that INLINEFORM7 , datasets for each task INLINEFORM8 and a loss function INLINEFORM9 .
While INLINEFORM0 has not converged: sample a task INLINEFORM1 ; sample an input-output pair INLINEFORM2 from its dataset; compute the input representation INLINEFORM3 with the shared encoder; produce the prediction INLINEFORM4 with the task-specific decoder; compute the loss INLINEFORM5 and update the parameters with Adam INLINEFORM6 .
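A sketch of this schedule; the `seq2seq_loss` and `nli_loss` methods are hypothetical stand-ins for the task-specific objectives:

```python
import random

def train(model, seq2seq_iters, nli_iter, optimizer, num_updates):
    # seq2seq_iters: dict mapping task name -> infinite minibatch iterator.
    for step in range(1, num_updates + 1):
        task = random.choice(list(seq2seq_iters))     # uniform task sampling
        loss = model.seq2seq_loss(task, next(seq2seq_iters[task]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if step % 10 == 0:                            # intersperse an NLI minibatch
            loss = model.nli_loss(next(nli_iter))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```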
Evaluation Strategies, Experimental Results & Discussion
In this section, we describe our approach to evaluate the quality of our learned representations, present the results of our evaluation and discuss our findings.
Evaluation Strategy
We follow a similar evaluation protocol to those presented in BIBREF6 , BIBREF8 , BIBREF9 which is to use our learned representations as features for a low complexity classifier (typically linear) on a novel supervised task/domain unseen during training without updating the parameters of our sentence representation model. We also consider such a transfer learning evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks BIBREF36 , BIBREF37 .
The choice of transfer tasks and evaluation framework are borrowed largely from BIBREF9 . We provide a condensed summary of the tasks in section SECREF10 in the Appendix but refer readers to their paper for a more detailed description.
https://github.com/kudkudak/word-embeddings-benchmarks/wiki
Experimental Results & Discussion
Table 2 presents the results of training logistic regression on 10 different supervised transfer tasks using different fixed-length sentence representations. Supervised approaches trained from scratch on some of these tasks are also presented for comparison. We present performance ablations when adding more tasks and increasing the number of hidden units in our GRU (+L). Ablation specifics are presented in section SECREF9 of the Appendix.
It is evident from Table 2 that adding more tasks improves the transfer performance of our model. Increasing the capacity of our sentence encoder with more hidden units (+L) as well as adding an additional layer (+2L) also leads to improved transfer performance. We observe gains of 1.1-2.0% on the sentiment classification tasks (MR, CR, SUBJ & MPQA) over Infersent. We demonstrate substantial gains on TREC (6% over Infersent and roughly 2% over the CNN-LSTM), outperforming even a competitive supervised baseline. We see similar gains (2.3%) on paraphrase identification (MRPC), closing the gap on supervised approaches trained from scratch. The addition of constituency parsing improves performance on sentence relatedness (SICK-R) and entailment (SICK-E), consistent with observations made by BIBREF48 .
In Table TABREF19 , we show that simply training an MLP on top of our fixed sentence representations outperforms several strong & complex supervised approaches that use attention mechanisms, even on this fairly large dataset. For example, we observe a 0.2-0.5% improvement over the decomposable attention model BIBREF49 . When using only a small fraction of the training data, indicated by the columns 1k-25k, we are able to outperform the Siamese and Multi-Perspective CNN using roughly 6% of the available training set. We also outperform the Deconv LVM model proposed by BIBREF47 in this low-resource setting.
Unlike BIBREF9 , who use pretrained GloVe word embeddings, we learn our word embeddings from scratch. Somewhat surprisingly, in Table TABREF18 we observe that the learned word embeddings are competitive with popular methods such as GloVe, word2vec, and fasttext BIBREF50 on the benchmarks presented by BIBREF36 and BIBREF37 .
In Table TABREF20 , we probe our sentence representations to determine if certain sentence characteristics and syntactic properties can be inferred following work by BIBREF17 and BIBREF14 . We observe that syntactic properties are better encoded with the addition of multi-lingual NMT and parsing. Representations learned solely from NLI do appear to encode syntax but incorporation into our multi-task framework does not amplify this signal. Similarly, we observe that sentence characteristics such as length and word order are better encoded with the addition of parsing.
In Appendix Table TABREF30 , we note that our sentence representations outperform skip-thoughts and are on par with Infersent for image-caption retrieval. We also observe that comparing sentences using cosine similarities correlates reasonably well with their relatedness on semantic textual similarity benchmarks (Appendix Table TABREF31 ).
We also present qualitative analysis of our learned representations by visualizations using dimensionality reduction techniques (Figure FIGREF11 ) and nearest neighbor exploration (Appendix Table TABREF32 ). Figure FIGREF11 shows t-sne plots of our sentence representations on three different datasets - SUBJ, TREC and DBpedia. DBpedia is a large corpus of sentences from Wikipedia labeled by category and used by BIBREF51 . Sentences appear to cluster reasonably well according to their labels. The clustering also appears better than that demonstrated in Figure 2 of BIBREF6 on TREC and SUBJ. Appendix Table TABREF32 contains sentences from the BookCorpus and their nearest neighbors. Sentences with some lexical overlap and similar discourse structure appear to be clustered together.
Conclusion & Future Work
We present a multi-task framework for learning general-purpose fixed-length sentence representations. Our primary motivation is to encapsulate the inductive biases of several diverse training signals used to learn sentence representations into a single model. Our multi-task framework includes a combination of sequence-to-sequence tasks such as multi-lingual NMT, constituency parsing and skip-thought vectors as well as a classification task - natural language inference. We demonstrate that the learned representations yield competitive or superior results to previous general-purpose sentence representation methods. We also observe that this approach produces good word embeddings.
In future work, we would like to understand and interpret the inductive biases that our model learns and observe how they change with the addition of different tasks, beyond just our simple analysis of sentence characteristics and syntax. Having a rich, continuous sentence representation space could allow the application of state-of-the-art generative models of images such as that of BIBREF52 to language. One could also consider controllable text generation by directly manipulating the sentence representations and realizing it by decoding with a conditional language model.
Acknowledgements
The authors would like to thank Chinnadhurai Sankar, Sebastian Ruder, Eric Yuan, Tong Wang, Alessandro Sordoni, Guillaume Lample and Varsha Embar for useful discussions. We are also grateful to the PyTorch development team BIBREF53 . We thank NVIDIA for donating a DGX-1 computer used in this work and Fonds de recherche du Québec - Nature et technologies for funding.
Model Training
We present some architectural specifics and training details of our multi-task framework. Our shared encoder uses a common word embedding lookup table and GRU. We experiment with unidirectional, bidirectional and 2 layer bidirectional GRUs (details in Appendix section SECREF9 ). For each task, every decoder has its separate word embedding lookups, conditional GRUs and fully connected layers that project the GRU hidden states to the target vocabularies. The last hidden state of the encoder is used as the initial hidden state of the decoder and is also presented as input to all the gates of the GRU at every time step. For natural language inference, the same encoder is used to encode both the premise and hypothesis and a concatenation of their representations along with the absolute difference and hadamard product (as described in BIBREF9 ) are given to a single layer MLP with a dropout BIBREF55 rate of 0.3. All models use word embeddings of 512 dimensions and GRUs with either 1500 or 2048 hidden units. We used minibatches of 48 examples and the Adam BIBREF54 optimizer with a learning rate of 0.002. Models were trained for 7 days on an Nvidia Tesla P100-SXM2-16GB GPU. While BIBREF6 report close to a month of training, we only train for 7 days, made possible by advancements in GPU hardware and software (cuDNN RNNs).
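The NLI feature combination described above can be written as follows (a sketch, with `u` and `v` the encoder outputs for the premise and hypothesis):

```python
import torch

def nli_features(u, v):
    # Concatenation of both representations, their absolute difference
    # and their elementwise (Hadamard) product, fed to the single-layer MLP.
    return torch.cat([u, v, torch.abs(u - v), u * v], dim=-1)
```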
We did not tune any of the architectural details and hyperparameters owing to the fact that we were unable to identify any clear criterion on which to tune them. Gains in performance on a specific task do not often translate to better transfer performance.
Vocabulary Expansion & Representation Pooling
In addition to performing 10-fold cross-validation to determine the L2 regularization penalty on the logistic regression models, we also tune the way in which our sentence representations are generated from the hidden states corresponding to words in a sentence. For example, BIBREF6 use the last hidden state while BIBREF9 perform max-pooling across all of the hidden states. We consider both of these approaches and pick the one with better performance on the validation set. We note that max-pooling works best on sentiment tasks such as MR, CR, SUBJ and MPQA, while the last hidden state works better on all other tasks.
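A sketch of the two pooling options that are compared on the validation set (`hidden_states` is assumed to be a torch tensor of encoder outputs):

```python
def pool(hidden_states, strategy):
    # hidden_states: (batch, seq_len, dim) encoder outputs for a batch of sentences.
    if strategy == "max":               # works best on MR, CR, SUBJ and MPQA
        return hidden_states.max(dim=1).values
    return hidden_states[:, -1]         # last hidden state for the other tasks
```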
We also employ vocabulary expansion on all tasks as in BIBREF6 by training a linear regression to map from the space of pre-trained word embeddings (GloVe) to our model's word embeddings.
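A sketch of the vocabulary-expansion step, assuming row-aligned matrices of GloVe and model embeddings for the words shared between both vocabularies:

```python
import numpy as np

def expand_vocabulary(glove_shared, model_shared, glove_oov):
    # Fit a least-squares linear map W from GloVe space to the model's
    # embedding space on shared words, then apply it to out-of-vocabulary words.
    W, *_ = np.linalg.lstsq(glove_shared, model_shared, rcond=None)
    return glove_oov @ W
```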
Multi-task model details
This section describes the specifics of our multi-task ablations in the experiments section. These definitions hold for all tables except for TABREF18 and TABREF20 . We refer to skip-thought next as STN, French and German NMT as Fr and De, natural language inference as NLI, skip-thought previous as STP and parsing as Par.
+STN +Fr +De : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a forward GRU with 1500-dimensional hidden vectors and a bidirectional GRU, also with 1500-dimensional hidden vectors.
+STN +Fr +De +NLI : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 1500-dimensional hidden vectors and another bidirectional GRU with 1500-dimensional hidden vectors trained without NLI.
+STN +Fr +De +NLI +L : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without NLI.
+STN +Fr +De +NLI +L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without STP.
+STN +Fr +De +NLI +2L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a 2-layer bidirectional GRU with 2048-dimensional hidden vectors and a 1-layer bidirectional GRU with 2048-dimensional hidden vectors trained without STP.
+STN +Fr +De +NLI +L +STP +Par : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without Par.
In tables TABREF18 and TABREF20 we do not concatenate the representations of multiple models.
Description of evaluation tasks
BIBREF6 and BIBREF9 provide a detailed description of tasks that are typically used to evaluate sentence representations. We provide a condensed summary and refer readers to their work for a more thorough description.
Text Classification
We evaluate on text classification benchmarks - sentiment classification on movie reviews (MR), product reviews (CR) and Stanford sentiment (SST), question type classification (TREC), subjectivity/objectivity classification (SUBJ) and opinion polarity (MPQA). Representations are used to train a logistic regression classifier with 10-fold cross validation to tune the L2 weight penalty. The evaluation metric for all these tasks is classification accuracy.
Paraphrase Identification
We also evaluate on pairwise text classification tasks such as paraphrase identification on the Microsoft Research Paraphrase Corpus (MRPC) corpus. This is a binary classification problem to identify if two sentences are paraphrases of each other. The evaluation metric is classification accuracy and F1.
Entailment and Semantic Relatedness
To test if similar sentences share similar representations, we evaluate on the SICK relatedness (SICK-R) task where a linear model is trained to output a score from 1 to 5 indicating the relatedness of two sentences. We also evaluate using the entailment labels in the same dataset (SICK-E) which is a binary classification problem. The evaluation metric for SICK-R is Pearson correlation and classification accuracy for SICK-E.
Semantic Textual Similarity
In this evaluation, we measure the relatedness of two sentences using only the cosine similarity between their representations. We use the semantic textual similarity (STS) benchmark tasks from 2012-2016 (STS12, STS13, STS14, STS15, STS16, STSB). The STS dataset contains sentences from a diverse set of data sources. The evaluation criterion is Pearson correlation.
Image-caption retrieval
Image-caption retrieval is typically formulated as a ranking task wherein images are retrieved and ranked based on a textual description and vice-versa. We use 113k training images from MSCOCO with 5k images for validation and 5k for testing. Image features are extracted using a pre-trained 110 layer ResNet. The evaluation criterion is Recall@K and the median K across 5 different splits of the data.
Quora Duplicate Question Classification
In addition to the above tasks which were considered by BIBREF9 , we also evaluate on the recently published Quora duplicate question dataset since it is an order of magnitude larger than the others (approximately 400,000 question pairs). The task is to correctly identify question pairs that are duplicates of one another, which we formulate as a binary classification problem. We use the same data splits as in BIBREF45 . Given the size of this data, we consider a more expressive classifier on top of the representations of both questions. Specifically, we train a 4-layer MLP with 1024 hidden units, with a dropout rate of 0.5 after every hidden layer. The evaluation criterion is classification accuracy. We also artificially create a low-resource setting by reducing the number of training examples to between 1,000 and 25,000 using the same splits as BIBREF47 .
Sentence Characteristics & Syntax
In an attempt to understand what information is encoded by sentence representations, we consider six different classification tasks where the objective is to predict sentence characteristics such as length, word content and word order BIBREF17 or syntactic properties such as active/passive, tense and the top syntactic sequence (TSS) from the parse tree of a sentence BIBREF14 .
The sentence characteristic tasks are set up in the same way as described in BIBREF17 . The length task is an 8-way classification problem where sentence lengths are binned into 8 ranges. The content task is formulated as a binary classification problem that takes a concatenation of a sentence representation INLINEFORM0 and a word representation INLINEFORM1 to determine if the word is contained in the sentence. The order task is an extension of the content task where a concatenation of the sentence representation and the word representations of two words in the sentence is used to determine if the first word occurs before or after the second. We use a random subset of the 1-billion-word dataset that was not used to train our multi-task representations for these experiments.
The syntactic properties tasks are set up in the same way as described in BIBREF14 . The passive and tense tasks are characterized as binary classification problems given a sentence's representation. The former's objective is to determine if a sentence is written in active/passive voice while the latter's objective is to determine if the sentence is in the past tense or not. The top syntactic sequence (TSS) is a 20-way classification problem with the 19 most frequent top syntactic sequences and 1 miscellaneous class. We use the same dataset as the authors but different training, validation and test splits. | Answer with content missing: (Skip-thought vectors-Natural Language Inference paragraphs) The encoder for the current sentence and the decoders for the previous (STP) and next sentence (STN) are typically parameterized as separate RNNs
- RNN |
496304f63006205ee63da376e02ef1b3010c4aa1 | 496304f63006205ee63da376e02ef1b3010c4aa1_0 | Q: How many tokens can sentences in their model at most contain?
Text: Introduction
Transfer learning has driven a number of recent successes in computer vision and NLP. Computer vision tasks like image captioning BIBREF0 and visual question answering typically use CNNs pretrained on ImageNet BIBREF1 , BIBREF2 to extract representations of the image, while several natural language tasks such as reading comprehension and sequence labeling BIBREF3 have benefited from pretrained word embeddings BIBREF4 , BIBREF5 that are either fine-tuned for a specific task or held fixed.
Many neural NLP systems are initialized with pretrained word embeddings but learn their representations of words in context from scratch, in a task-specific manner from supervised learning signals. However, learning these representations reliably from scratch is not always feasible, especially in low-resource settings, where we believe that using general purpose sentence representations will be beneficial.
Some recent work has addressed this by learning general-purpose sentence representations BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . However, there exists no clear consensus yet on what training objective or methodology is best suited to this goal.
Understanding the inductive biases of distinct neural models is important for guiding progress in representation learning. BIBREF14 and BIBREF15 demonstrate that neural machine translation (NMT) systems appear to capture morphology and some syntactic properties. BIBREF14 also present evidence that sequence-to-sequence parsers BIBREF16 more strongly encode source language syntax. Similarly, BIBREF17 probe representations extracted by sequence autoencoders, word embedding averages, and skip-thought vectors with a multi-layer perceptron (MLP) classifier to study whether sentence characteristics such as length, word content and word order are encoded.
To generalize across a diverse set of tasks, it is important to build representations that encode several aspects of a sentence. Neural approaches to tasks such as skip-thoughts, machine translation, natural language inference, and constituency parsing likely have different inductive biases. Our work exploits this in the context of a simple one-to-many multi-task learning (MTL) framework, wherein a single recurrent sentence encoder is shared across multiple tasks. We hypothesize that sentence representations learned by training on a reasonably large number of weakly related tasks will generalize better to novel tasks unseen during training, since this process encodes the inductive biases of multiple models. This hypothesis is based on the theoretical work of BIBREF18 . While our work aims at learning fixed-length distributed sentence representations, it is not always practical to assume that the entire “meaning” of a sentence can be encoded into a fixed-length vector. We merely hope to capture some of its characteristics that could be of use in a variety of tasks.
The primary contribution of our work is to combine the benefits of diverse sentence-representation learning objectives into a single multi-task framework. To the best of our knowledge, this is the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors. We demonstrate through extensive experimentation that representations learned in this way lead to improved performance across a diverse set of novel tasks not used in the learning of our representations. Such representations facilitate low-resource learning as exhibited by significant improvements to model performance for new tasks in the low labelled data regime - achieving comparable performance to a few models trained from scratch using only 6% of the available training set on the Quora duplicate question dataset.
Related Work
The problem of learning distributed representations of phrases and sentences dates back over a decade. For example, BIBREF19 present an additive and multiplicative linear composition function of the distributed representations of individual words. BIBREF20 combine symbolic and distributed representations of words using tensor products. Advances in learning better distributed representations of words BIBREF4 , BIBREF5 combined with deep learning have made it possible to learn complex non-linear composition functions of an arbitrary number of word embeddings using convolutional or recurrent neural networks (RNNs). A network's representation of the last element in a sequence, which is a non-linear composition of all inputs, is typically assumed to contain a squashed “summary” of the sentence. Most work in supervised learning for NLP builds task-specific representations of sentences rather than general-purpose ones.
Notably, skip-thought vectors BIBREF6 , an extension of the skip-gram model for word embeddings BIBREF4 to sentences, learn re-usable sentence representations from weakly labeled data. Unfortunately, these models take weeks or often months to train. BIBREF8 address this by considering faster alternatives such as sequential denoising autoencoders and shallow log-linear models. BIBREF21 , however, demonstrate that simple word embedding averages are comparable to more complicated models like skip-thoughts. More recently, BIBREF9 show that a completely supervised approach to learning sentence representations from natural language inference data outperforms all previous approaches on transfer learning benchmarks. Here we use the terms “transfer learning performance" on “transfer tasks” to mean the performance of sentence representations evaluated on tasks unseen during training. BIBREF10 demonstrated that representations learned by state-of-the-art large-scale NMT systems also generalize well to other tasks. However, their use of an attention mechanism prevents the learning of a single fixed-length vector representation of a sentence. As a result, they present a bi-attentive classification network that composes information present in all of the model's hidden states to achieve improvements over a corresponding model trained from scratch. BIBREF11 and BIBREF12 demonstrate that discourse-based objectives can also be leveraged to learn good sentence representations.
Our work is most similar to that of BIBREF22 , who train a many-to-many sequence-to-sequence model on a diverse set of weakly related tasks that includes machine translation, constituency parsing, image captioning, sequence autoencoding, and intra-sentence skip-thoughts. There are two key differences between that work and our own. First, like BIBREF10 , their use of an attention mechanism prevents learning a fixed-length vector representation for a sentence. Second, their work aims for improvements on the same tasks on which the model is trained, as opposed to learning re-usable sentence representations that transfer elsewhere.
We further present a fine-grained analysis of how different tasks contribute to the encoding of different information signals in our representations following work by BIBREF14 and BIBREF17 .
BIBREF23 similarly present a multi-task framework for textual entailment with task supervision at different levels of learning. “Universal" multi-task models have also been successfully explored in the context of computer vision problems BIBREF24 , BIBREF25 .
Sequence-to-Sequence Learning
Five out of the six tasks that we consider for multi-task learning are formulated as sequence-to-sequence problems BIBREF26 , BIBREF27 . Briefly, sequence-to-sequence models are a specific case of encoder-decoder models where the inputs and outputs are sequential. They directly model the conditional distribution of outputs given inputs INLINEFORM0 . The input INLINEFORM1 and output INLINEFORM2 are sequences INLINEFORM3 and INLINEFORM4 . The encoder produces a fixed length vector representation INLINEFORM5 of the input, which the decoder then conditions on to generate an output. The decoder is auto-regressive and breaks down the joint probability of outputs into a product of conditional probabilities via the chain rule: INLINEFORM6
BIBREF26 and BIBREF27 use encoders and decoders parameterized as RNN variants such as Long Short-term Memory (LSTMs) BIBREF28 or Gated Recurrent Units (GRUs) BIBREF29 . The hidden representation INLINEFORM0 is typically the last hidden state of the encoder RNN.
BIBREF30 alleviate the gradient bottleneck between the encoder and the decoder by introducing an attention mechanism that allows the decoder to condition on every hidden state of the encoder RNN instead of only the last one. In this work, as in BIBREF6 , BIBREF8 , we do not employ an attention mechanism. This enables us to obtain a single, fixed-length, distributed sentence representation. To diminish the effects of vanishing gradient, we condition every decoding step on the encoder hidden representation INLINEFORM0 . We use a GRU for the encoder and decoder in the interest of computational speed. The encoder is a bidirectional GRU while the decoder is a unidirectional conditional GRU whose parameterization is as follows: DISPLAYFORM0
The encoder representation INLINEFORM0 is provided as conditioning information to the reset gate, update gate and hidden state computation in the GRU via the parameters INLINEFORM1 , INLINEFORM2 and INLINEFORM3 to avoid attenuation of information from the encoder.
Multi-task Sequence-to-sequence Learning
BIBREF31 present a simple one-to-many multi-task sequence-to-sequence learning model for NMT that uses a shared encoder for English and task-specific decoders for multiple target languages. BIBREF22 extend this by also considering many-to-one (many encoders, one decoder) and many-to-many architectures. In this work, we consider a one-to-many model since it lends itself naturally to the idea of combining inductive biases from different training objectives. The same bidirectional GRU encodes the input sentences from different tasks into a compressed summary INLINEFORM0 which is then used to condition a task-specific GRU to produce the output sentence.
Training Objectives & Evaluation
Our motivation for multi-task training stems from theoretical insights presented in BIBREF18 . We refer readers to that work for a detailed discussion of results, but the conclusions most relevant to this discussion are (i) that learning multiple related tasks jointly results in good generalization as measured by the number of training examples required per task; and (ii) that inductive biases learned on sufficiently many training tasks are likely to be good for learning novel tasks drawn from the same environment.
We select the following training objectives to learn general-purpose sentence embeddings. Our desiderata for the task collection were: sufficient diversity, existence of fairly large datasets for training, and success as standalone training objectives for sentence representations.
Multi-task training setup
Multi-task training with different data sources for each task still poses open questions. For example: When does one switch to training on a different task? Should the switching be periodic? Do we weight each task equally? If not, what training ratios do we use?
BIBREF31 use periodic task alternations with equal training ratios for every task. In contrast, BIBREF22 alter the training ratios for each task based on the size of their respective training sets. Specifically, the training ratio for a particular task, INLINEFORM0 , is the fraction of the number of training examples in that task to the total number of training samples across all tasks. The authors then perform INLINEFORM1 parameter updates on task INLINEFORM2 before selecting a new task at random proportional to the training ratios, where N is a predetermined constant.
We take a simpler approach and pick a new sequence-to-sequence task to train on after every parameter update sampled uniformly. An NLI minibatch is interspersed after every ten parameter updates on sequence-to-sequence tasks (this was chosen so as to complete roughly 6 epochs of the dataset after 7 days of training). Our approach is described formally in the Algorithm below.
Model details can be found in section SECREF7 in the Appendix.
A set of INLINEFORM0 tasks with a common source language, a shared encoder INLINEFORM1 across all tasks and a set of INLINEFORM2 task specific decoders INLINEFORM3 . Let INLINEFORM4 denote each model's parameters, INLINEFORM5 a probability vector ( INLINEFORM6 ) denoting the probability of sampling a task such that INLINEFORM7 , datasets for each task INLINEFORM8 and a loss function INLINEFORM9 .
While INLINEFORM0 has not converged: sample a task INLINEFORM1 ; sample an input-output pair INLINEFORM2 from its dataset; compute the input representation INLINEFORM3 with the shared encoder; produce the prediction INLINEFORM4 with the task-specific decoder; compute the loss INLINEFORM5 and update the parameters with Adam INLINEFORM6 .
Evaluation Strategies, Experimental Results & Discussion
In this section, we describe our approach to evaluate the quality of our learned representations, present the results of our evaluation and discuss our findings.
Evaluation Strategy
We follow a similar evaluation protocol to those presented in BIBREF6 , BIBREF8 , BIBREF9 which is to use our learned representations as features for a low complexity classifier (typically linear) on a novel supervised task/domain unseen during training without updating the parameters of our sentence representation model. We also consider such a transfer learning evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks BIBREF36 , BIBREF37 .
The choice of transfer tasks and evaluation framework are borrowed largely from BIBREF9 . We provide a condensed summary of the tasks in section SECREF10 in the Appendix but refer readers to their paper for a more detailed description.
https://github.com/kudkudak/word-embeddings-benchmarks/wiki
Experimental Results & Discussion
Table 2 presents the results of training logistic regression on 10 different supervised transfer tasks using different fixed-length sentence representations. Supervised approaches trained from scratch on some of these tasks are also presented for comparison. We present performance ablations when adding more tasks and increasing the number of hidden units in our GRU (+L). Ablation specifics are presented in section SECREF9 of the Appendix.
It is evident from Table 2 that adding more tasks improves the transfer performance of our model. Increasing the capacity of our sentence encoder with more hidden units (+L) as well as adding an additional layer (+2L) also leads to improved transfer performance. We observe gains of 1.1-2.0% on the sentiment classification tasks (MR, CR, SUBJ & MPQA) over Infersent. We demonstrate substantial gains on TREC (6% over Infersent and roughly 2% over the CNN-LSTM), outperforming even a competitive supervised baseline. We see similar gains (2.3%) on paraphrase identification (MRPC), closing the gap on supervised approaches trained from scratch. The addition of constituency parsing improves performance on sentence relatedness (SICK-R) and entailment (SICK-E), consistent with observations made by BIBREF48 .
In Table TABREF19 , we show that simply training an MLP on top of our fixed sentence representations outperforms several strong & complex supervised approaches that use attention mechanisms, even on this fairly large dataset. For example, we observe a 0.2-0.5% improvement over the decomposable attention model BIBREF49 . When using only a small fraction of the training data, indicated by the columns 1k-25k, we are able to outperform the Siamese and Multi-Perspective CNN using roughly 6% of the available training set. We also outperform the Deconv LVM model proposed by BIBREF47 in this low-resource setting.
Unlike BIBREF9 , who use pretrained GloVe word embeddings, we learn our word embeddings from scratch. Somewhat surprisingly, in Table TABREF18 we observe that the learned word embeddings are competitive with popular methods such as GloVe, word2vec, and fasttext BIBREF50 on the benchmarks presented by BIBREF36 and BIBREF37 .
In Table TABREF20 , we probe our sentence representations to determine if certain sentence characteristics and syntactic properties can be inferred following work by BIBREF17 and BIBREF14 . We observe that syntactic properties are better encoded with the addition of multi-lingual NMT and parsing. Representations learned solely from NLI do appear to encode syntax but incorporation into our multi-task framework does not amplify this signal. Similarly, we observe that sentence characteristics such as length and word order are better encoded with the addition of parsing.
In Appendix Table TABREF30 , we note that our sentence representations outperform skip-thoughts and are on par with Infersent for image-caption retrieval. We also observe that comparing sentences using cosine similarities correlates reasonably well with their relatedness on semantic textual similarity benchmarks (Appendix Table TABREF31 ).
We also present qualitative analysis of our learned representations by visualizations using dimensionality reduction techniques (Figure FIGREF11 ) and nearest neighbor exploration (Appendix Table TABREF32 ). Figure FIGREF11 shows t-sne plots of our sentence representations on three different datasets - SUBJ, TREC and DBpedia. DBpedia is a large corpus of sentences from Wikipedia labeled by category and used by BIBREF51 . Sentences appear to cluster reasonably well according to their labels. The clustering also appears better than that demonstrated in Figure 2 of BIBREF6 on TREC and SUBJ. Appendix Table TABREF32 contains sentences from the BookCorpus and their nearest neighbors. Sentences with some lexical overlap and similar discourse structure appear to be clustered together.
Conclusion & Future Work
We present a multi-task framework for learning general-purpose fixed-length sentence representations. Our primary motivation is to encapsulate the inductive biases of several diverse training signals used to learn sentence representations into a single model. Our multi-task framework includes a combination of sequence-to-sequence tasks such as multi-lingual NMT, constituency parsing and skip-thought vectors as well as a classification task - natural language inference. We demonstrate that the learned representations yield competitive or superior results to previous general-purpose sentence representation methods. We also observe that this approach produces good word embeddings.
In future work, we would like to understand and interpret the inductive biases that our model learns and observe how they change with the addition of different tasks, beyond just our simple analysis of sentence characteristics and syntax. Having a rich, continuous sentence representation space could allow the application of state-of-the-art generative models of images such as that of BIBREF52 to language. One could also consider controllable text generation by directly manipulating the sentence representations and realizing it by decoding with a conditional language model.
Acknowledgements
The authors would like to thank Chinnadhurai Sankar, Sebastian Ruder, Eric Yuan, Tong Wang, Alessandro Sordoni, Guillaume Lample and Varsha Embar for useful discussions. We are also grateful to the PyTorch development team BIBREF53 . We thank NVIDIA for donating a DGX-1 computer used in this work and Fonds de recherche du Québec - Nature et technologies for funding.
Model Training
We present some architectural specifics and training details of our multi-task framework. Our shared encoder uses a common word embedding lookup table and GRU. We experiment with unidirectional, bidirectional and 2 layer bidirectional GRUs (details in Appendix section SECREF9 ). For each task, every decoder has its separate word embedding lookups, conditional GRUs and fully connected layers that project the GRU hidden states to the target vocabularies. The last hidden state of the encoder is used as the initial hidden state of the decoder and is also presented as input to all the gates of the GRU at every time step. For natural language inference, the same encoder is used to encode both the premise and hypothesis and a concatenation of their representations along with the absolute difference and hadamard product (as described in BIBREF9 ) are given to a single layer MLP with a dropout BIBREF55 rate of 0.3. All models use word embeddings of 512 dimensions and GRUs with either 1500 or 2048 hidden units. We used minibatches of 48 examples and the Adam BIBREF54 optimizer with a learning rate of 0.002. Models were trained for 7 days on an Nvidia Tesla P100-SXM2-16GB GPU. While BIBREF6 report close to a month of training, we only train for 7 days, made possible by advancements in GPU hardware and software (cuDNN RNNs).
We did not tune any of the architectural details and hyperparameters owing to the fact that we were unable to identify any clear criterion on which to tune them. Gains in performance on a specific task do not often translate to better transfer performance.
Vocabulary Expansion & Representation Pooling
In addition to performing 10-fold cross-validation to determine the L2 regularization penalty on the logistic regression models, we also tune the way in which our sentence representations are generated from the hidden states corresponding to words in a sentence. For example, BIBREF6 use the last hidden state while BIBREF9 perform max-pooling across all of the hidden states. We consider both of these approaches and pick the one with better performance on the validation set. We note that max-pooling works best on sentiment tasks such as MR, CR, SUBJ and MPQA, while the last hidden state works better on all other tasks.
We also employ vocabulary expansion on all tasks as in BIBREF6 by training a linear regression to map from the space of pre-trained word embeddings (GloVe) to our model's word embeddings.
Multi-task model details
This section describes the specifics of our multi-task ablations in the experiments section. These definitions hold for all tables except for TABREF18 and TABREF20 . We refer to skip-thought next as STN, French and German NMT as Fr and De, natural language inference as NLI, skip-thought previous as STP and parsing as Par.
+STN +Fr +De : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a forward GRU with 1500-dimensional hidden vectors and a bidirectional GRU, also with 1500-dimensional hidden vectors.
+STN +Fr +De +NLI : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 1500-dimensional hidden vectors and another bidirectional GRU with 1500-dimensional hidden vectors trained without NLI.
+STN +Fr +De +NLI +L : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without NLI.
+STN +Fr +De +NLI +L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without STP.
+STN +Fr +De +NLI +2L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a 2-layer bidirectional GRU with 2048-dimensional hidden vectors and a 1-layer bidirectional GRU with 2048-dimensional hidden vectors trained without STP.
+STN +Fr +De +NLI +L +STP +Par : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without Par.
In tables TABREF18 and TABREF20 we do not concatenate the representations of multiple models.
Description of evaluation tasks
BIBREF6 and BIBREF9 provide a detailed description of tasks that are typically used to evaluate sentence representations. We provide a condensed summary and refer readers to their work for a more thorough description.
Text Classification
We evaluate on text classification benchmarks - sentiment classification on movie reviews (MR), product reviews (CR) and Stanford sentiment (SST), question type classification (TREC), subjectivity/objectivity classification (SUBJ) and opinion polarity (MPQA). Representations are used to train a logistic regression classifier with 10-fold cross validation to tune the L2 weight penalty. The evaluation metric for all these tasks is classification accuracy.
Paraphrase Identification
We also evaluate on pairwise text classification tasks such as paraphrase identification on the Microsoft Research Paraphrase Corpus (MRPC) corpus. This is a binary classification problem to identify if two sentences are paraphrases of each other. The evaluation metric is classification accuracy and F1.
Entailment and Semantic Relatedness
To test if similar sentences share similar representations, we evaluate on the SICK relatedness (SICK-R) task where a linear model is trained to output a score from 1 to 5 indicating the relatedness of two sentences. We also evaluate using the entailment labels in the same dataset (SICK-E) which is a binary classification problem. The evaluation metric for SICK-R is Pearson correlation and classification accuracy for SICK-E.
Semantic Textual Similarity
In this evaluation, we measure the relatedness of two sentences using only the cosine similarity between their representations. We use the Semantic Textual Similarity (STS) benchmark tasks from 2012-2016 (STS12, STS13, STS14, STS15, STS16, STSB). The STS dataset contains sentences from a diverse set of data sources. The evaluation criterion is Pearson correlation.
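A minimal sketch of this unsupervised scoring procedure, assuming the embeddings of each sentence pair are stacked into two arrays of the same shape:

```python
import numpy as np
from scipy.stats import pearsonr

def sts_pearson(emb_a, emb_b, gold_scores):
    """Cosine similarity between the two sentence embeddings of each pair,
    correlated against the gold similarity scores."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cosine = (a * b).sum(axis=1)
    return pearsonr(cosine, gold_scores)[0]
```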
Image-caption retrieval
Image-caption retrieval is typically formulated as a ranking task wherein images are retrieved and ranked based on a textual description and vice-versa. We use 113k training images from MSCOCO with 5k images for validation and 5k for testing. Image features are extracted using a pre-trained 110 layer ResNet. The evaluation criteria are Recall@K and the median rank across 5 different splits of the data.
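A sketch of the caption-to-image direction of this ranking evaluation is given below; scoring by cosine similarity and the assumption that caption i is paired with image i // 5 (5 captions per image) are ours, not necessarily the exact protocol.

```python
import numpy as np

def caption_to_image_retrieval(image_vecs, caption_vecs, ks=(1, 5, 10)):
    """Rank images for each caption by cosine similarity and report Recall@K.
    Assumes caption i is paired with image i // 5 (5 captions per image)."""
    im = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
    cap = caption_vecs / np.linalg.norm(caption_vecs, axis=1, keepdims=True)
    ranking = np.argsort(-cap @ im.T, axis=1)            # (n_captions, n_images)
    gold = np.arange(len(cap)) // 5                      # index of the paired image
    rank_of_gold = (ranking == gold[:, None]).argmax(axis=1)
    recalls = {f"R@{k}": float((rank_of_gold < k).mean()) for k in ks}
    return recalls, float(np.median(rank_of_gold) + 1)   # median rank (1-indexed)
```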
Quora Duplicate Question Classification
In addition to the above tasks which were considered by BIBREF9 , we also evaluate on the recently published Quora duplicate question dataset since it is an order of magnitude larger than the others (approximately 400,000 question pairs). The task is to correctly identify question pairs that are duplicates of one another, which we formulate as a binary classification problem. We use the same data splits as in BIBREF45 . Given the size of this data, we consider a more expressive classifier on top of the representations of both questions. Specifically, we train a 4 layer MLP with 1024 hidden units, with a dropout rate of 0.5 after every hidden layer. The evaluation criterion is classification accuracy. We also artificially create a low-resource setting by reducing the number of training examples between 1,000 and 25,000 using the same splits as BIBREF47 .
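The classifier described here can be sketched in PyTorch as follows; how the two question representations are combined into the input features, and the use of ReLU activations, are assumptions of this sketch.

```python
import torch.nn as nn

class QuoraMLP(nn.Module):
    """4-layer MLP head with 1024 hidden units and dropout 0.5 after every
    hidden layer, applied to the combined features of a question pair."""
    def __init__(self, in_dim, hidden=1024, n_classes=2, p_drop=0.5):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(4):
            layers += [nn.Linear(d, hidden), nn.ReLU(), nn.Dropout(p_drop)]
            d = hidden
        layers.append(nn.Linear(d, n_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, pair_features):   # (batch, in_dim) combined pair features
        return self.net(pair_features)  # (batch, n_classes) logits
```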
Sentence Characteristics & Syntax
In an attempt to understand what information is encoded in sentence representations, we consider six different classification tasks where the objective is to predict sentence characteristics such as length, word content and word order BIBREF17 or syntactic properties such as active/passive, tense and the top syntactic sequence (TSS) from the parse tree of a sentence BIBREF14 .
The sentence characteristic tasks are set up in the same way as described in BIBREF17 . The length task is an 8-way classification problem where sentence lengths are binned into 8 ranges. The content task is formulated as a binary classification problem that takes a concatenation of a sentence representation INLINEFORM0 and a word representation INLINEFORM1 to determine if the word is contained in the sentence. The order task is an extension of the content task where a concatenation of the sentence representation and the word representations of two words in a sentence is used to determine if the first word occurs before or after the second. We use a random subset of the 1-billion-word dataset for these experiments that was not used to train our multi-task representations.
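A sketch of how probe inputs and labels can be constructed for these tasks is shown below; the length bin edges and whitespace tokenization are illustrative assumptions, not the original setup.

```python
import numpy as np

def content_features(sent_vec, word_vec):
    """'Content' probe input: does the word occur in the sentence?"""
    return np.concatenate([sent_vec, word_vec])

def order_features(sent_vec, word_vec_a, word_vec_b):
    """'Order' probe input: does word A occur before word B in the sentence?"""
    return np.concatenate([sent_vec, word_vec_a, word_vec_b])

def length_label(sentence, bin_edges=(4, 8, 12, 16, 20, 25, 30)):
    """8-way 'length' probe label; the bin edges here are illustrative only."""
    return int(np.digitize(len(sentence.split()), bin_edges))
```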
The syntactic properties tasks are set up in the same way as described in BIBREF14 . The passive and tense tasks are characterized as binary classification problems given a sentence's representation. The former's objective is to determine if a sentence is written in active/passive voice while the latter's objective is to determine if the sentence is in the past tense or not. The top syntactic sequence (TSS) is a 20-way classification problem with the 19 most frequent top syntactic sequences and 1 miscellaneous class. We use the same dataset as the authors but different training, validation and test splits. | Unanswerable |
00e9f088291fcf27956f32a791f87e4a1e311e41 | 00e9f088291fcf27956f32a791f87e4a1e311e41_0 | Q: Which training objectives do they combine?
Text: Introduction
Transfer learning has driven a number of recent successes in computer vision and NLP. Computer vision tasks like image captioning BIBREF0 and visual question answering typically use CNNs pretrained on ImageNet BIBREF1 , BIBREF2 to extract representations of the image, while several natural language tasks such as reading comprehension and sequence labeling BIBREF3 have benefited from pretrained word embeddings BIBREF4 , BIBREF5 that are either fine-tuned for a specific task or held fixed.
Many neural NLP systems are initialized with pretrained word embeddings but learn their representations of words in context from scratch, in a task-specific manner from supervised learning signals. However, learning these representations reliably from scratch is not always feasible, especially in low-resource settings, where we believe that using general purpose sentence representations will be beneficial.
Some recent work has addressed this by learning general-purpose sentence representations BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . However, there exists no clear consensus yet on what training objective or methodology is best suited to this goal.
Understanding the inductive biases of distinct neural models is important for guiding progress in representation learning. BIBREF14 and BIBREF15 demonstrate that neural machine translation (NMT) systems appear to capture morphology and some syntactic properties. BIBREF14 also present evidence that sequence-to-sequence parsers BIBREF16 more strongly encode source language syntax. Similarly, BIBREF17 probe representations extracted by sequence autoencoders, word embedding averages, and skip-thought vectors with a multi-layer perceptron (MLP) classifier to study whether sentence characteristics such as length, word content and word order are encoded.
To generalize across a diverse set of tasks, it is important to build representations that encode several aspects of a sentence. Neural approaches to tasks such as skip-thoughts, machine translation, natural language inference, and constituency parsing likely have different inductive biases. Our work exploits this in the context of a simple one-to-many multi-task learning (MTL) framework, wherein a single recurrent sentence encoder is shared across multiple tasks. We hypothesize that sentence representations learned by training on a reasonably large number of weakly related tasks will generalize better to novel tasks unseen during training, since this process encodes the inductive biases of multiple models. This hypothesis is based on the theoretical work of BIBREF18 . While our work aims at learning fixed-length distributed sentence representations, it is not always practical to assume that the entire “meaning” of a sentence can be encoded into a fixed-length vector. We merely hope to capture some of its characteristics that could be of use in a variety of tasks.
The primary contribution of our work is to combine the benefits of diverse sentence-representation learning objectives into a single multi-task framework. To the best of our knowledge, this is the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors. We demonstrate through extensive experimentation that representations learned in this way lead to improved performance across a diverse set of novel tasks not used in the learning of our representations. Such representations facilitate low-resource learning as exhibited by significant improvements to model performance for new tasks in the low labelled data regime - achieving comparable performance to a few models trained from scratch using only 6% of the available training set on the Quora duplicate question dataset.
Related Work
The problem of learning distributed representations of phrases and sentences dates back over a decade. For example, BIBREF19 present an additive and multiplicative linear composition function of the distributed representations of individual words. BIBREF20 combine symbolic and distributed representations of words using tensor products. Advances in learning better distributed representations of words BIBREF4 , BIBREF5 combined with deep learning have made it possible to learn complex non-linear composition functions of an arbitrary number of word embeddings using convolutional or recurrent neural networks (RNNs). A network's representation of the last element in a sequence, which is a non-linear composition of all inputs, is typically assumed to contain a squashed “summary” of the sentence. Most work in supervised learning for NLP builds task-specific representations of sentences rather than general-purpose ones.
Notably, skip-thought vectors BIBREF6 , an extension of the skip-gram model for word embeddings BIBREF4 to sentences, learn re-usable sentence representations from weakly labeled data. Unfortunately, these models take weeks or often months to train. BIBREF8 address this by considering faster alternatives such as sequential denoising autoencoders and shallow log-linear models. BIBREF21 , however, demonstrate that simple word embedding averages are comparable to more complicated models like skip-thoughts. More recently, BIBREF9 show that a completely supervised approach to learning sentence representations from natural language inference data outperforms all previous approaches on transfer learning benchmarks. Here we use the terms “transfer learning performance" on “transfer tasks” to mean the performance of sentence representations evaluated on tasks unseen during training. BIBREF10 demonstrated that representations learned by state-of-the-art large-scale NMT systems also generalize well to other tasks. However, their use of an attention mechanism prevents the learning of a single fixed-length vector representation of a sentence. As a result, they present a bi-attentive classification network that composes information present in all of the model's hidden states to achieve improvements over a corresponding model trained from scratch. BIBREF11 and BIBREF12 demonstrate that discourse-based objectives can also be leveraged to learn good sentence representations.
Our work is most similar to that of BIBREF22 , who train a many-to-many sequence-to-sequence model on a diverse set of weakly related tasks that includes machine translation, constituency parsing, image captioning, sequence autoencoding, and intra-sentence skip-thoughts. There are two key differences between that work and our own. First, like BIBREF10 , their use of an attention mechanism prevents learning a fixed-length vector representation for a sentence. Second, their work aims for improvements on the same tasks on which the model is trained, as opposed to learning re-usable sentence representations that transfer elsewhere.
We further present a fine-grained analysis of how different tasks contribute to the encoding of different information signals in our representations following work by BIBREF14 and BIBREF17 .
BIBREF23 similarly present a multi-task framework for textual entailment with task supervision at different levels of learning. “Universal" multi-task models have also been successfully explored in the context of computer vision problems BIBREF24 , BIBREF25 .
Sequence-to-Sequence Learning
Five out of the six tasks that we consider for multi-task learning are formulated as sequence-to-sequence problems BIBREF26 , BIBREF27 . Briefly, sequence-to-sequence models are a specific case of encoder-decoder models where the inputs and outputs are sequential. They directly model the conditional distribution of outputs given inputs INLINEFORM0 . The input INLINEFORM1 and output INLINEFORM2 are sequences INLINEFORM3 and INLINEFORM4 . The encoder produces a fixed length vector representation INLINEFORM5 of the input, which the decoder then conditions on to generate an output. The decoder is auto-regressive and breaks down the joint probability of outputs into a product of conditional probabilities via the chain rule: INLINEFORM6
BIBREF26 and BIBREF27 use encoders and decoders parameterized as RNN variants such as Long Short-term Memory (LSTMs) BIBREF28 or Gated Recurrent Units (GRUs) BIBREF29 . The hidden representation INLINEFORM0 is typically the last hidden state of the encoder RNN.
BIBREF30 alleviate the gradient bottleneck between the encoder and the decoder by introducing an attention mechanism that allows the decoder to condition on every hidden state of the encoder RNN instead of only the last one. In this work, as in BIBREF6 , BIBREF8 , we do not employ an attention mechanism. This enables us to obtain a single, fixed-length, distributed sentence representation. To diminish the effects of vanishing gradient, we condition every decoding step on the encoder hidden representation INLINEFORM0 . We use a GRU for the encoder and decoder in the interest of computational speed. The encoder is a bidirectional GRU while the decoder is a unidirectional conditional GRU whose parameterization is as follows: DISPLAYFORM0
The encoder representation INLINEFORM0 is provided as conditioning information to the reset gate, update gate and hidden state computation in the GRU via the parameters INLINEFORM1 , INLINEFORM2 and INLINEFORM3 to avoid attenuation of information from the encoder.
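A sketch of such a conditional GRU cell in PyTorch is given below. The way the conditioning terms enter each gate follows the description above, but the weight shapes, bias placement and initialization are our assumptions rather than the exact parameterization of the referenced display equation.

```python
import torch
import torch.nn as nn

class ConditionalGRUCell(nn.Module):
    """GRU cell whose reset gate, update gate and candidate state all receive
    the fixed encoder representation h_x as additional conditioning input."""
    def __init__(self, input_size, hidden_size, cond_size):
        super().__init__()
        self.x2h = nn.Linear(input_size, 3 * hidden_size)
        self.h2h = nn.Linear(hidden_size, 3 * hidden_size)
        self.c2h = nn.Linear(cond_size, 3 * hidden_size, bias=False)

    def forward(self, x_t, h_prev, h_x):
        xr, xz, xn = self.x2h(x_t).chunk(3, dim=-1)
        hr, hz, hn = self.h2h(h_prev).chunk(3, dim=-1)
        cr, cz, cn = self.c2h(h_x).chunk(3, dim=-1)
        r = torch.sigmoid(xr + hr + cr)        # reset gate
        z = torch.sigmoid(xz + hz + cz)        # update gate
        n = torch.tanh(xn + r * hn + cn)       # candidate state
        return (1.0 - z) * n + z * h_prev      # new hidden state
```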
Multi-task Sequence-to-sequence Learning
BIBREF31 present a simple one-to-many multi-task sequence-to-sequence learning model for NMT that uses a shared encoder for English and task-specific decoders for multiple target languages. BIBREF22 extend this by also considering many-to-one (many encoders, one decoder) and many-to-many architectures. In this work, we consider a one-to-many model since it lends itself naturally to the idea of combining inductive biases from different training objectives. The same bidirectional GRU encodes the input sentences from different tasks into a compressed summary INLINEFORM0 which is then used to condition a task-specific GRU to produce the output sentence.
Training Objectives & Evaluation
Our motivation for multi-task training stems from theoretical insights presented in BIBREF18 . We refer readers to that work for a detailed discussion of results, but the conclusions most relevant to this discussion are (i) that learning multiple related tasks jointly results in good generalization as measured by the number of training examples required per task; and (ii) that inductive biases learned on sufficiently many training tasks are likely to be good for learning novel tasks drawn from the same environment.
We select the following training objectives to learn general-purpose sentence embeddings. Our desiderata for the task collection were: sufficient diversity, existence of fairly large datasets for training, and success as standalone training objectives for sentence representations.
Multi-task training setup
Multi-task training with different data sources for each task still poses open questions. For example: When does one switch to training on a different task? Should the switching be periodic? Do we weight each task equally? If not, what training ratios do we use?
BIBREF31 use periodic task alternations with equal training ratios for every task. In contrast, BIBREF22 alter the training ratios for each task based on the size of their respective training sets. Specifically, the training ratio for a particular task, INLINEFORM0 , is the fraction of the number of training examples in that task to the total number of training samples across all tasks. The authors then perform INLINEFORM1 parameter updates on task INLINEFORM2 before selecting a new task at random proportional to the training ratios, where N is a predetermined constant.
We take a simpler approach and pick a new sequence-to-sequence task to train on after every parameter update sampled uniformly. An NLI minibatch is interspersed after every ten parameter updates on sequence-to-sequence tasks (this was chosen so as to complete roughly 6 epochs of the dataset after 7 days of training). Our approach is described formally in the Algorithm below.
Model details can be found in section SECREF7 in the Appendix.
Given: a set of INLINEFORM0 tasks with a common source language, a shared encoder INLINEFORM1 across all tasks and a set of INLINEFORM2 task-specific decoders INLINEFORM3 . Let INLINEFORM4 denote each model's parameters, INLINEFORM5 a probability vector ( INLINEFORM6 ) denoting the probability of sampling a task such that INLINEFORM7 , datasets for each task INLINEFORM8 and a loss function INLINEFORM9 .
While the parameters have not converged: (1) sample a task; (2) sample an input-output pair from that task's dataset; (3) compute the input representation with the shared encoder; (4) compute the prediction with the corresponding task-specific decoder; (5) update the parameters with an Adam step on the resulting loss.
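The schedule can also be sketched as a plain training loop, as below; the task objects and their next_batch / loss interfaces are hypothetical stand-ins for the actual data pipeline, not the paper's code.

```python
import random
import itertools

def train_multitask(encoder, decoders, nli_head, seq2seq_tasks, nli_loader,
                    optimizer, nli_every=10, max_steps=1_000_000):
    """One seq2seq task is sampled uniformly for every parameter update, and an
    NLI minibatch is interspersed after every `nli_every` seq2seq updates."""
    nli_batches = itertools.cycle(nli_loader)
    task_names = list(seq2seq_tasks)
    for step in range(max_steps):
        name = random.choice(task_names)                 # uniform task sampling
        src, tgt = seq2seq_tasks[name].next_batch()      # hypothetical data interface
        loss = decoders[name].loss(encoder(src), tgt)    # hypothetical loss interface
        optimizer.zero_grad(); loss.backward(); optimizer.step()

        if (step + 1) % nli_every == 0:                  # interleave an NLI update
            premise, hypothesis, label = next(nli_batches)
            loss = nli_head.loss(encoder(premise), encoder(hypothesis), label)
            optimizer.zero_grad(); loss.backward(); optimizer.step()
```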
Evaluation Strategies, Experimental Results & Discussion
In this section, we describe our approach to evaluate the quality of our learned representations, present the results of our evaluation and discuss our findings.
Evaluation Strategy
We follow a similar evaluation protocol to those presented in BIBREF6 , BIBREF8 , BIBREF9 which is to use our learned representations as features for a low complexity classifier (typically linear) on a novel supervised task/domain unseen during training without updating the parameters of our sentence representation model. We also consider such a transfer learning evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks BIBREF36 , BIBREF37 .
The choice of transfer tasks and evaluation framework are borrowed largely from BIBREF9 . We provide a condensed summary of the tasks in section SECREF10 in the Appendix but refer readers to their paper for a more detailed description.
https://github.com/kudkudak/word-embeddings-benchmarks/wiki
Experimental Results & Discussion
Table 2 presents the results of training logistic regression on 10 different supervised transfer tasks using different fixed-length sentence representations. Supervised approaches trained from scratch on some of these tasks are also presented for comparison. We present performance ablations when adding more tasks and increasing the number of hidden units in our GRU (+L). Ablation specifics are presented in section SECREF9 of the Appendix.
It is evident from Table 2 that adding more tasks improves the transfer performance of our model. Increasing the capacity of our sentence encoder with more hidden units (+L) as well as an additional layer (+2L) also leads to improved transfer performance. We observe gains of 1.1-2.0% on the sentiment classification tasks (MR, CR, SUBJ & MPQA) over Infersent. We demonstrate substantial gains on TREC (6% over Infersent and roughly 2% over the CNN-LSTM), outperforming even a competitive supervised baseline. We see similar gains (2.3%) on paraphrase identification (MRPC), closing the gap on supervised approaches trained from scratch. The addition of constituency parsing improves performance on sentence relatedness (SICK-R) and entailment (SICK-E), consistent with observations made by BIBREF48 .
In Table TABREF19 , we show that simply training an MLP on top of our fixed sentence representations outperforms several strong & complex supervised approaches that use attention mechanisms, even on this fairly large dataset. For example, we observe a 0.2-0.5% improvement over the decomposable attention model BIBREF49 . When using only a small fraction of the training data, indicated by the columns 1k-25k, we are able to outperform the Siamese and Multi-Perspective CNN using roughly 6% of the available training set. We also outperform the Deconv LVM model proposed by BIBREF47 in this low-resource setting.
Unlike BIBREF9 , who use pretrained GloVe word embeddings, we learn our word embeddings from scratch. Somewhat surprisingly, in Table TABREF18 we observe that the learned word embeddings are competitive with popular methods such as GloVe, word2vec, and fasttext BIBREF50 on the benchmarks presented by BIBREF36 and BIBREF37 .
In Table TABREF20 , we probe our sentence representations to determine if certain sentence characteristics and syntactic properties can be inferred following work by BIBREF17 and BIBREF14 . We observe that syntactic properties are better encoded with the addition of multi-lingual NMT and parsing. Representations learned solely from NLI do appear to encode syntax but incorporation into our multi-task framework does not amplify this signal. Similarly, we observe that sentence characteristics such as length and word order are better encoded with the addition of parsing.
In Appendix Table TABREF30 , we note that our sentence representations outperform skip-thoughts and are on par with Infersent for image-caption retrieval. We also observe that comparing sentences using cosine similarities correlates reasonably well with their relatedness on semantic textual similarity benchmarks (Appendix Table TABREF31 ).
We also present qualitative analysis of our learned representations by visualizations using dimensionality reduction techniques (Figure FIGREF11 ) and nearest neighbor exploration (Appendix Table TABREF32 ). Figure FIGREF11 shows t-sne plots of our sentence representations on three different datasets - SUBJ, TREC and DBpedia. DBpedia is a large corpus of sentences from Wikipedia labeled by category and used by BIBREF51 . Sentences appear to cluster reasonably well according to their labels. The clustering also appears better than that demonstrated in Figure 2 of BIBREF6 on TREC and SUBJ. Appendix Table TABREF32 contains sentences from the BookCorpus and their nearest neighbors. Sentences with some lexical overlap and similar discourse structure appear to be clustered together.
Conclusion & Future Work
We present a multi-task framework for learning general-purpose fixed-length sentence representations. Our primary motivation is to encapsulate the inductive biases of several diverse training signals used to learn sentence representations into a single model. Our multi-task framework includes a combination of sequence-to-sequence tasks such as multi-lingual NMT, constituency parsing and skip-thought vectors as well as a classification task - natural language inference. We demonstrate that the learned representations yield competitive or superior results to previous general-purpose sentence representation methods. We also observe that this approach produces good word embeddings.
In future work, we would like to understand and interpret the inductive biases that our model learns and observe how they change with the addition of different tasks beyond just our simple analysis of sentence characteristics and syntax. Having a rich, continuous sentence representation space could allow the application of state-of-the-art generative models of images such as that of BIBREF52 to language. One could also consider controllable text generation by directly manipulating the sentence representations and realizing it by decoding with a conditional language model.
Acknowledgements
The authors would like to thank Chinnadhurai Sankar, Sebastian Ruder, Eric Yuan, Tong Wang, Alessandro Sordoni, Guillaume Lample and Varsha Embar for useful discussions. We are also grateful to the PyTorch development team BIBREF53 . We thank NVIDIA for donating a DGX-1 computer used in this work and Fonds de recherche du Québec - Nature et technologies for funding.
Model Training
We present some architectural specifics and training details of our multi-task framework. Our shared encoder uses a common word embedding lookup table and GRU. We experiment with unidirectional, bidirectional and 2 layer bidirectional GRUs (details in Appendix section SECREF9 ). For each task, every decoder has its separate word embedding lookups, conditional GRUs and fully connected layers that project the GRU hidden states to the target vocabularies. The last hidden state of the encoder is used as the initial hidden state of the decoder and is also presented as input to all the gates of the GRU at every time step. For natural language inference, the same encoder is used to encode both the premise and hypothesis and a concatenation of their representations along with the absolute difference and hadamard product (as described in BIBREF9 ) are given to a single layer MLP with a dropout BIBREF55 rate of 0.3. All models use word embeddings of 512 dimensions and GRUs with either 1500 or 2048 hidden units. We used minibatches of 48 examples and the Adam BIBREF54 optimizer with a learning rate of 0.002. Models were trained for 7 days on an Nvidia Tesla P100-SXM2-16GB GPU. While BIBREF6 report close to a month of training, we only train for 7 days, made possible by advancements in GPU hardware and software (cuDNN RNNs).
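For the natural language inference head specifically, the feature combination described above can be sketched as follows; only the dropout rate of 0.3 and the feature combination are stated in the text, so the hidden size and the ReLU are assumptions.

```python
import torch
import torch.nn as nn

class NLIHead(nn.Module):
    """Single-layer MLP over [u; v; |u - v|; u * v] with dropout 0.3, where u
    and v are the premise and hypothesis representations."""
    def __init__(self, dim, hidden=512, n_classes=3, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout(p_drop),
            nn.Linear(4 * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, u, v):
        features = torch.cat([u, v, torch.abs(u - v), u * v], dim=-1)
        return self.net(features)   # (batch, n_classes) logits
```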
We did not tune any of the architectural details and hyperparameters owing to the fact that we were unable to identify any clear criterion on which to tune them. Gains in performance on a specific task do not often translate to better transfer performance.
Vocabulary Expansion & Representation Pooling
In addition to performing 10-fold cross-validation to determine the L2 regularization penalty on the logistic regression models, we also tune the way in which our sentence representations are generated from the hidden states corresponding to words in a sentence. For example, BIBREF6 use the last hidden state while BIBREF9 perform max-pooling across all of the hidden states. We consider both of these approaches and pick the one with better performance on the validation set. We note that max-pooling works best on sentiment tasks such as MR, CR, SUBJ and MPQA, while the last hidden state works better on all other tasks.
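A sketch of these two pooling strategies over padded encoder outputs, assuming a (batch, time, dim) tensor of hidden states and a vector of true sequence lengths:

```python
import torch

def pool_states(hidden_states, lengths, strategy="max"):
    """Return one fixed-length vector per example using either max-pooling
    over time or the last valid hidden state of each sequence."""
    batch, time, _ = hidden_states.shape
    positions = torch.arange(time, device=hidden_states.device)
    if strategy == "max":
        pad = positions[None, :] >= lengths[:, None]               # (batch, time) pad mask
        masked = hidden_states.masked_fill(pad[..., None], float("-inf"))
        return masked.max(dim=1).values
    last = (lengths - 1).clamp(min=0)                              # index of last real token
    return hidden_states[torch.arange(batch, device=hidden_states.device), last]
```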
We also employ vocabulary expansion on all tasks as in BIBREF6 by training a linear regression to map from the space of pre-trained word embeddings (GloVe) to our model's word embeddings.
Multi-task model details
This section describes the specifics of our multi-task ablations in the experiments section. These definitions hold for all tables except for TABREF18 and TABREF20 . We refer to skip-thought next as STN, French and German NMT as Fr and De, natural language inference as NLI, skip-thought previous as STP and parsing as Par.
+STN +Fr +De : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a forward GRU with 1500-dimensional hidden vectors and a bidirectional GRU, also with 1500-dimensional hidden vectors.
+STN +Fr +De +NLI : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 1500-dimensional hidden vectors and another bidirectional GRU with 1500-dimensional hidden vectors trained without NLI.
+STN +Fr +De +NLI +L : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without NLI.
+STN +Fr +De +NLI +L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without STP.
+STN +Fr +De +NLI +2L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a 2-layer bidirectional GRU with 2048-dimensional hidden vectors and a 1-layer bidirectional GRU with 2048-dimensional hidden vectors trained without STP.
+STN +Fr +De +NLI +L +STP +Par : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without Par.
In tables TABREF18 and TABREF20 we do not concatenate the representations of multiple models.
Description of evaluation tasks
BIBREF6 and BIBREF9 provide a detailed description of tasks that are typically used to evaluate sentence representations. We provide a condensed summary and refer readers to their work for a more thorough description.
Text Classification
We evaluate on text classification benchmarks - sentiment classification on movie reviews (MR), product reviews (CR) and Stanford sentiment (SST), question type classification (TREC), subjectivity/objectivity classification (SUBJ) and opinion polarity (MPQA). Representations are used to train a logistic regression classifier with 10-fold cross validation to tune the L2 weight penalty. The evaluation metric for all these tasks is classification accuracy.
Paraphrase Identification
We also evaluate on pairwise text classification tasks such as paraphrase identification on the Microsoft Research Paraphrase Corpus (MRPC). This is a binary classification problem to identify if two sentences are paraphrases of each other. The evaluation metrics are classification accuracy and F1.
Entailment and Semantic Relatedness
To test if similar sentences share similar representations, we evaluate on the SICK relatedness (SICK-R) task where a linear model is trained to output a score from 1 to 5 indicating the relatedness of two sentences. We also evaluate using the entailment labels in the same dataset (SICK-E) which is a binary classification problem. The evaluation metric for SICK-R is Pearson correlation and classification accuracy for SICK-E.
Semantic Textual Similarity
In this evaluation, we measure the relatedness of two sentences using only the cosine similarity between their representations. We use the Semantic Textual Similarity (STS) benchmark tasks from 2012-2016 (STS12, STS13, STS14, STS15, STS16, STSB). The STS dataset contains sentences from a diverse set of data sources. The evaluation criterion is Pearson correlation.
Image-caption retrieval
Image-caption retrieval is typically formulated as a ranking task wherein images are retrieved and ranked based on a textual description and vice-versa. We use 113k training images from MSCOCO with 5k images for validation and 5k for testing. Image features are extracted using a pre-trained 110 layer ResNet. The evaluation criteria are Recall@K and the median rank across 5 different splits of the data.
Quora Duplicate Question Classification
In addition to the above tasks which were considered by BIBREF9 , we also evaluate on the recently published Quora duplicate question dataset since it is an order of magnitude larger than the others (approximately 400,000 question pairs). The task is to correctly identify question pairs that are duplicates of one another, which we formulate as a binary classification problem. We use the same data splits as in BIBREF45 . Given the size of this data, we consider a more expressive classifier on top of the representations of both questions. Specifically, we train a 4 layer MLP with 1024 hidden units, with a dropout rate of 0.5 after every hidden layer. The evaluation criterion is classification accuracy. We also artificially create a low-resource setting by reducing the number of training examples between 1,000 and 25,000 using the same splits as BIBREF47 .
Sentence Characteristics & Syntax
In an attempt to understand what information is encoded in sentence representations, we consider six different classification tasks where the objective is to predict sentence characteristics such as length, word content and word order BIBREF17 or syntactic properties such as active/passive, tense and the top syntactic sequence (TSS) from the parse tree of a sentence BIBREF14 .
The sentence characteristic tasks are set up in the same way as described in BIBREF17 . The length task is an 8-way classification problem where sentence lengths are binned into 8 ranges. The content task is formulated as a binary classification problem that takes a concatenation of a sentence representation INLINEFORM0 and a word representation INLINEFORM1 to determine if the word is contained in the sentence. The order task is an extension of the content task where a concatenation of the sentence representation and the word representations of two words in a sentence is used to determine if the first word occurs before or after the second. We use a random subset of the 1-billion-word dataset for these experiments that was not used to train our multi-task representations.
The syntactic properties tasks are set up in the same way as described in BIBREF14 . The passive and tense tasks are characterized as binary classification problems given a sentence's representation. The former's objective is to determine if a sentence is written in active/passive voice while the latter's objective is to determine if the sentence is in the past tense or not. The top syntactic sequence (TSS) is a 20-way classification problem with the 19 most frequent top syntactic sequences and 1 miscellaneous class. We use the same dataset as the authors but different training, validation and test splits. | multi-lingual NMT, natural language inference, constituency parsing, skip-thought vectors |
e2f269997f5a01949733c2ec8169f126dabd7571 | e2f269997f5a01949733c2ec8169f126dabd7571_0 | Q: Which data sources do they use?
Text: Introduction
Transfer learning has driven a number of recent successes in computer vision and NLP. Computer vision tasks like image captioning BIBREF0 and visual question answering typically use CNNs pretrained on ImageNet BIBREF1 , BIBREF2 to extract representations of the image, while several natural language tasks such as reading comprehension and sequence labeling BIBREF3 have benefited from pretrained word embeddings BIBREF4 , BIBREF5 that are either fine-tuned for a specific task or held fixed.
Many neural NLP systems are initialized with pretrained word embeddings but learn their representations of words in context from scratch, in a task-specific manner from supervised learning signals. However, learning these representations reliably from scratch is not always feasible, especially in low-resource settings, where we believe that using general purpose sentence representations will be beneficial.
Some recent work has addressed this by learning general-purpose sentence representations BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . However, there exists no clear consensus yet on what training objective or methodology is best suited to this goal.
Understanding the inductive biases of distinct neural models is important for guiding progress in representation learning. BIBREF14 and BIBREF15 demonstrate that neural machine translation (NMT) systems appear to capture morphology and some syntactic properties. BIBREF14 also present evidence that sequence-to-sequence parsers BIBREF16 more strongly encode source language syntax. Similarly, BIBREF17 probe representations extracted by sequence autoencoders, word embedding averages, and skip-thought vectors with a multi-layer perceptron (MLP) classifier to study whether sentence characteristics such as length, word content and word order are encoded.
To generalize across a diverse set of tasks, it is important to build representations that encode several aspects of a sentence. Neural approaches to tasks such as skip-thoughts, machine translation, natural language inference, and constituency parsing likely have different inductive biases. Our work exploits this in the context of a simple one-to-many multi-task learning (MTL) framework, wherein a single recurrent sentence encoder is shared across multiple tasks. We hypothesize that sentence representations learned by training on a reasonably large number of weakly related tasks will generalize better to novel tasks unseen during training, since this process encodes the inductive biases of multiple models. This hypothesis is based on the theoretical work of BIBREF18 . While our work aims at learning fixed-length distributed sentence representations, it is not always practical to assume that the entire “meaning” of a sentence can be encoded into a fixed-length vector. We merely hope to capture some of its characteristics that could be of use in a variety of tasks.
The primary contribution of our work is to combine the benefits of diverse sentence-representation learning objectives into a single multi-task framework. To the best of our knowledge, this is the first large-scale reusable sentence representation model obtained by combining a set of training objectives with the level of diversity explored here, i.e. multi-lingual NMT, natural language inference, constituency parsing and skip-thought vectors. We demonstrate through extensive experimentation that representations learned in this way lead to improved performance across a diverse set of novel tasks not used in the learning of our representations. Such representations facilitate low-resource learning as exhibited by significant improvements to model performance for new tasks in the low labelled data regime - achieving comparable performance to a few models trained from scratch using only 6% of the available training set on the Quora duplicate question dataset.
Related Work
The problem of learning distributed representations of phrases and sentences dates back over a decade. For example, BIBREF19 present an additive and multiplicative linear composition function of the distributed representations of individual words. BIBREF20 combine symbolic and distributed representations of words using tensor products. Advances in learning better distributed representations of words BIBREF4 , BIBREF5 combined with deep learning have made it possible to learn complex non-linear composition functions of an arbitrary number of word embeddings using convolutional or recurrent neural networks (RNNs). A network's representation of the last element in a sequence, which is a non-linear composition of all inputs, is typically assumed to contain a squashed “summary” of the sentence. Most work in supervised learning for NLP builds task-specific representations of sentences rather than general-purpose ones.
Notably, skip-thought vectors BIBREF6 , an extension of the skip-gram model for word embeddings BIBREF4 to sentences, learn re-usable sentence representations from weakly labeled data. Unfortunately, these models take weeks or often months to train. BIBREF8 address this by considering faster alternatives such as sequential denoising autoencoders and shallow log-linear models. BIBREF21 , however, demonstrate that simple word embedding averages are comparable to more complicated models like skip-thoughts. More recently, BIBREF9 show that a completely supervised approach to learning sentence representations from natural language inference data outperforms all previous approaches on transfer learning benchmarks. Here we use the terms “transfer learning performance" on “transfer tasks” to mean the performance of sentence representations evaluated on tasks unseen during training. BIBREF10 demonstrated that representations learned by state-of-the-art large-scale NMT systems also generalize well to other tasks. However, their use of an attention mechanism prevents the learning of a single fixed-length vector representation of a sentence. As a result, they present a bi-attentive classification network that composes information present in all of the model's hidden states to achieve improvements over a corresponding model trained from scratch. BIBREF11 and BIBREF12 demonstrate that discourse-based objectives can also be leveraged to learn good sentence representations.
Our work is most similar to that of BIBREF22 , who train a many-to-many sequence-to-sequence model on a diverse set of weakly related tasks that includes machine translation, constituency parsing, image captioning, sequence autoencoding, and intra-sentence skip-thoughts. There are two key differences between that work and our own. First, like BIBREF10 , their use of an attention mechanism prevents learning a fixed-length vector representation for a sentence. Second, their work aims for improvements on the same tasks on which the model is trained, as opposed to learning re-usable sentence representations that transfer elsewhere.
We further present a fine-grained analysis of how different tasks contribute to the encoding of different information signals in our representations following work by BIBREF14 and BIBREF17 .
BIBREF23 similarly present a multi-task framework for textual entailment with task supervision at different levels of learning. “Universal" multi-task models have also been successfully explored in the context of computer vision problems BIBREF24 , BIBREF25 .
Sequence-to-Sequence Learning
Five out of the six tasks that we consider for multi-task learning are formulated as sequence-to-sequence problems BIBREF26 , BIBREF27 . Briefly, sequence-to-sequence models are a specific case of encoder-decoder models where the inputs and outputs are sequential. They directly model the conditional distribution of outputs given inputs INLINEFORM0 . The input INLINEFORM1 and output INLINEFORM2 are sequences INLINEFORM3 and INLINEFORM4 . The encoder produces a fixed length vector representation INLINEFORM5 of the input, which the decoder then conditions on to generate an output. The decoder is auto-regressive and breaks down the joint probability of outputs into a product of conditional probabilities via the chain rule: INLINEFORM6
BIBREF26 and BIBREF27 use encoders and decoders parameterized as RNN variants such as Long Short-term Memory (LSTMs) BIBREF28 or Gated Recurrent Units (GRUs) BIBREF29 . The hidden representation INLINEFORM0 is typically the last hidden state of the encoder RNN.
BIBREF30 alleviate the gradient bottleneck between the encoder and the decoder by introducing an attention mechanism that allows the decoder to condition on every hidden state of the encoder RNN instead of only the last one. In this work, as in BIBREF6 , BIBREF8 , we do not employ an attention mechanism. This enables us to obtain a single, fixed-length, distributed sentence representation. To diminish the effects of vanishing gradient, we condition every decoding step on the encoder hidden representation INLINEFORM0 . We use a GRU for the encoder and decoder in the interest of computational speed. The encoder is a bidirectional GRU while the decoder is a unidirectional conditional GRU whose parameterization is as follows: DISPLAYFORM0
The encoder representation INLINEFORM0 is provided as conditioning information to the reset gate, update gate and hidden state computation in the GRU via the parameters INLINEFORM1 , INLINEFORM2 and INLINEFORM3 to avoid attenuation of information from the encoder.
Multi-task Sequence-to-sequence Learning
BIBREF31 present a simple one-to-many multi-task sequence-to-sequence learning model for NMT that uses a shared encoder for English and task-specific decoders for multiple target languages. BIBREF22 extend this by also considering many-to-one (many encoders, one decoder) and many-to-many architectures. In this work, we consider a one-to-many model since it lends itself naturally to the idea of combining inductive biases from different training objectives. The same bidirectional GRU encodes the input sentences from different tasks into a compressed summary INLINEFORM0 which is then used to condition a task-specific GRU to produce the output sentence.
Training Objectives & Evaluation
Our motivation for multi-task training stems from theoretical insights presented in BIBREF18 . We refer readers to that work for a detailed discussion of results, but the conclusions most relevant to this discussion are (i) that learning multiple related tasks jointly results in good generalization as measured by the number of training examples required per task; and (ii) that inductive biases learned on sufficiently many training tasks are likely to be good for learning novel tasks drawn from the same environment.
We select the following training objectives to learn general-purpose sentence embeddings. Our desiderata for the task collection were: sufficient diversity, existence of fairly large datasets for training, and success as standalone training objectives for sentence representations.
Multi-task training setup
Multi-task training with different data sources for each task still poses open questions. For example: When does one switch to training on a different task? Should the switching be periodic? Do we weight each task equally? If not, what training ratios do we use?
BIBREF31 use periodic task alternations with equal training ratios for every task. In contrast, BIBREF22 alter the training ratios for each task based on the size of their respective training sets. Specifically, the training ratio for a particular task, INLINEFORM0 , is the fraction of the number of training examples in that task to the total number of training samples across all tasks. The authors then perform INLINEFORM1 parameter updates on task INLINEFORM2 before selecting a new task at random proportional to the training ratios, where N is a predetermined constant.
We take a simpler approach and pick a new sequence-to-sequence task to train on after every parameter update sampled uniformly. An NLI minibatch is interspersed after every ten parameter updates on sequence-to-sequence tasks (this was chosen so as to complete roughly 6 epochs of the dataset after 7 days of training). Our approach is described formally in the Algorithm below.
Model details can be found in section SECREF7 in the Appendix.
Given: a set of INLINEFORM0 tasks with a common source language, a shared encoder INLINEFORM1 across all tasks and a set of INLINEFORM2 task-specific decoders INLINEFORM3 . Let INLINEFORM4 denote each model's parameters, INLINEFORM5 a probability vector ( INLINEFORM6 ) denoting the probability of sampling a task such that INLINEFORM7 , datasets for each task INLINEFORM8 and a loss function INLINEFORM9 .
While the parameters have not converged: (1) sample a task; (2) sample an input-output pair from that task's dataset; (3) compute the input representation with the shared encoder; (4) compute the prediction with the corresponding task-specific decoder; (5) update the parameters with an Adam step on the resulting loss.
Evaluation Strategies, Experimental Results & Discussion
In this section, we describe our approach to evaluate the quality of our learned representations, present the results of our evaluation and discuss our findings.
Evaluation Strategy
We follow a similar evaluation protocol to those presented in BIBREF6 , BIBREF8 , BIBREF9 which is to use our learned representations as features for a low complexity classifier (typically linear) on a novel supervised task/domain unseen during training without updating the parameters of our sentence representation model. We also consider such a transfer learning evaluation in an artificially constructed low-resource setting. In addition, we also evaluate the quality of our learned individual word representations using standard benchmarks BIBREF36 , BIBREF37 .
The choice of transfer tasks and evaluation framework are borrowed largely from BIBREF9 . We provide a condensed summary of the tasks in section SECREF10 in the Appendix but refer readers to their paper for a more detailed description.
https://github.com/kudkudak/word-embeddings-benchmarks/wiki
Experimental Results & Discussion
Table 2 presents the results of training logistic regression on 10 different supervised transfer tasks using different fixed-length sentence representations. Supervised approaches trained from scratch on some of these tasks are also presented for comparison. We present performance ablations when adding more tasks and increasing the number of hidden units in our GRU (+L). Ablation specifics are presented in section SECREF9 of the Appendix.
It is evident from Table 2 that adding more tasks improves the transfer performance of our model. Increasing the capacity of our sentence encoder with more hidden units (+L) as well as an additional layer (+2L) also leads to improved transfer performance. We observe gains of 1.1-2.0% on the sentiment classification tasks (MR, CR, SUBJ & MPQA) over Infersent. We demonstrate substantial gains on TREC (6% over Infersent and roughly 2% over the CNN-LSTM), outperforming even a competitive supervised baseline. We see similar gains (2.3%) on paraphrase identification (MRPC), closing the gap on supervised approaches trained from scratch. The addition of constituency parsing improves performance on sentence relatedness (SICK-R) and entailment (SICK-E), consistent with observations made by BIBREF48 .
In Table TABREF19 , we show that simply training an MLP on top of our fixed sentence representations outperforms several strong & complex supervised approaches that use attention mechanisms, even on this fairly large dataset. For example, we observe a 0.2-0.5% improvement over the decomposable attention model BIBREF49 . When using only a small fraction of the training data, indicated by the columns 1k-25k, we are able to outperform the Siamese and Multi-Perspective CNN using roughly 6% of the available training set. We also outperform the Deconv LVM model proposed by BIBREF47 in this low-resource setting.
Unlike BIBREF9 , who use pretrained GloVe word embeddings, we learn our word embeddings from scratch. Somewhat surprisingly, in Table TABREF18 we observe that the learned word embeddings are competitive with popular methods such as GloVe, word2vec, and fasttext BIBREF50 on the benchmarks presented by BIBREF36 and BIBREF37 .
In Table TABREF20 , we probe our sentence representations to determine if certain sentence characteristics and syntactic properties can be inferred following work by BIBREF17 and BIBREF14 . We observe that syntactic properties are better encoded with the addition of multi-lingual NMT and parsing. Representations learned solely from NLI do appear to encode syntax but incorporation into our multi-task framework does not amplify this signal. Similarly, we observe that sentence characteristics such as length and word order are better encoded with the addition of parsing.
In Appendix Table TABREF30 , we note that our sentence representations outperform skip-thoughts and are on par with Infersent for image-caption retrieval. We also observe that comparing sentences using cosine similarities correlates reasonably well with their relatedness on semantic textual similarity benchmarks (Appendix Table TABREF31 ).
We also present qualitative analysis of our learned representations by visualizations using dimensionality reduction techniques (Figure FIGREF11 ) and nearest neighbor exploration (Appendix Table TABREF32 ). Figure FIGREF11 shows t-sne plots of our sentence representations on three different datasets - SUBJ, TREC and DBpedia. DBpedia is a large corpus of sentences from Wikipedia labeled by category and used by BIBREF51 . Sentences appear to cluster reasonably well according to their labels. The clustering also appears better than that demonstrated in Figure 2 of BIBREF6 on TREC and SUBJ. Appendix Table TABREF32 contains sentences from the BookCorpus and their nearest neighbors. Sentences with some lexical overlap and similar discourse structure appear to be clustered together.
Conclusion & Future Work
We present a multi-task framework for learning general-purpose fixed-length sentence representations. Our primary motivation is to encapsulate the inductive biases of several diverse training signals used to learn sentence representations into a single model. Our multi-task framework includes a combination of sequence-to-sequence tasks such as multi-lingual NMT, constituency parsing and skip-thought vectors as well as a classification task - natural language inference. We demonstrate that the learned representations yield competitive or superior results to previous general-purpose sentence representation methods. We also observe that this approach produces good word embeddings.
In future work, we would like to understand and interpret the inductive biases that our model learns and observe how they change with the addition of different tasks beyond just our simple analysis of sentence characteristics and syntax. Having a rich, continuous sentence representation space could allow the application of state-of-the-art generative models of images such as that of BIBREF52 to language. One could also consider controllable text generation by directly manipulating the sentence representations and realizing it by decoding with a conditional language model.
Acknowledgements
The authors would like to thank Chinnadhurai Sankar, Sebastian Ruder, Eric Yuan, Tong Wang, Alessandro Sordoni, Guillaume Lample and Varsha Embar for useful discussions. We are also grateful to the PyTorch development team BIBREF53 . We thank NVIDIA for donating a DGX-1 computer used in this work and Fonds de recherche du Québec - Nature et technologies for funding.
Model Training
We present some architectural specifics and training details of our multi-task framework. Our shared encoder uses a common word embedding lookup table and GRU. We experiment with unidirectional, bidirectional and 2 layer bidirectional GRUs (details in Appendix section SECREF9 ). For each task, every decoder has its separate word embedding lookups, conditional GRUs and fully connected layers that project the GRU hidden states to the target vocabularies. The last hidden state of the encoder is used as the initial hidden state of the decoder and is also presented as input to all the gates of the GRU at every time step. For natural language inference, the same encoder is used to encode both the premise and hypothesis and a concatenation of their representations along with the absolute difference and hadamard product (as described in BIBREF9 ) are given to a single layer MLP with a dropout BIBREF55 rate of 0.3. All models use word embeddings of 512 dimensions and GRUs with either 1500 or 2048 hidden units. We used minibatches of 48 examples and the Adam BIBREF54 optimizer with a learning rate of 0.002. Models were trained for 7 days on an Nvidia Tesla P100-SXM2-16GB GPU. While BIBREF6 report close to a month of training, we only train for 7 days, made possible by advancements in GPU hardware and software (cuDNN RNNs).
We did not tune any of the architectural details and hyperparameters owing to the fact that we were unable to identify any clear criterion on which to tune them. Gains in performance on a specific task do not often translate to better transfer performance.
Vocabulary Expansion & Representation Pooling
In addition to performing 10-fold cross-validation to determine the L2 regularization penalty on the logistic regression models, we also tune the way in which our sentence representations are generated from the hidden states corresponding to words in a sentence. For example, BIBREF6 use the last hidden state while BIBREF9 perform max-pooling across all of the hidden states. We consider both of these approaches and pick the one with better performance on the validation set. We note that max-pooling works best on sentiment tasks such as MR, CR, SUBJ and MPQA, while the last hidden state works better on all other tasks.
We also employ vocabulary expansion on all tasks as in BIBREF6 by training a linear regression to map from the space of pre-trained word embeddings (GloVe) to our model's word embeddings.
Multi-task model details
This section describes the specifics of our multi-task ablations in the experiments section. These definitions hold for all tables except for TABREF18 and TABREF20 . We refer to skip-thought next as STN, French and German NMT as Fr and De, natural language inference as NLI, skip-thought previous as STP and parsing as Par.
+STN +Fr +De : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a forward GRU with 1500-dimensional hidden vectors and a bidirectional GRU, also with 1500-dimensional hidden vectors.
+STN +Fr +De +NLI : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 1500-dimensional hidden vectors and another bidirectional GRU with 1500-dimensional hidden vectors trained without NLI.
+STN +Fr +De +NLI +L : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without NLI.
+STN +Fr +De +NLI +L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without STP.
+STN +Fr +De +NLI +2L +STP : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a 2-layer bidirectional GRU with 2048-dimensional hidden vectors and a 1-layer bidirectional GRU with 2048-dimensional hidden vectors trained without STP.
+STN +Fr +De +NLI +L +STP +Par : The sentence representation INLINEFORM0 is the concatenation of the final hidden vectors from a bidirectional GRU with 2048-dimensional hidden vectors and another bidirectional GRU with 2048-dimensional hidden vectors trained without Par.
In tables TABREF18 and TABREF20 we do not concatenate the representations of multiple models.
Description of evaluation tasks
BIBREF6 and BIBREF9 provide a detailed description of tasks that are typically used to evaluate sentence representations. We provide a condensed summary and refer readers to their work for a more thorough description.
Text Classification
We evaluate on text classification benchmarks: sentiment classification on movie reviews (MR), product reviews (CR) and Stanford sentiment (SST), question type classification (TREC), subjectivity/objectivity classification (SUBJ) and opinion polarity (MPQA). Representations are used to train a logistic regression classifier with 10-fold cross-validation to tune the L2 weight penalty. The evaluation metric for all these tasks is classification accuracy.
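A rough sketch of this evaluation protocol using scikit-learn, assuming the frozen sentence vectors and labels are available as NumPy arrays; the grid of L2 strengths is an illustrative assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

def evaluate_classification(sentence_vecs: np.ndarray, labels: np.ndarray) -> float:
    """Fit a logistic regression on frozen sentence vectors, tuning the L2
    strength with 10-fold cross-validation, and report mean accuracy."""
    grid = {"C": [2 ** k for k in range(-2, 5)]}  # C is the inverse L2 penalty
    clf = GridSearchCV(LogisticRegression(max_iter=1000), grid, cv=10)
    clf.fit(sentence_vecs, labels)
    return cross_val_score(clf.best_estimator_, sentence_vecs, labels, cv=10).mean()

# toy usage with random features and binary labels
acc = evaluate_classification(np.random.randn(200, 64), np.random.randint(0, 2, 200))
print(f"10-fold accuracy: {acc:.3f}")
```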
Paraphrase Identification
We also evaluate on pairwise text classification tasks such as paraphrase identification on the Microsoft Research Paraphrase Corpus (MRPC). This is a binary classification problem: identify whether two sentences are paraphrases of each other. The evaluation metrics are classification accuracy and F1.
Entailment and Semantic Relatedness
To test if similar sentences share similar representations, we evaluate on the SICK relatedness (SICK-R) task, where a linear model is trained to output a score from 1 to 5 indicating the relatedness of two sentences. We also evaluate using the entailment labels in the same dataset (SICK-E), which is a binary classification problem. The evaluation metric is Pearson correlation for SICK-R and classification accuracy for SICK-E.
Semantic Textual Similarity
In this evaluation, we measure the relatedness of two sentences using only the cosine similarity between their representations. We use the semantic textual similarity (STS) benchmark tasks from 2012-2016 (STS12, STS13, STS14, STS15, STS16, STSB). The STS datasets contain sentences from a diverse set of data sources. The evaluation criterion is Pearson correlation.
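A minimal sketch of this unsupervised STS evaluation, assuming paired sentence vectors and gold similarity scores are available as NumPy arrays:

```python
import numpy as np
from scipy.stats import pearsonr

def sts_score(vecs_a: np.ndarray, vecs_b: np.ndarray, gold: np.ndarray) -> float:
    """Cosine similarity between paired sentence vectors, correlated with gold scores."""
    a = vecs_a / np.linalg.norm(vecs_a, axis=1, keepdims=True)
    b = vecs_b / np.linalg.norm(vecs_b, axis=1, keepdims=True)
    cosine = (a * b).sum(axis=1)
    return pearsonr(cosine, gold)[0]

# toy usage: 50 sentence pairs with gold similarity scores in [0, 5]
print(sts_score(np.random.randn(50, 128), np.random.randn(50, 128),
                np.random.uniform(0, 5, 50)))
```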
Image-caption retrieval
Image-caption retrieval is typically formulated as a ranking task wherein images are retrieved and ranked based on a textual description, and vice-versa. We use 113k training images from MSCOCO with 5k images for validation and 5k for testing. Image features are extracted using a pre-trained 110-layer ResNet. The evaluation criteria are Recall@K and the median K across 5 different splits of the data.
Quora Duplicate Question Classification
In addition to the above tasks, which were considered by BIBREF9, we also evaluate on the recently published Quora duplicate question dataset, since it is an order of magnitude larger than the others (approximately 400,000 question pairs). The task is to correctly identify question pairs that are duplicates of one another, which we formulate as a binary classification problem. We use the same data splits as in BIBREF45. Given the size of this data, we consider a more expressive classifier on top of the representations of both questions. Specifically, we train a 4-layer MLP with 1024 hidden units and a dropout rate of 0.5 after every hidden layer (a sketch follows below). The evaluation criterion is classification accuracy. We also artificially create a low-resource setting by reducing the number of training examples to between 1,000 and 25,000, using the same splits as BIBREF47.
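A hedged PyTorch sketch of the 4-layer MLP classifier described above; how the two question representations are combined is not specified here, so concatenation is assumed purely for illustration, as is the 2048-dimensional sentence vector size:

```python
import torch
import torch.nn as nn

class QuoraDuplicateClassifier(nn.Module):
    """4-layer MLP (1024 units, dropout 0.5) over a pair of question vectors."""
    def __init__(self, sent_dim: int = 2048, hidden: int = 1024):
        super().__init__()
        layers, in_dim = [], 2 * sent_dim      # concatenation of both questions (assumed)
        for _ in range(4):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(0.5)]
            in_dim = hidden
        layers.append(nn.Linear(hidden, 2))    # duplicate / not duplicate
        self.net = nn.Sequential(*layers)

    def forward(self, q1_vec: torch.Tensor, q2_vec: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([q1_vec, q2_vec], dim=-1))

logits = QuoraDuplicateClassifier()(torch.randn(8, 2048), torch.randn(8, 2048))
print(logits.shape)  # torch.Size([8, 2])
```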
Sentence Characteristics & Syntax
In an attempt to understand what information is encoded in sentence representations, we consider six different classification tasks whose objective is to predict sentence characteristics such as length, word content and word order BIBREF17, or syntactic properties such as active/passive voice, tense and the top syntactic sequence (TSS) from the parse tree of a sentence BIBREF14.
The sentence characteristic tasks are set up in the same way as described in BIBREF17. The length task is an 8-way classification problem where sentence lengths are binned into 8 ranges. The content task is formulated as a binary classification problem that takes the concatenation of a sentence representation and a word representation to determine whether the word is contained in the sentence. The order task is an extension of the content task where the concatenation of the sentence representation and the word representations of two words in the sentence is used to determine whether the first word occurs before or after the second. We use a random subset of the 1-billion-word dataset for these experiments, consisting of sentences that were not used to train our multi-task representations.
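An illustrative sketch of how single-sentence probing examples of this kind can be constructed; the length bin edges and the word sampling below are assumptions for illustration and do not reproduce the exact setup of BIBREF17:

```python
import random

def make_probing_examples(sentence_vec, word_vecs, tokens):
    """Build one example per probing task from one sentence (needs >= 2 tokens).

    length : bin index of the sentence length (8 bins, edges are illustrative)
    content: (sentence_vec ++ word_vec, 1) -- the word is contained in the sentence
    order  : (sentence_vec ++ w1 ++ w2, 1) -- w1 occurs before w2
    """
    bins = [5, 8, 12, 16, 20, 25, 30]
    length_label = sum(len(tokens) > b for b in bins)

    w_in = random.choice(tokens)
    content_example = (list(sentence_vec) + list(word_vecs[w_in]), 1)

    i, j = sorted(random.sample(range(len(tokens)), 2))
    order_example = (list(sentence_vec) + list(word_vecs[tokens[i]])
                     + list(word_vecs[tokens[j]]), 1)

    return length_label, content_example, order_example
```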
The syntactic properties tasks are setup in the same way as described in BIBREF14 .The passive and tense tasks are characterized as binary classification problems given a sentence's representation. The former's objective is to determine if a sentence is written in active/passive voice while the latter's objective is to determine if the sentence is in the past tense or not. The top syntactic sequence (TSS) is a 20-way classification problem with 19 most frequent top syntactic sequences and 1 miscellaneous class. We use the same dataset as the authors but different training, validation and test splits. | - En-Fr (WMT14)
- En-De (WMT15)
- Skipthought (BookCorpus)
- AllNLI (SNLI + MultiNLI)
- Parsing (PTB + 1-billion word) |
317a6f211ecf48c58f008c12fbd5d41901db3e36 | 317a6f211ecf48c58f008c12fbd5d41901db3e36_0 | Q: Has there been previous work on SNMT?
Text: Introduction
Simultaneous translation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, the task of producing a partial translation of a sentence before the whole input sentence ends, is useful in many scenarios including outbound tourism, international summits and multilateral negotiations. Different from consecutive translation, in which translation quality alone matters, simultaneous translation trades off between translation quality and latency. The syntactic structure difference between the source and target language makes simultaneous translation more challenging. For example, when translating from a verb-final (SOV) language (e.g., Japanese) to a verb-medial (SVO) language (e.g., English), the verb appears much later in the source sequence than in the target language. Premature translations can therefore lead to a significant loss in quality BIBREF5.
Recently, a number of researchers have endeavored to explore methods for simultaneous translation in the context of NMT BIBREF6, BIBREF7, BIBREF8, BIBREF9. Some of them propose sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5, BIBREF10. These approaches are either memory inefficient during training BIBREF5 or hard to implement BIBREF10. Others utilize a full-sentence base model to perform simultaneous translation by modifications to the encoder and the decoding process. To match the incremental source context, they replace the bidirectional encoder with a left-to-right encoder BIBREF3, BIBREF11, BIBREF4, BIBREF12 or recompute the encoder hidden states BIBREF13. On top of that, heuristic algorithms BIBREF3, BIBREF14 or a READ/WRITE model trained with reinforcement learning BIBREF11, BIBREF4, BIBREF12 or supervised learning BIBREF13 are used to decide, at every step, whether to wait for the next source token or output a target token. However, these models either cannot directly use a pretrained vanilla CNMT model with bidirectional encoder as the base model or work in a sub-optimal way in the decoding stage.
In this paper, we study the problem of how to do simultaneous translation better with a pretrained vanilla CNMT model. We formulate simultaneous translation as two nested loops: an outer loop that updates an input buffer with newly observed source tokens and an inner loop that translates the source tokens in the buffer updated at each outer step. For the outer loop, the input buffer can be updated by an ASR system with an arbitrary update schedule. For the inner loop, we perform prefix translation using the pretrained CNMT model with dynamically built encoder and decoder hidden states. We also design two novel stopping criteria for the inner loop: a Length and EOS (LE) controller that stops with heuristics, and a Trainable (TN) controller that learns to stop with a better quality and latency balance. We evaluate our method on IWSLT16 German-English (DE-EN) translation in both directions, WMT15 English-German (EN-DE) translation in both directions, and NIST Chinese-to-English (ZH$\rightarrow $EN) translation. The results show that our method consistently improves over the de-facto baselines, and achieves low latency and reasonable BLEU scores.
Background
Given a set of source–target sentence pairs $\left\langle \mathbf {x}_m,\mathbf {y}^*_m\right\rangle _{m=1}^M$, a consecutive NMT model can be trained by maximizing the log-likelihood of the target sentence from its entire source side context:
where $\phi $ is a set of model parameters. At inference time, the NMT model first encodes a source language sentence $\mathbf {x}=\lbrace x_1,...,x_{T_\eta }\rbrace $ with its encoder and passes the encoded representations $\mathbf {h}=\lbrace h_1,...,h_{T_\eta }\rbrace $ to a greedy decoder. Then the greedy decoder generates a translated sentence in the target language by sequentially choosing the most likely token at each step $t$:
The distribution of next target word is defined as:
where $z_{t}$ is the decoder hidden state at position $t$. In consecutive NMT, once obtained, the encoder hidden states $\mathbf {h}$ and the decoder hidden state $z_t$ are not updated anymore and will be reused during the entire decoding process.
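A minimal sketch of this consecutive greedy decoding loop; `model.encode` and `model.decode_step` are assumed, illustrative interfaces rather than the API of any particular toolkit:

```python
import torch

@torch.no_grad()
def greedy_decode(model, src_tokens, bos, eos, max_len=200):
    """Consecutive NMT inference: encode the full source once, then repeatedly
    feed the growing target prefix and pick the most likely next token."""
    enc = model.encode(src_tokens)          # encoder states h_1..h_T, computed once
    ys = [bos]
    for _ in range(max_len):
        logits = model.decode_step(enc, torch.tensor(ys))  # scores over next word
        next_tok = int(logits[-1].argmax())
        ys.append(next_tok)
        if next_tok == eos:
            break
    return ys[1:]
```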
Simultaneous NMT
In SNMT, we receive streaming input tokens, and learn to translate them in real-time. We formulate simultaneous translation as two nested loops: the outer loop that updates an input buffer with newly observed source tokens and the inner loop that translates source tokens in the buffer updated at each outer step.
More precisely, suppose at the end of an outer step $s-1$, the input buffer is $\mathbf {x}^{s-1} = \lbrace x_1, ..., x_{\eta \left[ s-1\right]}\rbrace $, and the output buffer is $\mathbf {y}^{s-1} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Then at outer step $s$, the system translates with the following steps:
The system observes $c_s > 0$ new source tokens and updates the input buffer to be $\mathbf {x}^{s} = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ where $\eta \left[ s\right]=\eta \left[ s-1\right]+c_s$.
Then, the system starts inner loop translation and writes $w_s>=0$ target tokens to the output buffer. The output buffer is updated to be $\mathbf {y}^{s} = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $ where $\tau \left[ s\right]=\tau \left[ s-1\right]+w_s$.
The simultaneous decoding process continues until no more source tokens are added in the outer loop. We define the last outer step as the terminal outer step $S$, and other outer steps as non-terminal outer steps.
For the outer loop, we make no assumption about the value of $c_s$, while all previous work assumes $c_s=1$. This setting is more realistic because a) increasing $c_s$ can reduce the number of outer steps, thus reducing computation cost; b) in a real speech translation application, an ASR system may generate multiple tokens at a time.
For the inner loop, we adapt a pretrained vanilla CNMT model to perform partial translation with two important concerns:
Prefix translation: given a source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and a target prefix $\mathbf {y}^s_{\tau \left[ s-1\right]} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, how to predict the remaining target tokens?
Stopping criterion: since the NMT model is trained on full sentences, how to design the stopping criterion for it when translating partial source sentences?
Simultaneous NMT ::: Prefix Translation
At an outer step $s$, given encoder hidden states $\mathbf {h}^s$ for source prefix $\mathbf {x}^s= \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ for target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s= \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, we perform prefix translation sequentially with a greedy decoder:
where $t$ starts from $t=\tau \left[ s-1\right]+1$. The prefix translation terminates when a stopping criterion meets, yielding a translation $\mathbf {y}^s = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $.
However, a major problem comes from the above translation method: how can we obtain the encoder hidden states $\mathbf {h}^s$ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ at the beginning of prefix translation? In CNMT, the encoder hidden states and previous decoder hidden states are reused at each decoding time step. Different from CNMT, SNMT is fed with an incremental source side context. On the encoder side, we can address this by either reusing previous encoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF12:
or dynamically re-building all encoder hidden states BIBREF5:
On the decoder side, since the encoder hidden states have been updated from $\mathbf {h}^{s-1}$ to $\mathbf {h}^s$, we can choose to reuse previous decoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF5:
or rebuild all previous decoder hidden states from current encoder hidden states $\mathbf {h}^s$ with force decoding:
To better predict the remaining target tokens, we rebuild all encoder and decoder hidden states following Eq. DISPLAY_FORM11 and DISPLAY_FORM13 at the beginning of prefix translation. This strategy ensures that all encoder and decoder hidden states are obtained by attending to the same source tokens, which is consistent with how encoder and decoder hidden states are computed at training time. Besides, these attainable source tokens are all available source context at current time. Compared with using Eq. DISPLAY_FORM10 or DISPLAY_FORM12, our method can potentially better utilize the available source context.
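A hedged sketch of prefix translation with fully rebuilt states, following Eq. DISPLAY_FORM11 and DISPLAY_FORM13; the `model.encode`/`model.decode_step` interface and the `max_new` cap are assumptions for illustration:

```python
import torch

@torch.no_grad()
def prefix_translate(model, src_prefix, tgt_prefix, should_stop, max_new=200):
    """Inner-loop translation at one outer step: rebuild all encoder states for the
    current source prefix, force-decode the existing target prefix to rebuild the
    decoder states, then continue greedily until the stopping controller fires."""
    enc = model.encode(src_prefix + [model.eos])           # rebuild ALL encoder states
    ys = [model.bos] + list(tgt_prefix)                    # force-decode target prefix
    for k in range(max_new):
        logits = model.decode_step(enc, torch.tensor(ys))  # greedy continuation
        next_tok = int(logits[-1].argmax())
        if next_tok == model.eos or should_stop(len(src_prefix), len(ys) - 1, k):
            break                                          # stop without emitting
        ys.append(next_tok)
    return ys[1:]                                          # updated target prefix y^s
```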
Simultaneous NMT ::: Stopping Criterion
In consecutive NMT, decoding algorithms such as greedy decoding or beam search terminate when the translator predicts an EOS token or the length of the translation meets a predefined threshold:
where $\text{maxlen}$, $u$ and $v$ are all hyper-parameters. In fairseq-py, the defaults at inference time are $\text{maxlen}=+\infty $, $u=0$ and $v=200$. The decoding for most source sentences terminates when the translator predicts the EOS token. In simultaneous decoding, since we use an NMT model pretrained on full sentences to translate partial source sentences, it tends to predict EOS when the source context has been fully translated. However, such a strategy could be too aggressive for simultaneous translation. Fig. FIGREF18 shows such an example. At outer step 2, the translator predicts “you EOS", emitting the target token “you". However, “you" is not the expected translation for “你" in the context of “你好。" (“hello."). The right decision is that prefix translation at outer step 2 should stop without emitting any words.
To alleviate such problems and perform better simultaneous translation with a pretrained CNMT model, we propose two novel stopping criteria for prefix translation.
Simultaneous NMT ::: Stopping Criterion ::: Length and EOS Control
In consecutive translation, the decoding process stops mainly when predicting EOS. In contrast, for prefix translation at non-terminal outer step, we use both length and EOS to stop the prefix translation process. We achieve this by setting the hyper-parameters in Eq. DISPLAY_FORM15 as $\text{maxlen}=+\infty $, $u=1$ and $v=-d$, where $d$ is a non-negative integer. The hyper-parameter $d$ determines the translation latency of the system.
More specifically, before prefix translation at outer step $s$, we have source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Prefix translation terminates at inner step $w_s$ when predicting an EOS token or satisfying:
We call this stopping criterion the Length and EOS (LE) stopping controller.
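Reading Eq. DISPLAY_FORM15 with $\text{maxlen}=+\infty $, $u=1$ and $v=-d$, the length part of the LE controller reduces to the simple check below (EOS is handled separately); this is a sketch, with argument names chosen for illustration:

```python
def le_should_stop(num_src_tokens: int, num_tgt_tokens: int, d: int) -> bool:
    """Length-and-EOS (LE) controller for non-terminal outer steps:
    stop prefix translation once |y| >= |x| - d.  A larger d waits for
    more source context, i.e. trades latency for quality."""
    return num_tgt_tokens >= num_src_tokens - d

# e.g. with d = 2, after observing 5 source tokens we emit at most 3 target tokens
assert le_should_stop(5, 3, 2) and not le_should_stop(5, 2, 2)
```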
Simultaneous NMT ::: Stopping Criterion ::: Learning When to Stop
Although simple and easy to implement, the LE controller lacks the capability to learn the optimal timing with which to stop prefix translation. Therefore, we design a small trainable network, called the Trainable (TN) stopping controller, to learn when to stop prefix translation at non-terminal outer steps. Fig. FIGREF22 shows an illustration.
At each inner decoding step $k$ for non-terminal outer step $s$, the TN controller utilizes a stochastic policy $\pi _\theta $ parameterized by a neural network to make the binary decision on whether to stop translation at current stage:
where $z_{\tau \left[ s-1\right]+k}^s$ is the current decoder hidden state. Prefix translation stops if the TN controller predicts $a_{\tau \left[ s-1\right]+k}=1$. The controller function $f_\theta $ can take on a variety of forms, and for simplicity we implement it as a feedforward network with two hidden layers, followed by a softmax layer.
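A hedged PyTorch sketch of the TN controller; the text only specifies two hidden layers followed by a softmax, so the hidden size and activation below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TNController(nn.Module):
    """Trainable stopping policy: a small feedforward net with two hidden layers
    and a softmax over {CONTINUE, STOP}, fed the current decoder hidden state."""
    def __init__(self, decoder_dim: int = 512, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(decoder_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),
        )

    def forward(self, decoder_state: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.net(decoder_state), dim=-1)  # pi(a | z)

probs = TNController()(torch.randn(512))
action = int(torch.multinomial(probs, 1))  # 1 = stop prefix translation
```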
To train the TN controller, we freeze the NMT model with pretrained parameters, and optimize the TN network with policy gradient for reward maximization $\mathcal {J}= \mathbb {E}_{\pi _{\theta }}(\sum _{t=1}^{T_\tau } r_t )$. With a trained TN controller, prefix translation stops at inner decoding step $w_s$ when predicting an EOS token or satisfying:
In the following, we describe the details of the reward function and of training with policy gradient.
Simultaneous NMT ::: Stopping Criterion ::: Learning When to Stop ::: Reward
To trade-off between translation quality and latency, we define the reward function at inner decoding step $k$ of outer step $s$ as:
where $t=\tau \left[ s-1\right]+k$, and $r_t^Q$ and $r_t^D$ are rewards related to quality and delay, respectively. $\alpha \ge 0$ is a hyper-parameter that we adjust to balance the trade-off between translation quality and delay. Similar to BIBREF4, we utilize sentence-level BLEU BIBREF15, BIBREF16 with reward shaping BIBREF17 as the reward for quality:
where
is the intermediate reward. Note that the higher the values of BLEU are, the more rewards the TN controller receives.
Following BIBREF4, BIBREF5, we use average lagging (AL) as the reward for latency:
where
$l(t)$ is the number of observed source tokens when generating the $t$-th target token, $t_e= \mathop {\rm argmin}_{t}{(l(t)=|\mathbf {x}|)}$ denotes the earliest point when the system observes the full source sentence, $\lambda =\frac{|\mathbf {y}|}{|\mathbf {x}|}$ represents the target-to-source length ratio and $d^* \ge 0$ is a hyper-parameter called target delay that indicates the desired system latency. Note that the lower the values of AL are, the more rewards the TN controller receives.
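A sketch of the standard average lagging computation implied by this definition; the additional target-delay offset $d^*$ used in the reward is omitted here for brevity:

```python
def average_lagging(l, src_len: int, tgt_len: int) -> float:
    """Average Lagging (AL): how many source tokens, on average, the system lags
    behind an ideal wait-0 translator.  l[t-1] is the number of source tokens that
    had been read when the t-th target token was emitted (l must reach src_len)."""
    gamma = tgt_len / src_len                      # target-to-source length ratio
    # earliest decoding step at which the full source has been read
    t_e = next(t for t in range(1, tgt_len + 1) if l[t - 1] >= src_len)
    return sum(l[t - 1] - (t - 1) / gamma for t in range(1, t_e + 1)) / t_e

# wait-3-like behaviour on a 6-token source / 6-token target
print(average_lagging([3, 4, 5, 6, 6, 6], 6, 6))  # 3.0
```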
Simultaneous NMT ::: Stopping Criterion ::: Learning When to Stop ::: Policy Gradient
We train the TN controller with policy gradient BIBREF18, and the gradients are:
where $R_t=\sum _{i=t}^{T_\tau } r_i$ is the cumulative future reward for the current decision. We can adopt any sampling approach to estimate the expected gradient. In our experiments, we randomly sample multiple action trajectories from the current policy $\pi _{\theta }$ and estimate the gradient with the collected accumulated rewards. We tried variance reduction by subtracting from $R_t$ a baseline average reward estimated by a linear regression model, but found that it did not improve performance. Therefore, we simply normalize the reward in each mini-batch without using a baseline reward.
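A minimal REINFORCE-style sketch of this update, assuming the per-decision log-probabilities were collected from the TN controller's sampled trajectory; the normalization is shown per trajectory for brevity, whereas the text normalizes per mini-batch:

```python
import torch

def reinforce_update(log_probs, rewards, optimizer):
    """One policy-gradient step: weight each decision's log-probability by its
    return R_t (sum of future rewards), with simple normalization and no
    learned baseline, as described above."""
    returns = torch.tensor([sum(rewards[i:]) for i in range(len(rewards))])
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```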
Experiments ::: Settings ::: Dataset
We compare our approach with the baselines on WMT15 German-English (DE-EN) translation in both directions. This is also the most widely used dataset to evaluate SNMT's performance BIBREF3, BIBREF4, BIBREF5, BIBREF10, BIBREF13. To further evaluate our approach's efficacy in trading off translation quality and latency on other language pairs and on spoken language, we also conduct experiments with the proposed LE and TN methods on NIST Chinese-to-English (ZH$\rightarrow $EN) translation and IWSLT16 German-English (DE-EN) translation in both directions. For WMT15, we use newstest2014 for validation and newstest2015 for test. For NIST, we use MT02 for validation, and MT05, MT06, MT08 for test. For IWSLT16, we use tst13 for validation and tst14 for test. Table TABREF32 shows the details. All the data is tokenized and segmented into subword symbols using byte-pair encoding BIBREF19 to restrict the size of the vocabulary. We use 40,000 joint merge operations on WMT15, and 24,000 on IWSLT16. For NIST, we use 30,000 merge operations for the source and target sides separately. Unless explicitly mentioned otherwise, we simulate the simultaneous translation scenario at inference time with these datasets by assuming that the system observes one new source token at each outer step, i.e., $c_s=1$.
Experiments ::: Settings ::: Pretrained NMT Model
We use Transformer BIBREF8 trained with maximum likelihood estimation as the pretrained CNMT model and implement our method based on fairseq-py. We follow the setting in transformer_iwslt_de_en for IWSLT16 dataset, and transformer_wmt_en_de for WMT15 and NIST dataset. Fairseq-py adds an EOS token for all source sentences during training and inference. Therefore, to be consistent with the CNMT model implemented with fairseq-py, we also add an EOS token at the end of the source prefix for prefix translation.
Experiments ::: Settings ::: TN Controller
To train the TN controller, we use a mini-batch size of 8,16,16 and sample 5,10,10 trajectories for each sentence pair in a batch for IWSLT16, WMT15 and NIST, respectively. We set the number of newly observed source tokens at each outer step to be 1 during the training for simplicity. We set $\alpha $ to be $0.04$, and $d^*$ to be $2,5,8$. All our TN controllers are trained with policy gradient using Adam optimizer BIBREF20 with 30,000 updates. We select the last model as our final TN controller.
Experiments ::: Settings ::: Baseline
We compare our model against three baselines that utilize a pretrained CNMT model to perform simultaneous translation:
test_time_waitk: the test-time waitk simultaneous decoding algorithm proposed in BIBREF5, i.e., using a full-sentence model but decoding it with a waitk policy. We report the results when $k=1,3,5,7,9$.
SL: the SL model proposed in BIBREF13, which learns an adaptive READ/WRITE policy from oracle READ/WRITE sequences generated with heuristics. We report the results for $\rho =0.65,0.6,0.55,0.5,0.45,0.4$.
BIBREF4: the adaptation of BIBREF4's two-staged full-sentence model + reinforcement learning on Transformer by BIBREF5. We report the results when using $CW=2,5,8$ as the target delay.
We report the result with $d=0,2,4,6,8$ for our proposed LE method and $d^*=2,5,8$ for our proposed TN method. For all baselines, we cite the results reported in BIBREF13. Since they did not mention the details of data preprocessing, we cannot compare the BLEU and AL scores directly with theirs. Therefore, we normalize the BLEU and AL scores with its corresponding upper bound, i.e. the BLEU and AL scores obtained when the pretrained Transformer performs standard greedy decoding (Greedy).
Experiments ::: Results
We compare our method with the baselines on the test set of WMT15 EN$\rightarrow $DE and DE$\rightarrow $EN translation tasks. Fig. FIGREF40 shows the result. The points closer to the upper left corner indicate better overall performance, namely low latency and high quality. In all these figures, we observe that, as latency increases, all methods improve in quality. The TN stopping controller significantly outperforms all the baseline systems in both translation tasks, demonstrating that it indeed learns the appropriate timing to stop prefix translation. The LE controller outperforms the baselines on WMT15 EN$\rightarrow $DE translation at high latency region and performs similarly or worse on other cases.
We show the model's efficacy in trading off quality and latency on other language pairs and on spoken language in Fig. FIGREF41. The TN controller obtains better performance on all translation tasks, especially in the low-latency region. For example, on IWSLT16 EN$\rightarrow $DE translation, it is +$2.5$ to +$3.3$ BLEU ahead of the LE method. TN also obtains promising translation quality with acceptable latency: with a lag of $<7$ tokens, TN obtains 96.95%, 97.20% and 94.03% of the BLEU of consecutive greedy decoding for IWSLT16 EN$\rightarrow $DE, IWSLT16 DE$\rightarrow $EN and NIST ZH$\rightarrow $EN translation, respectively.
Experiments ::: Analysis
We analyze the effect of the different ways (Eq. DISPLAY_FORM10-DISPLAY_FORM13) to obtain the encoder and decoder hidden states at the beginning of prefix translation with the LE controller. Fig. FIGREF42 shows the result. We try three variants: a) dynamically rebuild all encoder/decoder hidden states (none); b) reuse decoder hidden states and rebuild all encoder hidden states (decoder); c) reuse previous encoder hidden states and rebuild all decoder hidden states (encoder). The left Y axis and the X axis show the BLEU-vs-AL curve. We observe that when reusing previous encoder hidden states (encoder), the translation fails. We ascribe this to the discrepancy between training and decoding for the encoder. We also observe that when $d=0,2$, reusing decoder hidden states (decoder) yields negative AL. To analyze this, we plot the translation-to-reference length ratio versus AL with the right Y axis and the X axis. It shows that with decoder, the decoding process stops too early and generates translations that are too short. Therefore, to avoid such problems and to be consistent with the training process of the CNMT model, it is important to dynamically rebuild all encoder/decoder hidden states for prefix translation.
Since we make no assumption about $c_s$, i.e., the number of newly observed source tokens at each outer step, we test the effect of different $c_s$ in this section. Fig. FIGREF43 shows the results with the LE and TN controllers on the test set of WMT15 EN$\rightarrow $DE translation. We observe that as $c_s$ increases, both LE and TN tend to improve in quality and worsen in latency. When $c_s=1$, the LE controller obtains the best balance between quality and latency. In contrast, the TN controller obtains a similar quality-latency balance with different $c_s$, demonstrating that it successfully learns the right timing to stop regardless of the input update schedule.
We also analyze the TN controller's adaptability by monitoring the initial delay, i.e., the number of observed source tokens before emitting the first target token, on the test set of WMT15 EN$\rightarrow $DE translation, as shown in Fig. FIGREF52. $d^*$ is the target delay measured with AL (used in Eq. DISPLAY_FORM29). It demonstrates that the TN controller has a lot of variance in its initial delay. The distribution of the initial delay changes with the target delay: with a higher target delay, the average initial delay is larger. For most sentences, the initial delay is within $1-7$ tokens.
In speech translation, listeners are also concerned with long silences during which no translation occurs. Following BIBREF4, BIBREF5, we use Consecutive Wait (CW) to measure this.
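The CW equation itself did not survive extraction here. As a hedged sketch, one common formulation averages the number of consecutive source tokens read between successive target emissions:

```python
def consecutive_wait(reads_per_write):
    """Consecutive Wait (CW): a latency measure focused on silences.  Given the
    number of source tokens read immediately before each target token is written,
    report the average over writes that actually had to wait (one common variant)."""
    waits = [r for r in reads_per_write if r > 0]
    return sum(waits) / len(waits) if waits else 0.0

# TN-style schedule (bursty writes) vs LE-style schedule (one-in/one-out)
print(consecutive_wait([5, 0, 0, 0, 4, 0, 0, 0]))  # 4.5
print(consecutive_wait([1, 1, 1, 1, 1, 1, 1, 1]))  # 1.0
```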
Fig. FIGREF54 shows the BLEU-vs-CW plots for our two proposed algorithms. The TN controller has a higher CW than the LE controller. This is because the TN controller prefers to update the output buffer in consecutive bursts (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 3\ 0\ 0\ 0\ 0\ 0\ 5\ 0\ 0\ 0\ 0\ 4\ ...$), while the LE controller often updates its output buffer in step with the input buffer (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 1\ 1\ 1\ 1\ 1\ 1\ ...$ when $d=4$). Although larger than that of LE, the CW of TN ($< 6$) is acceptable for most speech translation scenarios.
Experiments ::: Translation Examples
Fig. FIGREF55 shows three translation examples with the LE and TN controllers on the test sets of NIST ZH$\rightarrow $EN and WMT15 EN$\rightarrow $DE translation. In manual inspection of these examples and others, we find that the TN controller learns a conservative timing for stopping prefix translation. For example, in example 2, our method outputs the translation “wu bangguo attended the signing ceremony” when observing “吴邦国 出席 签字 仪式 并”, instead of the more aggressive translation “wu bangguo attended the signing ceremony and”. Such a strategy helps to alleviate the problem of premature translation, i.e., translating before observing enough future context.
Related Work
A number of works in simultaneous translation divide the translation process into two stages. A segmentation component first divides the incoming text into segments, and then each segment is translated by a translator independently or with previous context. The segmentation boundaries can be predicted by prosodic pauses detected in speech BIBREF0, BIBREF21, linguistic cues BIBREF22, BIBREF23, or a classifier based on alignment information BIBREF24, BIBREF25 and translation accuracy BIBREF1, BIBREF2, BIBREF26.
Some authors have recently endeavored to perform simultaneous translation in the context of NMT. BIBREF3, BIBREF14, BIBREF5 introduce a manually designed criterion to control when to translate. BIBREF11, BIBREF4, BIBREF12 extend the criterion into a trainable agent in a reinforcement learning framework. However, these works either develop sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5 or fail to use a pretrained consecutive NMT model in an optimal way BIBREF3, BIBREF14, BIBREF11, BIBREF4, BIBREF12, BIBREF13. In contrast, our work differs significantly from theirs in the way it uses a pretrained consecutive NMT model to perform simultaneous translation and in the design of the two stopping criteria.
Conclusion
We have presented a novel framework for improving simultaneous translation with a pretrained consecutive NMT model. The basic idea is to translate the partial source sentence with the pretrained consecutive NMT model and to stop the translation with two novel stopping criteria. Extensive experiments demonstrate that our method outperforms the state-of-the-art baselines in balancing translation quality and latency. | Yes |
a726046eec1e2efa5fe3926963863bf755e64682 | a726046eec1e2efa5fe3926963863bf755e64682_0 | Q: Which languages do they experiment on?
| German, English, Chinese |
6d9fbd42b54313cfdc2665809886330f209e9286 | 6d9fbd42b54313cfdc2665809886330f209e9286_0 | Q: What corpora is used?
Text: Introduction
Simultaneous translation BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, the task of producing a partial translation of a sentence before the whole input sentence ends, is useful in many scenarios including outbound tourism, international summit and multilateral negotiations. Different from the consecutive translation in which translation quality alone matters, simultaneous translation trades off between translation quality and latency. The syntactic structure difference between the source and target language makes simultaneous translation more challenging. For example, when translating from a verb-final (SOV) language (e.g., Japanese) to a verb-media (SVO) language (e.g., English), the verb appears much later in the source sequence than in the target language. Some premature translations can lead to significant loss in quality BIBREF5.
Recently, a number of researchers have endeavored to explore methods for simultaneous translation in the context of NMT BIBREF6, BIBREF7, BIBREF8, BIBREF9. Some of them propose sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5, BIBREF10. These approaches are either memory inefficient during training BIBREF5 or hard to implement BIBREF10. Others utilize a full-sentence base model to perform simultaneous translation by modifications to the encoder and the decoding process. To match the incremental source context, they replace the bidirectional encoder with a left-to-right encoder BIBREF3, BIBREF11, BIBREF4, BIBREF12 or recompute the encoder hidden states BIBREF13. On top of that, heuristic algorithms BIBREF3, BIBREF14 or a READ/WRITE model trained with reinforcement learning BIBREF11, BIBREF4, BIBREF12 or supervised learning BIBREF13 are used to decide, at every step, whether to wait for the next source token or output a target token. However, these models either cannot directly use a pretrained vanilla CNMT model with bidirectional encoder as the base model or work in a sub-optimal way in the decoding stage.
In this paper, we study the problem of how to do simultaneous translation better with a pretrained vanilla CNMT model. We formulate simultaneous translation as two nested loops: an outer loop that updates the input buffer with newly observed source tokens and an inner loop that translates source tokens in the buffer updated at each outer step. For the outer loop, the input buffer can be updated by an ASR system with an arbitrary update schedule. For the inner loop, we perform prefix translation using the pretrained CNMT model with dynamically built encoder and decoder hidden states. We also design two novel stopping criteria for the inner loop: a Length and EOS (LE) controller that stops with heuristics, and a Trainable (TN) controller that learns to stop with a better quality and latency balance. We evaluate our method on IWSLT16 German-English (DE-EN) translation in both directions, WMT15 English-German (EN-DE) translation in both directions, and NIST Chinese-to-English (ZH$\rightarrow $EN) translation. The results show that our method consistently improves over the de-facto baselines and achieves low latency and reasonable BLEU scores.
Background
Given a set of source–target sentence pairs $\left\langle \mathbf {x}_m,\mathbf {y}^*_m\right\rangle _{m=1}^M$, a consecutive NMT model can be trained by maximizing the log-likelihood of the target sentence from its entire source side context:
where $\phi $ is a set of model parameters. At inference time, the NMT model first encodes a source language sentence $\mathbf {x}=\lbrace x_1,...,x_{T_\eta }\rbrace $ with its encoder and passes the encoded representations $\mathbf {h}=\lbrace h_1,...,h_{T_\eta }\rbrace $ to a greedy decoder. Then the greedy decoder generates a translated sentence in the target language by sequentially choosing the most likely token at each step $t$:
The distribution of the next target word is defined as:
where $z_{t}$ is the decoder hidden state at position $t$. In consecutive NMT, once obtained, the encoder hidden states $\mathbf {h}$ and the decoder hidden state $z_t$ are not updated anymore and will be reused during the entire decoding process.
Simultaneous NMT
In SNMT, we receive streaming input tokens, and learn to translate them in real-time. We formulate simultaneous translation as two nested loops: the outer loop that updates an input buffer with newly observed source tokens and the inner loop that translates source tokens in the buffer updated at each outer step.
More precisely, suppose at the end of an outer step $s-1$, the input buffer is $\mathbf {x}^{s-1} = \lbrace x_1, ..., x_{\eta \left[ s-1\right]}\rbrace $, and the output buffer is $\mathbf {y}^{s-1} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Then at outer step $s$, the system translates with the following steps:
The system observes $c_s > 0$ new source tokens and updates the input buffer to be $\mathbf {x}^{s} = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ where $\eta \left[ s\right]=\eta \left[ s-1\right]+c_s$.
Then, the system starts inner loop translation and writes $w_s \ge 0$ target tokens to the output buffer. The output buffer is updated to be $\mathbf {y}^{s} = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $ where $\tau \left[ s\right]=\tau \left[ s-1\right]+w_s$.
The simultaneous decoding process continues until no more source tokens are added in the outer loop. We define the last outer step as the terminal outer step $S$, and other outer steps as non-terminal outer steps.
For the outer loop, we make no assumption about the value of $c_s$, while all previous work assumes $c_s=1$. This setting is more realistic because a) increasing $c_s$ can reduce the number of outer steps, thus reducing computation cost; b) in a real speech translation application, an ASR system may generate multiple tokens at a time.
For the inner loop, we adapt a pretrained vanilla CNMT model to perform partial translation with two important concerns:
Prefix translation: given a source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and a target prefix $\mathbf {y}^s_{\tau \left[ s-1\right]} = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, how to predict the remaining target tokens?
Stopping criterion: since the NMT model is trained with full sentences, how to design the stopping criterion for it when translating partial source sentences?
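To make the decoding procedure concrete, the two nested loops described above can be sketched in a few lines of Python. This is an illustrative skeleton only: `observe`, `model.next_token`, and `should_stop` are assumed interfaces standing in for the ASR input schedule, one greedy decoding step of the pretrained CNMT model, and a stopping criterion; none of them are actual fairseq-py functions.

```python
def simultaneous_decode(model, observe, should_stop, eos="</s>", max_len=200):
    """Sketch of the two nested loops: the outer loop grows the source buffer
    with newly observed tokens, the inner loop extends the target buffer by
    greedy prefix translation until a stopping criterion fires."""
    src, tgt = [], []
    while True:                                   # outer loop
        chunk, is_last = observe()                # c_s >= 1 new source tokens (e.g. from ASR)
        src.extend(chunk)                         # input buffer x^s
        while len(tgt) < max_len:                 # inner loop: prefix translation
            token = model.next_token(src, tgt)    # one greedy step on the partial source
            if token == eos:
                break                             # full prefix translated
            if not is_last and should_stop(src, tgt):
                break                             # wait for more source context
            tgt.append(token)                     # output buffer y^s
        if is_last:                               # terminal outer step: no more source input
            return tgt
```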
Simultaneous NMT ::: Prefix Translation
At an outer step $s$, given encoder hidden states $\mathbf {h}^s$ for source prefix $\mathbf {x}^s= \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ for target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s= \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $, we perform prefix translation sequentially with a greedy decoder:
where $t$ starts from $t=\tau \left[ s-1\right]+1$. The prefix translation terminates when a stopping criterion meets, yielding a translation $\mathbf {y}^s = \lbrace y_1, ..., y_{\tau \left[ s\right]}\rbrace $.
However, a major problem comes from the above translation method: how can we obtain the encoder hidden states $\mathbf {h}^s$ and decoder hidden states $\mathbf {z}_{\tau \left[ s\right]-1}^s$ at the beginning of prefix translation? In CNMT, the encoder hidden states and previous decoder hidden states are reused at each decoding time step. Different from CNMT, SNMT is fed with an incremental source side context. On the encoder side, we can address this by either reusing previous encoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF12:
or dynamically re-building all encoder hidden states BIBREF5:
On the decoder side, since the encoder hidden states have been updated from $\mathbf {h}^{s-1}$ to $\mathbf {h}^s$, we can choose to reuse previous decoder hidden states BIBREF3, BIBREF4, BIBREF14, BIBREF5:
or rebuild all previous decoder hidden states from current encoder hidden states $\mathbf {h}^s$ with force decoding:
To better predict the remaining target tokens, we rebuild all encoder and decoder hidden states following Eq. DISPLAY_FORM11 and DISPLAY_FORM13 at the beginning of prefix translation. This strategy ensures that all encoder and decoder hidden states are obtained by attending to the same source tokens, which is consistent with how encoder and decoder hidden states are computed at training time. Besides, these attainable source tokens are all of the available source context at the current time. Compared with using Eq. DISPLAY_FORM10 or DISPLAY_FORM12, our method can potentially better utilize the available source context.
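A sketch of this prefix translation step is given below, assuming a generic encoder-decoder interface; `encode`, `force_decode`, and `decode_step` are hypothetical wrappers around a pretrained Transformer rather than actual fairseq-py calls.

```python
def prefix_translate(model, src_prefix, tgt_prefix, should_stop, eos_id, max_len=200):
    """Prefix translation at one outer step: rebuild every encoder and decoder
    hidden state from the current source prefix, then continue greedy decoding
    from the existing target prefix."""
    enc_states = model.encode(src_prefix)                    # rebuild h^s from the full prefix
    dec_states = model.force_decode(enc_states, tgt_prefix)  # rebuild z^s by force decoding
    out = list(tgt_prefix)
    while len(out) < max_len:
        token, dec_states = model.decode_step(enc_states, dec_states, out)
        if token == eos_id or should_stop(src_prefix, out):
            break
        out.append(token)                                    # emit one more target token
    return out
```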
Simultaneous NMT ::: Stopping Criterion
In consecutive NMT, the decoding algorithm such as greedy decoding or beam search terminates when the translator predicts an EOS token or the length of the translation meets a predefined threshold:
where $\text{maxlen}$, $u$ and $v$ are all hyper-parameters. In fairseq-py, they set $\text{maxlen}=+\infty $, $u=0$ and $v=200$ at inference time by default. The decoding for most source sentences terminates when the translator predicts the EOS token. In simultaneous decoding, since we use an NMT model pretrained on full sentences to translate partial source sentences, it tends to predict EOS when the source context has been fully translated. However, such a strategy could be too aggressive for simultaneous translation. Fig. FIGREF18 shows such an example. At outer step 2, the translator predicts “you EOS", emitting target token “you". However, “you" is not the expected translation for “你" in the context of “你好。". The right decision is that prefix translation at outer step 2 should stop without emitting any words.
To alleviate such problems and do better simultaneous translation with a pretrained CNMT model, we propose two novel stopping criteria for prefix translation.
Simultaneous NMT ::: Stopping Criterion ::: Length and EOS Control
In consecutive translation, the decoding process stops mainly when predicting EOS. In contrast, for prefix translation at non-terminal outer step, we use both length and EOS to stop the prefix translation process. We achieve this by setting the hyper-parameters in Eq. DISPLAY_FORM15 as $\text{maxlen}=+\infty $, $u=1$ and $v=-d$, where $d$ is a non-negative integer. The hyper-parameter $d$ determines the translation latency of the system.
More specifically, before prefix translation at outer step $s$, we have source prefix $\mathbf {x}^s = \lbrace x_1, ..., x_{\eta \left[ s\right]}\rbrace $ and target prefix $\mathbf {y}_{\tau \left[ s-1\right]}^s = \lbrace y_1, ..., y_{\tau \left[ s-1\right]}\rbrace $. Prefix translation terminates at inner step $w_s$ when predicting an EOS token or satisfying:
We call this stopping criterion the Length and EOS (LE) stopping controller.
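Assuming the usual length condition of the form $|y| \ge u \cdot |x| + v$, the length part of the LE controller reduces to the following check (the EOS part is handled by the decoder itself):

```python
def le_should_stop(src_len, tgt_len, d, is_terminal_step):
    """Length part of the Length-and-EOS (LE) criterion. With u = 1 and v = -d,
    the condition becomes tgt_len >= src_len - d, i.e. a non-terminal outer step
    stops once the target prefix is within d tokens of the observed source prefix.
    The terminal outer step decodes to EOS exactly as in consecutive NMT."""
    if is_terminal_step:
        return False
    return tgt_len >= src_len - d
```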
Simultaneous NMT ::: Stopping Criterion ::: Learning When to Stop
Although simple and easy to implement, the LE controller lacks the capability to learn the optimal timing with which to stop prefix translation. Therefore, we design a small trainable network called the Trainable (TN) stopping controller to learn when to stop prefix translation at non-terminal outer steps. Fig. FIGREF22 shows the illustration.
At each inner decoding step $k$ for non-terminal outer step $s$, the TN controller utilizes a stochastic policy $\pi _\theta $ parameterized by a neural network to make the binary decision on whether to stop translation at current stage:
where $z_{\tau \left[ s-1\right]+k}^s$ is the current decoder hidden state. The prefix translation stops if the TN controller predicts $a_{\tau \left[ s-1\right]+k}=1$. The controller function $f_\theta $ can take on a variety of forms, and for simplicity we implement it with a feedforward network with two hidden layers, followed by a softmax layer.
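A possible PyTorch realization of the TN controller is sketched below; the hidden width (64) and ReLU activations are illustrative choices, since only a two-hidden-layer feedforward network followed by a softmax is specified above.

```python
import torch
import torch.nn as nn

class TNController(nn.Module):
    """Trainable stopping controller: maps the current decoder hidden state
    to a distribution over the two actions {continue: 0, stop: 1}."""
    def __init__(self, hidden_dim, ctrl_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, ctrl_dim), nn.ReLU(),
            nn.Linear(ctrl_dim, ctrl_dim), nn.ReLU(),
            nn.Linear(ctrl_dim, 2),
        )

    def forward(self, decoder_state):                            # (batch, hidden_dim)
        return torch.softmax(self.net(decoder_state), dim=-1)    # pi_theta(a | z)

# usage at one inner decoding step:
# probs = controller(z_t)                  # z_t: current decoder hidden state
# action = torch.multinomial(probs, 1)     # sample; 1 means stop prefix translation
```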
To train the TN controller, we freeze the NMT model with pretrained parameters, and optimize the TN network with policy gradient for reward maximization $\mathcal {J}= \mathbb {E}_{\pi _{\theta }}(\sum _{t=1}^{T_\tau } r_t )$. With a trained TN controller, prefix translation stops at inner decoding step $w_s$ when predicting an EOS token or satisfying:
In the following, we talk about the details of the reward function and the training detail with policy gradient.
Simultaneous NMT ::: Stopping Criterion ::: Learning When to Stop ::: Reward
To trade-off between translation quality and latency, we define the reward function at inner decoding step $k$ of outer step $s$ as:
where $t=\tau \left[ s-1\right]+k$, and $r_t^Q$ and $r_t^D$ are rewards related to quality and delay, respectively. $\alpha \ge 0$ is a hyper-parameter that we adjust to balance the trade-off between translation quality and delay. Similar to BIBREF4, we utilize sentence-level BLEU BIBREF15, BIBREF16 with reward shaping BIBREF17 as the reward for quality:
where
is the intermediate reward. Note that the higher the values of BLEU are, the more rewards the TN controller receives.
Following BIBREF4, BIBREF5, we use average lagging (AL) as the reward for latency:
where
$l(t)$ is the number of observed source tokens when generating the $t$-th target token, $t_e= \mathop {\rm argmin}_{t}{(l(t)=|\mathbf {x}|)}$ denotes the earliest point when the system observes the full source sentence, $\lambda =\frac{|\mathbf {y}|}{|\mathbf {x}|}$ represents the target-to-source length ratio and $d^* \ge 0$ is a hyper-parameter called target delay that indicates the desired system latency. Note that the lower the values of AL are, the more rewards the TN controller receives.
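For reference, the commonly used form of average lagging can be computed as below; the exact reward shaping with the target delay $d^*$ follows Eq. DISPLAY_FORM29 and is not reproduced here.

```python
def average_lagging(l, src_len, tgt_len):
    """Average Lagging (AL) as commonly defined for simultaneous MT.
    l[t-1] is the number of source tokens observed when emitting target token t;
    the sum runs up to the earliest target position generated with the full
    source sentence in view (assumed to exist once decoding has finished)."""
    lam = tgt_len / src_len                                    # lambda = |y| / |x|
    t_e = next(t for t in range(1, tgt_len + 1) if l[t - 1] >= src_len)
    return sum(l[t - 1] - (t - 1) / lam for t in range(1, t_e + 1)) / t_e
```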
Simultaneous NMT ::: Stopping Criterion ::: Learning When to Stop ::: Policy Gradient
We train the TN controller with policy gradientBIBREF18, and the gradients are:
where $R_t=\sum _{i=t}^{T_\tau } r_i$ is the cumulative future reward for the current decision. We can adopt any sampling approach to estimate the expected gradient. In our experiments, we randomly sample multiple action trajectories from the current policy $\pi _{\theta }$ and estimate the gradient with the collected accumulated reward. We try variance reduction by subtracting a baseline average reward, estimated by a linear regression model, from $R_t$ and find that it does not improve performance. Therefore, we simply normalize the reward in each mini batch without using a baseline reward.
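Schematically, one REINFORCE update with the per-batch reward normalization described above could look as follows (trajectory sampling and the frozen NMT model are abstracted away; a single trajectory is shown for brevity):

```python
import torch

def reinforce_step(optimizer, log_probs, rewards):
    """One policy-gradient update for the TN controller.
    log_probs: list of scalar tensors, log pi_theta(a_t) along one sampled trajectory.
    rewards:   list of floats, immediate rewards r_t for the same trajectory."""
    returns, running = [], 0.0
    for r in reversed(rewards):                 # R_t = sum_{i >= t} r_i
        running += r
        returns.insert(0, running)
    returns = torch.tensor(returns)
    # normalize returns within the sampled batch (no learned baseline)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()   # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```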
Experiments ::: Settings ::: Dataset
We compare our approach with the baselines on WMT15 German-English (DE-EN) translation in both directions. This is also the most widely used dataset to evaluate SNMT's performance BIBREF3, BIBREF4, BIBREF5, BIBREF10, BIBREF13. To further evaluate our approach's efficacy in trading off translation quality and latency on another language pair and on spoken language, we also conduct experiments with the proposed LE and TN methods on NIST Chinese-to-English (ZH$\rightarrow $EN) translation and IWSLT16 German-English (DE-EN) translation in both directions. For WMT15, we use newstest2014 for validation and newstest2015 for test. For NIST, we use MT02 for validation, and MT05, MT06, MT08 for test. For IWSLT16, we use tst13 for validation and tst14 for test. Table TABREF32 shows the details. All the data is tokenized and segmented into subword symbols using byte-pair encoding BIBREF19 to restrict the size of the vocabulary. We use 40,000 joint merge operations on WMT15, and 24,000 on IWSLT16. For NIST, we use 30,000 merge operations for source and target side separately. Unless explicitly mentioned otherwise, we simulate the simultaneous translation scenario at inference time with these datasets by assuming that the system observes one new source token at each outer step, i.e., $c_s=1$.
Experiments ::: Settings ::: Pretrained NMT Model
We use Transformer BIBREF8 trained with maximum likelihood estimation as the pretrained CNMT model and implement our method based on fairseq-py. We follow the setting in transformer_iwslt_de_en for IWSLT16 dataset, and transformer_wmt_en_de for WMT15 and NIST dataset. Fairseq-py adds an EOS token for all source sentences during training and inference. Therefore, to be consistent with the CNMT model implemented with fairseq-py, we also add an EOS token at the end of the source prefix for prefix translation.
Experiments ::: Settings ::: TN Controller
To train the TN controller, we use a mini-batch size of 8,16,16 and sample 5,10,10 trajectories for each sentence pair in a batch for IWSLT16, WMT15 and NIST, respectively. We set the number of newly observed source tokens at each outer step to be 1 during the training for simplicity. We set $\alpha $ to be $0.04$, and $d^*$ to be $2,5,8$. All our TN controllers are trained with policy gradient using Adam optimizer BIBREF20 with 30,000 updates. We select the last model as our final TN controller.
Experiments ::: Settings ::: Baseline
We compare our model against three baselines that utilize a pretrained CNMT model to perform simultaneous translation:
test_time_waitk: the test-time waitk simultaneous decoding algorithm proposed in BIBREF5, i.e., using a full-sentence model but decoding it with a waitk policy. We report the results when $k=1,3,5,7,9$.
SL: the SL model proposed in BIBREF13, which learns an adaptive READ/WRITE policy from oracle READ/WRITE sequences generated with heuristics. We report the results when $\rho =0.65,0.6,0.55,0.5,0.45,0.4$.
BIBREF4: the adaptation of BIBREF4's two-staged full-sentence model + reinforcement learning on Transformer by BIBREF5. We report the results when using $CW=2,5,8$ as the target delay.
We report the results with $d=0,2,4,6,8$ for our proposed LE method and $d^*=2,5,8$ for our proposed TN method. For all baselines, we cite the results reported in BIBREF13. Since they did not mention the details of data preprocessing, we cannot compare the BLEU and AL scores directly with theirs. Therefore, we normalize the BLEU and AL scores with their corresponding upper bounds, i.e., the BLEU and AL scores obtained when the pretrained Transformer performs standard greedy decoding (Greedy).
Experiments ::: Results
We compare our method with the baselines on the test set of WMT15 EN$\rightarrow $DE and DE$\rightarrow $EN translation tasks. Fig. FIGREF40 shows the result. The points closer to the upper left corner indicate better overall performance, namely low latency and high quality. In all these figures, we observe that, as latency increases, all methods improve in quality. The TN stopping controller significantly outperforms all the baseline systems in both translation tasks, demonstrating that it indeed learns the appropriate timing to stop prefix translation. The LE controller outperforms the baselines on WMT15 EN$\rightarrow $DE translation in the high-latency region and performs similarly or worse in the other cases.
We show the model's efficacy in trading off quality and latency on other language pair and spoken language in Fig. FIGREF41. The TN controller obtains better performance on all translation tasks, especially at the low latency region. For example, on IWSLT16 EN$\rightarrow $ DE translation, it is +$2.5$ to +$3.3$ BLEU ahead of the LE method. TN also obtains promising translation quality with acceptable latency: with a lag of $<7$ tokens, TN obtains 96.95%, 97.20% and 94.03% BLEU with respect to consecutive greedy decoding for IWSLT16 EN$\rightarrow $DE, IWSLT16 DE$\rightarrow $EN and NIST ZH$\rightarrow $EN translation, respectively.
Experiments ::: Analysis
We analyze the effect of different ways (Eq. DISPLAY_FORM10-DISPLAY_FORM13) to obtain the encoder and decoder hidden states at the beginning of prefix translation with the LE controller. Fig. FIGREF42 shows the result. We try three variants: a) dynamically rebuild all encoder/decoder hidden states (none); b) reuse decoder hidden states and rebuild all encoder hidden states (decoder); c) reuse previous encoder hidden states and rebuild all decoder hidden states (encoder). The left Y axis and X axis show the BLEU-vs-AL curve. We observe that when previous encoder hidden states are reused (encoder), the translation fails. We ascribe this to the discrepancy between training and decoding for the encoder. We also observe that when $d=0,2$, reusing decoder hidden states (decoder) obtains negative AL. To analyze this, we plot the translation-to-reference length ratio versus AL curve with the right Y axis and X axis. It shows that with decoder, the decoding process stops too early and generates translations that are too short. Therefore, to avoid such problems and to be consistent with the training process of the CNMT model, it is important to dynamically rebuild all encoder/decoder hidden states for prefix translation.
Since we make no assumption about $c_s$, i.e., the number of newly observed source tokens at each outer step, we test the effect of different $c_s$ in this section. Fig. FIGREF43 shows the result with the LE and TN controllers on the test set of WMT15 EN$\rightarrow $DE translation. We observe that as $c_s$ increases, both LE and TN tend to improve in quality but worsen in latency. When $c_s=1$, the LE controller obtains the best balance between quality and latency. In contrast, the TN controller obtains a similar quality and latency balance with different $c_s$, demonstrating that the TN controller successfully learns the right timing to stop regardless of the input update schedule.
We also analyze the TN controller's adaptability by monitoring the initial delay, i.e., the number of observed source tokens before emitting the first target token, on the test set of WMT15 EN$\rightarrow $DE translation, as shown in Fig. FIGREF52. $d^*$ is the target delay measured with AL (used in Eq. DISPLAY_FORM29). It demonstrates that the TN controller has a lot of variance in its initial delay. The distribution of initial delay changes with different target delays: with a higher target delay, the average initial delay is larger. For most sentences, the initial delay is within $1-7$.
In speech translation, listeners are also concerned with long silences during which no translation occurs. Following BIBREF4, BIBREF5, we use Consecutive Wait (CW) to measure this:
Fig. FIGREF54 shows the BLEU-vs-CW plots for our two proposed algorithms. The TN controller has a higher CW than the LE controller. This is because the TN controller prefers to update the output buffer in consecutive bursts (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 3\ 0\ 0\ 0\ 0\ 0\ 5\ 0\ 0\ 0\ 0\ 4\ ...$) while the LE controller often updates its output buffer in step with the input buffer (e.g., it often produces $w_s$ as $0\ 0\ 0\ 0\ 1\ 1\ 1\ 1\ 1\ 1\ ...$ when $d=4$). Although larger than that of LE, the CW for TN ($< 6$) is acceptable for most speech translation scenarios.
Experiments ::: Translation Examples
Fig. FIGREF55 shows three translation examples with the LE and TN controllers on the test set of NIST ZH$\rightarrow $EN and WMT15 EN$\rightarrow $DE translation. In manual inspection of these examples and others, we find that the TN controller learns a conservative timing for stopping prefix translation. For example, in example 2, our method outputs the translation “wu bangguo attended the signing ceremony” when observing “吴邦国 出席 签字 仪式 并”, instead of a more radical translation “wu bangguo attended the signing ceremony and”. Such a strategy helps to alleviate the problem of premature translation, i.e., translating before observing enough future context.
Related Work
A number of works in simultaneous translation divide the translation process into two stages. A segmentation component first divides the incoming text into segments, and then each segment is translated by a translator independently or with previous context. The segmentation boundaries can be predicted by prosodic pauses detected in speech BIBREF0, BIBREF21, linguistic cues BIBREF22, BIBREF23, or a classifier based on alignment information BIBREF24, BIBREF25 and translation accuracy BIBREF1, BIBREF2, BIBREF26.
Some authors have recently endeavored to perform simultaneous translation in the context of NMT. BIBREF3, BIBREF14, BIBREF5 introduce a manually designed criterion to control when to translate. BIBREF11, BIBREF4, BIBREF12 extend the criterion into a trainable agent in a reinforcement learning framework. However, these works either develop sophisticated training frameworks explicitly designed for simultaneous translation BIBREF5 or fail to use a pretrained consecutive NMT model in an optimal way BIBREF3, BIBREF14, BIBREF11, BIBREF4, BIBREF12, BIBREF13. In contrast, our work differs significantly from theirs in how it uses a pretrained consecutive NMT model to perform simultaneous translation and in the design of the two stopping criteria.
Conclusion
We have presented a novel framework for improving simultaneous translation with a pretrained consecutive NMT model. The basic idea is to translate the partial source sentence with the pretrained consecutive NMT model and stop the translation with two novel stopping criteria. Extensive experiments demonstrate that our method outperforms the state-of-the-art baselines in balancing between translation quality and latency. | IWSLT16, WMT15, NIST |
bb8f62950acbd4051774f1bfc50e3d424dd33b7c | bb8f62950acbd4051774f1bfc50e3d424dd33b7c_0 | Q: Do the authors report results only on English datasets?
Text: Introduction
Twitter has shown potential for monitoring public health trends, BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , disease surveillance, BIBREF6 , and providing a rich online forum for cancer patients, BIBREF7 . Social media has been validated as an effective educational and support tool for breast cancer patients, BIBREF8 , as well as for generating awareness, BIBREF9 . Successful supportive organizations use social media sites for patient interaction, public education, and donor outreach, BIBREF10 . The advantages, limitations, and future potential of using social media in healthcare have been thoroughly reviewed, BIBREF11 . Our study aims to investigate tweets mentioning “breast” and “cancer” to analyze patient populations and selectively obtain content relevant to patient treatment experiences.
Our previous study, BIBREF0 , collected tweets mentioning “cancer” over several months to investigate the potential for monitoring self-reported patient treatment experiences. Non-relevant tweets (e.g. astrological and horoscope references) were removed and the study identified a sample of 660 tweets from patients who were describing their condition. These self-reported diagnostic indicators allowed for a sentiment analysis of tweets authored by patients. However, this process was tedious, since the samples were hand verified and sifted through multiple keyword searches. Here, we aim to automate this process with machine learning context classifiers in order to build larger sets of patient self-reported outcomes and quantify the patient experience.
Patients with breast cancer represent a majority of people affected by and living with cancer. As such, it becomes increasingly important to learn from their experiences and understand their journey from their own perspective. The collection and analysis of invisible patient reported outcomes (iPROs) offers a unique opportunity to better understand the patient perspective of care and identify gaps meeting particular patient care needs.
Data Description
Twitter provides a free streaming Application Programming Interface (API), BIBREF12 , for researchers and developers to mine samples of public tweets. Language processing and data mining, BIBREF13 , was conducted using the Python programming language. The free public API allows targeted keyword mining of up to 1% of Twitter's full volume at any given time, referred to as the `Spritzer Feed'.
We collected tweets from two distinct Spritzer endpoints from September 15th, 2016 through December 9th, 2017. The primary feed for the analysis collected INLINEFORM0 million tweets containing the keywords `breast' AND `cancer'. See Figure FIGREF2 for detailed Twitter frequency statistics along with the user activity distribution. Our secondary feed searched just for the keyword `cancer' which served as a comparison ( INLINEFORM1 million tweets, see Appendix 1), and helped us collect additional tweets relevant to cancer from patients. The numeric account ID provided in tweets helps to distinguish high frequency tweeting entities.
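The paper does not name the client library used against the Spritzer endpoint; as one possible reproduction, a tweepy 3.x-style listener for the statuses/filter stream is sketched below (credentials and the output path are placeholders). Note that a space within a track phrase acts as a logical AND, so "breast cancer" matches tweets containing both terms.

```python
import json
import tweepy

class KeywordListener(tweepy.StreamListener):
    """Append every matching tweet to a line-delimited JSON file."""
    def __init__(self, path):
        super().__init__()
        self.out = open(path, "a")

    def on_status(self, status):
        self.out.write(json.dumps(status._json) + "\n")

    def on_error(self, status_code):
        return status_code != 420        # disconnect on 420 rate limiting

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

stream = tweepy.Stream(auth, KeywordListener("breast_cancer_tweets.jsonl"))
stream.filter(track=["breast cancer"], languages=["en"])
```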
Sentence classification combines natural language processing (NLP) with machine learning to identify trends in sentence structure, BIBREF14 , BIBREF15 . Each tweet is converted to a numeric word vector in order to identify distinguishing features by training an NLP classifier on a validated set of relevant tweets. The classifier acts as a tool to sift through ads, news, and comments not related to patients. Our scheme combines a logistic regression classifier, BIBREF16 , with a Convolutional Neural Network (CNN), BIBREF17 , BIBREF18 , to identify self-reported diagnostic tweets.
It is important to be wary of automated accounts (e.g. bots, spam) whose large output of tweets pollutes relevant organic content, BIBREF19 , and can distort sentiment analyses, BIBREF20 . Prior to applying sentence classification, we removed tweets containing hyperlinks to remove automated content (some organic content is necessarily lost with this strict constraint).
The user tweet distribution in Figure FIGREF2 , shows the number of users as a function of the number of their tweets we collected. With an average frequency of INLINEFORM0 tweets per user, this is a relatively healthy activity distribution. High frequency tweeting accounts are present in the tail, with a single account producing over 12,000 tweets —an automated account served as a support tool called `ClearScan' for patients in recovery. Approximately 98% of the 2.4 million users shared less than 10 posts, which accounted for 70% of all sampled tweets.
The Twitter API also provided the number of tweets withheld from our sample, due to rate limiting. Using these overflow statistics, we estimated the sampled proportion of tweets mentioning these keywords. These targeted feeds were able to collect a large sample of all tweets mentioning these terms; approximately 96% of tweets mentioning “breast,cancer” and 65.2% of all tweets mentioning `cancer' while active. More information regarding the types of Twitter endpoints and calculating the sampling proportion of collected tweets is described in Appendix II.
Our goal was to analyze content authored only by patients. To help ensure this outcome we removed posts containing a URL for classification, BIBREF19 . Twitter allows users to spread content from other users via `retweets'. We also removed these posts prior to classification to isolate tweets authored by patients. We also accounted for non-relevant astrological content by removing all tweets containing any of the following horoscope indicators: `astrology',`zodiac',`astronomy',`horoscope',`aquarius',`pisces',`aries',`taurus',`leo',`virgo',`libra', and `scorpio'. We preprocessed tweets by lowercasing and removing punctuation. We also only analyzed tweets for which Twitter had identified `en' for the language English.
Sentiment Analysis and Hedonometrics
We evaluated tweet sentiments with hedonometrics, BIBREF21 , BIBREF22 , using LabMT, a labeled set of 10,000 frequently occurring words rated on a `happiness' scale by individuals contracted through Amazon Mechanical Turk, a crowd-sourced survey tool. These happiness scores helped quantify the average emotional rating of text by totaling the scores from applicable words and normalizing by their total frequency. Hence, the average happiness score, INLINEFORM0 , of a corpus with INLINEFORM1 words in common with LabMT was computed with the weighted arithmetic mean of each word's frequency, INLINEFORM2 , and associated happiness score, INLINEFORM3 : DISPLAYFORM0
The average happiness of each word was rated on a 9 point scale ranging from extremely negative (e.g., `emergency' 3.06, `hate' 2.34, `die' 1.74) to positive (e.g., `laughter' 8.50, `love' 8.42, `healthy' 8.02). Neutral `stop words' ( INLINEFORM0 , e.g., `of', `the', etc.) were removed to enhance the emotional signal of each set of tweets. These high frequency, low sentiment words can dampen a signal, so their removal can help identify hidden trends. One application is to plot INLINEFORM1 as a function of time. The happiness time-series can provide insight into what drives emotional content in text. In particular, peaks and dips (i.e., large deviations from the average) can help identify interesting themes that may be overlooked in the frequency distribution. Calculated scores can give us comparative insight into the context between sets of tweets.
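The weighted-mean computation reduces to a few lines of Python; `labmt` is assumed to be a dict mapping words to LabMT happiness scores, and the neutral band of 4-6 used here is the conventional hedonometer 'lens', standing in for the threshold hidden behind the INLINEFORM placeholder.

```python
from collections import Counter

def average_happiness(words, labmt, lens_low=4.0, lens_high=6.0):
    """Frequency-weighted mean LabMT happiness of a token list. Words inside
    the neutral band [lens_low, lens_high], and words without a LabMT score,
    are ignored to enhance the emotional signal."""
    counts = Counter(w for w in words
                     if w in labmt and not (lens_low <= labmt[w] <= lens_high))
    total = sum(counts.values())
    if total == 0:
        return float("nan")
    return sum(labmt[w] * f for w, f in counts.items()) / total
```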
“Word shift graphs” introduced in, BIBREF21 , compare the terms contributing to shifts in a computed word happiness from two term frequency distributions. This tool is useful in isolating emotional themes from large sets of text and has been previously validated in monitoring public opinion, BIBREF23 as well as for geographical sentiment comparative analyses, BIBREF24 . See Appendix III for a general description of word shift graphs and how to interpret them.
Relevance Classification: Logistic Model and CNN Architecture
We began by building a validated training set of tweets for our sentence classifier. We compiled the patient tweets verified by, BIBREF0 , to train a logistic regression content relevance classifier using a similar framework as, BIBREF16 . To test the classifier, we compiled over 5 million tweets mentioning the word cancer from a 10% `Gardenhose' random sample of Twitter spanning January through December 2015. See Appendix 1 for a statistical overview of this corpus.
We tested a maximum entropy logistic regression classifier using a similar scheme as, BIBREF16 . NLP classifiers operate by converting sentences to word vectors for identifying key characteristics — the vocabulary of the classifier. Within the vocabulary, weights were assigned to each word based upon a frequency statistic. We used the term frequency crossed with the inverse document frequency (tf-idf), as described in , BIBREF16 . The tf-idf weights helped distinguish each term's relative weight across the entire corpus, instead of relying on raw frequency. This statistic dampens highly frequent non-relevant words (e.g. `of', `the', etc.) and enhances relatively rare yet informative terms (e.g. survivor, diagnosed, fighting). This method is commonly implemented in information retrieval for text mining, BIBREF25 . The logistic regression context classifier then performs a binary classification of the tweets we collected from 2015. See Appendix IV for an expanded description of the sentence classification methodology.
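A minimal scikit-learn version of this relevance classifier is sketched below; the paper does not name a specific library, and the hyper-parameters shown are library defaults rather than the study's exact settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_relevance_classifier(texts, labels):
    """texts: preprocessed tweets; labels: 1 = patient-related, 0 = unrelated."""
    clf = make_pipeline(
        TfidfVectorizer(lowercase=True),    # tf-idf weighted word vectors
        LogisticRegression(max_iter=1000),  # maximum-entropy / logistic model
    )
    clf.fit(texts, labels)
    return clf

# relevant = [t for t in unseen_tweets if clf.predict([t])[0] == 1]
```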
We validated the logistic model's performance by manually verifying 1,000 tweets that were classified as `relevant'. We uncovered three categories of immediate interest including: tweets authored by patients regarding their condition (21.6%), tweets from friends/family with a direct connection to a patient (21.9%), and survivors in remission (8.8%). We also found users posting diagnostic-related inquiries (7.6%) about possible symptoms that could be linked to breast cancer, or who were interested in receiving preventative check-ups. The rest (40.2%) were related to `cancer' but not to patients and included public service updates as well as non-patient-authored content (e.g., support groups). We note that the classifier was trained on very limited validated data (N=660), which certainly impacted the results. We used this validated annotated set of tweets to train a more sophisticated classifier to uncover self-diagnostic tweets from users describing their personal breast cancer experiences as current patients or survivors.
We implemented the Convolutional Neural Network (CNN) with Google's Tensorflow interface, BIBREF26 . We adapted our framework from, BIBREF18 , but instead trained the CNN on these 1000 labeled cancer related tweets. The trained CNN was applied to predict patient self-diagnostic tweets from our breast cancer dataset. The CNN outputs a binary value: positive for a predicted tweet relevant to patients or survivors and negative for these other described categories (patient connected, unrelated, diagnostic inquiry). The Tensorflow CNN interface reported a INLINEFORM0 accuracy when evaluating this set of labels with our trained model. These labels were used to predict self-reported diagnostic tweets relevant to breast cancer patients.
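For the CNN stage, a compact Keras/TensorFlow analogue of the architecture adapted from BIBREF18 might look as follows; the vocabulary size, sequence length, filter widths, and dropout rate are illustrative, not the study's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cnn(vocab_size=20000, seq_len=50, embed_dim=128):
    """Binary CNN classifier over padded, integer-encoded tweets."""
    inputs = tf.keras.Input(shape=(seq_len,), dtype="int32")
    x = layers.Embedding(vocab_size, embed_dim)(inputs)
    # parallel convolutions over 3-, 4-, and 5-gram windows, max-pooled
    pooled = [layers.GlobalMaxPooling1D()(layers.Conv1D(100, k, activation="relu")(x))
              for k in (3, 4, 5)]
    x = layers.Dropout(0.5)(layers.Concatenate()(pooled))
    outputs = layers.Dense(1, activation="sigmoid")(x)   # self-diagnostic vs. not
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```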
Results
A set of 845 breast cancer patient self-diagnostic Twitter profiles was compiled by implementing our logistic model followed by prediction with the trained CNN on 9 months of tweets. The logistic model sifted 4,836 relevant tweets, of which 1,331 were predicted to be self-diagnostic by the CNN. Two independent groups annotated the 1,331 tweets to identify patients and evaluate the classifier's results. The raters, showing high inter-rater reliability, individually evaluated each tweet as self-diagnostic of a breast cancer patient or survivor. The raters' independent annotations had 96% agreement.
The classifier correctly identified 1,140 tweets (85.6%) from 845 profiles. A total of 48,113 tweets from these accounts were compiled from both the `cancer' (69%) and `breast' `cancer' (31%) feeds. We provided tweet frequency statistics in Figure FIGREF7 . This is an indicator that this population of breast cancer patients and survivors are actively tweeting about topics related to `cancer' including their experiences and complications.
Next, we applied hedonometrics to compare the patient posts with all collected breast cancer tweets. We found that the surveyed patient tweets were less positive than breast cancer reference tweets. In Figure FIGREF8 , the time series plots show the computed average word happiness at monthly and daily resolutions. The daily happiness scores (small markers) fluctuate strongly, especially within the smaller patient sample (average 100 tweets/day) compared to the reference distribution (average 10,000 tweets/day). The monthly calculations (larger markers) highlight the negative shift in average word happiness between the patients and reference tweets. Large fluctuations in computed word happiness correspond to noteworthy events, including breast cancer awareness month in October, cancer awareness month in February, as well as political debate regarding healthcare in March, May, and July 2017.
In Figure FIGREF9 , word shift graphs display the top 50 words responsible for the shift in computed word happiness between distributions. On the left, tweets from patients were compared to all collected breast cancer tweets. Patient tweets, INLINEFORM0 , were less positive ( INLINEFORM1 v. INLINEFORM2 ) than the reference distribution, INLINEFORM3 . There were relatively fewer of the positive words `mom', `raise', `awareness', `women', `daughter', `pink', and `life' as well as an increase in the negative words `no(t)', `patients', `dying', `killing', `surgery', `sick', `sucks', and `bill'. Breast cancer awareness month, occurring in October, tends to be a high-frequency period with generally more positive and supportive tweets from the general public, which may account for some of the negative shift. Notably, there was a relative increase in the positive words `me', `thank', `you', `love', and `like', which may indicate that many tweet contexts were from the patient's perspective regarding positive experiences. Many tweets regarding treatment were enthusiastic, supportive, and proactive. Other posts were descriptive: over 165 sampled patient tweets mentioned personal chemotherapy experiences and details regarding their treatment schedule and side effects.
Numerous patients and survivors in our sample had identified their condition in reference to the American healthcare regulation debate. Many sampled views of the proposed legislation were very negative, since repealing the Affordable Care Act without replacement could leave many uninsured. Other tweets mentioned worries regarding insurance premiums and costs for patients' and survivors' continued screening. In particular, the pre-existing condition mandate was a chief concern for patients' and survivors' future coverage. This was echoed by 55 of the sampled patients with the hashtag #iamapreexistingcondition (see Table TABREF10 ).
Hashtags (#) are terms that categorize topics within posts. Table TABREF10 lists the most frequently occurring hashtags from both the sampled patients (right) and the full breast cancer corpus (left). Each entry contains the tweet frequency, the number of distinct profiles, and the relative happiness score ( INLINEFORM0 ) for comparison. Political terms were prevalent in both distributions, describing the Affordable Care Act (#aca, #obamacare, #saveaca, #pretectourcare) and the newly introduced American Healthcare Act (#ahca, #trumpcare). A visual representation of these hashtags is displayed using a word-cloud in the Appendix (Figure A4).
Tweets referencing the AHCA were markedly more negative than those referencing the ACA. This shift was investigated in Figure FIGREF9 with a word shift graph. We compared American Healthcare Act tweets, INLINEFORM0 , to posts mentioning the Affordable Care Act, INLINEFORM1 . AHCA tweets were relatively more negative ( INLINEFORM2 v. INLINEFORM3 ) due to an increase in the negatively charged words `scared', `lose', `tax', `zombie', `defects', `cut', `depression', `killing', and `worse'. These were references to the bill leaving many patients/survivors without insurance and jeopardizing future treatment options. `Zombie' referenced the bill's potential return for subsequent votes.
Discussion
We have demonstrated the potential of using sentence classification to isolate content authored by breast cancer patients and survivors. Our novel, multi-step sifting algorithm helped us differentiate topics relevant to patients and compare their sentiments to the global online discussion. The hedonometric comparison of frequent hashtags helped identify prominent topics and how their sentiments differed. This shows that the ambient happiness scores of terms and topics can provide useful information for comparing emotionally charged content. This process can be applied to disciplines across health care and beyond.
Throughout 2017, Healthcare was identified as a pressing issue causing anguish and fear among the breast cancer community; especially among patients and survivors. During this time frame, US legislation was proposed by Congress that could roll back regulations ensuring coverage for individuals with pre-existing conditions. Many individuals identifying as current breast cancer patients/survivors expressed concerns over future treatment and potential loss of their healthcare coverage. Twitter could provide a useful political outlet for patient populations to connect with legislators and sway political decisions.
March 2017 was a relatively negative month due to discussions over American healthcare reform. The American Congress held a vote to repeal the Affordable Care Act (ACA, also referred to as `Obamacare'), which could potentially leave many Americans without healthcare insurance, BIBREF27 . There was an overwhelming sense of apprehension within the `breast cancer' tweet sample. Many patients/survivors in our diagnostic tweet sample identified their condition and how the ACA ensured coverage throughout their treatment.
This period featured a notable tweet frequency spike, comparable to the peak during breast cancer awareness month. The burst event peaked on March 23rd and 24th (65k, 57k tweets respectively, see Figure FIGREF2 ). During the peak, 41,983 (34%) posts contained `care' in reference to healthcare, with a viral retweeted meme accounting for 39,183 of these mentions. The tweet read: "The group proposing to cut breast cancer screening, maternity care, and contraceptive coverage." with an embedded photo of a group of predominately male legislators, BIBREF28 . The criticism referenced the absence of female representation in a decision that could deprive many of coverage for breast cancer screenings. The online community condemned the decision to repeal and replace the ACA with the proposed legislation with references to people in treatment who could `die' (n=7,923) without appropriate healthcare insurance coverage. The vote was later postponed and eventually failed, BIBREF29 .
Public outcry likely influenced this legal outcome, demonstrating Twitter's innovative potential as a support tool for public lobbying of health benefits. Twitter can further be used to remind, motivate and change individual and population health behavior using messages of encouragement (translated to happiness) or dissatisfaction (translated to diminished happiness), for example, with memes that can have knock-on social consequences when they are re-tweeted. Furthermore, Twitter may someday be used to benchmark treatment decisions to align with expressed patient sentiments, and to make or change clinical recommendations based upon the trend histories that evolve with identifiable sources but are entirely in the public domain.
Analyzing the fluctuation in average word happiness as well as bursts in the frequency distributions can help identify relevant events for further investigation. These tools helped us extract themes relevant to breast cancer patients in comparison to the global conversation.
One area in which Twitter has traditionally fallen short for a communication medium is that of the aural dimension, such as nuances and inflections. However, Twitter now includes pictures, videos and emojis with people revealing or conveying their emotions by use of these communication methods. It is envisaged that the aural and visual dimensions will eventually grow to complement the published text component towards a more refined understanding of feelings, attitudes and health and clinical sentiments.
Lack of widespread patient adoption of social media could be a limiting factor for our analysis. A study of breast cancer patients during 2013–2014, BIBREF30 , found social media was a less prominent form of online communication (N = 2578, 12.3%); however, with the advent of smartphones and the Internet of Things (IoT) movement, social media may influence a larger proportion of future patients. Another finding noted that online posts were more likely to be positive about the healthcare decision experience or about survivorship. Therefore, we cannot at this time concretely draw population-based conclusions from social media sampling. Nevertheless, understanding this online patient community could serve as a valuable tool for healthcare providers, and future studies should investigate current social media usage statistics across patients.
Because we trained the content classifier with a relatively small corpus, the model likely over-fit on a few particular word embeddings. For example: 'i have stage iv', `i am * survivor', `i had * cancer'. However, this is similar to the process of recursive keyword searches to gather related content. Also, the power of the CNN lies in matching varied phrasings and syntax as opposed to searching for static phrases ('i have breast cancer', 'i am a survivor'). The CNN shows great promise in sifting relevant context from large sets of data.
Other social forums for patient self reporting and discussion should be incorporated into future studies. For example, as of 2017, https://community.breastcancer.org has built a population of over 199,000 members spanning 145,000 topics. These tools could help connect healthcare professionals with motivated patients. Labeled posts from patients could also help train future context models and help identify adverse symptoms shared among online social communities.
Our study focused primarily on English tweets, since this was the language of our diagnostic training sample. Future studies could incorporate other languages using our proposed framework. It would be important to also expand the API queries with translations of `breast' and `cancer'. This could allow for a cross cultural comparison of how social media influences patients and what patients express on social media.
Conclusion
We have demonstrated the potential of using context classifiers for identifying diagnostic tweets related to the experience of breast cancer patients. Our framework provides a proof of concept for integrating machine learning with natural language processing as a tool to help connect healthcare providers with patient experiences. These methods can inform the medical community to provide more personalized treatment regimens by evaluating patient satisfaction using social listening. Twitter has also been shown as a useful medium for political support of healthcare policies as well as spreading awareness. Applying these analyses across other social media platforms could provide comparably rich data-sets. For instance, Instagram has been found to contain indicative markers for depression, BIBREF31 . Integrating these applications into our healthcare system could provide a better means of tracking iPROs across treatment regimens and over time.
One area in which Twitter has traditionally fallen short for a communication medium is that of the aural dimension, such as nuances and inflections. However, Twitter now includes pictures, videos, and emojis with people revealing or conveying their emotions by use of these communication methods. With augmented reality, virtual reality, and even chatbot interfaces, it is envisaged that the aural and visual dimensions will eventually grow to complement the published text component towards a more refined understanding of feelings, attitudes and health and clinical sentiments.
Follow-on studies could further develop these models and apply them to larger streams of data. Online crowd-sourcing tools, like Amazon's Mechanical Turk, implemented in BIBREF22 , can help compile larger sets of human-validated labels to improve context classifiers. These methods can also be integrated into delivering online outreach surveys as another tool for validating healthcare providers. Future models, trained on several thousand labeled tweets for various real-world applications, should be explored. Invisible patient-reported outcomes should be further investigated via sentiment and context analyses for a better understanding of how to integrate the Internet of Things with healthcare.
Twitter has become a powerful platform for amplifying political voices of individuals. The response of the online breast cancer community to the American Healthcare Act as a replacement to the Affordable Care Act was largely negative due to concerns over loss of coverage. A widespread negative public reaction helped influence this political result. Social media opinion mining could present as a powerful tool for legislators to connect with and learn from their constituents. This can lead to positive impacts on population health and societal well-being.
Acknowledgments
The authors wish to acknowledge the Vermont Advanced Computing Core, which is supported by NASA (NNX-08AO96G) at the University of Vermont which provided High Performance Computing resources that contributed to the research results reported within this poster. EMC was supported by the Vermont Complex Systems Center. CMD and PSD were supported by an NSF BIGDATA grant IIS-1447634.
Appendix II: Calculating the Tweet Sampling Proportion
There are three types of endpoints to access data from Twitter. The `spritzer' (1%) and `gardenhose' (10%) endpoints were both implemented to collect publicly posted relevant data for our analysis. The third type of endpoint is the `Firehose' feed, a full 100% sample, which can be purchased via subscription from Twitter. This was unnecessary for our analysis, since our set of keywords yielded a high proportion of the true tweet sample. We quantified the sampled proportion of tweets using overflow statistics provided by Twitter. These `limit tweets', INLINEFORM0 , issue a timestamp along with the approximate number of posts withheld from our collected sample, INLINEFORM1 . The sampling percentage, INLINEFORM2 , of keyword tweets is approximated as the collected tweet total, INLINEFORM3 , as a proportion of itself combined with the sum of the limit counts, each INLINEFORM4 : DISPLAYFORM0
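Computing the sampling proportion from the collected and withheld counts is straightforward:

```python
def sampling_proportion(num_collected, limit_counts):
    """Approximate fraction of all keyword-matching tweets captured by the stream:
    collected / (collected + sum of withheld 'limit' counts)."""
    return num_collected / (num_collected + sum(limit_counts))

# e.g. sampling_proportion(89_200_000, daily_limit_counts) was ~0.65 for the 'cancer' feed
```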
By the end of 2017, Twitter was accumulating an average of 500 million tweets per day, BIBREF32 . Our topics were relatively specific, which allowed us to collect a large sample of tweets. For the singular search term, `cancer', the keyword sampled proportion, INLINEFORM0 , was approximately 65.21% with a sample of 89.2 million tweets. Our separate Twitter spritzer feed searching for keywords `breast AND cancer` OR `lymphedema' rarely surpassed the 1% limit. We calculated a 96.1% sampling proportion while our stream was active (i.e. not accounting for network or power outages). We present the daily overflow limit counts of tweets not appearing in our data-set, and the approximation of the sampling size in Figure A2.
Appendix III: Interpreting Word Shift Graphs
Word shift graphs are essential tools for analyzing which terms are affecting the computed average happiness scores between two text distributions, BIBREF33 . The reference word distribution, INLINEFORM0 , serves as a lingual basis to compare with another text, INLINEFORM1 . The top 50 words causing the shift in computed word happiness are displayed along with their relative weight. The arrows ( INLINEFORM2 ) next to each word mark an increase or decrease in the word's frequency. The INLINEFORM3 , INLINEFORM4 , symbols indicate whether the word contributes positively or negatively to the shift in computed average word happiness.
In Figure A3, word shift graphs compare tweets mentioning `breast' `cancer' and a random 10% `Gardenhose' sample of non-filtered tweets. On the left, `breast',`cancer' tweets were slightly less positive due to an increase in negative words like `fight', `battle', `risk', and `lost'. These distributions had similar average happiness scores, which was in part due to the relatively more positive words `women', `mom', `raise', `awareness', `save', `support', and `survivor'. The word shift on the right compares breast cancer patient tweets to non-filtered tweets. These were more negative ( INLINEFORM0 = 5.78 v. 6.01) due to a relative increase in words like `fighting', `surgery', `against', `dying', `sick', `killing', `radiation', and `hospital'. This tool helped identify words that signal emotional themes, allowing us to extract content from large corpora and identify thematic emotional topics within the data.
Appendix IV: Sentence Classification Methodology
We built the vocabulary corpus for the logistic model by tokenizing the annotated set of patient tweets by word, removing punctuation, and lowercasing all text. We also included patient-unrelated `cancer' tweets collected as a frame of reference to train the classifier. This set of tweets was not annotated, so we made the assumption that tweets not validated by BIBREF0 were patient-unrelated. The proportion, INLINEFORM0 , of unrelated to related tweets has a profound effect on the vocabulary of the logistic model, so we experimented with various ranges of INLINEFORM1 and settled on a 1:10 ratio of patient-related to unrelated tweets. We then applied the tf-idf statistic to build the binary classification logistic model.
The Tensorflow open source machine learning library has previously shown great promise when applied to NLP benchmark data-sets, BIBREF17 . The CNN loosely works by applying filters, called convolution functions, across various subregions of the feature landscape, BIBREF34 , BIBREF35 , in this case the tweet vocabulary. The model tests the robustness of different word embeddings (e.g., phrases) by randomly removing filtered pieces during optimization to find the best predictive terms over the course of training. We divided the labeled input data into training and evaluation sets to successively test for the best word embedding predictors. The trained model can then be applied for binary classification of text content.
Appendix V: Hashtag Table Sorted by Average Word Happiness | Yes |
d653d994ef914d76c7d4011c0eb7873610ad795f | d653d994ef914d76c7d4011c0eb7873610ad795f_0 | Q: How were breast cancer related posts compiled from the Twitter streaming API?
Text: Introduction
Twitter has shown potential for monitoring public health trends, BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , disease surveillance, BIBREF6 , and providing a rich online forum for cancer patients, BIBREF7 . Social media has been validated as an effective educational and support tool for breast cancer patients, BIBREF8 , as well as for generating awareness, BIBREF9 . Successful supportive organizations use social media sites for patient interaction, public education, and donor outreach, BIBREF10 . The advantages, limitations, and future potential of using social media in healthcare have been thoroughly reviewed, BIBREF11 . Our study aims to investigate tweets mentioning “breast” and “cancer” to analyze patient populations and selectively obtain content relevant to patient treatment experiences.
Our previous study, BIBREF0 , collected tweets mentioning “cancer” over several months to investigate the potential for monitoring self-reported patient treatment experiences. Non-relevant tweets (e.g. astrological and horoscope references) were removed and the study identified a sample of 660 tweets from patients who were describing their condition. These self-reported diagnostic indicators allowed for a sentiment analysis of tweets authored by patients. However, this process was tedious, since the samples were hand verified and sifted through multiple keyword searches. Here, we aim to automate this process with machine learning context classifiers in order to build larger sets of patient self-reported outcomes and quantify the patient experience.
Patients with breast cancer represent a majority of people affected by and living with cancer. As such, it becomes increasingly important to learn from their experiences and understand their journey from their own perspective. The collection and analysis of invisible patient reported outcomes (iPROs) offers a unique opportunity to better understand the patient perspective of care and identify gaps meeting particular patient care needs.
Data Description
Twitter provides a free streaming Application Programming Interface (API), BIBREF12 , for researchers and developers to mine samples of public tweets. Language processing and data mining, BIBREF13 , was conducted using the Python programming language. The free public API allows targeted keyword mining of up to 1% of Twitter's full volume at any given time, referred to as the `Spritzer Feed'.
We collected tweets from two distinct Spritzer endpoints from September 15th, 2016 through December 9th, 2017. The primary feed for the analysis collected INLINEFORM0 million tweets containing the keywords `breast' AND `cancer'. See Figure FIGREF2 for detailed Twitter frequency statistics along with the user activity distribution. Our secondary feed searched just for the keyword `cancer' which served as a comparison ( INLINEFORM1 million tweets, see Appendix 1), and helped us collect additional tweets relevant to cancer from patients. The numeric account ID provided in tweets helps to distinguish high frequency tweeting entities.
Sentence classification combines natural language processing (NLP) with machine learning to identify trends in sentence structure, BIBREF14 , BIBREF15 . Each tweet is converted to a numeric word vector in order to identify distinguishing features by training an NLP classifier on a validated set of relevant tweets. The classifier acts as a tool to sift through ads, news, and comments not related to patients. Our scheme combines a logistic regression classifier, BIBREF16 , with a Convolutional Neural Network (CNN), BIBREF17 , BIBREF18 , to identify self-reported diagnostic tweets.
It is important to be wary of automated accounts (e.g. bots, spam) whose large output of tweets pollute relevant organic content, BIBREF19 , and can distort sentiment analyses, BIBREF20 . Prior to applying sentence classification, we removed tweets containing hyperlinks to remove automated content (some organic content is necessarily lost with this strict constraint).
The user tweet distribution in Figure FIGREF2 shows the number of users as a function of the number of their tweets we collected. With an average frequency of INLINEFORM0 tweets per user, this is a relatively healthy activity distribution. High frequency tweeting accounts are present in the tail, with a single account producing over 12,000 tweets; this automated account served as a support tool called `ClearScan' for patients in recovery. Approximately 98% of the 2.4 million users shared fewer than 10 posts, which accounted for 70% of all sampled tweets.
The Twitter API also provided the number of tweets withheld from our sample, due to rate limiting. Using these overflow statistics, we estimated the sampled proportion of tweets mentioning these keywords. These targeted feeds were able to collect a large sample of all tweets mentioning these terms; approximately 96% of tweets mentioning `breast' AND `cancer' and 65.2% of all tweets mentioning `cancer' while active. More information regarding the types of Twitter endpoints and calculating the sampling proportion of collected tweets is described in Appendix II.
Our goal was to analyze content authored only by patients. To help ensure this outcome, we removed posts containing a URL prior to classification, BIBREF19 . Twitter allows users to spread content from other users via `retweets'; we removed these posts as well to isolate tweets authored by patients. We also accounted for non-relevant astrological content by removing all tweets containing any of the following horoscope indicators: `astrology',`zodiac',`astronomy',`horoscope',`aquarius',`pisces',`aries',`taurus',`leo',`virgo',`libra', and `scorpio'. We preprocessed tweets by lowercasing and removing punctuation, and we only analyzed tweets for which Twitter had identified `en' (English) as the language.
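A minimal Python sketch of this filtering pipeline is given below. It is an illustration, not the authors' code: the field names (`text`, `lang`, `retweeted_status`, `entities`) assume the standard Twitter v1.1 streaming payload, and the keyword set mirrors the horoscope indicators listed above.

```python
import string

HOROSCOPE_TERMS = {"astrology", "zodiac", "astronomy", "horoscope", "aquarius", "pisces",
                   "aries", "taurus", "leo", "virgo", "libra", "scorpio"}

def keep_tweet(tweet):
    """Return True if a decoded streaming-API tweet passes the filters described above."""
    if tweet.get("lang") != "en":                        # keep English tweets only
        return False
    if "retweeted_status" in tweet:                      # drop retweets
        return False
    if tweet.get("entities", {}).get("urls"):            # drop posts containing a URL
        return False
    text = tweet.get("text", "").lower()
    return not any(term in text for term in HOROSCOPE_TERMS)  # drop astrological content

def normalize(text):
    """Lowercase and strip punctuation prior to classification."""
    return text.lower().translate(str.maketrans("", "", string.punctuation))
```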
Sentiment Analysis and Hedonometrics
We evaluated tweet sentiments with hedonometrics, BIBREF21 , BIBREF22 , using LabMT, a labeled set of 10,000 frequently occurring words rated on a `happiness' scale by individuals contracted through Amazon Mechanical Turk, a crowd-sourced survey tool. These happiness scores helped quantify the average emotional rating of text by totaling the scores from applicable words and normalizing by their total frequency. Hence, the average happiness score, INLINEFORM0 , of a corpus with INLINEFORM1 words in common with LabMT was computed with the weighted arithmetic mean of each word's frequency, INLINEFORM2 , and associated happiness score, INLINEFORM3 : DISPLAYFORM0
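The display equation itself (DISPLAYFORM0) was lost in extraction; based on the description above, the weighted arithmetic mean takes the standard hedonometer form (notation ours):

$$ h_{\text{avg}}(T) \;=\; \frac{\sum_{i=1}^{N} h_{\text{avg}}(w_i)\, f_i}{\sum_{i=1}^{N} f_i}, $$

where $f_i$ is the frequency of word $w_i$ in the text $T$ and $h_{\text{avg}}(w_i)$ is its LabMT happiness score.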
The average happiness of each word was rated on a 9 point scale ranging from extremely negative (e.g., `emergency' 3.06, `hate' 2.34, `die' 1.74) to positive (e.g., `laughter' 8.50, `love' 8.42, `healthy' 8.02). Neutral `stop words' ( INLINEFORM0 , e.g., `of', `the', etc.) were removed to enhance the emotional signal of each set of tweets. These high frequency, low sentiment words can dampen a signal, so their removal can help identify hidden trends. One application is to plot INLINEFORM1 as a function of time. The happiness time-series can provide insight into the emotional content driving a text. In particular, peaks and dips (i.e., large deviations from the average) can help identify interesting themes that may be overlooked in the frequency distribution. Calculated scores can give us comparative insight into the context between sets of tweets.
“Word shift graphs” introduced in, BIBREF21 , compare the terms contributing to shifts in a computed word happiness from two term frequency distributions. This tool is useful in isolating emotional themes from large sets of text and has been previously validated in monitoring public opinion, BIBREF23 as well as for geographical sentiment comparative analyses, BIBREF24 . See Appendix III for a general description of word shift graphs and how to interpret them.
Relevance Classification: Logistic Model and CNN Architecture
We began by building a validated training set of tweets for our sentence classifier. We compiled the patient tweets verified by, BIBREF0 , to train a logistic regression content relevance classifier using a similar framework as, BIBREF16 . To test the classifier, we compiled over 5 million tweets mentioning the word cancer from a 10% `Gardenhose' random sample of Twitter spanning January through December 2015. See Appendix 1 for a statistical overview of this corpus.
We tested a maximum entropy logistic regression classifier using a similar scheme as, BIBREF16 . NLP classifiers operate by converting sentences to word vectors for identifying key characteristics — the vocabulary of the classifier. Within the vocabulary, weights were assigned to each word based upon a frequency statistic. We used the term frequency crossed with the inverse document frequency (tf-idf), as described in , BIBREF16 . The tf-idf weights helped distinguish each term's relative weight across the entire corpus, instead of relying on raw frequency. This statistic dampens highly frequent non-relevant words (e.g. `of', `the', etc.) and enhances relatively rare yet informative terms (e.g. survivor, diagnosed, fighting). This method is commonly implemented in information retrieval for text mining, BIBREF25 . The logistic regression context classifier then performs a binary classification of the tweets we collected from 2015. See Appendix IV for an expanded description of the sentence classification methodology.
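A hedged sketch of this relevance classifier is shown below using scikit-learn; the library choice, hyperparameters, and variable names are our assumptions, since the paper only specifies tf-idf features feeding a logistic regression.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# train_texts / train_labels: preprocessed tweets and binary relevance labels (placeholders)
relevance_clf = Pipeline([
    ("tfidf", TfidfVectorizer()),                  # tf-idf weighting of the tweet vocabulary
    ("logreg", LogisticRegression(max_iter=1000)), # maximum entropy / logistic regression
])
relevance_clf.fit(train_texts, train_labels)

# Binary relevance prediction for the 2015 `cancer' tweets (placeholder variable)
predicted_relevant = relevance_clf.predict(tweets_2015)
```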
We validated the logistic model's performance by manually verifying 1,000 tweets that were classified as `relevant'. We uncovered three categories of immediate interest including: tweets authored by patients regarding their condition (21.6%), tweets from friends/family with a direct connection to a patient (21.9%), and survivors in remission (8.8%). We also found users posting diagnostic related inquiries (7.6%) about possible symptoms that could be linked to breast cancer, or were interested in receiving preventative check-ups. The rest (40.2%) were related to `cancer', but not to patients, and included public service updates as well as non-patient authored content (e.g., support groups). We note that the classifier was trained on very limited validated data (N=660), which certainly impacted the results. We used this validated annotated set of tweets to train a more sophisticated classifier to uncover self-diagnostic tweets from users describing their personal breast cancer experiences as current patients or survivors.
We implemented the Convolutional Neural Network (CNN) with Google's Tensorflow interface, BIBREF26 . We adapted our framework from, BIBREF18 , but instead trained the CNN on these 1000 labeled cancer related tweets. The trained CNN was applied to predict patient self-diagnostic tweets from our breast cancer dataset. The CNN outputs a binary value: positive for a predicted tweet relevant to patients or survivors and negative for these other described categories (patient connected, unrelated, diagnostic inquiry). The Tensorflow CNN interface reported a INLINEFORM0 accuracy when evaluating this set of labels with our trained model. These labels were used to predict self-reported diagnostic tweets relevant to breast cancer patients.
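The CNN architecture is not detailed beyond the citation to BIBREF18 (a Kim-style sentence CNN), so the following TensorFlow/Keras sketch is illustrative rather than the authors' exact model; all layer sizes and filter widths are arbitrary choices on our part.

```python
import tensorflow as tf

def build_sentence_cnn(vocab_size, max_len, embed_dim=128, num_filters=100):
    """Illustrative Kim-style CNN for binary classification of tokenized tweets."""
    inputs = tf.keras.Input(shape=(max_len,), dtype="int32")
    x = tf.keras.layers.Embedding(vocab_size, embed_dim)(inputs)
    pooled = []
    for width in (3, 4, 5):  # convolution filters over word n-grams of several widths
        c = tf.keras.layers.Conv1D(num_filters, width, activation="relu")(x)
        pooled.append(tf.keras.layers.GlobalMaxPooling1D()(c))
    x = tf.keras.layers.Concatenate()(pooled)
    x = tf.keras.layers.Dropout(0.5)(x)  # randomly drop features during training
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```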
Results
A set of 845 breast cancer patient self-diagnostic Twitter profiles was compiled by implementing our logistic model followed by prediction with the trained CNN on 9 months of tweets. The logistic model sifted 4,836 relevant tweets of which 1,331 were predicted to be self-diagnostic by the CNN. Two independent groups annotated the 1,331 tweets to identify patients and evaluate the classifier's results. The raters, showing high inter-rater reliability, individually evaluated each tweet as self-diagnostic of a breast cancer patient or survivor. The raters' independent annotations had a 96% agreement.
The classifier correctly identified 1,140 tweets (85.6%) from 845 profiles. A total of 48,113 tweets from these accounts were compiled from both the `cancer' (69%) and `breast' AND `cancer' (31%) feeds. We provided tweet frequency statistics in Figure FIGREF7 . This indicates that this population of breast cancer patients and survivors is actively tweeting about topics related to `cancer', including their experiences and complications.
Next, we applied hedonometrics to compare the patient posts with all collected breast cancer tweets. We found that the surveyed patient tweets were less positive than breast cancer reference tweets. In Figure FIGREF8 , the time series plots show the computed average word happiness at monthly and daily resolutions. The daily happiness scores (small markers) have a high fluctuation, especially within the smaller patient sample (average 100 tweets/day) compared to the reference distribution (average 10,000 tweets/day). The monthly calculations (larger markers) highlight the negative shift in average word happiness between the patients and reference tweets. Large fluctuations in computed word happiness correspond to noteworthy events, including breast cancer awareness month in October, cancer awareness month in February, as well as political debate regarding healthcare in March, May, and July 2017.
In Figure FIGREF9 , word shift graphs display the top 50 words responsible for the shift in computed word happiness between distributions. On the left, tweets from patients were compared to all collected breast cancer tweets. Patient tweets, INLINEFORM0 , were less positive ( INLINEFORM1 v. INLINEFORM2 ) than the reference distribution, INLINEFORM3 . There were relatively fewer of the positive words `mom', `raise', `awareness', `women', `daughter', `pink', and `life' as well as an increase in the negative words `no(t)', `patients', `dying', `killing', `surgery', `sick', `sucks', and `bill'. Breast cancer awareness month, occurring in October, tends to be a high frequency period with generally more positive and supportive tweets from the general public, which may account for some of the negative shift. Notably, there was a relative increase of the positive words `me', `thank', `you', `love', and `like', which may indicate that many tweet contexts were from the patient's perspective regarding positive experiences. Many tweets regarding treatment were enthusiastic, supportive, and proactive. Other posts were descriptive: over 165 sampled patient tweets mentioned personal chemotherapy experiences and details regarding their treatment schedules and side effects.
Numerous patients and survivors in our sample had identified their condition in reference to the American healthcare regulation debate. Many sampled views of the proposed legislation were very negative, since repealing the Affordable Care Act without replacement could leave many uninsured. Other tweets mentioned worries regarding insurance premiums and costs for patients' and survivors' continued screening. In particular, the pre-existing condition mandate was a chief concern for patients'/survivors' future coverage. This was echoed by 55 of the sampled patients with the hashtag #iamapreexistingcondition (See Table TABREF10 ).
Hashtags (#) are terms that categorize topics within posts. Table TABREF10 lists the most frequently occurring hashtags from both the sampled patients (right) and the full breast cancer corpus (left). Each entry contains the tweet frequency, number of distinct profiles, and the relative happiness score ( INLINEFORM0 ) for comparisons. Political terms were prevalent in both distributions describing the Affordable Care Act (#aca, #obamacare, #saveaca, #pretectourcare) and the newly introduced American Healthcare Act (#ahca, #trumpcare). A visual representation of these hashtags is displayed using a word-cloud in the Appendix (Figure A4).
Tweets referencing the AHCA were markedly more negative than those referencing the ACA. This shift was investigated in Figure FIGREF9 with a word shift graph. We compared American Healthcare Act tweets, INLINEFORM0 , to posts mentioning the Affordable Care Act, INLINEFORM1 . AHCA tweets were relatively more negative ( INLINEFORM2 v. INLINEFORM3 ) due to an increase of negatively charged words `scared', `lose', `tax', `zombie', `defects', `cut', `depression', `killing', and `worse'. These were references to the bill leaving many patients/survivors without insurance and jeopardizing future treatment options. `Zombie' referenced the bill's potential return for subsequent votes.
Discussion
We have demonstrated the potential of using sentence classification to isolate content authored by breast cancer patients and survivors. Our novel, multi-step sifting algorithm helped us differentiate topics relevant to patients and compare their sentiments to the global online discussion. The hedonometric comparison of frequent hashtags helped identify prominent topics and how their sentiments differed. This shows that the ambient happiness scores of terms and topics can provide useful information regarding comparative emotionally charged content. This process can be applied to disciplines across health care and beyond.
Throughout 2017, healthcare was identified as a pressing issue causing anguish and fear among the breast cancer community, especially among patients and survivors. During this time frame, US legislation was proposed by Congress that could roll back regulations ensuring coverage for individuals with pre-existing conditions. Many individuals identifying as current breast cancer patients/survivors expressed concerns over future treatment and potential loss of their healthcare coverage. Twitter could provide a useful political outlet for patient populations to connect with legislators and sway political decisions.
March 2017 was a relatively negative month due to discussions over American healthcare reform. The American Congress held a vote to repeal the Affordable Care Act (ACA, also referred to as `Obamacare'), which could potentially leave many Americans without healthcare insurance, BIBREF27 . There was an overwhelming sense of apprehension within the `breast cancer' tweet sample. Many patients/survivors in our diagnostic tweet sample identified their condition and how the ACA ensured coverage throughout their treatment.
This period featured a notable tweet frequency spike, comparable to the peak during breast cancer awareness month. The burst event peaked on March 23rd and 24th (65k, 57k tweets respectively, see Figure FIGREF2 ). During the peak, 41,983 (34%) posts contained `care' in reference to healthcare, with a viral retweeted meme accounting for 39,183 of these mentions. The tweet read: "The group proposing to cut breast cancer screening, maternity care, and contraceptive coverage." with an embedded photo of a group of predominately male legislators, BIBREF28 . The criticism referenced the absence of female representation in a decision that could deprive many of coverage for breast cancer screenings. The online community condemned the decision to repeal and replace the ACA with the proposed legislation with references to people in treatment who could `die' (n=7,923) without appropriate healthcare insurance coverage. The vote was later postponed and eventually failed, BIBREF29 .
Public outcry likely influenced this legal outcome, demonstrating Twitter's innovative potential as a support tool for public lobbying of health benefits. Twitter can further be used to remind, motivate, and change individual and population health behavior using messages of encouragement (translated to happiness) or dissatisfaction (translated to diminished happiness), for example, with memes that can have knock-on social consequences when they are re-tweeted. Furthermore, Twitter may someday be used to benchmark treatment decisions to align with expressed patient sentiments, and to make or change clinical recommendations based upon the trend histories that evolve with identifiable sources but are entirely in the public domain.
Analyzing the fluctuation in average word happiness as well as bursts in the frequency distributions can help identify relevant events for further investigation. These tools helped us extract themes relevant to breast cancer patients in comparison to the global conversation.
One area in which Twitter has traditionally fallen short for a communication medium is that of the aural dimension, such as nuances and inflections. However, Twitter now includes pictures, videos and emojis with people revealing or conveying their emotions by use of these communication methods. It is envisaged that the aural and visual dimensions will eventually grow to complement the published text component towards a more refined understanding of feelings, attitudes and health and clinical sentiments.
Lack of widespread patient adoption of social media could be a limiting factor to our analysis. A study of breast cancer patients during 2013–2014, BIBREF30 , found social media was a less prominent form of online communication (N = 2578, 12.3%); however, with the advent of smartphones and the internet of things (IoT) movement, social media may influence a larger proportion of future patients. Another finding noted that online posts were more likely to be positive about their healthcare decision experience or about survivorship. Therefore we cannot at this time concretely draw population-based conclusions from social media sampling. Nevertheless, understanding this online patient community could serve as a valuable tool for healthcare providers, and future studies should investigate current social media usage statistics across patients.
Because we trained the content classifier with a relatively small corpus, the model likely over-fit on a few particular word embeddings. For example: `i have stage iv', `i am * survivor', `i had * cancer'. However, this is similar to the process of recursive keyword searches to gather related content. Also, the CNN allows for variable lexical patterns as opposed to searching for static phrases (`i have breast cancer', `i am a survivor'). The CNN shows great promise in sifting relevant context from large sets of data.
Other social forums for patient self reporting and discussion should be incorporated into future studies. For example, as of 2017, https://community.breastcancer.org has built a population of over 199,000 members spanning 145,000 topics. These tools could help connect healthcare professionals with motivated patients. Labeled posts from patients could also help train future context models and help identify adverse symptoms shared among online social communities.
Our study focused primarily on English tweets, since this was the language of our diagnostic training sample. Future studies could incorporate other languages using our proposed framework. It would be important to also expand the API queries with translations of `breast' and `cancer'. This could allow for a cross cultural comparison of how social media influences patients and what patients express on social media.
Conclusion
We have demonstrated the potential of using context classifiers for identifying diagnostic tweets related to the experience of breast cancer patients. Our framework provides a proof of concept for integrating machine learning with natural language processing as a tool to help connect healthcare providers with patient experiences. These methods can inform the medical community to provide more personalized treatment regimens by evaluating patient satisfaction using social listening. Twitter has also been shown as a useful medium for political support of healthcare policies as well as spreading awareness. Applying these analyses across other social media platforms could provide comparably rich data-sets. For instance, Instagram has been found to contain indicative markers for depression, BIBREF31 . Integrating these applications into our healthcare system could provide a better means of tracking iPROs across treatment regimens and over time.
One area in which Twitter has traditionally fallen short for a communication medium is that of the aural dimension, such as nuances and inflections. However, Twitter now includes pictures, videos, and emojis with people revealing or conveying their emotions by use of these communication methods. With augmented reality, virtual reality, and even chatbot interfaces, it is envisaged that the aural and visual dimensions will eventually grow to complement the published text component towards a more refined understanding of feelings, attitudes and health and clinical sentiments.
Follow-on studies to our work could further develop these models and apply them to larger streams of data. Online crowd sourcing tools, like Amazon's Mechanical Turk, implemented in, BIBREF22 , can help compile larger sets of human validated labels to improve context classifiers. These methods can also be integrated into delivering online outreach surveys as another tool for validating healthcare providers. Future models, trained on several thousand labeled tweets for various real world applications, should be explored. Invisible patient-reported outcomes should be further investigated via sentiment and context analyses for a better understanding of how to integrate the internet of things with healthcare.
Twitter has become a powerful platform for amplifying political voices of individuals. The response of the online breast cancer community to the American Healthcare Act as a replacement to the Affordable Care Act was largely negative due to concerns over loss of coverage. A widespread negative public reaction helped influence this political result. Social media opinion mining could present as a powerful tool for legislators to connect with and learn from their constituents. This can lead to positive impacts on population health and societal well-being.
Acknowledgments
The authors wish to acknowledge the Vermont Advanced Computing Core, which is supported by NASA (NNX-08AO96G) at the University of Vermont which provided High Performance Computing resources that contributed to the research results reported within this poster. EMC was supported by the Vermont Complex Systems Center. CMD and PSD were supported by an NSF BIGDATA grant IIS-1447634.
Appendix II: Calculating the Tweet Sampling Proportion
There are three types of endpoints to access data from Twitter. The `spritzer' (1%) and `gardenhose' (10%) endpoints were both implemented to collect publicly posted relevant data for our analysis. The third type of endpoint is the `Firehose' feed, a full 100% sample, which can be purchased via subscription from Twitter. This was unnecessary for our analysis, since our set of keywords yielded a high proportion of the true tweet sample. We quantified the sampled proportion of tweets using overflow statistics provided by Twitter. These `limit tweets', INLINEFORM0 , issue a timestamp along with the approximate number of posts withheld from our collected sample, INLINEFORM1 . The sampling percentage, INLINEFORM2 , of keyword tweets is approximated as the collected tweet total, INLINEFORM3 , as a proportion of itself combined with the sum of the limit counts, each INLINEFORM4 : DISPLAYFORM0
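The display equation (DISPLAYFORM0) did not survive extraction; from the description above, the sampling percentage takes the form (notation ours):

$$ p \;=\; \frac{T}{T + \sum_{i} L_i} \times 100\%, $$

where $T$ is the number of collected tweets and each $L_i$ is a withheld-tweet count reported in the limit messages.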
By the end of 2017, Twitter was accumulating an average of 500 million tweets per day, BIBREF32 . Our topics were relatively specific, which allowed us to collect a large sample of tweets. For the singular search term, `cancer', the keyword sampled proportion, INLINEFORM0 , was approximately 65.21% with a sample of 89.2 million tweets. Our separate Twitter spritzer feed searching for the keywords `breast' AND `cancer' OR `lymphedema' rarely surpassed the 1% limit. We calculated a 96.1% sampling proportion while our stream was active (i.e. not accounting for network or power outages). We present the daily overflow limit counts of tweets not appearing in our data-set, and the approximation of the sampling size, in Figure A2.
Appendix III: Interpreting Word Shift Graphs
Word shift graphs are essential tools for analyzing which terms are affecting the computed average happiness scores between two text distributions, BIBREF33 . The reference word distribution, INLINEFORM0 , serves as a lingual basis to compare with another text, INLINEFORM1 . The top 50 words causing the shift in computed word happiness are displayed along with their relative weight. The arrows ( INLINEFORM2 ) next to each word mark an increase or decrease in the word's frequency. The INLINEFORM3 , INLINEFORM4 , symbols indicate whether the word contributes positively or negatively to the shift in computed average word happiness.
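The per-word contribution formula is not reproduced here; in the standard formulation of word shift graphs (our summary of BIBREF21 / BIBREF33, so treat the notation as ours rather than the authors'), word $i$'s contribution to the happiness shift is proportional to

$$ \delta h_i \;\propto\; \bigl(h_i - h^{(\mathrm{ref})}_{\mathrm{avg}}\bigr)\bigl(p_i^{(\mathrm{comp})} - p_i^{(\mathrm{ref})}\bigr), $$

where $p_i$ is the word's normalized frequency in the comparison or reference text; the sign of each factor determines the frequency arrows and the positive/negative contribution symbols described above.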
In Figure A3, word shift graphs compare tweets mentioning `breast' `cancer' and a random 10% `Gardenhose' sample of non filtered tweets. On the left, `breast', `cancer' tweets were slightly less positive due to an increase in negative words like `fight', `battle', `risk', and `lost'. These distributions had similar average happiness scores, which was in part due to the relatively more positive words `women', `mom', `raise', `awareness', `save', `support', and `survivor'. The word shift on the right compares breast cancer patient tweets to non filtered tweets. These were more negative ( INLINEFORM0 = 5.78 v. 6.01) due to a relative increase in words like `fighting', `surgery', `against', `dying', `sick', `killing', `radiation', and `hospital'. This tool helped identify words that signal emotional themes, allowing us to extract content from large corpora and identify thematic emotional topics within the data.
Appendix IV: Sentence Classification Methodology
We built the vocabulary corpus for the logistic model by tokenizing the annotated set of patient tweets by word, removing punctuation, and lowercasing all text. We also included patient-unrelated `cancer' tweets collected as a frame of reference to train the classifier. This set of tweets was not annotated, so we made the assumption that tweets not validated by, BIBREF0 , were patient unrelated. The proportion, INLINEFORM0 , of unrelated to related tweets has a profound effect on the vocabulary of the logistic model, so we experimented with various ranges of INLINEFORM1 and settled on a 1:10 ratio of patient-related to unrelated tweets. We then applied the tf-idf statistic to build the binary classification logistic model.
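A small sketch of assembling the training corpus with this 1:10 class ratio is shown below; the sampling routine and variable names are our own placeholders rather than the authors' code.

```python
import random

def build_training_corpus(related, unrelated, ratio=10, seed=0):
    """Combine patient-related tweets with `ratio` times as many unrelated tweets."""
    random.seed(seed)
    n_unrelated = min(len(unrelated), ratio * len(related))
    sampled_unrelated = random.sample(unrelated, n_unrelated)
    texts = related + sampled_unrelated
    labels = [1] * len(related) + [0] * n_unrelated  # 1 = patient-related, 0 = unrelated
    return texts, labels
```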
The Tensorflow open source machine learning library has previously shown great promise when applied to NLP benchmark data-sets, BIBREF17 . The CNN loosely works by implementing a filter, called convolution functions, across various subregions of the feature landscape, BIBREF34 , BIBREF35 , in this case the tweet vocabulary. The model tests the robustness of different word embeddings (e.g., phrases) by randomly removing filtered pieces during optimization to find the best predictive terms over the course of training. We divided the input labeled data into training and evaluation to successively test for the best word embedding predictors. The trained model can then be applied for binary classification of text content.
Appendix V: Hashtag Table Sorted by Average Word Happiness | By using the keywords `breast' AND `cancer' in the tweet collection process.
|
880a76678e92970791f7c1aad301b5adfc41704f | 880a76678e92970791f7c1aad301b5adfc41704f_0 | Q: What machine learning and NLP methods were used to sift tweets relevant to breast cancer experiences?
Text: Introduction
Twitter has shown potential for monitoring public health trends, BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , disease surveillance, BIBREF6 , and providing a rich online forum for cancer patients, BIBREF7 . Social media has been validated as an effective educational and support tool for breast cancer patients, BIBREF8 , as well as for generating awareness, BIBREF9 . Successful supportive organizations use social media sites for patient interaction, public education, and donor outreach, BIBREF10 . The advantages, limitations, and future potential of using social media in healthcare has been thoroughly reviewed, BIBREF11 . Our study aims to investigate tweets mentioning “breast” and “cancer" to analyze patient populations and selectively obtain content relevant to patient treatment experiences.
Our previous study, BIBREF0 , collected tweets mentioning “cancer” over several months to investigate the potential for monitoring self-reported patient treatment experiences. Non-relevant tweets (e.g. astrological and horoscope references) were removed and the study identified a sample of 660 tweets from patients who were describing their condition. These self-reported diagnostic indicators allowed for a sentiment analysis of tweets authored by patients. However, this process was tedious, since the samples were hand verified and sifted through multiple keyword searches. Here, we aim to automate this process with machine learning context classifiers in order to build larger sets of patient self-reported outcomes in order to quantify the patent experience.
Patients with breast cancer represent a majority of people affected by and living with cancer. As such, it becomes increasingly important to learn from their experiences and understand their journey from their own perspective. The collection and analysis of invisible patient reported outcomes (iPROs) offers a unique opportunity to better understand the patient perspective of care and identify gaps meeting particular patient care needs.
Data Description
Twitter provides a free streaming Application Programming Interface (API), BIBREF12 , for researchers and developers to mine samples of public tweets. Language processing and data mining, BIBREF13 , was conducted using the Python programming language. The free public API allows targeted keyword mining of up to 1% of Twitter's full volume at any given time, referred to as the `Spritzer Feed'.
We collected tweets from two distinct Spritzer endpoints from September 15th, 2016 through December 9th, 2017. The primary feed for the analysis collected INLINEFORM0 million tweets containing the keywords `breast' AND `cancer'. See Figure FIGREF2 for detailed Twitter frequency statistics along with the user activity distribution. Our secondary feed searched just for the keyword `cancer' which served as a comparison ( INLINEFORM1 million tweets, see Appendix 1), and helped us collect additional tweets relevant to cancer from patients. The numeric account ID provided in tweets helps to distinguish high frequency tweeting entities.
Sentence classification combines natural language processing (NLP) with machine learning to identify trends in sentence structure, BIBREF14 , BIBREF15 . Each tweet is converted to a numeric word vector in order to identify distinguishing features by training an NLP classifier on a validated set of relevant tweets. The classifier acts as a tool to sift through ads, news, and comments not related to patients. Our scheme combines a logistic regression classifier, BIBREF16 , with a Convolutional Neural Network (CNN), BIBREF17 , BIBREF18 , to identify self-reported diagnostic tweets.
It is important to be wary of automated accounts (e.g. bots, spam) whose large output of tweets pollute relevant organic content, BIBREF19 , and can distort sentiment analyses, BIBREF20 . Prior to applying sentence classification, we removed tweets containing hyperlinks to remove automated content (some organic content is necessarily lost with this strict constraint).
The user tweet distribution in Figure FIGREF2 , shows the number of users as a function of the number of their tweets we collected. With an average frequency of INLINEFORM0 tweets per user, this is a relatively healthy activity distribution. High frequency tweeting accounts are present in the tail, with a single account producing over 12,000 tweets —an automated account served as a support tool called `ClearScan' for patients in recovery. Approximately 98% of the 2.4 million users shared less than 10 posts, which accounted for 70% of all sampled tweets.
The Twitter API also provided the number of tweets withheld from our sample, due to rate limiting. Using these overflow statistics, we estimated the sampled proportion of tweets mentioning these keywords. These targeted feeds were able to collect a large sample of all tweets mentioning these terms; approximately 96% of tweets mentioning “breast,cancer” and 65.2% of all tweets mentioning `cancer' while active. More information regarding the types of Twitter endpoints and calculating the sampling proportion of collected tweets is described in Appendix II.
Our goal was to analyze content authored only by patients. To help ensure this outcome we removed posts containing a URL for classification, BIBREF19 . Twitter allows users to spread content from other users via `retweets'. We also removed these posts prior to classification to isolate tweets authored by patients. We also accounted for non-relevant astrological content by removing all tweets containing any of the following horoscope indicators: `astrology',`zodiac',`astronomy',`horoscope',`aquarius',`pisces',`aries',`taurus',`leo',`virgo',`libra', and `scorpio'. We preprocessed tweets by lowercasing and removing punctuation. We also only analyzed tweets for which Twitter had identified `en' for the language English.
Sentiment Analysis and Hedonometrics
We evaluated tweet sentiments with hedonometrics, BIBREF21 , BIBREF22 , using LabMT, a labeled set of 10,000 frequently occurring words rated on a `happiness' scale by individuals contracted through Amazon Mechanical Turk, a crowd-sourced survey tool. These happiness scores helped quantify the average emotional rating of text by totaling the scores from applicable words and normalizing by their total frequency. Hence, the average happiness score, INLINEFORM0 , of a corpus with INLINEFORM1 words in common with LabMT was computed with the weighted arithmetic mean of each word's frequency, INLINEFORM2 , and associated happiness score, INLINEFORM3 : DISPLAYFORM0
The average happiness of each word was rated on a 9 point scale ranging from extremely negative (e.g., `emergency' 3.06, `hate' 2.34, `die' 1.74) to positive (e.g., `laughter' 8.50, `love' 8.42, `healthy' 8.02). Neutral `stop words' ( INLINEFORM0 , e.g., `of','the', etc.) were removed to enhance the emotional signal of each set of tweets. These high frequency, low sentiment words can dampen a signal, so their removal can help identify hidden trends. One application is to plot INLINEFORM1 as a function of time. The happiness time-series can provide insight driving emotional content in text. In particular, peak and dips (i.e., large deviations from the average) can help identify interesting themes that may be overlooked in the frequency distribution. Calculated scores can give us comparative insight into the context between sets of tweets.
“Word shift graphs” introduced in, BIBREF21 , compare the terms contributing to shifts in a computed word happiness from two term frequency distributions. This tool is useful in isolating emotional themes from large sets of text and has been previously validated in monitoring public opinion, BIBREF23 as well as for geographical sentiment comparative analyses, BIBREF24 . See Appendix III for a general description of word shift graphs and how to interpret them.
Relevance Classification: Logistic Model and CNN Architecture
We began by building a validated training set of tweets for our sentence classifier. We compiled the patient tweets verified by, BIBREF0 , to train a logistic regression content relevance classifier using a similar framework as, BIBREF16 . To test the classifier, we compiled over 5 million tweets mentioning the word cancer from a 10% `Gardenhose' random sample of Twitter spanning January through December 2015. See Appendix 1 for a statistical overview of this corpus.
We tested a maximum entropy logistic regression classifier using a similar scheme as, BIBREF16 . NLP classifiers operate by converting sentences to word vectors for identifying key characteristics — the vocabulary of the classifier. Within the vocabulary, weights were assigned to each word based upon a frequency statistic. We used the term frequency crossed with the inverse document frequency (tf-idf), as described in , BIBREF16 . The tf-idf weights helped distinguish each term's relative weight across the entire corpus, instead of relying on raw frequency. This statistic dampens highly frequent non-relevant words (e.g. `of', `the', etc.) and enhances relatively rare yet informative terms (e.g. survivor, diagnosed, fighting). This method is commonly implemented in information retrieval for text mining, BIBREF25 . The logistic regression context classifier then performs a binary classification of the tweets we collected from 2015. See Appendix IV for an expanded description of the sentence classification methodology.
We validated the logistic model's performance by manually verifying 1,000 tweets that were classified as `relevant'. We uncovered three categories of immediate interest including: tweets authored by patients regarding their condition (21.6%), tweets from friends/family with a direct connection to a patient (21.9%), and survivors in remission (8.8%). We also found users posting diagnostic related inquiries (7.6%) about possible symptoms that could be linked to breast cancer, or were interested in receiving preventative check-ups. The rest (40.2%) were related to `cancer', but not to patients and include public service updates as well as non-patient authored content (e.g., support groups). We note that the classifier was trained on very limited validated data (N=660), which certainly impacted the results. We used this validated annotated set of tweets to train a more sophisticated classifier to uncover self-diagnostic tweets from users describing their personal breast cancer experiences as current patients or survivors.
We implemented the Convolutional Neural Network (CNN) with Google's Tensorflow interface, BIBREF26 . We adapted our framework from, BIBREF18 , but instead trained the CNN on these 1000 labeled cancer related tweets. The trained CNN was applied to predict patient self-diagnostic tweets from our breast cancer dataset. The CNN outputs a binary value: positive for a predicted tweet relevant to patients or survivors and negative for these other described categories (patient connected, unrelated, diagnostic inquiry). The Tensorflow CNN interface reported a INLINEFORM0 accuracy when evaluating this set of labels with our trained model. These labels were used to predict self-reported diagnostic tweets relevant to breast cancer patients.
Results
A set of 845 breast cancer patient self-diagnostic Twitter profiles was compiled by implementing our logistic model followed by prediction with the trained CNN on 9 months of tweets. The logistic model sifted 4,836 relevant tweets of which 1,331 were predicted to be self-diagnostic by the CNN. Two independent groups annotated the 1,331 tweets to identify patients and evaluate the classifier's results. The raters, showing high inter-rater reliability, individually evaluated each tweet as self-diagnostic of a breast cancer patient or survivor. The rater's independent annotations had a 96% agreement.
The classifier correctly identified 1,140 tweets (85.6%) from 845 profiles. A total of 48,113 tweets from these accounts were compiled from both the `cancer' (69%) and `breast' `cancer' (31%) feeds. We provided tweet frequency statistics in Figure FIGREF7 . This is an indicator that this population of breast cancer patients and survivors are actively tweeting about topics related to `cancer' including their experiences and complications.
Next, we applied hedonometrics to compare the patient posts with all collected breast cancer tweets. We found that the surveyed patient tweets were less positive than breast cancer reference tweets. In Figure FIGREF8 , the time series plots computed average word happiness at monthly and daily resolutions. The daily happiness scores (small markers) have a high fluctuation, especially within the smaller patient sample (average 100 tweets/day) compared to the reference distribution (average 10,000 tweets/day). The monthly calculations (larger markers) highlight the negative shift in average word happiness between the patients and reference tweets. Large fluctuations in computed word happiness correspond to noteworthy events, including breast cancer awareness month in October, cancer awareness month in February, as well as political debate regarding healthcare beginning in March May and July 2017.
In Figure FIGREF9 word shift graphs display the top 50 words responsible for the shift in computed word happiness between distributions. On the left, tweets from patients were compared to all collected breast cancer tweets. Patient tweets, INLINEFORM0 , were less positive ( INLINEFORM1 v. INLINEFORM2 ) than the reference distribution, INLINEFORM3 . There were relatively less positive words `mom', `raise', `awareness', `women', `daughter', `pink', and `life' as well as an increase in the negative words `no(t)', `patients, `dying', `killing', `surgery' `sick', `sucks', and `bill'. Breast cancer awareness month, occurring in October, tends to be a high frequency period with generally more positive and supportive tweets from the general public which may account for some of the negative shift. Notably, there was a relative increase of the positive words `me', `thank', `you' ,'love', and `like' which may indicate that many tweet contexts were from the patient's perspective regarding positive experiences. Many tweets regarding treatment were enthusiastic, supportive, and proactive. Other posts were descriptive: over 165 sampled patient tweets mentioned personal chemo therapy experiences and details regarding their treatment schedule, and side effects.
Numerous patients and survivors in our sample had identified their condition in reference to the American healthcare regulation debate. Many sampled views of the proposed legislation were very negative, since repealing the Affordable Care Act without replacement could leave many uninsured. Other tweets mentioned worries regarding insurance premiums and costs for patients and survivors' continued screening. In particular the pre-existing condition mandate was a chief concern of patients/survivors future coverage. This was echoed by 55 of the sampled patients with the hashtag #iamapreexistingcondition (See Table TABREF10 ).
Hashtags (#) are terms that categorize topics within posts. In Table TABREF10 , the most frequently occurring hashtags from both the sampled patients (right) and full breast cancer corpus (left). Each entry contains the tweet frequency, number of distinct profiles, and the relative happiness score ( INLINEFORM0 ) for comparisons. Political terms were prevalent in both distributions describing the Affordable Care Act (#aca, #obamacare, #saveaca, #pretectourcare) and the newly introduced American Healthcare Act (#ahca, #trumpcare). A visual representation of these hashtags are displayed using a word-cloud in the Appendix (Figure A4).
Tweets referencing the AHCA were markedly more negative than those referencing the ACA. This shift was investigated in Figure FIGREF9 with a word shift graph. We compared American Healthcare Act Tweets, INLINEFORM0 , to posts mentioning the Affordable Care Act, INLINEFORM1 . AHCA were relatively more negative ( INLINEFORM2 v. INLINEFORM3 ) due to an increase of negatively charged words `scared', `lose', `tax', `zombie', `defects', `cut', `depression', `killing', and `worse' . These were references to the bill leaving many patients/survivors without insurance and jeopardizing future treatment options. `Zombie' referenced the bill's potential return for subsequent votes.
Discussion
We have demonstrated the potential of using sentence classification to isolate content authored by breast cancer patients and survivors. Our novel, multi-step sifting algorithm helped us differentiate topics relevant to patients and compare their sentiments to the global online discussion. The hedonometric comparison of frequent hashtags helped identify prominent topics how their sentiments differed. This shows the ambient happiness scores of terms and topics can provide useful information regarding comparative emotionally charged content. This process can be applied to disciplines across health care and beyond.
Throughout 2017, Healthcare was identified as a pressing issue causing anguish and fear among the breast cancer community; especially among patients and survivors. During this time frame, US legislation was proposed by Congress that could roll back regulations ensuring coverage for individuals with pre-existing conditions. Many individuals identifying as current breast cancer patients/survivors expressed concerns over future treatment and potential loss of their healthcare coverage. Twitter could provide a useful political outlet for patient populations to connect with legislators and sway political decisions.
March 2017 was a relatively negative month due to discussions over American healthcare reform. The American Congress held a vote to repeal the Affordable Care Act (ACA, also referred to as `Obamacare'), which could potentially leave many Americans without healthcare insurance, BIBREF27 . There was an overwhelming sense of apprehension within the `breast cancer' tweet sample. Many patients/survivors in our diagnostic tweet sample identified their condition and how the ACA ensured coverage throughout their treatment.
This period featured a notable tweet frequency spike, comparable to the peak during breast cancer awareness month. The burst event peaked on March 23rd and 24th (65k, 57k tweets respectively, see Figure FIGREF2 ). During the peak, 41,983 (34%) posts contained `care' in reference to healthcare, with a viral retweeted meme accounting for 39,183 of these mentions. The tweet read: "The group proposing to cut breast cancer screening, maternity care, and contraceptive coverage." with an embedded photo of a group of predominately male legislators, BIBREF28 . The criticism referenced the absence of female representation in a decision that could deprive many of coverage for breast cancer screenings. The online community condemned the decision to repeal and replace the ACA with the proposed legislation with references to people in treatment who could `die' (n=7,923) without appropriate healthcare insurance coverage. The vote was later postponed and eventually failed, BIBREF29 .
Public outcry likely influenced this legal outcome, demonstrating Twitter's innovative potential as a support tool for public lobbying of health benefits. Twitter can further be used to remind, motivate and change individual and population health behavior using messages of encouragement (translated to happiness) or dissatisfaction (translated to diminished happiness), for example, with memes that can have knock on social consequences when they are re-tweeted. Furthermore, Twitter may someday be used to benchmark treatment decisions to align with expressed patient sentiments, and to make or change clinical recommendations based upon the trend histories that evolve with identifiable sources but are entirely in the public domain.
Analyzing the fluctuation in average word happiness as well as bursts in the frequency distributions can help identify relevant events for further investigation. These tools helped us extract themes relevant to breast cancer patients in comparison to the global conversation.
One area in which Twitter has traditionally fallen short for a communication medium is that of the aural dimension, such as nuances and inflections. However, Twitter now includes pictures, videos and emojis with people revealing or conveying their emotions by use of these communication methods. It is envisaged that the aural and visual dimensions will eventually grow to complement the published text component towards a more refined understanding of feelings, attitudes and health and clinical sentiments.
Lack of widespread patient adoption of social media could be a limiting factor to our analysis. A study of breast cancer patients during 2013–2014, BIBREF30 , found social media was a less prominent form of online communication (N = 2578, 12.3%), however with the advent of smartphones and the internet of things (iot) movement, social media may influence a larger proportion of future patients. Another finding noted that online posts were more likely to be positive about their healthcare decision experience or about survivorship. Therefore we cannot at this time concretely draw population-based assumptions from social media sampling. Nevertheless, understanding this online patient community could serve as a valuable tool for healthcare providers and future studies should investigate current social media usage statistics across patients.
Because we trained the content classifier with a relatively small corpus, the model likely over-fit on a few particular word embeddings. For example: 'i have stage iv', `i am * survivor', `i had * cancer'. However, this is similar to the process of recursive keyword searches to gather related content. Also, the power of the CNN allows for multiple relative lingual syntax as opposed to searching for static phrases ('i have breast cancer', 'i am a survivor'). The CNN shows great promise in sifting relevant context from large sets of data.
Other social forums for patient self reporting and discussion should be incorporated into future studies. For example, as of 2017, https://community.breastcancer.org has built a population of over 199,000 members spanning 145,000 topics. These tools could help connect healthcare professionals with motivated patients. Labeled posts from patients could also help train future context models and help identify adverse symptoms shared among online social communities.
Our study focused primarily on English tweets, since this was the language of our diagnostic training sample. Future studies could incorporate other languages using our proposed framework. It would be important to also expand the API queries with translations of `breast' and `cancer'. This could allow for a cross cultural comparison of how social media influences patients and what patients express on social media.
Conclusion
We have demonstrated the potential of using context classifiers for identifying diagnostic tweets related to the experience of breast cancer patients. Our framework provides a proof of concept for integrating machine learning with natural language processing as a tool to help connect healthcare providers with patient experiences. These methods can inform the medical community to provide more personalized treatment regimens by evaluating patient satisfaction using social listening. Twitter has also been shown as a useful medium for political support of healthcare policies as well as spreading awareness. Applying these analyses across other social media platforms could provide comparably rich data-sets. For instance, Instagram has been found to contain indicative markers for depression, BIBREF31 . Integrating these applications into our healthcare system could provide a better means of tracking iPROs across treatment regimens and over time.
One area in which Twitter has traditionally fallen short for a communication medium is that of the aural dimension, such as nuances and inflections. However, Twitter now includes pictures, videos, and emojis with people revealing or conveying their emotions by use of these communication methods. With augmented reality, virtual reality, and even chatbot interfaces, it is envisaged that the aural and visual dimensions will eventually grow to complement the published text component towards a more refined understanding of feelings, attitudes and health and clinical sentiments.
Follow-on studies to our work could be intended to further develop these models and apply them to larger streams of data. Online crowd sourcing tools, like Amazon's Mechanical Turk, implemented in, BIBREF22 , can help compile larger sets of human validated labels to improve context classifiers. These methods can also be integrated into delivering online outreach surveys as another tool for validating healthcare providers. Future models, trained on several thousand labeled tweets for various real world applications should be explored. Invisible patient- reported outcomes should be further investigated via sentiment and context analyses for a better understanding of how to integrate the internet of things with healthcare.
Twitter has become a powerful platform for amplifying political voices of individuals. The response of the online breast cancer community to the American Healthcare Act as a replacement to the Affordable Care Act was largely negative due to concerns over loss of coverage. A widespread negative public reaction helped influence this political result. Social media opinion mining could present as a powerful tool for legislators to connect with and learn from their constituents. This can lead to positive impacts on population health and societal well-being.
Acknowledgments
The authors wish to acknowledge the Vermont Advanced Computing Core, which is supported by NASA (NNX-08AO96G) at the University of Vermont which provided High Performance Computing resources that contributed to the research results reported within this poster. EMC was supported by the Vermont Complex Systems Center. CMD and PSD were supported by an NSF BIGDATA grant IIS-1447634.
Appendix II: Calculating the Tweet Sampling Proportion
There are three types of endpoints to access data from Twitter. The `spritzer' (1%) and `gardenhose' (10%) endpoints were both implemented to collect publicly posted relevant data for our analysis. The third type of endpoint is the `Firehose' feed, a full 100% sample, which can be purchased via subscription from Twitter. This was unnecessary for our analysis, since our set of keywords yielded a high proportion of the true tweet sample. We quantified the sampled proportion of tweets using overflow statistics provided by Twitter. These `limit tweets', INLINEFORM0 , issue a timestamp along with the approximate number of posts withheld from our collected sample, INLINEFORM1 . The sampling percentage, INLINEFORM2 , of keyword tweets is approximated as the collected tweet total, INLINEFORM3 , as a proportion of itself combined with the sum of the limit counts, each INLINEFORM4 : DISPLAYFORM0
By the end of 2017, Twitter was accumulating an average of 500 million tweets per day, BIBREF32 . Our topics were relatively specific, which allowed us to collect a large sample of tweets. For the singular search term, `cancer', the keyword sampled proportion, INLINEFORM0 , was approximately 65.21% with a sample of 89.2 million tweets. Our separate Twitter spritzer feed searching for keywords `breast AND cancer` OR `lymphedema' rarely surpassed the 1% limit. We calculated a 96.1% sampling proportion while our stream was active (i.e. not accounting for network or power outages). We present the daily overflow limit counts of tweets not appearing in our data-set, and the approximation of the sampling size in Figure A2.
Appendix III: Interpreting Word Shift Graphs
Word shift graphs are essential tools for analyzing which terms are affecting the computed average happiness scores between two text distributions, BIBREF33 . The reference word distribution, INLINEFORM0 , serves as a lingual basis to compare with another text, INLINEFORM1 . The top 50 words causing the shift in computed word happiness are displayed along with their relative weight. The arrows ( INLINEFORM2 ) next to each word mark an increase or decrease in the word's frequency. The INLINEFORM3 , INLINEFORM4 , symbols indicate whether the word contributes positively or negatively to the shift in computed average word happiness.
In Figure A3, word shift graphs compare tweets mentioning `breast' `cancer' and a random 10% `Gardenhose' sample of non filtered tweets. On the left, `breast',`cancer' tweets were slightly less positive due to an increase in negative words like `fight', `battle', `risk', and `lost'. These distributions had similar average happiness scores, which was in part due to the relatively more positive words `women', mom', `raise', `awareness', `save', `support', and `survivor'. The word shift on the right compares breast cancer patient tweets to non filtered tweets. These were more negative ( INLINEFORM0 = 5.78 v. 6.01) due a relative increase in words like `fighting', `surgery', `against', `dying', `sick', `killing', `radiation', and `hospital'. This tool helped identify words that signal emotional themes and allow us to extract content from large corpora, and identify thematic emotional topics within the data.
Appendix IV: Sentence Classification Methodology
We built the vocabulary corpus for the logistic model by tokenizing the annotated set of patient tweets by word, removing punctuation, and lowercasing all text. We also included patient-unrelated `cancer' tweets collected as a frame of reference to train the classifier. This set of tweets was not annotated, so we made the assumption that tweets not validated by BIBREF0 were patient-unrelated. The proportion, INLINEFORM0, of unrelated to related tweets has a profound effect on the vocabulary of the logistic model, so we experimented with various ranges of INLINEFORM1 and settled on a 1:10 ratio of patient-related to unrelated tweets. We then applied the tf-idf statistic to build the binary classification logistic model.
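A minimal sketch of a tf-idf + logistic regression tweet classifier of the kind described above, using scikit-learn; the example tweets and any preprocessing beyond lowercasing are placeholders, not the authors' pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1 = patient-related, 0 = unrelated (the 1:10 class ratio would be enforced upstream)
texts = [
    "just finished my last round of chemo",
    "starting radiation for my breast cancer next week",
    "cancer horoscope says today brings good luck",
    "new study links diet and cancer risk",
]
labels = [1, 1, 0, 0]

model = make_pipeline(
    TfidfVectorizer(lowercase=True),    # word-level tf-idf features over the tweet vocabulary
    LogisticRegression(max_iter=1000),  # binary logistic classifier
)
model.fit(texts, labels)
print(model.predict(["my lymphedema flared up after surgery"]))
```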
The TensorFlow open-source machine learning library has previously shown great promise when applied to NLP benchmark data-sets, BIBREF17. The CNN loosely works by applying filters, called convolution functions, across various subregions of the feature landscape, BIBREF34, BIBREF35, in this case the tweet vocabulary. The model tests the robustness of different word embeddings (e.g., phrases) by randomly removing filtered pieces during optimization to find the best predictive terms over the course of training. We divided the labeled input data into training and evaluation sets to successively test for the best word embedding predictors. The trained model can then be applied for binary classification of text content.
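For concreteness, a compact Keras sketch of a convolutional text classifier in the spirit of the description above; the layer choices, filter sizes, and dropout rate are our assumptions, not the configuration used in the study.

```python
import tensorflow as tf

vocab_size, embed_dim = 20_000, 128  # assumed vocabulary and embedding sizes

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim),
    tf.keras.layers.Conv1D(128, 5, activation="relu"),  # convolution filters over word windows
    tf.keras.layers.GlobalMaxPooling1D(),               # keep the strongest filter response
    tf.keras.layers.Dropout(0.5),                       # randomly drop features during training
    tf.keras.layers.Dense(1, activation="sigmoid"),     # binary: patient-related vs. unrelated
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
```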
Appendix V: Hashtag Table Sorted by Average Word Happiness | ML logistic regression classifier combined with a Convolutional Neural Network (CNN) to identify self-reported diagnostic tweets.
NLP methods: tweet conversion to numeric word vector, removing tweets containing hyperlinks, removing "retweets", removing all tweets containing horoscope indicators, lowercasing and removing punctuation. |
e3dc8689d8db31f04797f515fe224f6075f5cb16 | e3dc8689d8db31f04797f515fe224f6075f5cb16_0 | Q: What kind of events do they extract?
Text: Introduction
The extraction of temporal relations among events is an important natural language understanding (NLU) task that can benefit many downstream tasks such as question answering, information retrieval, and narrative generation. The task can be modeled as building a graph for a given text, whose nodes represent events and edges are labeled with temporal relations correspondingly. Figure FIGREF1 illustrates such a graph for the text shown therein. The nodes assassination, slaughtered, rampage, war, and Hutu are the candidate events, and different types of edges specify different temporal relations between them: assassination is BEFORE rampage, rampage INCLUDES slaughtered, and the relation between slaughtered and war is VAGUE. Since “Hutu” is actually not an event, a system is expected to annotate the relations between “Hutu” and all other nodes in the graph as NONE (i.e., no relation).
As far as we know, all existing systems treat this task as a pipeline of two separate subtasks, i.e., event extraction and temporal relation classification, and they also assume that gold events are given when training the relation classifier BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Specifically, they built end-to-end systems that extract events first and then predict temporal relations between them (Fig. FIGREF1). In these pipeline models, event extraction errors will propagate to the relation classification step and cannot be corrected afterwards. Our first contribution is the proposal of a joint model that extracts both events and temporal relations simultaneously (see Fig. FIGREF1). The motivation is that if we train the relation classifier with NONE relations between non-events, then it will potentially have the capability of correcting event extraction mistakes. For instance in Fig. FIGREF1, if the relation classifier predicts NONE for (Hutu, war) with a high confidence, then this is a strong signal that can be used by the event classifier to infer that at least one of them is not an event.
Our second contribution is that we improve event representations by sharing the same contextualized embeddings and neural representation learner between the event extraction and temporal relation extraction modules for the first time. On top of the shared embeddings and neural representation learner, the proposed model produces a graph-structured output representing all the events and relations in the given sentences. A valid graph prediction in this context should satisfy two structural constraints. First, the temporal relation should always be NONE between two non-events or between one event and one non-event. Second, for those temporal relations among events, no loops should exist due to the transitive property of time (e.g., if A is before B and B is before C, then A must be before C). The validity of a graph is guaranteed by solving an integer linear programming (ILP) optimization problem with those structural constraints, and our joint model is trained by structural support vector machines (SSVM) in an end-to-end fashion.
Results show that, according to the end-to-end $F_1$ score for temporal relation extraction, the proposed method improves CAEVO BIBREF3 by 10% on TB-Dense, and improves CogCompTime BIBREF6 by 6.8% on MATRES. We further show ablation studies to confirm that the proposed joint model with shared representations and structured learning is very effective for this task.
Related Work
In this section we briefly summarize the existing work on event extraction and temporal relation extraction. To the best of our knowledge, there is no prior work on joint event and relation extraction, so we will review joint entity and relation extraction works instead.
Existing event extraction methods in the temporal relation domain, as in the TempEval3 workshop BIBREF2, all use conventional machine learning models (logistic regression, SVM, or Max-entropy) with hand-engineered features (e.g., ClearTK BIBREF7 and NavyTime BIBREF8). While other domains have shown progress on event extraction using neural methods BIBREF9, BIBREF10, BIBREF11, recent progress in the temporal relation domain is focused more on the setting where gold events are provided. Therefore, we first show the performance of a neural event extractor on this task, although it is not our main contribution.
Early attempts on temporal relation extraction use local pair-wise classification with hand-engineered features BIBREF12, BIBREF0, BIBREF13, BIBREF14. Later efforts, such as ClearTK BIBREF7, UTTime BIBREF15, NavyTime BIBREF8, and CAEVO BIBREF3 improve earlier work with better linguistic and syntactic rules. BIBREF16, BIBREF4, BIBREF17 explore structured learning for this task, and more recently, neural methods have also been shown effective BIBREF18, BIBREF19, BIBREF20, BIBREF5.
In practice, we need to extract both events and the temporal relations among them from raw text. All the works above treat this as two subtasks that are solved in a pipeline. To the best of our knowledge, there has been no existing work on joint event-temporal relation extraction. However, the idea of “joint” has been studied for entity-relation extraction in many works. BIBREF21 frame their joint model as table filling tasks, map tabular representation into sequential predictions with heuristic rules, and construct global loss to compute the best joint predictions. BIBREF22 define a global structure for joint entity and relation extraction, encode local and global features based on domain and linguistic knowledge, and leverage beam-search to find global optimal assignments for entities and relations. BIBREF23 leverage LSTM architectures to jointly predict both entities and relations, but fall short on ensuring prediction consistency. BIBREF24 combine the benefits of both neural net and global optimization with beam search. Motivated by these works, we propose an end-to-end trainable neural structured support vector machine (neural SSVM) model to simultaneously extract events and their relations from text and ensure the global structure via ILP constraints. Next, we will describe in detail our proposed method.
Joint Event-Relation Extraction Model
In this section we first provide an overview of our neural SSVM model, and then describe each component in our framework in detail (i.e., the multi-tasking neural scoring module, and how inference and learning are performed). We denote the set of all possible relation labels (including NONE) as $\mathcal {R}$, all event candidates (both events and non-events) as $\mathcal {E}$, and all relation candidates as $\mathcal {E}\mathcal {E}$.
Joint Event-Relation Extraction Model ::: Neural SSVM
Our neural SSVM adapts the SSVM loss as:
where $\bar{S}^n_{\mathcal {E}} = S(\hat{y}^n_\mathcal {E}; x^n) - S(y^n_\mathcal {E};x^n)$ and $\bar{S}^n_{\mathcal {R}} = S(\hat{y}^n_\mathcal {R}; x^n) - S(y^n_\mathcal {R};x^n)$; $\Phi $ denotes model parameters, $n$ indexes instances, $M^n = |\mathcal {E}|^n + |\mathcal {E}\mathcal {E}|^n$ denotes the total number of events $|\mathcal {E}|^n$ and relations $|\mathcal {E}\mathcal {E}|^n$ in instance $n$. $y^n,\hat{y}^n$ denote the gold and predicted global assignments of events and relations for instance $n$—each of which consists of one-hot vectors representing true and predicted relation labels $y_{\mathcal {R}}^n, \hat{y}_{\mathcal {R}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}\mathcal {E}|}$ and entity labels $y_{\mathcal {E}}^n, \hat{y}_{\mathcal {E}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}|}$. A maximum a posteriori probability (MAP) inference is needed to find $\hat{y}^n$, which we formulate as an integer linear programming (ILP) problem and describe in more detail in Section SECREF12. $\Delta (y^n, \hat{y}^n)$ is a distance measurement between the gold and the predicted assignments; we simply use the Hamming distance. $C$ and $C_{\mathcal {E}}$ are the hyper-parameters to balance the losses between event, relation and the regularizer, and $S(y^n_\mathcal {E};x^n), S(y^n_\mathcal {R};x^n)$ are scoring functions, which we design a multi-tasking neural architecture to learn. The intuition behind the SSVM loss is that it requires the score of the gold output structure $y^n$ to be greater than the score of the best output structure under the current model $\hat{y}^n$ with a margin $\Delta (y^n, \hat{y}^n)$, or else there will be some loss. The training objective is to minimize the loss.
The major difference between our neural-SSVM and the traditional SSVM model is the scoring function. Traditional SSVM uses a linear function over hand-crafted features to compute the scores, whereas we propose to use a recurrent neural network to estimate the scoring function and train the entire architecture end-to-end.
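A minimal sketch of the per-instance structured hinge term described above; since the full equation is elided here, the exact way the Hamming distance, the event/relation margins, and the C, C_E weights combine (and the regularizer, omitted below) is an assumption.

```python
def ssvm_hinge(s_gold_evt, s_best_evt, s_gold_rel, s_best_rel,
               hamming, n_candidates, C=1.0, C_evt=1.0):
    """Structured hinge for one instance: the gold assignment should outscore the
    best (ILP-decoded) assignment by at least the Hamming distance between them."""
    margin = hamming + C_evt * (s_best_evt - s_gold_evt) + (s_best_rel - s_gold_rel)
    return C * max(0.0, margin) / n_candidates

# Toy check: gold outscores the decoded structure by more than the distance -> zero loss.
print(ssvm_hinge(s_gold_evt=10.0, s_best_evt=7.0, s_gold_rel=6.0, s_best_rel=5.0,
                 hamming=2.0, n_candidates=12))  # 0.0
```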
Joint Event-Relation Extraction Model ::: Multi-Tasking Neural Scoring Function
The recurrent neural network (RNN) architecture has been widely adopted by prior temporal extraction work to encode context information BIBREF18, BIBREF19, BIBREF20. Motivated by these works, we adopt an RNN-based scoring function for both event and relation prediction in order to learn features in a data-driven way and capture long-term contexts in the input. In Fig. FIGREF6, we skip the input layer for simplicity.
The bottom layer corresponds to contextualized word representations denoted as $v_k$. We use ($i, j$) $\in \mathcal {E}\mathcal {E}$ to denote a candidate relation and $i \in \mathcal {E}$ to indicate a candidate event in the input sentences of length N. We fix word embeddings computed by a pre-trained BERT-base model BIBREF27. They are then fed into a BiLSTM layer to further encode task-specific contextual information. Both event and relation tasks share this layer.
The event scorer is illustrated by the left two branches following the BiLSTM layer. We simply concatenate both forward and backward hidden vectors to encode the context of each token. As for the relation scorer shown in the right branches, for each pair ($i,j$) we take the forward and backward hidden vectors corresponding to them, $f_i, b_i, f_j, b_j$, and concatenate them with linguistic features as in previous event relation prediction research. We denote linguistic features as $L_{i,j}$ and only use simple features provided in the original datasets: token distance, tense, and polarity of events.
Finally, all hidden vectors and linguistic features are concatenated to form the input to compute the probability of being an event or a softmax distribution over all possible relation labels—which we refer to as the RNN-based scoring function in the following sections.
Joint Event-Relation Extraction Model ::: MAP Inference
A MAP inference is needed both during training to obtain $\hat{y}^n$ in the loss function (Equation DISPLAY_FORM8), as well as during the test time to get globally coherent assignments. We formulate the inference problem as an ILP problem. The inference framework is established by constructing a global objective function using scores from local scorers and imposing several global constraints: 1) one-label assignment, 2) event-relation consistency, and 3) symmetry and transitivity as in BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF4.
Joint Event-Relation Extraction Model ::: MAP Inference ::: Objective Function
The objective function of the global inference is to find the global assignment that has the highest probability under the current model, as specified in Equation DISPLAY_FORM14:
where $y^e_k$ is a binary indicator of whether the $k$-th candidate is an event or not, and $y^r_{i,j}$ is a binary indicator specifying whether the global prediction of the relation between $(i,j)$ is $r \in \mathcal {R}$. $S(y^e_k,x), \forall e \in \lbrace 0, 1\rbrace $ and $S(y^r_{i,j},x), \forall r \in \mathcal {R}$ are the scoring functions obtained from the event and relation scoring functions, respectively. The output of the global inference $\bf {\hat{y}}$ is a collection of optimal label assignments for all events and relation candidates in a fixed context. $C_{\mathcal {E}}$ is a hyper-parameter controlling weights between relation and event. The constraint that follows immediately from the objective function is that the global inference should only assign one label for all entities and relations.
Joint Event-Relation Extraction Model ::: MAP Inference ::: Constraints
We introduce several additional constraints to ensure the resulting optimal output graph forms a valid and plausible event graph.
Joint Event-Relation Extraction Model ::: MAP Inference ::: Constraints ::: Event-Relation Consistency.
Event and relation prediction consistency is defined with the following property: a pair of input tokens have a positive temporal relation if and only if both tokens are events. The following global constraints will satisfy this property,
where $e^P_i$ denotes an event and $e^N_i$ denotes a non-event token. $r^P_{i,j}$ indicates a positive relation (BEFORE, AFTER, SIMULTANEOUS, INCLUDES, IS_INCLUDED, VAGUE), and $r^N_{i,j}$ indicates a negative relation, i.e., NONE. A formal proof of this property can be found in Appendix A.
Joint Event-Relation Extraction Model ::: MAP Inference ::: Constraints ::: Symmetry and Transitivity Constraint.
We also explore the symmetry and transitivity constraints of relations. They are specified as follows:
Intuitively, the symmetry constraint forces two pairs of events with flipped order to have reversed relations. For example, if $r_{i,j}$ = BEFORE, then $r_{j,i}$ = AFTER. The transitivity constraint requires that if the ($i,j$), ($j,k$) and ($i,k$) pairs exist in the graph, the label (relation) prediction of the ($i,k$) pair has to fall into the transitivity set specified by the ($i,j$) and ($j,k$) pairs. The full transitivity table can be found in BIBREF25.
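A sketch of the MAP inference as an ILP in the Gurobi Python API (the solver the paper reports using); the data layout, the single consistency constraint shown per pair, and the omission of the symmetry/transitivity constraints are our simplifications, not the authors' implementation.

```python
import gurobipy as gp
from gurobipy import GRB

RELS = ["BEFORE", "AFTER", "SIMULTANEOUS", "INCLUDES", "IS_INCLUDED", "VAGUE", "NONE"]

def map_inference(event_scores, rel_scores, c_evt=1.0):
    """event_scores[i][b]: score of token i being a non-event (b=0) or event (b=1);
    rel_scores[(i, j)][r]: score of label r for candidate pair (i, j)."""
    m = gp.Model("joint-inference")
    m.Params.OutputFlag = 0

    # One-label assignment for every token and every relation candidate.
    y_e = m.addVars(len(event_scores), 2, vtype=GRB.BINARY)
    m.addConstrs(y_e.sum(i, "*") == 1 for i in range(len(event_scores)))
    y_r = m.addVars(list(rel_scores), RELS, vtype=GRB.BINARY)
    m.addConstrs(y_r.sum(i, j, "*") == 1 for (i, j) in rel_scores)

    # Event-relation consistency: a positive label forces both arguments to be events.
    for (i, j) in rel_scores:
        for r in RELS[:-1]:  # every label except NONE
            m.addConstr(y_r[i, j, r] <= y_e[i, 1])
            m.addConstr(y_r[i, j, r] <= y_e[j, 1])
    # (Symmetry and transitivity constraints omitted in this sketch.)

    m.setObjective(
        c_evt * gp.quicksum(event_scores[i][b] * y_e[i, b]
                            for i in range(len(event_scores)) for b in (0, 1))
        + gp.quicksum(rel_scores[i, j][r] * y_r[i, j, r]
                      for (i, j) in rel_scores for r in RELS),
        GRB.MAXIMIZE,
    )
    m.optimize()
    events = [i for i in range(len(event_scores)) if y_e[i, 1].X > 0.5]
    relations = {(i, j): max(RELS, key=lambda r: y_r[i, j, r].X) for (i, j) in rel_scores}
    return events, relations
```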
Joint Event-Relation Extraction Model ::: Learning
We begin by experimenting with optimizing the SSVM loss directly, but model performance degrades. Therefore, we develop a two-stage learning approach that first trains a pipeline version of the joint model without feedback from global constraints. In other words, the local neural scoring functions are optimized with cross-entropy loss using gold events and relation candidates that are constructed directly from the outputs of the event model. During the second stage, we switch to the global SSVM loss function in Equation DISPLAY_FORM8 and re-optimize the network to adjust for global properties. We will provide more details in Section SECREF4.
Implementation Details
In this section we describe implementation details of the baselines and our four models to build an end-to-end event temporal relation extraction system with an emphasis on the structured joint model. In Section SECREF6 we will compare and contrast them and show why our proposed structured joint model works the best.
Implementation Details ::: Baselines
We run two event and relation extraction systems, CAEVO BIBREF3 and CogCompTime BIBREF6, on TB-Dense and MATRES, respectively. These two methods both leverage conventional learning algorithms (i.e., MaxEnt and averaged perceptron, respectively) based on manually designed features to obtain separate models for events and temporal relations, and conduct end-to-end relation extraction as a pipeline. Note BIBREF3 does not report event and end-to-end temporal relation extraction performances, so we calculate the scores per our implementation.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Single-Task Model.
The most basic way to build an end-to-end system is to train separate event detection and relation prediction models with gold labels, as we mentioned in our introduction. In other words, the BiLSTM layer is not shared as in Fig. FIGREF6. During evaluation and test time, we use the outputs from the event detection model to construct relation candidates and apply the relation prediction model to make the final prediction.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Multi-Task Model.
This is the same as the single-task model except that the BiLSTM layer is now shared for both event and relation tasks. Note that both single-task and multi-task models are not trained to tackle the NONE relation directly. They both rely on the predictions of the event model to annotate relations as either positive pairs or NONE.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Pipeline Joint Model.
This shares the same architecture as the multi-task model, except that during training, we use the predictions of the event model to construct relation candidates to train the relation model. This strategy will generate NONE pairs during training if one argument of the relation candidate is not an event. These NONE pairs will help the relation model to distinguish negative relations from positive ones, and thus become more robust to event prediction errors. We train this model with gold events and relation candidates during the first several epochs in order to obtain a relatively accurate event model and switch to a pipeline version afterwards inspired by BIBREF23.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Structured Joint Model.
This is described in detail in Section SECREF3. However, we experience difficulties in training the model with SSVM loss from scratch. This is due to large amounts of non-event tokens, and the model is not capable of distinguishing them in the beginning. We thus adopt a two-stage learning procedure where we take the best pipeline joint model and re-optimize it with the SSVM loss.
To restrict the search space for events in the ILP inference of the SSVM loss, we use the predicted probabilities from the event detection model to filter out non-events, since the event model has a strong performance, as shown in Section SECREF6. Note that this is very different from the pipeline model, where events are first predicted and relations are constructed with predicted events. Here, we only leverage an additional hyper-parameter $T_{evt}$ to filter out highly unlikely event candidates. Both event and relation labels are assigned simultaneously during the global inference with ILP, as specified in Section SECREF12. We also filter out tokens with POS tags that do not appear in the training set, as most of the events are either nouns or verbs in TB-Dense, and all events are verbs in MATRES.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Hyper-Parameters.
All single-task, multi-task and pipeline joint models are trained by minimizing cross-entropy loss. We observe that model performances vary significantly with dropout ratio, hidden layer dimensions of the BiLSTM model and entity weight in the loss function (with relation weight fixed at 1.0). We leverage a pre-trained BERT model to compute word embedding and all MLP scoring functions have one hidden layer. In the SSVM loss function, we fix the value of $C = 1$, but fine-tune $C_\mathcal {E}$ in the objective function in Equation DISPLAY_FORM14. Hyper-parameters are chosen using a standard development set for TB-Dense and a random holdout-set based on an 80/20 split of training data for MATRES. To solve ILP in the inference process, we leverage an off-the-shelf solver provided by Gurobi optimizer; i.e. the best solutions from the Gurobi optimizer are inputs to the global training. The best combination of hyper-parameters can be found in Table 9 in our appendix.
Experimental Setup
In this section we first provide a brief overview of temporal relation data and describe the specific datasets used in this paper. We also explain the evaluation metrics at the end.
Experimental Setup ::: Temporal Relation Data
Temporal relation corpora such as TimeBank BIBREF32 and RED BIBREF33 facilitate the research in temporal relation extraction. The common issue in these corpora is missing annotations. Collecting densely annotated temporal relation corpora with all events and relations fully annotated is reported to be a challenging task as annotators could easily overlook some facts BIBREF34, BIBREF35, BIBREF3, BIBREF4, which made both modeling and evaluation extremely difficult in previous event temporal relation research.
The TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences, and it has been widely evaluated on this task BIBREF3, BIBREF4, BIBREF19, BIBREF5. Recent data construction efforts such as MATRES BIBREF25 further enhance the data quality by using a multi-axis annotation scheme and adopting a start-point of events to improve inter-annotator agreements. We use TB-Dense and MATRES in our experiments and briefly summarize the data statistics in Table TABREF33.
Experimental Setup ::: Evaluation Metrics
To be consistent with previous research, we adopt two different evaluation metrics. The first is the standard micro-average score. For densely annotated data, the micro-average metric should share the same precision, recall and F1 scores. However, since our joint model includes NONE pairs, we follow the convention of IE tasks and exclude them from evaluation. The second is similar except that we exclude both NONE and VAGUE pairs, following BIBREF6. Please refer to Figure 4 in the appendix for a visualization of the two metrics.
Results and Analysis
The main results of this paper can be found in Table TABREF34. All best-recall and F1 scores are achieved by our structured joint model, and the results outperform the baseline systems by 10.0% and 6.8% on end-to-end relation extraction per F1 scores and 3.5% and 2.6% on event extraction per F1 scores. The best precision score for the TB-Dense dataset is achieved by CAEVO, which indicates that the linguistic rule-based system can make highly precise predictions by being conservative.
Table TABREF35 shows a more detailed analysis, in which we can see that our single-task models with BERT embeddings and a BiLSTM encoder already outperform the baseline systems on end-to-end relation extraction tasks by 4.9% and 4.4% respectively. In the following sections we discuss step-by-step improvement by adopting multi-task, pipeline joint, and structured joint models on end-to-end relation extraction, event extraction, and relation extraction on gold event pairs.
Results and Analysis ::: End-to-End Relation Extraction ::: TB-Dense.
The improvements over the single-task model per F1 score are 4.1% and 4.2% for the multi-task and pipeline joint model respectively. This indicates that the pipeline joint model is helpful only marginally. Table TABREF46 shows that the structured joint model improves both precision and recall scores for BEFORE and AFTER and achieves the best end-to-end relation extraction performance at 49.4%—which outperforms the baseline system by 10.0% and the single-task model by 5.1%.
Results and Analysis ::: End-to-End Relation Extraction ::: MATRES.
Compared to the single-task model, the multi-task model improves F1 scores by 1.5%, while the pipeline joint model improves F1 scores by 1.3%—which means that pipeline joint training does not bring any additional gains over multi-task training for MATRES. The structured joint model reaches the best end-to-end F1 score at 59.6%, which outperforms the baseline system by 6.8% and the single-task model by 2.4%. We speculate that the gains come from the joint model's ability to help deal with NONE pairs, since recall scores for BEFORE and AFTER increase by 1.5% and 1.1% respectively (Table 10 in our appendix).
Results and Analysis ::: Event Extraction ::: TB-Dense.
Our structured joint model outperforms the CAEVO baseline by 3.5% and the single-task model by 1.3%. Improvements on event extraction can be difficult because our single-task model already works quite well, with an F1 score close to 89%, while the inter-annotator agreement for events in TimeBank documents is merely 87% BIBREF2.
Results and Analysis ::: Event Extraction ::: MATRES.
The structured model outperforms the baseline model and the single-task model by 2.6% and 0.9% respectively. However, we observe that the multi-task model has a slight drop in event extraction performance relative to the single-task model (86.4% vs. 86.9%). This indicates that incorporating relation signals is not particularly helpful for event extraction on MATRES. We speculate that one of the reasons could be the unique event characteristics in MATRES. As we described in Section SECREF32, all events in MATRES are verbs. It is possible that a more concentrated single-task model works better when events are homogeneous, whereas a multi-task model is more powerful when we have a mixture of event types, e.g., both verbs and nouns as in TB-Dense.
Results and Analysis ::: Relation Extraction with Gold Events ::: TB-Dense.
There is much prior work on relation extraction based on gold events in TB-Dense. meng2018context proposed a neural model with global information that achieved the best results as far as we know. The improvement of our single-task model over that baseline is mostly attributable to the adoption of BERT embedding. We show that sharing the LSTM layer for both events and relations can help further improve performance of the relation classification task by 2.6%. For the joint models, since we do not train them on gold events, the evaluation would be meaningless. We simply skip this evaluation.
Results and Analysis ::: Relation Extraction with Gold Events ::: MATRES.
Both single-task and multi-task models outperform the baseline by nearly 10%, while the improvement of multi-task over single task is marginal. In MATRES, a relation pair is equivalent to a verb pair, and thus the event prediction task probably does not provide much more information for relation extraction.
In Table TABREF46 we further show the breakdown performances for each positive relation on TB-Dense. The breakdown on MATRES is shown in Table 10 in the appendix. BEFORE, AFTER and VAGUE are the three dominant label classes in TB-Dense. We observe that the linguistic rule-based model, CAEVO, tends to have a more evenly spread-out performance, whereas our neural network-based models are more likely to have concentrated predictions due to the imbalance of the training sample across different label classes.
Results and Analysis ::: Discussion ::: Label Imbalance.
One way to mitigate the label imbalance issue is to increase the sample weights for small classes during model training. We investigate the impact of class weights by refitting our single-task model with larger weights on INCLUDES, IS_INCLUDED and VAGUE in the cross-entropy loss.
Figure FIGREF50 shows that increasing class weights up to 4 times can significantly improve the F1 scores of the INCLUDES and IS_INCLUDED classes with a decrease of less than 2% in the overall F1 score. Performance of INCLUDES and IS_INCLUDED eventually degrades when class weights are too large. These results seem to suggest that more labels are needed in order to improve the performance on both of these two classes and the overall model. For SIMULTANEOUS, our model does not make any correct predictions on either TB-Dense or MATRES even when increasing the class weight up to 10 times, which implies that SIMULTANEOUS could be a hard temporal relation to predict in general.
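Concretely, up-weighting the minority classes in the cross-entropy loss amounts to something like the following PyTorch snippet; the label ordering and the 4x weights are illustrative, not the values tuned for the figure.

```python
import torch
import torch.nn as nn

# Assumed label order: BEFORE, AFTER, INCLUDES, IS_INCLUDED, SIMULTANEOUS, VAGUE
class_weights = torch.tensor([1.0, 1.0, 4.0, 4.0, 1.0, 4.0])  # up-weight the rare classes
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, 6)        # relation scores for a batch of 8 candidate pairs
gold = torch.randint(0, 6, (8,))  # gold relation labels
loss = criterion(logits, gold)
```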
Results and Analysis ::: Discussion ::: Global Constraints.
In Table TABREF51 we conduct an ablation study to understand the contributions from the event-relation prediction consistency constraint and the temporal relation transitivity constraint for the structured joint model. As we can see, the event-relation consistency constraint helps improve the F1 scores by 0.9% and 1% for TB-Dense and MATRES, respectively, but the gain from using transitivity is either non-existent or marginal. We hypothesize two potential reasons: 1) we leveraged BERT contextualized embeddings as word representations, which could tackle transitivity in the input context; 2) NONE pairs could make the transitivity rule less useful, as positive pairs can be predicted as NONE and the transitivity rule does not apply to NONE pairs.
Results and Analysis ::: Discussion ::: Error Analysis.
By comparing gold and predicted labels for events and temporal relations and examining predicted probabilities for events, we identified three major sources of mistakes made by our structured model, as illustrated in Table TABREF57 with examples.
Results and Analysis ::: Discussion ::: Type 1.
Both events in Ex 1 are assigned low scores by the event module ($<< 0.01$). Although the structured joint model is designed to predict events and relations jointly, we leverage the event module to filter out tokens with scores lower than a threshold. Consequently, some true events can be mistakenly predicted as non-events, and the relation pairs including them are automatically assigned NONE.
Results and Analysis ::: Discussion ::: Type 2.
In Ex 2 the event module assigns high scores to tokens happened (0.97) and according (0.89), but according is not an event. When the structured model makes inference jointly, the decision will weigh heavily towards assigning 1 (event) to both tokens. With the event-relation consistency constraint, this pair is highly likely to be predicted as having a positive temporal relation. Nearly all mistakes made in this category follow the same pattern illustrated by this example.
Results and Analysis ::: Discussion ::: Type 3.
The existence of VAGUE makes temporal relation prediction challenging as it can be easily confused with other temporal relations, as shown in Ex 3. This challenge is compounded with NONE in our end-to-end extraction task.
Type 1 and Type 2 errors suggest that building a stronger event detection module will be helpful for both event and temporal relation extraction tasks. To improve the performance on VAGUE pairs, we could either build a stronger model that incorporates both contextual information and commonsense knowledge or create datasets with annotations that better separate VAGUE from other positive temporal relations.
Conclusion
In this paper we investigate building an end-to-end event temporal relation extraction system. We propose a novel neural structured prediction model with joint representation learning to make predictions on events and relations simultaneously; this can avoid error propagation in previous pipeline systems. Experiments and comparative studies on two benchmark datasets show that the proposed model is effective for end-to-end event temporal relation extraction. Specifically, we improve the performances of previously published systems by 10% and 6.8% on the TB-Dense and MATRES datasets, respectively.
Future research can focus on creating more robust structured constraints between events and relations, especially considering event types, to improve the quality of global assignments using ILP. Since a better event model is generally helpful for relation extraction, another promising direction would be to incorporate multiple datasets to enhance the performance of our event extraction systems.
Acknowledgements
This work is supported in part by Contracts W911NF-15-1-0543 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. | Unanswerable |
cfb5ab893ed77f9df7eeb4940b6bacdef5acccea | cfb5ab893ed77f9df7eeb4940b6bacdef5acccea_0 | Q: Is this the first paper to propose a joint model for event and temporal relation extraction?
Text: Introduction
The extraction of temporal relations among events is an important natural language understanding (NLU) task that can benefit many downstream tasks such as question answering, information retrieval, and narrative generation. The task can be modeled as building a graph for a given text, whose nodes represent events and edges are labeled with temporal relations correspondingly. Figure FIGREF1 illustrates such a graph for the text shown therein. The nodes assassination, slaughtered, rampage, war, and Hutu are the candidate events, and different types of edges specify different temporal relations between them: assassination is BEFORE rampage, rampage INCLUDES slaughtered, and the relation between slaughtered and war is VAGUE. Since “Hutu” is actually not an event, a system is expected to annotate the relations between “Hutu” and all other nodes in the graph as NONE (i.e., no relation).
As far as we know, all existing systems treat this task as a pipeline of two separate subtasks, i.e., event extraction and temporal relation classification, and they also assume that gold events are given when training the relation classifier BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Specifically, they built end-to-end systems that extract events first and then predict temporal relations between them (Fig. FIGREF1). In these pipeline models, event extraction errors will propagate to the relation classification step and cannot be corrected afterwards. Our first contribution is the proposal of a joint model that extracts both events and temporal relations simultaneously (see Fig. FIGREF1). The motivation is that if we train the relation classifier with NONE relations between non-events, then it will potentially have the capability of correcting event extraction mistakes. For instance in Fig. FIGREF1, if the relation classifier predicts NONE for (Hutu, war) with a high confidence, then this is a strong signal that can be used by the event classifier to infer that at least one of them is not an event.
Our second contribution is that we improve event representations by sharing the same contextualized embeddings and neural representation learner between the event extraction and temporal relation extraction modules for the first time. On top of the shared embeddings and neural representation learner, the proposed model produces a graph-structured output representing all the events and relations in the given sentences. A valid graph prediction in this context should satisfy two structural constraints. First, the temporal relation should always be NONE between two non-events or between one event and one non-event. Second, for those temporal relations among events, no loops should exist due to the transitive property of time (e.g., if A is before B and B is before C, then A must be before C). The validity of a graph is guaranteed by solving an integer linear programming (ILP) optimization problem with those structural constraints, and our joint model is trained by structural support vector machines (SSVM) in an end-to-end fashion.
Results show that, according to the end-to-end $F_1$ score for temporal relation extraction, the proposed method improves CAEVO BIBREF3 by 10% on TB-Dense, and improves CogCompTime BIBREF6 by 6.8% on MATRES. We further show ablation studies to confirm that the proposed joint model with shared representations and structured learning is very effective for this task.
Related Work
In this section we briefly summarize the existing work on event extraction and temporal relation extraction. To the best of our knowledge, there is no prior work on joint event and relation extraction, so we will review joint entity and relation extraction works instead.
Existing event extraction methods in the temporal relation domain, as in the TempEval3 workshop BIBREF2, all use conventional machine learning models (logistic regression, SVM, or Max-entropy) with hand-engineered features (e.g., ClearTK BIBREF7 and NavyTime BIBREF8). While other domains have shown progress on event extraction using neural methods BIBREF9, BIBREF10, BIBREF11, recent progress in the temporal relation domain is focused more on the setting where gold events are provided. Therefore, we first show the performance of a neural event extractor on this task, although it is not our main contribution.
Early attempts on temporal relation extraction use local pair-wise classification with hand-engineered features BIBREF12, BIBREF0, BIBREF13, BIBREF14. Later efforts, such as ClearTK BIBREF7, UTTime BIBREF15, NavyTime BIBREF8, and CAEVO BIBREF3 improve earlier work with better linguistic and syntactic rules. BIBREF16, BIBREF4, BIBREF17 explore structured learning for this task, and more recently, neural methods have also been shown effective BIBREF18, BIBREF19, BIBREF20, BIBREF5.
In practice, we need to extract both events and the temporal relations among them from raw text. All the works above treat this as two subtasks that are solved in a pipeline. To the best of our knowledge, there has been no existing work on joint event-temporal relation extraction. However, the idea of “joint” has been studied for entity-relation extraction in many works. BIBREF21 frame their joint model as table filling tasks, map tabular representation into sequential predictions with heuristic rules, and construct global loss to compute the best joint predictions. BIBREF22 define a global structure for joint entity and relation extraction, encode local and global features based on domain and linguistic knowledge, and leverage beam-search to find global optimal assignments for entities and relations. BIBREF23 leverage LSTM architectures to jointly predict both entities and relations, but fall short on ensuring prediction consistency. BIBREF24 combine the benefits of both neural net and global optimization with beam search. Motivated by these works, we propose an end-to-end trainable neural structured support vector machine (neural SSVM) model to simultaneously extract events and their relations from text and ensure the global structure via ILP constraints. Next, we will describe in detail our proposed method.
Joint Event-Relation Extraction Model
In this section we first provide an overview of our neural SSVM model, and then describe each component in our framework in detail (i.e., the multi-tasking neural scoring module, and how inference and learning are performed). We denote the set of all possible relation labels (including NONE) as $\mathcal {R}$, all event candidates (both events and non-events) as $\mathcal {E}$, and all relation candidates as $\mathcal {E}\mathcal {E}$.
Joint Event-Relation Extraction Model ::: Neural SSVM
Our neural SSVM adapts the SSVM loss as:
where $\bar{S}^n_{\mathcal {E}} = S(\hat{y}^n_\mathcal {E}; x^n) - S(y^n_\mathcal {E};x^n)$ and $\bar{S}^n_{\mathcal {R}} = S(\hat{y}^n_\mathcal {R}; x^n) - S(y^n_\mathcal {R};x^n)$; $\Phi $ denotes model parameters, $n$ indexes instances, $M^n = |\mathcal {E}|^n + |\mathcal {E}\mathcal {E}|^n$ denotes the total number of events $|\mathcal {E}|^n$ and relations $|\mathcal {E}\mathcal {E}|^n$ in instance $n$. $y^n,\hat{y}^n$ denote the gold and predicted global assignments of events and relations for instance $n$—each of which consists of one-hot vectors representing true and predicted relation labels $y_{\mathcal {R}}^n, \hat{y}_{\mathcal {R}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}\mathcal {E}|}$ and entity labels $y_{\mathcal {E}}^n, \hat{y}_{\mathcal {E}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}|}$. A maximum a posteriori probability (MAP) inference is needed to find $\hat{y}^n$, which we formulate as an integer linear programming (ILP) problem and describe in more detail in Section SECREF12. $\Delta (y^n, \hat{y}^n)$ is a distance measurement between the gold and the predicted assignments; we simply use the Hamming distance. $C$ and $C_{\mathcal {E}}$ are the hyper-parameters to balance the losses between event, relation and the regularizer, and $S(y^n_\mathcal {E};x^n), S(y^n_\mathcal {R};x^n)$ are scoring functions, which we design a multi-tasking neural architecture to learn. The intuition behind the SSVM loss is that it requires the score of the gold output structure $y^n$ to be greater than the score of the best output structure under the current model $\hat{y}^n$ with a margin $\Delta (y^n, \hat{y}^n)$, or else there will be some loss. The training objective is to minimize the loss.
The major difference between our neural-SSVM and the traditional SSVM model is the scoring function. Traditional SSVM uses a linear function over hand-crafted features to compute the scores, whereas we propose to use a recurrent neural network to estimate the scoring function and train the entire architecture end-to-end.
Joint Event-Relation Extraction Model ::: Multi-Tasking Neural Scoring Function
The recurrent neural network (RNN) architecture has been widely adopted by prior temporal extraction work to encode context information BIBREF18, BIBREF19, BIBREF20. Motivated by these works, we adopt an RNN-based scoring function for both event and relation prediction in order to learn features in a data-driven way and capture long-term contexts in the input. In Fig. FIGREF6, we skip the input layer for simplicity.
The bottom layer corresponds to contextualized word representations denoted as $v_k$. We use ($i, j$) $\in \mathcal {E}\mathcal {E}$ to denote a candidate relation and $i \in \mathcal {E}$ to indicate a candidate event in the input sentences of length N. We fix word embeddings computed by a pre-trained BERT-base model BIBREF27. They are then fed into a BiLSTM layer to further encode task-specific contextual information. Both event and relation tasks share this layer.
The event scorer is illustrated by the left two branches following the BiLSTM layer. We simply concatenate both forward and backward hidden vectors to encode the context of each token. As for the relation scorer shown in the right branches, for each pair ($i,j$) we take the forward and backward hidden vectors corresponding to them, $f_i, b_i, f_j, b_j$, and concatenate them with linguistic features as in previous event relation prediction research. We denote linguistic features as $L_{i,j}$ and only use simple features provided in the original datasets: token distance, tense, and polarity of events.
Finally, all hidden vectors and linguistic features are concatenated to form the input to compute the probability of being an event or a softmax distribution over all possible relation labels—which we refer to as the RNN-based scoring function in the following sections.
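A condensed PyTorch sketch of the shared BiLSTM encoder with separate event and relation scoring heads described above; the hidden size, MLP depth, linguistic-feature dimension, and the simplification of one candidate pair per sentence are our assumptions.

```python
import torch
import torch.nn as nn

class JointScorer(nn.Module):
    def __init__(self, emb_dim=768, hidden=128, n_rels=7, ling_dim=5):
        super().__init__()
        # Shared BiLSTM over frozen BERT embeddings (used by both tasks).
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.event_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))
        self.rel_head = nn.Sequential(
            nn.Linear(4 * hidden + ling_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_rels))

    def forward(self, bert_emb, pairs, ling_feats):
        # bert_emb: (batch, seq_len, emb_dim); pairs: (batch, 2) token indices (i, j);
        # ling_feats: (batch, ling_dim) simple features (token distance, tense, polarity).
        h, _ = self.encoder(bert_emb)          # (batch, seq_len, 2*hidden): [f_k; b_k]
        event_scores = self.event_head(h)      # per-token event / non-event scores
        b = torch.arange(h.size(0))
        pair_repr = torch.cat([h[b, pairs[:, 0]], h[b, pairs[:, 1]], ling_feats], dim=-1)
        rel_scores = self.rel_head(pair_repr)  # one score per relation label
        return event_scores, rel_scores
```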
Joint Event-Relation Extraction Model ::: MAP Inference
A MAP inference is needed both during training to obtain $\hat{y}^n$ in the loss function (Equation DISPLAY_FORM8), as well as during the test time to get globally coherent assignments. We formulate the inference problem as an ILP problem. The inference framework is established by constructing a global objective function using scores from local scorers and imposing several global constraints: 1) one-label assignment, 2) event-relation consistency, and 3) symmetry and transitivity as in BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF4.
Joint Event-Relation Extraction Model ::: MAP Inference ::: Objective Function
The objective function of the global inference is to find the global assignment that has the highest probability under the current model, as specified in Equation DISPLAY_FORM14:
where $y^e_k$ is a binary indicator of whether the $k$-th candidate is an event or not, and $y^r_{i,j}$ is a binary indicator specifying whether the global prediction of the relation between $(i,j)$ is $r \in \mathcal {R}$. $S(y^e_k,x), \forall e \in \lbrace 0, 1\rbrace $ and $S(y^r_{i,j},x), \forall r \in \mathcal {R}$ are the scoring functions obtained from the event and relation scoring functions, respectively. The output of the global inference $\bf {\hat{y}}$ is a collection of optimal label assignments for all events and relation candidates in a fixed context. $C_{\mathcal {E}}$ is a hyper-parameter controlling weights between relation and event. The constraint that follows immediately from the objective function is that the global inference should only assign one label for all entities and relations.
Joint Event-Relation Extraction Model ::: MAP Inference ::: Constraints
We introduce several additional constraints to ensure the resulting optimal output graph forms a valid and plausible event graph.
Joint Event-Relation Extraction Model ::: MAP Inference ::: Constraints ::: Event-Relation Consistency.
Event and relation prediction consistency is defined with the following property: a pair of input tokens have a positive temporal relation if and only if both tokens are events. The following global constraints will satisfy this property,
where $e^P_i$ denotes an event and $e^N_i$ denotes a non-event token. $r^P_{i,j}$ indicates a positive relation (BEFORE, AFTER, SIMULTANEOUS, INCLUDES, IS_INCLUDED, VAGUE), and $r^N_{i,j}$ indicates a negative relation, i.e., NONE. A formal proof of this property can be found in Appendix A.
Joint Event-Relation Extraction Model ::: MAP Inference ::: Constraints ::: Symmetry and Transitivity Constraint.
We also explore the symmetry and transitivity constraints of relations. They are specified as follows:
Intuitively, the symmetry constraint forces two pairs of events with flipped order to have reversed relations. For example, if $r_{i,j}$ = BEFORE, then $r_{j,i}$ = AFTER. The transitivity constraint requires that if the ($i,j$), ($j,k$) and ($i,k$) pairs exist in the graph, the label (relation) prediction of the ($i,k$) pair has to fall into the transitivity set specified by the ($i,j$) and ($j,k$) pairs. The full transitivity table can be found in BIBREF25.
Joint Event-Relation Extraction Model ::: Learning
We begin by experimenting with optimizing the SSVM loss directly, but model performance degrades. Therefore, we develop a two-stage learning approach that first trains a pipeline version of the joint model without feedback from global constraints. In other words, the local neural scoring functions are optimized with cross-entropy loss using gold events and relation candidates that are constructed directly from the outputs of the event model. During the second stage, we switch to the global SSVM loss function in Equation DISPLAY_FORM8 and re-optimize the network to adjust for global properties. We will provide more details in Section SECREF4.
Implementation Details
In this section we describe implementation details of the baselines and our four models to build an end-to-end event temporal relation extraction system with an emphasis on the structured joint model. In Section SECREF6 we will compare and contrast them and show why our proposed structured joint model works the best.
Implementation Details ::: Baselines
We run two event and relation extraction systems, CAEVO BIBREF3 and CogCompTime BIBREF6, on TB-Dense and MATRES, respectively. These two methods both leverage conventional learning algorithms (i.e., MaxEnt and averaged perceptron, respectively) based on manually designed features to obtain separate models for events and temporal relations, and conduct end-to-end relation extraction as a pipeline. Note BIBREF3 does not report event and end-to-end temporal relation extraction performances, so we calculate the scores per our implementation.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Single-Task Model.
The most basic way to build an end-to-end system is to train separate event detection and relation prediction models with gold labels, as we mentioned in our introduction. In other words, the BiLSTM layer is not shared as in Fig. FIGREF6. During evaluation and test time, we use the outputs from the event detection model to construct relation candidates and apply the relation prediction model to make the final prediction.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Multi-Task Model.
This is the same as the single-task model except that the BiLSTM layer is now shared for both event and relation tasks. Note that both single-task and multi-task models are not trained to tackle the NONE relation directly. They both rely on the predictions of the event model to annotate relations as either positive pairs or NONE.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Pipeline Joint Model.
This shares the same architecture as the multi-task model, except that during training, we use the predictions of the event model to construct relation candidates to train the relation model. This strategy will generate NONE pairs during training if one argument of the relation candidate is not an event. These NONE pairs will help the relation model to distinguish negative relations from positive ones, and thus become more robust to event prediction errors. We train this model with gold events and relation candidates during the first several epochs in order to obtain a relatively accurate event model and switch to a pipeline version afterwards inspired by BIBREF23.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Structured Joint Model.
This is described in detail in Section SECREF3. However, we experience difficulties in training the model with SSVM loss from scratch. This is due to large amounts of non-event tokens, and the model is not capable of distinguishing them in the beginning. We thus adopt a two-stage learning procedure where we take the best pipeline joint model and re-optimize it with the SSVM loss.
To restrict the search space for events in the ILP inference of the SSVM loss, we use the predicted probabilities from the event detection model to filter out non-events, since the event model has a strong performance, as shown in Section SECREF6. Note that this is very different from the pipeline model, where events are first predicted and relations are constructed with predicted events. Here, we only leverage an additional hyper-parameter $T_{evt}$ to filter out highly unlikely event candidates. Both event and relation labels are assigned simultaneously during the global inference with ILP, as specified in Section SECREF12. We also filter out tokens with POS tags that do not appear in the training set, as most of the events are either nouns or verbs in TB-Dense, and all events are verbs in MATRES.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Hyper-Parameters.
All single-task, multi-task and pipeline joint models are trained by minimizing cross-entropy loss. We observe that model performances vary significantly with dropout ratio, hidden layer dimensions of the BiLSTM model and entity weight in the loss function (with relation weight fixed at 1.0). We leverage a pre-trained BERT model to compute word embedding and all MLP scoring functions have one hidden layer. In the SSVM loss function, we fix the value of $C = 1$, but fine-tune $C_\mathcal {E}$ in the objective function in Equation DISPLAY_FORM14. Hyper-parameters are chosen using a standard development set for TB-Dense and a random holdout-set based on an 80/20 split of training data for MATRES. To solve ILP in the inference process, we leverage an off-the-shelf solver provided by Gurobi optimizer; i.e. the best solutions from the Gurobi optimizer are inputs to the global training. The best combination of hyper-parameters can be found in Table 9 in our appendix.
Experimental Setup
In this section we first provide a brief overview of temporal relation data and describe the specific datasets used in this paper. We also explain the evaluation metrics at the end.
Experimental Setup ::: Temporal Relation Data
Temporal relation corpora such as TimeBank BIBREF32 and RED BIBREF33 facilitate the research in temporal relation extraction. The common issue in these corpora is missing annotations. Collecting densely annotated temporal relation corpora with all events and relations fully annotated is reported to be a challenging task as annotators could easily overlook some facts BIBREF34, BIBREF35, BIBREF3, BIBREF4, which made both modeling and evaluation extremely difficult in previous event temporal relation research.
The TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences, and it has been widely evaluated on this task BIBREF3, BIBREF4, BIBREF19, BIBREF5. Recent data construction efforts such as MATRES BIBREF25 further enhance the data quality by using a multi-axis annotation scheme and adopting a start-point of events to improve inter-annotator agreements. We use TB-Dense and MATRES in our experiments and briefly summarize the data statistics in Table TABREF33.
Experimental Setup ::: Evaluation Metrics
To be consistent with previous research, we adopt two different evaluation metrics. The first is the standard micro-average score. For densely annotated data, the micro-average metric should share the same precision, recall and F1 scores. However, since our joint model includes NONE pairs, we follow the convention of IE tasks and exclude them from evaluation. The second is similar except that we exclude both NONE and VAGUE pairs, following BIBREF6. Please refer to Figure 4 in the appendix for a visualization of the two metrics.
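To make the two metrics concrete, a small sketch of micro-averaged precision/recall/F1 that ignores an excluded label set on both the gold and predicted side (NONE for the first metric, NONE and VAGUE for the second); the exact conventions of the official scorers may differ.

```python
def micro_prf(gold, pred, exclude=("NONE",)):
    """gold, pred: parallel label lists over the same candidate pairs."""
    tp = sum(g == p for g, p in zip(gold, pred) if g not in exclude and p not in exclude)
    pred_pos = sum(p not in exclude for p in pred)
    gold_pos = sum(g not in exclude for g in gold)
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gold_pos if gold_pos else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["BEFORE", "AFTER", "NONE", "VAGUE", "BEFORE"]
pred = ["BEFORE", "NONE", "NONE", "BEFORE", "AFTER"]
print(micro_prf(gold, pred))                             # metric 1: exclude NONE only
print(micro_prf(gold, pred, exclude=("NONE", "VAGUE")))  # metric 2: exclude NONE and VAGUE
```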
Results and Analysis
The main results of this paper can be found in Table TABREF34. All best-recall and F1 scores are achieved by our structured joint model, and the results outperform the baseline systems by 10.0% and 6.8% on end-to-end relation extraction per F1 scores and 3.5% and 2.6% on event extraction per F1 scores. The best precision score for the TB-Dense dataset is achieved by CAEVO, which indicates that the linguistic rule-based system can make highly precise predictions by being conservative.
Table TABREF35 shows a more detailed analysis, in which we can see that our single-task models with BERT embeddings and a BiLSTM encoder already outperform the baseline systems on end-to-end relation extraction tasks by 4.9% and 4.4% respectively. In the following sections we discuss step-by-step improvement by adopting multi-task, pipeline joint, and structured joint models on end-to-end relation extraction, event extraction, and relation extraction on gold event pairs.
Results and Analysis ::: End-to-End Relation Extraction ::: TB-Dense.
The improvements over the single-task model per F1 score are 4.1% and 4.2% for the multi-task and pipeline joint model respectively. This indicates that the pipeline joint model is helpful only marginally. Table TABREF46 shows that the structured joint model improves both precision and recall scores for BEFORE and AFTER and achieves the best end-to-end relation extraction performance at 49.4%—which outperforms the baseline system by 10.0% and the single-task model by 5.1%.
Results and Analysis ::: End-to-End Relation Extraction ::: MATRES.
Compared to the single-task model, the multi-task model improves F1 scores by 1.5%, while the pipeline joint model improves F1 scores by 1.3%—which means that pipeline joint training does not bring any additional gains over multi-task training for MATRES. The structured joint model reaches the best end-to-end F1 score at 59.6%, which outperforms the baseline system by 6.8% and the single-task model by 2.4%. We speculate that the gains come from the joint model's ability to help deal with NONE pairs, since recall scores for BEFORE and AFTER increase by 1.5% and 1.1% respectively (Table 10 in our appendix).
Results and Analysis ::: Event Extraction ::: TB-Dense.
Our structured joint model outperforms the CAEVO baseline by 3.5% and the single-task model by 1.3%. Improvements on event extraction can be difficult because our single-task model already works quite well with a close-to 89% F1 score, while the inter-annotator agreement for events in TimeBank documents is merely 87% BIBREF2.
Results and Analysis ::: Event Extraction ::: MATRES.
The structured model outperforms the baseline model and the single-task model by 2.6% and 0.9% respectively. However, we observe that the multi-task model has a slight drop in event extraction performance over the single-task model (86.4% vs. 86.9%). This indicates that incorporating relation signals is not particularly helpful for event extraction on MATRES. We speculate that one of the reasons could be the unique event characteristics in MATRES. As we described in Section SECREF32, all events in MATRES are verbs. It is possible that a more concentrated single-task model works better when events are homogeneous, whereas a multi-task model is more powerful when we have a mixture of event types, e.g., both verbs and nouns as in TB-Dense.
Results and Analysis ::: Relation Extraction with Gold Events ::: TB-Dense.
There is much prior work on relation extraction based on gold events in TB-Dense. meng2018context proposed a neural model with global information that achieved the best results as far as we know. The improvement of our single-task model over that baseline is mostly attributable to the adoption of BERT embedding. We show that sharing the LSTM layer for both events and relations can help further improve performance of the relation classification task by 2.6%. For the joint models, since we do not train them on gold events, the evaluation would be meaningless. We simply skip this evaluation.
Results and Analysis ::: Relation Extraction with Gold Events ::: MATRES.
Both single-task and multi-task models outperform the baseline by nearly 10%, while the improvement of multi-task over single task is marginal. In MATRES, a relation pair is equivalent to a verb pair, and thus the event prediction task probably does not provide much more information for relation extraction.
In Table TABREF46 we further show the breakdown performances for each positive relation on TB-Dense. The breakdown on MATRES is shown in Table 10 in the appendix. BEFORE, AFTER and VAGUE are the three dominant label classes in TB-Dense. We observe that the linguistic rule-based model, CAEVO, tends to have a more evenly spread-out performance, whereas our neural network-based models are more likely to have concentrated predictions due to the imbalance of the training sample across different label classes.
Results and Analysis ::: Discussion ::: Label Imbalance.
One way to mitigate the label imbalance issue is to increase the sample weights for small classes during model training. We investigate the impact of class weights by refitting our single-task model with larger weights on INCLUDES, IS_INCLUDED and VAGUE in the cross-entropy loss.
Figure FIGREF50 shows that increasing class weights up to 4 times can significantly improve the F1 scores of the INCLUDES and IS_INCLUDED classes with a decrease of less than 2% in the overall F1 score. Performance of INCLUDES and IS_INCLUDED eventually degrades when class weights are too large. These results seem to suggest that more labels are needed in order to improve the performance on both of these two classes and the overall model. For SIMULTANEOUS, our model does not make any correct predictions on either TB-Dense or MATRES even when increasing the class weight up to 10 times, which implies that SIMULTANEOUS could be a hard temporal relation to predict in general.
Results and Analysis ::: Discussion ::: Global Constraints.
In Table TABREF51 we conduct an ablation study to understand the contributions from the event-relation prediction consistency constraint and the temporal relation transitivity constraint for the structured joint model. As we can see, the event-relation consistency constraint helps improve the F1 scores by 0.9% and 1% for TB-Dense and MATRES, respectively, but the gain from using transitivity is either non-existent or marginal. We hypothesize two potential reasons: 1) We leveraged BERT contextualized embedding as word representation, which could tackle transitivity in the input context; 2) NONE pairs could make the transitivity rule less useful, as positive pairs can be predicted as NONE and the transitivity rule does not apply to NONE pairs.
Results and Analysis ::: Discussion ::: Error Analysis.
By comparing gold and predicted labels for events and temporal relations and examining predicted probabilities for events, we identified three major sources of mistakes made by our structured model, as illustrated in Table TABREF57 with examples.
Results and Analysis ::: Discussion ::: Type 1.
Both events in Ex 1 are assigned low scores by the event module ($<< 0.01$). Although the structured joint model is designed to predict events and relations jointly, we leverage the event module to filter out tokens with scores lower than a threshold. Consequently, some true events can be mistakenly predicted as non-events, and the relation pairs including them are automatically assigned NONE.
Results and Analysis ::: Discussion ::: Type 2.
In Ex 2 the event module assigns high scores to tokens happened (0.97) and according (0.89), but according is not an event. When the structured model makes inference jointly, the decision will weigh heavily towards assigning 1 (event) to both tokens. With the event-relation consistency constraint, this pair is highly likely to be predicted as having a positive temporal relation. Nearly all mistakes made in this category follow the same pattern illustrated by this example.
Results and Analysis ::: Discussion ::: Type 3.
The existence of VAGUE makes temporal relation prediction challenging as it can be easily confused with other temporal relations, as shown in Ex 3. This challenge is compounded with NONE in our end-to-end extraction task.
Type 1 and Type 2 errors suggest that building a stronger event detection module will be helpful for both event and temporal relation extraction tasks. To improve the performance on VAGUE pairs, we could either build a stronger model that incorporates both contextual information and commonsense knowledge or create datasets with annotations that better separate VAGUE from other positive temporal relations.
Conclusion
In this paper we investigate building an end-to-end event temporal relation extraction system. We propose a novel neural structured prediction model with joint representation learning to make predictions on events and relations simultaneously; this can avoid error propagation in previous pipeline systems. Experiments and comparative studies on two benchmark datasets show that the proposed model is effective for end-to-end event temporal relation extraction. Specifically, we improve the performances of previously published systems by 10% and 6.8% on the TB-Dense and MATRES datasets, respectively.
Future research can focus on creating more robust structured constraints between events and relations, especially considering event types, to improve the quality of global assignments using ILP. Since a better event model is generally helpful for relation extraction, another promising direction would be to incorporate multiple datasets to enhance the performance of our event extraction systems.
Acknowledgements
This work is supported in part by Contracts W911NF-15-1-0543 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. | Yes |
a5abd4dd91e6f2855e9098bd6ae1481c0fdb0d4a | a5abd4dd91e6f2855e9098bd6ae1481c0fdb0d4a_0 | Q: What datasets were used for this work?
Text: Introduction
The extraction of temporal relations among events is an important natural language understanding (NLU) task that can benefit many downstream tasks such as question answering, information retrieval, and narrative generation. The task can be modeled as building a graph for a given text, whose nodes represent events and edges are labeled with temporal relations correspondingly. Figure FIGREF1 illustrates such a graph for the text shown therein. The nodes assassination, slaughtered, rampage, war, and Hutu are the candidate events, and different types of edges specify different temporal relations between them: assassination is BEFORE rampage, rampage INCLUDES slaughtered, and the relation between slaughtered and war is VAGUE. Since “Hutu” is actually not an event, a system is expected to annotate the relations between “Hutu” and all other nodes in the graph as NONE (i.e., no relation).
As far as we know, all existing systems treat this task as a pipeline of two separate subtasks, i.e., event extraction and temporal relation classification, and they also assume that gold events are given when training the relation classifier BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Specifically, they built end-to-end systems that extract events first and then predict temporal relations between them (Fig. FIGREF1). In these pipeline models, event extraction errors will propagate to the relation classification step and cannot be corrected afterwards. Our first contribution is the proposal of a joint model that extracts both events and temporal relations simultaneously (see Fig. FIGREF1). The motivation is that if we train the relation classifier with NONE relations between non-events, then it will potentially have the capability of correcting event extraction mistakes. For instance in Fig. FIGREF1, if the relation classifier predicts NONE for (Hutu, war) with a high confidence, then this is a strong signal that can be used by the event classifier to infer that at least one of them is not an event.
Our second contribution is that we improve event representations by sharing the same contextualized embeddings and neural representation learner between the event extraction and temporal relation extraction modules for the first time. On top of the shared embeddings and neural representation learner, the proposed model produces a graph-structured output representing all the events and relations in the given sentences. A valid graph prediction in this context should satisfy two structural constraints. First, the temporal relation should always be NONE between two non-events or between one event and one non-event. Second, for those temporal relations among events, no loops should exist due to the transitive property of time (e.g., if A is before B and B is before C, then A must be before C). The validity of a graph is guaranteed by solving an integer linear programming (ILP) optimization problem with those structural constraints, and our joint model is trained by structural support vector machines (SSVM) in an end-to-end fashion.
Results show that, according to the end-to-end $F_1$ score for temporal relation extraction, the proposed method improves CAEVO BIBREF3 by 10% on TB-Dense, and improves CogCompTime BIBREF6 by 6.8% on MATRES. We further show ablation studies to confirm that the proposed joint model with shared representations and structured learning is very effective for this task.
Related Work
In this section we briefly summarize the existing work on event extraction and temporal relation extraction. To the best of our knowledge, there is no prior work on joint event and relation extraction, so we will review joint entity and relation extraction works instead.
Existing event extraction methods in the temporal relation domain, as in the TempEval3 workshop BIBREF2, all use conventional machine learning models (logistic regression, SVM, or Max-entropy) with hand-engineered features (e.g., ClearTK BIBREF7 and NavyTime BIBREF8). While other domains have shown progress on event extraction using neural methods BIBREF9, BIBREF10, BIBREF11, recent progress in the temporal relation domain is focused more on the setting where gold events are provided. Therefore, we first show the performance of a neural event extractor on this task, although it is not our main contribution.
Early attempts on temporal relation extraction use local pair-wise classification with hand-engineered features BIBREF12, BIBREF0, BIBREF13, BIBREF14. Later efforts, such as ClearTK BIBREF7, UTTime BIBREF15, NavyTime BIBREF8, and CAEVO BIBREF3 improve earlier work with better linguistic and syntactic rules. BIBREF16, BIBREF4, BIBREF17 explore structured learning for this task, and more recently, neural methods have also been shown effective BIBREF18, BIBREF19, BIBREF20, BIBREF5.
In practice, we need to extract both events and those temporal relations among them from raw text. All the works above treat this as two subtasks that are solved in a pipeline. To the best of our knowledge, there has been no existing work on joint event-temporal relation extraction. However, the idea of “joint” has been studied for entity-relation extraction in many works. BIBREF21 frame their joint model as table filling tasks, map tabular representation into sequential predictions with heuristic rules, and construct global loss to compute the best joint predictions. BIBREF22 define a global structure for joint entity and relation extraction, encode local and global features based on domain and linguistic knowledge, and leverage beam-search to find global optimal assignments for entities and relations. BIBREF23 leverage LSTM architectures to jointly predict both entities and relations, but fall short on ensuring prediction consistency. BIBREF24 combine the benefits of both neural networks and global optimization with beam search. Motivated by these works, we propose an end-to-end trainable neural structured support vector machine (neural SSVM) model to simultaneously extract events and their relations from text and ensure the global structure via ILP constraints. Next, we will describe in detail our proposed method.
Joint Event-Relation Extraction Model
In this section we first provide an overview of our neural SSVM model, and then describe each component in our framework in detail (i.e., the multi-tasking neural scoring module, and how inference and learning are performed). We denote the set of all possible relation labels (including NONE) as $\mathcal {R}$, all event candidates (both events and non-events) as $\mathcal {E}$, and all relation candidates as $\mathcal {E}\mathcal {E}$.
Joint Event-Relation Extraction Model ::: Neural SSVM
Our neural SSVM adapts the SSVM loss as:
where $\bar{S}^n_{\mathcal {E}} = S(\hat{y}^n_\mathcal {E}; x^n) - S(y^n_\mathcal {E};x^n)$ and $\bar{S}^n_{\mathcal {R}} = S(\hat{y}^n_\mathcal {R}; x^n) - S(y^n_\mathcal {R};x^n)$; $\Phi $ denotes model parameters, $n$ indexes instances, and $M^n = |\mathcal {E}|^n + |\mathcal {E}\mathcal {E}|^n$ denotes the total number of event candidates $|\mathcal {E}|^n$ and relation candidates $|\mathcal {E}\mathcal {E}|^n$ in instance $n$. $y^n,\hat{y}^n$ denote the gold and predicted global assignments of events and relations for instance $n$—each of which consists of either one-hot vectors representing true and predicted relation labels $y_{\mathcal {R}}^n, \hat{y}_{\mathcal {R}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}\mathcal {E}|}$, or entity labels $y_{\mathcal {E}}^n, \hat{y}_{\mathcal {E}}^n \in \lbrace 0, 1\rbrace ^{|\mathcal {E}|}$. A maximum a posteriori probability (MAP) inference is needed to find $\hat{y}^n$, which we formulate as an integer linear programming (ILP) problem and describe in more detail in Section SECREF12. $\Delta (y^n, \hat{y}^n)$ is a distance measurement between the gold and the predicted assignments; we simply use the Hamming distance. $C$ and $C_{\mathcal {E}}$ are the hyper-parameters to balance the losses between event, relation and the regularizer, and $S(y^n_\mathcal {E};x^n), S(y^n_\mathcal {R};x^n)$ are scoring functions, which we design a multi-tasking neural architecture to learn. The intuition behind the SSVM loss is that it requires the score of the gold output structure $y^n$ to be greater than the score of the best output structure under the current model $\hat{y}^n$ with a margin $\Delta (y^n, \hat{y}^n)$ or else there will be some loss. The training objective is to minimize the loss.
The major difference between our neural-SSVM and the traditional SSVM model is the scoring function. Traditional SSVM uses a linear function over hand-crafted features to compute the scores, whereas we propose to use a recurrent neural network to estimate the scoring function and train the entire architecture end-to-end.
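Since the loss equation itself is not rendered in this text, the short Python sketch below only illustrates the margin-rescaled hinge form described in prose for a single instance—gold score, loss-augmented predicted score, Hamming margin, normalization by $M^n$, and the event/relation balancing weight. All names (e.g., ssvm_instance_loss, c_evt) are ours, not the paper's.

def hamming(y_gold, y_pred):
    # Delta(y, y_hat): number of positions where the two assignments disagree
    return sum(g != p for g, p in zip(y_gold, y_pred))

def ssvm_instance_loss(s_gold_evt, s_pred_evt, s_gold_rel, s_pred_rel,
                       y_gold, y_pred, c_evt=1.0):
    # S-bar terms: score of the loss-augmented MAP assignment minus the gold score
    s_bar_evt = s_pred_evt - s_gold_evt
    s_bar_rel = s_pred_rel - s_gold_rel
    m = len(y_gold)  # |E|^n + |EE|^n, the number of candidates in this instance
    # hinge: positive only when the margin-augmented prediction outscores gold
    return max(0.0, (hamming(y_gold, y_pred) + s_bar_rel + c_evt * s_bar_evt) / m)

# toy usage with a concatenated event + relation assignment
print(ssvm_instance_loss(2.0, 2.5, 3.0, 3.4,
                         y_gold=[1, 0, "BEFORE"], y_pred=[1, 1, "VAGUE"]))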
Joint Event-Relation Extraction Model ::: Multi-Tasking Neural Scoring Function
The recurrent neural network (RNN) architecture has been widely adopted by prior temporal extraction work to encode context information BIBREF18, BIBREF19, BIBREF20. Motivated by these works, we adopt a RNN-based scoring function for both event and relation prediction in order to learn features in a data driven way and capture long-term contexts in the input. In Fig. FIGREF6, we skip the input layer for simplicity.
The bottom layer corresponds to contextualized word representations denoted as $v_k$. We use ($i, j$) $\in \mathcal {E}\mathcal {E}$ to denote a candidate relation and $i \in \mathcal {E}$ to indicate a candidate event in the input sentences of length N. We fix word embeddings computed by a pre-trained BERT-base model BIBREF27. They are then fed into a BiLSTM layer to further encode task-specific contextual information. Both event and relation tasks share this layer.
The event scorer is illustrated by the left two branches following the BiLSTM layer. We simply concatenate both forward and backward hidden vectors to encode the context of each token. As for the relation scorer shown in the right branches, for each pair ($i,j$) we take the forward and backward hidden vectors corresponding to them, $f_i, b_i, f_j, b_j$, and concatenate them with linguistic features as in previous event relation prediction research. We denote linguistic features as $L_{i,j}$ and only use simple features provided in the original datasets: token distance, tense, and polarity of events.
Finally, all hidden vectors and linguistic features are concatenated to form the input to compute the probability of being an event or a softmax distribution over all possible relation labels—which we refer to as the RNN-based scoring function in the following sections.
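A schematic PyTorch sketch of this shared scorer is given below, assuming the BERT vectors are precomputed and frozen and that the three linguistic features (token distance, tense, polarity) are already numeric. The class name EventRelationScorer and all dimensions are illustrative choices, not the paper's released code.

import torch
import torch.nn as nn

class EventRelationScorer(nn.Module):
    # Shared BiLSTM encoder with separate event / relation MLP scorers.
    def __init__(self, bert_dim=768, hidden=128, n_rel=7, n_ling=3):
        super().__init__()
        self.bilstm = nn.LSTM(bert_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.event_mlp = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))
        self.rel_mlp = nn.Sequential(
            nn.Linear(4 * hidden + n_ling, hidden), nn.ReLU(),
            nn.Linear(hidden, n_rel))

    def forward(self, bert_emb, pairs, ling_feats):
        # bert_emb: (1, N, bert_dim) frozen BERT vectors for one instance
        h, _ = self.bilstm(bert_emb)          # (1, N, 2*hidden)
        h = h.squeeze(0)
        event_scores = self.event_mlp(h)      # one event/non-event score pair per token
        # for each candidate pair (i, j), concatenate both contextual vectors
        # (forward + backward states) with the pair's linguistic features
        pair_repr = torch.cat(
            [h[[i for i, _ in pairs]], h[[j for _, j in pairs]], ling_feats],
            dim=-1)
        rel_scores = self.rel_mlp(pair_repr)  # one score per relation label
        return event_scores, rel_scores

# toy usage: 5 tokens, 2 candidate pairs, 3 linguistic features per pair
model = EventRelationScorer()
emb = torch.randn(1, 5, 768)
pairs = [(0, 3), (1, 4)]
ling = torch.randn(len(pairs), 3)
evt, rel = model(emb, pairs, ling)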
Joint Event-Relation Extraction Model ::: MAP Inference
A MAP inference is needed both during training to obtain $\hat{y}^n$ in the loss function (Equation DISPLAY_FORM8), as well as during the test time to get globally coherent assignments. We formulate the inference problem as an ILP problem. The inference framework is established by constructing a global objective function using scores from local scorers and imposing several global constraints: 1) one-label assignment, 2) event-relation consistency, and 3) symmetry and transitivity as in BIBREF28, BIBREF29, BIBREF30, BIBREF31, BIBREF4.
Joint Event-Relation Extraction Model ::: MAP Inference ::: Objective Function
The objective function of the global inference is to find the global assignment that has the highest probability under the current model, as specified in Equation DISPLAY_FORM14:
where $y^e_k$ is a binary indicator of whether the $k$-th candidate is an event or not, and $y^r_{i,j}$ is a binary indicator specifying whether the global prediction of the relation between $(i,j)$ is $r \in \mathcal {R}$. $S(y^e_k,x), \forall e \in \lbrace 0, 1\rbrace $ and $S(y^r_{i,j},x), \forall r \in \mathcal {R}$ are the scores obtained from the event and relation scoring functions, respectively. The output of the global inference $\bf {\hat{y}}$ is a collection of optimal label assignments for all event and relation candidates in a fixed context. $C_{\mathcal {E}}$ is a hyper-parameter controlling the weight between relation and event. The constraint that follows immediately from the objective function is that the global inference should assign one and only one label to each event and relation candidate.
Joint Event-Relation Extraction Model ::: MAP Inference ::: Constraints
We introduce several additional constraints to ensure the resulting optimal output graph forms a valid and plausible event graph.
Joint Event-Relation Extraction Model ::: MAP Inference ::: Constraints ::: Event-Relation Consistency.
Event and relation prediction consistency is defined with the following property: a pair of input tokens have a positive temporal relation if and only if both tokens are events. The following global constraints will satisfy this property,
where $e^P_i$ denotes an event and $e^N_i$ denotes a non-event token. $r^P_{i,j}$ indicates positive relations: BEFORE, AFTER, SIMULTANEOUS, INCLUDES, IS_INCLUDED, VAGUE, and $r^N_{i,j}$ indicates the negative relation, i.e., NONE. A formal proof of this property can be found in Appendix A.
Joint Event-Relation Extraction Model ::: MAP Inference ::: Constraints ::: Symmetry and Transitivity Constraint.
We also explore the symmetry and transitivity constraints of relations. They are specified as follows:
Intuitively, the symmetry constraint forces two pairs of events with flipped order to have reversed relations. For example, if $r_{i,j}$ = BEFORE, then $r_{j,i}$ = AFTER. The transitivity constraint rules that if the ($i,j$), ($j,k$) and ($i,k$) pairs exist in the graph, the label (relation) prediction of the ($i,k$) pair has to fall into the transitivity set specified by the ($i,j$) and ($j,k$) pairs. The full transitivity table can be found in BIBREF25.
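The paper solves this MAP step as an ILP (with Gurobi, as noted later); the toy sketch below instead enumerates assignments over a tiny candidate set just to make the one-label, event–relation consistency, and symmetry constraints concrete. Transitivity is omitted for brevity, and all names are illustrative rather than taken from the paper.

import itertools

REL = ["BEFORE", "AFTER", "SIMULTANEOUS", "INCLUDES",
       "IS_INCLUDED", "VAGUE", "NONE"]
REVERSE = {"BEFORE": "AFTER", "AFTER": "BEFORE",
           "INCLUDES": "IS_INCLUDED", "IS_INCLUDED": "INCLUDES",
           "SIMULTANEOUS": "SIMULTANEOUS", "VAGUE": "VAGUE", "NONE": "NONE"}

def map_inference(evt_scores, rel_scores, pairs, c_e=1.0):
    # evt_scores: {token: (score_non_event, score_event)}
    # rel_scores: {(i, j): {label: score}} for ordered candidate pairs
    tokens = sorted(evt_scores)
    best, best_assign = float("-inf"), None
    for evt in itertools.product([0, 1], repeat=len(tokens)):
        is_event = dict(zip(tokens, evt))
        # exactly one label per candidate is enforced by construction below
        for labels in itertools.product(REL, repeat=len(pairs)):
            rel = dict(zip(pairs, labels))
            # event-relation consistency: positive relation <=> both tokens are events
            ok = all((lab != "NONE") == (is_event[i] and is_event[j])
                     for (i, j), lab in rel.items())
            # symmetry: if both (i, j) and (j, i) are candidates, labels must flip
            ok = ok and all(rel.get((j, i), REVERSE[lab]) == REVERSE[lab]
                            for (i, j), lab in rel.items())
            if not ok:
                continue
            score = c_e * sum(evt_scores[t][is_event[t]] for t in tokens) \
                    + sum(rel_scores[p][rel[p]] for p in pairs)
            if score > best:
                best, best_assign = score, (is_event, rel)
    return best_assign

evt_scores = {0: (0.1, 1.2), 1: (0.8, 0.2)}
rel_scores = {(0, 1): {lab: 0.0 for lab in REL}}
rel_scores[(0, 1)]["NONE"] = 0.5
print(map_inference(evt_scores, rel_scores, pairs=[(0, 1)]))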
Joint Event-Relation Extraction Model ::: Learning
We begin by experimenting with optimizing the SSVM loss directly, but model performance degrades. Therefore, we develop a two-stage learning approach which first trains a pipeline version of the joint model without feedback from global constraints. In other words, the local neural scoring functions are optimized with cross-entropy loss using gold events and relation candidates that are constructed directly from the outputs of the event model. During the second stage, we switch to the global SSVM loss function in Equation DISPLAY_FORM8 and re-optimize the network to adjust for global properties. We will provide more details in Section SECREF4.
Implementation Details
In this section we describe implementation details of the baselines and our four models to build an end-to-end event temporal relation extraction system with an emphasis on the structured joint model. In Section SECREF6 we will compare and contrast them and show why our proposed structured joint model works the best.
Implementation Details ::: Baselines
We run two event and relation extraction systems, CAEVO BIBREF3 and CogCompTime BIBREF6, on TB-Dense and MATRES, respectively. These two methods both leverage conventional learning algorithms (i.e., MaxEnt and averaged perceptron, respectively) based on manually designed features to obtain separate models for events and temporal relations, and conduct end-to-end relation extraction as a pipeline. Note BIBREF3 does not report event and end-to-end temporal relation extraction performances, so we calculate the scores per our implementation.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Single-Task Model.
The most basic way to build an end-to-end system is to train separate event detection and relation prediction models with gold labels, as we mentioned in our introduction. In other words, the BiLSTM layer is not shared as in Fig. FIGREF6. During evaluation and test time, we use the outputs from the event detection model to construct relation candidates and apply the relation prediction model to make the final prediction.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Multi-Task Model.
This is the same as the single-task model except that the BiLSTM layer is now shared for both event and relation tasks. Note that both single-task and multi-task models are not trained to tackle the NONE relation directly. They both rely on the predictions of the event model to annotate relations as either positive pairs or NONE.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Pipeline Joint Model.
This shares the same architecture as the multi-task model, except that during training, we use the predictions of the event model to construct relation candidates to train the relation model. This strategy will generate NONE pairs during training if one argument of the relation candidate is not an event. These NONE pairs will help the relation model to distinguish negative relations from positive ones, and thus become more robust to event prediction errors. We train this model with gold events and relation candidates during the first several epochs in order to obtain a relatively accurate event model and switch to a pipeline version afterwards inspired by BIBREF23.
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Structured Joint Model.
This is described in detail in Section SECREF3. However, we experience difficulties in training the model with SSVM loss from scratch. This is due to large amounts of non-event tokens, and the model is not capable of distinguishing them in the beginning. We thus adopt a two-stage learning procedure where we take the best pipeline joint model and re-optimize it with the SSVM loss.
To restrict the search space for events in the ILP inference of the SSVM loss, we use the predicted probabilities from the event detection model to filter out non-events since the event model has a strong performance, as shown in Section SECREF6. Note that this is very different from the pipeline model where events are first predicted and relations are constructed with predicted events. Here, we only leverage an additional hyper-parameter $T_{evt}$ to filter out highly unlikely event candidates. Both event and relation labels are assigned simultaneously during the global inference with ILP, as specified in Section SECREF12. We also filter out tokens with POS tags that do not appear in the training set as most of the events are either nouns or verbs in TB-Dense, and all events are verbs in MATRES.
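A minimal sketch of this filtering step follows, with an assumed threshold name t_evt and an assumed allowed-POS set; the tokens, probabilities and tags in the usage example are made up for illustration.

def filter_event_candidates(tokens, event_probs, pos_tags,
                            allowed_pos, t_evt=0.5):
    # keep only tokens that are plausible events before running ILP inference:
    #   event_probs: per-token event probability from the event model
    #   allowed_pos: POS tags observed for gold events in training (verbs/nouns)
    return [i for i, tok in enumerate(tokens)
            if event_probs[i] >= t_evt and pos_tags[i] in allowed_pos]

keep = filter_event_candidates(
    ["The", "assassination", "sparked", "a", "rampage"],
    [0.02, 0.93, 0.88, 0.01, 0.91],
    ["DT", "NN", "VBD", "DT", "NN"],
    allowed_pos={"NN", "NNS", "VB", "VBD", "VBN", "VBG", "VBZ", "VBP"})
# keep == [1, 2, 4]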
Implementation Details ::: End-to-End Event Temporal Relation Extraction ::: Hyper-Parameters.
All single-task, multi-task and pipeline joint models are trained by minimizing cross-entropy loss. We observe that model performances vary significantly with dropout ratio, hidden layer dimensions of the BiLSTM model and entity weight in the loss function (with relation weight fixed at 1.0). We leverage a pre-trained BERT model to compute word embedding and all MLP scoring functions have one hidden layer. In the SSVM loss function, we fix the value of $C = 1$, but fine-tune $C_\mathcal {E}$ in the objective function in Equation DISPLAY_FORM14. Hyper-parameters are chosen using a standard development set for TB-Dense and a random holdout-set based on an 80/20 split of training data for MATRES. To solve ILP in the inference process, we leverage an off-the-shelf solver provided by Gurobi optimizer; i.e. the best solutions from the Gurobi optimizer are inputs to the global training. The best combination of hyper-parameters can be found in Table 9 in our appendix.
Experimental Setup
In this section we first provide a brief overview of temporal relation data and describe the specific datasets used in this paper. We also explain the evaluation metrics at the end.
Experimental Setup ::: Temporal Relation Data
Temporal relation corpora such as TimeBank BIBREF32 and RED BIBREF33 facilitate the research in temporal relation extraction. The common issue in these corpora is missing annotations. Collecting densely annotated temporal relation corpora with all events and relations fully annotated is reported to be a challenging task as annotators could easily overlook some facts BIBREF34, BIBREF35, BIBREF3, BIBREF4, which made both modeling and evaluation extremely difficult in previous event temporal relation research.
The TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences, and it has been widely evaluated on this task BIBREF3, BIBREF4, BIBREF19, BIBREF5. Recent data construction efforts such as MATRES BIBREF25 further enhance the data quality by using a multi-axis annotation scheme and adopting a start-point of events to improve inter-annotator agreements. We use TB-Dense and MATRES in our experiments and briefly summarize the data statistics in Table TABREF33.
Experimental Setup ::: Evaluation Metrics
To be consistent with previous research, we adopt two different evaluation metrics. The first one is the standard micro-average score. For densely annotated data, the micro-average metric should share the same precision, recall and F1 scores. However, since our joint model includes NONE pairs, we follow the convention of IE tasks and exclude them from evaluation. The second one is similar except that we exclude both NONE and VAGUE pairs following BIBREF6. Please refer to Figure 4 in the appendix for a visualization of the two metrics.
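A small sketch of these two micro-average scores is given below, treating evaluation as counting label matches over candidate pairs while skipping the excluded labels. The exact bookkeeping conventions (e.g., how predicted-but-unannotated pairs are counted) are our simplification, not the official scorer.

def micro_prf(gold, pred, exclude=("NONE",)):
    # gold, pred: dicts mapping a candidate pair (i, j) to its relation label
    correct = sum(1 for p, g in gold.items()
                  if g not in exclude and pred.get(p) == g)
    n_pred = sum(1 for lab in pred.values() if lab not in exclude)
    n_gold = sum(1 for lab in gold.values() if lab not in exclude)
    precision = correct / n_pred if n_pred else 0.0
    recall = correct / n_gold if n_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {(0, 1): "BEFORE", (0, 2): "VAGUE", (1, 2): "NONE"}
pred = {(0, 1): "BEFORE", (0, 2): "AFTER", (1, 2): "NONE"}
print(micro_prf(gold, pred))                      # first metric: exclude NONE
print(micro_prf(gold, pred, ("NONE", "VAGUE")))   # second: exclude NONE and VAGUE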
Results and Analysis
The main results of this paper can be found in Table TABREF34. All best-recall and F1 scores are achieved by our structured joint model, and the results outperform the baseline systems by 10.0% and 6.8% on end-to-end relation extraction per F1 scores and 3.5% and 2.6% on event extraction per F1 scores. The best precision score for the TB-Dense dataset is achieved by CAEVO, which indicates that the linguistic rule-based system can make highly precise predictions by being conservative.
Table TABREF35 shows a more detailed analysis, in which we can see that our single-task models with BERT embeddings and a BiLSTM encoder already outperform the baseline systems on end-to-end relation extraction tasks by 4.9% and 4.4% respectively. In the following sections we discuss step-by-step improvement by adopting multi-task, pipeline joint, and structured joint models on end-to-end relation extraction, event extraction, and relation extraction on gold event pairs.
Results and Analysis ::: End-to-End Relation Extraction ::: TB-Dense.
The improvements over the single-task model per F1 score are 4.1% and 4.2% for the multi-task and pipeline joint model respectively, which indicates that pipeline joint training helps only marginally beyond multi-task learning. Table TABREF46 shows that the structured joint model improves both precision and recall scores for BEFORE and AFTER and achieves the best end-to-end relation extraction performance at 49.4%—which outperforms the baseline system by 10.0% and the single-task model by 5.1%.
Results and Analysis ::: End-to-End Relation Extraction ::: MATRES.
Compared to the single-task model, the multi-task model improves F1 scores by 1.5%, while the pipeline joint model improves F1 scores by 1.3%—which means that pipeline joint training does not bring additional gains over multi-task training on MATRES. The structured joint model reaches the best end-to-end F1 score at 59.6%, which outperforms the baseline system by 6.8% and the single-task model by 2.4%. We speculate that the gains come from the joint model's ability to help deal with NONE pairs, since recall scores for BEFORE and AFTER increase by 1.5% and 1.1% respectively (Table 10 in our appendix).
Results and Analysis ::: Event Extraction ::: TB-Dense.
Our structured joint model outperforms the CAEVO baseline by 3.5% and the single-task model by 1.3%. Improvements on event extraction can be difficult because our single-task model already works quite well with a close-to 89% F1 score, while the inter-annotator agreement for events in TimeBank documents is merely 87% BIBREF2.
Results and Analysis ::: Event Extraction ::: MATRES.
The structured model outperforms the baseline model and the single-task model by 2.6% and 0.9% respectively. However, we observe that the multi-task model has a slight drop in event extraction performance over the single-task model (86.4% vs. 86.9%). This indicates that incorporating relation signals is not particularly helpful for event extraction on MATRES. We speculate that one of the reasons could be the unique event characteristics in MATRES. As we described in Section SECREF32, all events in MATRES are verbs. It is possible that a more concentrated single-task model works better when events are homogeneous, whereas a multi-task model is more powerful when we have a mixture of event types, e.g., both verbs and nouns as in TB-Dense.
Results and Analysis ::: Relation Extraction with Gold Events ::: TB-Dense.
There is much prior work on relation extraction based on gold events in TB-Dense. meng2018context proposed a neural model with global information that achieved the best results as far as we know. The improvement of our single-task model over that baseline is mostly attributable to the adoption of BERT embedding. We show that sharing the LSTM layer for both events and relations can help further improve performance of the relation classification task by 2.6%. For the joint models, since we do not train them on gold events, the evaluation would be meaningless. We simply skip this evaluation.
Results and Analysis ::: Relation Extraction with Gold Events ::: MATRES.
Both single-task and multi-task models outperform the baseline by nearly 10%, while the improvement of multi-task over single task is marginal. In MATRES, a relation pair is equivalent to a verb pair, and thus the event prediction task probably does not provide much more information for relation extraction.
In Table TABREF46 we further show the breakdown performances for each positive relation on TB-Dense. The breakdown on MATRES is shown in Table 10 in the appendix. BEFORE, AFTER and VAGUE are the three dominant label classes in TB-Dense. We observe that the linguistic rule-based model, CAEVO, tends to have a more evenly spread-out performance, whereas our neural network-based models are more likely to have concentrated predictions due to the imbalance of the training sample across different label classes.
Results and Analysis ::: Discussion ::: Label Imbalance.
One way to mitigate the label imbalance issue is to increase the sample weights for small classes during model training. We investigate the impact of class weights by refitting our single-task model with larger weights on INCLUDES, IS_INCLUDED and VAGUE in the cross-entropy loss.
Figure FIGREF50 shows that increasing class weights up to 4 times can significantly improve the F1 scores of the INCLUDES and IS_INCLUDED classes with a decrease of less than 2% in the overall F1 score. Performance of INCLUDES and IS_INCLUDED eventually degrades when class weights are too large. These results seem to suggest that more labels are needed in order to improve the performance on both of these two classes and the overall model. For SIMULTANEOUS, our model does not make any correct predictions on either TB-Dense or MATRES even when increasing the class weight up to 10 times, which implies that SIMULTANEOUS could be a hard temporal relation to predict in general.
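A minimal PyTorch sketch of this reweighting is shown below, assuming a seven-label relation inventory that includes NONE; the 4x factor mirrors the range explored above, but the label ordering and code names are ours.

import torch
import torch.nn as nn

LABELS = ["BEFORE", "AFTER", "SIMULTANEOUS", "INCLUDES",
          "IS_INCLUDED", "VAGUE", "NONE"]

# up-weight the small classes in the cross-entropy loss (e.g., 4x)
weights = torch.ones(len(LABELS))
for lab in ("INCLUDES", "IS_INCLUDED", "VAGUE"):
    weights[LABELS.index(lab)] = 4.0

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, len(LABELS))           # scores for 8 candidate pairs
targets = torch.randint(0, len(LABELS), (8,))  # gold label ids
loss = criterion(logits, targets)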
Results and Analysis ::: Discussion ::: Global Constraints.
In Table TABREF51 we conduct an ablation study to understand the contributions from the event-relation prediction consistency constraint and the temporal relation transitivity constraint for the structured joint model. As we can see, the event-relation consistency constraint helps improve the F1 scores by 0.9% and 1% for TB-Dense and MATRES, respectively, but the gain from using transitivity is either non-existent or marginal. We hypothesize two potential reasons: 1) We leveraged BERT contextualized embedding as word representation, which could tackle transitivity in the input context; 2) NONE pairs could make the transitivity rule less useful, as positive pairs can be predicted as NONE and the transitivity rule does not apply to NONE pairs.
Results and Analysis ::: Discussion ::: Error Analysis.
By comparing gold and predicted labels for events and temporal relations and examining predicted probabilities for events, we identified three major sources of mistakes made by our structured model, as illustrated in Table TABREF57 with examples.
Results and Analysis ::: Discussion ::: Type 1.
Both events in Ex 1 are assigned low scores by the event module ($<< 0.01$). Although the structured joint model is designed to predict events and relations jointly, we leverage the event module to filter out tokens with scores lower than a threshold. Consequently, some true events can be mistakenly predicted as non-events, and the relation pairs including them are automatically assigned NONE.
Results and Analysis ::: Discussion ::: Type 2.
In Ex 2 the event module assigns high scores to tokens happened (0.97) and according (0.89), but according is not an event. When the structured model makes inference jointly, the decision will weigh heavily towards assigning 1 (event) to both tokens. With the event-relation consistency constraint, this pair is highly likely to be predicted as having a positive temporal relation. Nearly all mistakes made in this category follow the same pattern illustrated by this example.
Results and Analysis ::: Discussion ::: Type 3.
The existence of VAGUE makes temporal relation prediction challenging as it can be easily confused with other temporal relations, as shown in Ex 3. This challenge is compounded with NONE in our end-to-end extraction task.
Type 1 and Type 2 errors suggest that building a stronger event detection module will be helpful for both event and temporal relation extraction tasks. To improve the performance on VAGUE pairs, we could either build a stronger model that incorporates both contextual information and commonsense knowledge or create datasets with annotations that better separate VAGUE from other positive temporal relations.
Conclusion
In this paper we investigate building an end-to-end event temporal relation extraction system. We propose a novel neural structured prediction model with joint representation learning to make predictions on events and relations simultaneously; this can avoid error propagation in previous pipeline systems. Experiments and comparative studies on two benchmark datasets show that the proposed model is effective for end-to-end event temporal relation extraction. Specifically, we improve the performances of previously published systems by 10% and 6.8% on the TB-Dense and MATRES datasets, respectively.
Future research can focus on creating more robust structured constraints between events and relations, especially considering event types, to improve the quality of global assignments using ILP. Since a better event model is generally helpful for relation extraction, another promising direction would be to incorporate multiple datasets to enhance the performance of our event extraction systems.
Acknowledgements
This work is supported in part by Contracts W911NF-15-1-0543 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. | TB-Dense, MATRES |
e67d2266476abd157fc8c396b3dfb70cb343471e | e67d2266476abd157fc8c396b3dfb70cb343471e_0 | Q: What languages did they experiment with?
Text: Cross-lingual transfer learning of sequence models
The people of the world speak about 6,900 different languages. Open-source off-the-shelf natural language processing (NLP) toolboxes like OpenNLP and CoreNLP cover only 6–7 languages, and we have sufficient labeled training data for inducing models for about 20–30 languages. In other words, supervised sequence learning algorithms can induce POS models for only a small minority of the world's languages.
What can we do for all the languages for which no training data is available? Unsupervised POS induction algorithms have methodological problems (in-sample evaluation, community-wide hyper-parameter tuning, etc.), and performance is prohibitive of downstream applications. Some work on unsupervised POS tagging has assumed other resources such as tag dictionaries BIBREF0 , but such resources are also only available for a limited number of languages. In our experiments, we assume that no training data or tag dictionaries are available. Our only assumption is a bit of text translated into multiple languages, specifically, fragments of the Bible. We will use Bible data for annotation projection, as well as for learning cross-lingual word embeddings (§3).
Unsupervised learning with typologically informed priors BIBREF1 is an interesting approach to unsupervised POS induction that is more applicable to low-resource languages. Our work is related to this work, but we learn informed priors rather than stipulate them and combine these priors with annotation projection (learning from noisy labels) rather than unsupervised learning.
Annotation projection refers to transferring annotation from one or more source languages to the target language (for which no labeled data is otherwise available), typically through word alignments. In our experiments below, we use an unsupervised word alignment algorithm to align $15\times 12$ language pairs. For 15 languages, we have predicted POS tags for each word in our multi-parallel corpus. For each word in one of our 12 target language training datasets, we thus have up to 15 votes for each word token, possibly weighted by the confidence of the word alignment algorithm. In this paper, we simply use the majority vote. This is the set-up assumed throughout this paper (see §3 for more details).
Empirical Gaussian priors
We will apply empirical Gaussian priors to linear-chain conditional random fields (CRFs; BIBREF3) and averaged structured perceptrons BIBREF4. Linear-chain CRFs are trained by maximising the conditional log-likelihood of labeled sequences $LL(\mathbf {w},\mathcal {D})=\sum _{\langle \mathbf {x},\mathbf {y}\rangle \in \mathcal {D}}\log P(\mathbf {y}|\mathbf {x})$ with $\mathbf {w}\in \mathbb {R}^m$ and $\mathcal {D}$ a dataset consisting of sequences of discrete input symbols $\mathbf {x}=x_1,\ldots ,x_n$ associated with sequences of discrete labels $\mathbf {y}=y_1,\ldots ,y_n$. L$k$-regularized CRFs maximize $LL(\mathbf {w},\mathcal {D})-|\mathbf {w}|^k$ with typically $k\in \lbrace 0,1,2,\infty \rbrace $, which all introduce constant-width, zero-mean regularizers. We refer to L2-regularized CRFs as L2-CRF. L$k$ regularizers are parametric priors where the only parameter is the width of the bounding shape. The L2-regularizer is a Gaussian prior with zero mean, for example. The regularised log-likelihood with a Gaussian prior is $LL(\mathbf {w},\mathcal {D})+\sum _j\log \frac{1}{\sigma _j\sqrt{2\pi }}e^{-\frac{(w_j-\mu _j)^2}{2\sigma _j^2}}$. For practical reasons, the hyper-parameters $\mu _j$ and $\sigma _j$ are typically assumed to be constant for all values of $j$. This also holds for recent work on parametric noise injection, e.g., BIBREF5. If these parameters are assumed to be constant, the above objective becomes equivalent to L2-regularization. However, you can also try to learn these parameters. In empirical Bayes BIBREF6, the parameters are learned from $\mathcal {D}$ itself. BIBREF7 suggest learning the parameters from a validation set. In our set-up, we do not assume that we can learn the priors from training data (which is noisy) or validation data (which is generally not available in cross-lingual learning scenarios). Instead we estimate these parameters directly from source language models.
When we estimate Gaussian priors from source language models, we will learn which features are invariant across languages, and which are not. We thereby introduce an ellipsoid regularizer whose centre is the average source model. In our experiments, we consider both the case where variance is assumed to be constant – which we call L2-regularization with priors (L2-Prior) – and the case where both variances and means are learned – which we call empirical Gaussian priors (EmpGauss). L2-Prior is the L2-CRF objective with $\sigma ^2_j=C$ with $C$ a regularization parameter, and $\mu _j=\hat{\mu _j}$ the average value of the corresponding parameter in the observed source models. EmpGauss replaces the above objective with $LL(\mathbf {w},\mathcal {D})+\sum _j\log \frac{1}{\sigma _j\sqrt{2\pi }}e^{-\frac{(w_j-\mu _j)^2}{2\sigma _j^2}}$, which, assuming model parameters are mutually independent, is the same as jointly optimising model probability and likelihood of the data. Note that minimizing the squared weights is equivalent to maximizing the log probability of the weights under a zero-mean Gaussian prior, and in the same way, this is equivalent to minimising the above objective with empirically estimated parameters $\hat{\mu _j}$ and $\hat{\sigma _j}$. In other words, empirical Gaussian priors are bounding ellipsoids on the hypothesis space with learned widths and centres. Also, note that in single-source cross-lingual transfer learning, observed variance is zero, and we therefore replace this with a regularization parameter $C$ shared with the baseline. In the single-source set-up, L2-Prior is thus equivalent to EmpGauss. We use L-BFGS to maximize our baseline L2-regularized objectives, as well as our empirical Gaussian prior objectives.
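A small numpy sketch of the empirical Gaussian prior term follows: per-feature means and variances are estimated from the observed source-language models, and the penalty added to the negative log-likelihood is the negative log-density of the target weights under those Gaussians. The variance floor min_var stands in for the shared regularization parameter $C$ of the single-source case, and all function names are ours.

import numpy as np

def estimate_prior(source_models, min_var=1e-6):
    # per-feature mean and variance across the source-language weight vectors
    W = np.stack(source_models)            # shape: (n_sources, n_features)
    return W.mean(axis=0), np.maximum(W.var(axis=0), min_var)

def empirical_gaussian_penalty(w, mu, var):
    # negative log N(w_j; mu_j, var_j), summed over features (constants kept)
    return np.sum(0.5 * np.log(2 * np.pi * var) + (w - mu) ** 2 / (2 * var))

# the regularized objective to *minimize* is then NLL(w, D) + penalty(w)
src = [np.array([0.2, 1.0, -0.5]), np.array([0.3, 0.8, -0.4]),
       np.array([0.1, 1.2, -0.6])]
mu, var = estimate_prior(src)
w = np.array([0.25, 0.9, -0.5])
print(empirical_gaussian_penalty(w, mu, var))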
Empirical Gaussian noise injection
We also introduce a drop-out variant of empirical Gaussian priors. Our point of departure is the averaged structured perceptron. We implement empirical Gaussian noise injection with Gaussians $\langle (\mu _1,\sigma _1),\ldots , (\mu _m,\sigma _m)\rangle $ for $m$ features as follows. We initialise our model parameters with the means $\mu _j$. For every instance we pass over, we draw a corruption vector $\mathbf {g}$ of random values $v_i$ from the corresponding Gaussians $(1,\sigma _i)$. We inject the noise in $\mathbf {g}$ by taking pairwise multiplications of $\mathbf {g}$ and our feature representations of the input sequence with the relevant label sequences. Note that this drop-out algorithm is parameter-free, but of course we could easily throw in a hyper-parameter controlling the degree of regularization. We give the algorithm in Algorithm 1.
Algorithm 1: Averaged structured perceptron with empirical Gaussian noise

Input: $T=\lbrace \langle \mathbf {x}_1,\mathbf {y}_1\rangle ,\ldots ,\langle \mathbf {x}_n,\mathbf {y}_n\rangle \rbrace $ with $\mathbf {x}_i=\langle v_1,\ldots \rangle $ and $v_k=\langle f_1,\ldots ,f_m\rangle $; initialise $\mathbf {w}^0=\langle w_1:\hat{\mu _1},\ldots , w_m:\hat{\mu _m}\rangle $
for $i\le I\times |T|$:
    for $j\le n$:
        $\mathbf {g}\leftarrow \mathbf {sample}(\mathcal {N}(1,\sigma _1),\ldots ,\mathcal {N}(1,\sigma _m))$
        $\hat{\mathbf {y}}\leftarrow \arg \max _{\mathbf {y}}\mathbf {w}^i \cdot \mathbf {g}$
        $\mathbf {w}^{i+1}\leftarrow \mathbf {w}^i+\Phi (\mathbf {x}_j,\mathbf {y}_j)\cdot \mathbf {g}-\Phi (\mathbf {x}_j,\hat{\mathbf {y}})\cdot \mathbf {g}$
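A compact numpy rendering of Algorithm 1 is sketched below. To stay self-contained it replaces structured decoding over full label sequences with an argmax over a small set of explicitly enumerated candidates per instance, so it illustrates the noise-injected update rule and weight averaging rather than a faithful sequence decoder; all names are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def noisy_perceptron(data, mu, sigma, epochs=5):
    # data : list of (candidates, gold_index), where `candidates` holds one
    #        joint feature vector Phi(x, y) per candidate output y
    # mu   : per-feature means from the source models (initial weights)
    # sigma: per-feature standard deviations from the source models
    w = mu.copy()
    w_sum, n_updates = np.zeros_like(w), 0
    for _ in range(epochs):
        for candidates, gold in data:
            g = rng.normal(1.0, sigma)          # corruption vector
            scores = candidates @ (w * g)       # noise-injected scoring
            pred = int(np.argmax(scores))
            if pred != gold:
                w = w + (candidates[gold] - candidates[pred]) * g
            w_sum += w
            n_updates += 1
    return w_sum / n_updates                    # averaged weights

# toy problem: 3 features, 2 candidate outputs per instance
mu = np.array([0.5, -0.2, 0.1])
sigma = np.array([0.1, 0.3, 0.2])
data = [(np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]), 0),
        (np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 0.0]]), 1)]
print(noisy_perceptron(data, mu, sigma))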
Observations
We make the following additional observations: (i) Following the procedure in BIBREF11, we can compute the Rademacher complexity of our models, i.e., their ability to learn noise in the labels (overfit). Sampling POS tags randomly from a uniform distribution, chance complexity is 0.083. With small sample sizes, L2-CRFs actually begin to learn patterns with Rademacher complexity rising to 0.086, whereas both L2-Prior and EmpGauss never learn a better fit than chance. (ii) BIBREF2 present a simple approach to explicitly studying bias-variance trade-offs during learning. They draw subsamples of $l< m$ training data points $\mathcal {D}_1, \ldots , \mathcal {D}_k$ and use a validation dataset of $m^{\prime }$ data points to define the integrated variance of our methods. Again, we see that using empirical Gaussian priors leads to less integrated variance. (iii) An empirical Gaussian prior effectively limits us to hypotheses in $\mathcal {H}$ in an ellipsoid around the average source model. When inference is exact, and our loss function is convex, we learn the model with the smallest loss on the training data within this ellipsoid. Model interpolation of (some weighting of) the average source model and the unregularized target model can potentially result in the same model, but since model interpolation is limited to the hyperplane connecting the two models, the probability of this happening is infinitely small ($\frac{1}{\infty }$). Since for any effective regularization parameter value (such that the regularized model is different from the unregularized model), the empirical Gaussian prior can be expected to have the same Rademacher complexity as model interpolation, we conclude that using empirical Gaussian priors is superior to model interpolation (and data concatenation). | Unanswerable
c69f4df4943a2ca4c10933683a02b179a5e76f64 | c69f4df4943a2ca4c10933683a02b179a5e76f64_0 | Q: What approach performs better in experiments global latent or sequence of fine-grained latent variables?
Text: Introduction
Convolutional and fully-attentional feed-forward architectures, such as Transformers BIBREF0, have emerged as effective alternatives to RNNs BIBREF1 in a wide range of NLP tasks. These architectures remove the computational temporal dependency during training and effectively address the long-standing vanishing gradients problem of recurrent models by processing all inputs simultaneously. Notably, transformers apply a fully attentional strategy, where each token in the sequence is informed by other tokens via a self-attention mechanism. This effectively acts as a global receptive field across the whole sequence, which is absent in RNNs. Despite the powerful modeling capability of transformers, they often fail to model one-to-many relations in dialogue response generation tasks BIBREF2 due to their deterministic nature. As a result, they generate dull and generic responses (e.g., “I am not sure"), especially with greedy and beam search, which are widely used in other sequence modeling tasks. There have been attempts to generate diverse and informative dialogue responses by incorporating latent variable(s) into the RNN encoder-decoder architecture. In particular, BIBREF2 adapt a conditional variational autoencoder (CVAE) to capture discourse-level variations of dialogue, while BIBREF3 and BIBREF4 integrate latent variables in the hidden states of the RNN decoder. However, the inherently sequential computation of the aforementioned models limits the efficiency for large-scale training.
In this paper, we introduce the Variational Transformer (VT), a variational self-attentive feed-forward sequence model, to address the aforementioned issues. The VT combines the parallelizability and global receptive field of the Transformer with the variational nature of the CVAE by incorporating stochastic latent variables into Transformers. We explore two types of VT: 1) the Global Variational Transformer (GVT), and 2) the Sequential Variational Transformer (SVT). The GVT is an extension of the CVAE in BIBREF2, which models discourse-level diversity with a global latent variable, while SVT, inspired by variational autoregressive models BIBREF3, BIBREF4, incorporates a sequence of latent variables into the decoding process by using a novel variational decoder layer. Unlike previous approaches BIBREF2, BIBREF3, BIBREF4, SVT uses Non-causal Multi-head Attention, which attends to future tokens for computing posterior latent variables instead of using an additional encoder.
The proposed VT architectures integrate stochastic latent variables into Transformers. The experimental results on three conversation datasets demonstrate that our models can generate more informative and coherent responses.
Related work ::: Neural Conversational Models
Conversational systems have been widely studied BIBREF5, BIBREF6, BIBREF7, BIBREF8. Compared to rule-based systems BIBREF5, BIBREF6, sequence-to-sequence conversation models achieve superior performance in terms of scalable training and generalization ability BIBREF7. However, it has been pointed out that encoder-decoder models tend to generate generic and repetitive responses like “I am sorry" BIBREF9. To address this issue, there have been three main lines of work. The first is adding additional information (e.g., persona) as input to guide the model to generate more informative responses BIBREF10, BIBREF11. The second modifies the learning objective to promote more diverse generation BIBREF9, and the third integrates stochastic latent variables into Seq2Seq models by using the CVAE framework BIBREF12, BIBREF2. Our work comes within this third line, introducing a novel model, the Variational Transformer, to improve dialogue response generation.
Related work ::: Conditional Variational Autoencoders
Many works have attempted to combine CVAEs with encoder-decoder architectures for sequence generation tasks. BIBREF13 propose a variational encoder-decoder model for neural machine translation, while BIBREF14 apply variational recurrent neural networks (VRNN) BIBREF15 for text summarization. BIBREF2 and BIBREF16 explore incorporating meta features into CVAE framework in dialogue response generation tasks. BIBREF3 and BIBREF4 propose variational autoregressive decoders which enhanced by highly multi-modal latent variables to capture the high variability in dialogue responses. BIBREF17 further augment variational autoregressive decoders with dynamic memory networks for improving generation quality. We unify the previous successful ideas of CVAE, and explore the combinations of CVAE and Transformer.
Related work ::: Fully Attentional Networks
Taking advantage of the parallel-in-time structure and global receptive field, Transformers BIBREF0 have recently been shown to achieve impressive results on various sequence modeling tasks. Based on this, several follow-up models have been presented. The Image Transformer BIBREF18 has been proposed for image generation, while the MultiModel BIBREF19 integrates convolution, attention and sparsely-gated mixture-of-expert blocks into a single deep-learning model for simultaneously learning multiple tasks from various domains. BIBREF20 proposed a fully attentional mixture-of-expert model (MoEL) for empathetic dialogue modeling. The Universal Transformer BIBREF1 incorporates the recurrent inductive bias of RNNs into the standard Transformer, and achieves better result on a wide range of algorithmic and language understanding tasks. BIBREF21 introduce the Latent Transformer (LT) for non-autoregressive machine translation. During training, the LT first autoencodes a target sequence into a shorter sequence discrete latent variables. Then a parallel decoder decodes the target using discrete latent variables and an input sequence. Different from the LT BIBREF21, the VT generates continuous latent variables during the decoding process.
Preliminaries ::: Conditional Variational Autoencoder for Dialogue Generation
The CVAE framework BIBREF22 represents a dyadic conversation via three random variables: the input condition $c$, including conversation context and meta features (meta features can be ignored when not available); a latent variable $z$; and the target response $x$. A CVAE can be efficiently trained with Stochastic Gradient Variational Bayes (SGVB) BIBREF23 by maximizing the variational lower bound of $x$ given $c$, according to:
The typical CVAE consists of a prior network $p_{\theta }(z | c)$, which is used to approximate $p(z | c)$, a recognition network $p_{\phi }(z | c, x)$, which is used to approximate the posterior distribution $q(z | c, x)$, and a decoder $p_{\theta }(x | z, c)$, which is used to approximate $p(x | z, c)$. By assuming that $z$ follows a multivariate Gaussian distribution with a diagonal covariance matrix, the evidence lower bound (ELBO) can be written as
where $\mathcal {L}_{REC}$ denotes the reconstruction loss and $\mathcal {L}_{KL}$ denotes the Kullback-Leibler (KL) divergence between the posterior and prior.
In dialogue generation tasks, previous works BIBREF2, BIBREF16 apply RNN encoders (with GRU or LSTM cell) to encode dialogue contexts and responses separately. The condition $c$ is represented by the concatenation of the last hidden state of the context encoder and the meta features (e.g., topic, emotion), while the response $x$ is represented by the last hidden state of response encoder. Then the prior network $p_{\theta }(z | c)$ and the recognition network $p_{\phi }(z | c, x)$ parameterized by multi-layer perceptrons (MLPs) are applied to approximate the means and the log variances of the prior latent distribution $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and posterior latent distribution $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. With the reparameterization trick BIBREF23, we can obtain samples of the prior latent variable (for testing) from $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and samples of the posterior latent variable (for training) from $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. Finally, an RNN decoder use $z$ and $c$ as the initial state to predicts the response $x$.
The vanishing latent variable problem BIBREF24 is a common issue in RNN-based CVAEs. That is, the powerful autoregressive RNN decoder first learns to ignore the latent variable, and decodes the response by conditioning only on the previous tokens. Thus the latent variable fails to encode meaningful information, and the CVAE deteriorates into a plain seq2seq model. To alleviate this issue, KL annealing BIBREF24 and the bag-of-word loss BIBREF2 have been proposed, and have shown effectiveness in various dialogue tasks BIBREF2, BIBREF16.
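To make these two mitigation strategies concrete, the sketch below shows a linear KL-annealing schedule and a bag-of-words auxiliary loss; the linear schedule shape and the single projection layer are assumptions, not the exact recipes of BIBREF24 or BIBREF2.

```python
import torch.nn.functional as F

def kl_weight(step: int, total_annealing_steps: int = 10000) -> float:
    """Linearly increase the weight of the KL term from 0 to 1 so the decoder cannot ignore z early on."""
    return min(1.0, step / total_annealing_steps)

def bag_of_words_loss(z_and_c, target_ids, bow_projection, pad_id: int = 0):
    """Predict every token of the response at once from the latent code (order-agnostic)."""
    logits = bow_projection(z_and_c)                    # [batch, vocab]
    log_probs = F.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(1, target_ids)   # [batch, T]
    mask = (target_ids != pad_id).float()
    return -(token_log_probs * mask).sum(dim=1).mean()
```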
Preliminaries ::: CVAE with Transformer
The aforementioned RNN-based CVAE framework integrates the latent variable into the initial state of the RNN decoder, while in the Transformer it is more flexible to incorporate the latent variable embedding into the first input token of the decoder to generate the initial state.
The overall architecture of GVT is depicted in Figure FIGREF9. Different from RNNs, the Transformer encoder maps an input sequence of symbol representations to a sequence of contextualized representations BIBREF0. In order to get fixed dimension representations of the response and context, we add a special token $CLS$ at the beginning of the input sequence as in BERT BIBREF25, to compute the weighted sum of the output representations via self-attention. Thus the output representation of the token $CLS$ is considered as the representation of the whole sequence. Then we introduce a recognition network and a prior network to compute the posterior latent variable and prior latent variable as in BIBREF2, BIBREF16. We add the latent variable sample $z$ and meta features $m$ (can be ignored when not available) into $e_{SOS}$, the embedding of the start-of-sequence token $SOS$:
Finally, the transformer decoder decodes the response $x$ sequentially while attending to the new embedding $e^{\prime }_{SOS}$ of token $SOS$ with latent information.
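A small sketch of how the latent sample and meta features could be folded into the start-of-sequence embedding as described above; the additive fusion mirrors the description, while the tensor shapes and the projection of the meta feature are assumptions.

```python
def build_decoder_inputs(sos_embedding, z, meta=None):
    """Add the latent sample (and optional meta features) to the SOS token embedding.

    sos_embedding: [batch, d_model] embedding of the start-of-sequence token
    z:             [batch, d_model] latent sampled from the prior (test) or posterior (train)
    meta:          optional [batch, d_model] projection of meta features such as an emoji label
    """
    e_sos = sos_embedding + z
    if meta is not None:
        e_sos = e_sos + meta
    return e_sos  # used as the first decoder input; remaining positions keep their usual embeddings
```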
This design enhances the CVAE framework with a global receptive field, and each position of the GVT can directly access the latent information via the multi-head self-attention mechanism. However, we still observe that the GVT suffers from the vanishing latent variable problem, as the RNN-based CVAE does, because the decoder can bypass the latent information by paying less attention to the $SOS$ token. Hence, we apply KL annealing and the bag-of-word auxiliary loss $\mathcal {L}_{bow}$ as in BIBREF2, BIBREF16 to preserve the useful information of the latent variable. Therefore, the learning objective of the GVT is defined as follows:
Sequential Variational Transformer
In order to augment the capacity of the latent variable with multi-modal distributions and to better utilize the latent information, we further explore incorporating a sequence of latent variables into the decoding process. We introduce the Sequential Variational Transformer (SVT) with a novel variational decoder layer which generates latent variables for each position: $z=\left(z_{1}, \dots , z_{T}\right)$. Similar to BIBREF3, we interpret the latent variables as a generation plan for the future sequence. Unlike previous CVAE models, which use an extra encoder to encode the response separately BIBREF2, BIBREF16 or use a backward RNN to encode the future sequence for each time step BIBREF3, BIBREF4, the SVT uses Non-causal Multi-head Attention, which leaks the future information to the recognition network for computing the posterior latent variables.
As shown in Figure FIGREF13, the SVT shares the same encoder as the standard Transformer BIBREF0, while its decoder consists of a variational decoder layer followed by a stack of $N$ standard Transformer decoder layers. The variational decoder layer has two paths for computing the posterior latent variable and prior latent variable respectively. We denote them as Posterior Path and Prior Path.
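The Prior Path and Posterior Path described below differ mainly in their attention mask. A minimal sketch of the two masks (PyTorch boolean masks; the exact masking convention is an assumption):

```python
import torch

def causal_mask(T: int) -> torch.Tensor:
    """Position t may attend only to positions <= t (used by the Prior Path and at decoding time)."""
    return torch.tril(torch.ones(T, T, dtype=torch.bool))

def non_causal_mask(T: int) -> torch.Tensor:
    """Every position may attend to every position, leaking future tokens to the recognition
    network (used only by the Posterior Path during training)."""
    return torch.ones(T, T, dtype=torch.bool)
```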
Sequential Variational Transformer ::: Prior Path
The Prior Path (solid line in Figure FIGREF13) has a masked multi-head self-attention sub-layer which performs causal attention on the shifted response, followed by a multi-head attention sub-layer which performs encoder-decoder multi-head attention on the context encoder. The last sub-layer is composed of an MLP prior network, which approximates a sequence of prior latent variables, one for each position, and a Position-wise Feed-Forward Network (FFN), which fuses the latent information $z$ with the observed information representation $o^P$ produced before the prior network (shown in Figure FIGREF13). Specifically, we concatenate $o^P$ with $z$ as the input to the FFN, and the FFN passes the fused representation to the next layer. As in BIBREF0, in the variational decoder layer each sub-layer is followed by a residual connection and layer normalization. That is, the output of each sub-layer is $LayerNorm(x + Sublayer(x))$.
We decompose the response $x$ as $x = \left(x_1, \cdots , x_T\right)$ and the latent variable $z$ as $z=\left(z_{1}, \dots , z_{T}\right)$. The prior model produces latent variables at each position $z_t$ by not only conditioning on the input condition $c$ (the concatenation of context and meta features), but also conditioning on the observed response tokens $x_{1:t-1}$. By assuming $z_t$ follows a multivariate Gaussian distribution, the prior model becomes:
where
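Putting the Prior Path together, the per-position latent computation can be sketched as follows; the MLP sizes, the sampling inside the layer, and the concatenation-based fusion are assumptions consistent with the description above, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class PriorLatentLayer(nn.Module):
    """Produces one diagonal-Gaussian latent variable per target position and fuses it back in."""
    def __init__(self, d_model: int, d_latent: int):
        super().__init__()
        self.prior_net = nn.Sequential(nn.Linear(d_model, d_model), nn.Tanh(),
                                       nn.Linear(d_model, 2 * d_latent))
        self.ffn = nn.Sequential(nn.Linear(d_model + d_latent, d_model), nn.ReLU(),
                                 nn.Linear(d_model, d_model))

    def forward(self, o_p):                                       # o_p: [batch, T, d_model]
        mu, logvar = self.prior_net(o_p).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # one z_t per position
        fused = self.ffn(torch.cat([o_p, z], dim=-1))             # passed on to the next decoder layer
        return fused, mu, logvar
```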
Sequential Variational Transformer ::: Posterior Path
The only difference between the Posterior Path (dashed line in Figure FIGREF13) and the Prior Path is that the mask is removed from the masked multi-head attention. Thus the masked (causal) multi-head attention becomes non-causal multi-head attention, which allows each position to attend to the subsequent positions. Then, the second multi-head attention sub-layer (which shares its weights with the prior path) performs posterior attention on the encoder and passes the posterior observed information $o_R$ to the recognition network. The recognition network produces the posterior latent variable for each position $z_t$ as:
where
During training, the posterior path guides the learning of the prior path via a KL divergence constraint:
In the training phase, the posterior latent variables from Equation DISPLAY_FORM17 are passed to the FFN, while in the testing phase the Posterior Path will be blocked and the posterior latent variables will be replaced with the prior latent variables from Equation DISPLAY_FORM15.
During the decoding process, each response token $x_t$ is generated by conditioning on observed response tokens $x_{1:t-1}$, latent variables $z_{1:t}$, and the input condition $c$. The decoding process of the SVT is:
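The interplay of the two paths during training and testing can be summarized in a short sketch; the masking of padded positions and the averaging over the batch are assumptions.

```python
def sequential_kl(mu_q, logvar_q, mu_p, logvar_p, mask):
    """Sum KL( posterior_t || prior_t ) over non-padded positions t, then average over the batch."""
    kl_t = 0.5 * (logvar_p - logvar_q
                  + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1.0).sum(-1)
    return (kl_t * mask).sum(dim=1).mean()

def select_latents(prior_z, posterior_z, training: bool):
    """Training: the FFN consumes posterior latents (guided by the KL term).
    Testing: the Posterior Path is blocked and prior latents are used instead."""
    return posterior_z if training else prior_z
```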
Sequential Variational Transformer ::: Auxiliary Loss
As we expect the latent variables to be a generation plan for the future sequence, we inject such a bias into the latent variables by using an auxiliary loss: Sequential-Bag-of-Words (SBOW), which was proposed by BIBREF4. The idea of the SBOW auxiliary objective is to sequentially predict the bag of succeeding target words $x_{t:T}$ by using the latent variable $z_t$. In our case, the succeeding-word prediction also leverages the observed information $c$ and $x_{1:t-1}$. Thus the auxiliary loss at each position is computed by:
where $f_{aux}$ is a feed-forward neural network with the softmax output.
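A rough sketch of the SBOW auxiliary loss; the loop over positions is written for clarity rather than efficiency, and $f_{aux}$ is assumed to be a single linear projection followed by a softmax.

```python
import torch.nn.functional as F

def sbow_loss(hidden, target_ids, aux_projection, pad_id: int = 0):
    """Sequential bag-of-words: from the state at position t, predict the bag of words x_{t:T}.

    hidden:     [batch, T, d] decoder states carrying z_t (plus the observed information)
    target_ids: [batch, T] response token ids
    """
    log_probs = F.log_softmax(aux_projection(hidden), dim=-1)   # [batch, T, vocab]
    T = target_ids.size(1)
    mask = (target_ids != pad_id)
    loss = 0.0
    for t in range(T):                                  # succeeding words x_{t:T} predicted from position t
        future = target_ids[:, t:]                      # [batch, T - t]
        future_mask = mask[:, t:].float()
        lp = log_probs[:, t, :].gather(1, future)       # [batch, T - t]
        loss = loss - (lp * future_mask).sum(dim=1).mean()
    return loss / T                                     # averaged over positions
```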
Sequential Variational Transformer ::: Learning
The evidence lower bound (ELBO) objective of SVT is the sum of the reconstruction loss $\mathcal {L}_{REC}(t)$ and Kullback-Leibler divergence loss $\mathcal {L}_{KL}(t)$ at each position:
We regularize the ELBO learning objective with an auxiliary loss $\mathcal {L}_{sbow}$ to enhance the expressiveness of the latent variables. Therefore, the final learning objective is formulated as follows:
where,
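Since the displayed objective did not survive extraction, the final learning objective described above can be summarized as follows (a reconstruction from the prose; any weighting coefficient on the auxiliary term is omitted): $$\mathcal{L} \;=\; \mathcal{L}_{ELBO} + \mathcal{L}_{sbow} \;=\; \sum_{t=1}^{T}\Big(\mathcal{L}_{REC}(t) + \mathcal{L}_{KL}(t) + \mathcal{L}_{sbow}(t)\Big).$$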
Experiments ::: Dataset
We evaluate the proposed models on three conversational datasets: MojiTalk BIBREF16, PersonaChat BIBREF11, and Empathetic-Dialogues BIBREF26.
Experiments ::: Dataset ::: MojiTalk
dataset consists of 596,959 post and response pairs from Twitter. Each response is labeled by one emoji which indicates the response emotion. There are 64 emoji labels in total with unbalanced distribution. We use the preprocessed data and vocabulary released from BIBREF16 and follow the same split of train/validation/test set.
Experiments ::: Dataset ::: PersonaChat & Empathetic-Dialogues
are one-to-one multi-turn conversation datasets. In PersonaChat (Persona), the conversations revolve around personas, which are established by four to six persona sentences. In Empathetic-Dialogues (ED), the conversations are mostly about a situation that happened to one of the speakers, while the other speaker tries to understand the feeling and reply accordingly. Both datasets are about modeling social skills, and the goal is to make the conversation more engaging. Therefore, we combine the train/validation/test sets of the two datasets.
Experiments ::: Baselines
We compare the proposed models with the following baselines:
Experiments ::: Baselines ::: Seq2Seq.
An attention-based sequence-to-sequence model with the emoji vector as additional input, as described in MojiTalk BIBREF16.
Experiments ::: Baselines ::: CVAE.
An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenates it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, an early stopping strategy, and the bag-of-word auxiliary loss are applied during training. We use the implementation released by BIBREF16.
Experiments ::: Baselines ::: Transformer.
A Transformer BIBREF0 trained with a Maximum Likelihood Estimation (MLE) objective, which can be considered the base model for both the GVT and the SVT.
Experiments ::: Hyper-parameters and Training Setup
We use a 4-layer Transformer as our base model. The hidden size is set to 300 everywhere, and the word embeddings are initialized with the 300-dimensional pre-trained GloVe embeddings for both the encoder and the decoder. The multi-head attention sub-layers are made up of 4 attention heads, each with embedding dimension 64. The size of the latent variable is 300. The recognition network and the prior network are parameterized by 3-layer MLPs with 512 hidden dimensions. Following the training setup of BIBREF16, we first train our baseline Transformer model with the MLE objective and use it to initialize its counterparts in both GVT and SVT. Then the models are trained end-to-end by the Adam optimizer with an initial learning rate of $2\times 10^{-4}$. KL annealing and an early stopping strategy are applied as in BIBREF16. At test time, we use a greedy decoding strategy for all models.
Experiments ::: Automatic Evaluation ::: PPL & KLD.
The evaluation metrics include Perplexity (PPL) and the Kullback-Leibler divergence between the posterior and the prior (KLD). A well-trained model should achieve a low reconstruction loss and a small but non-trivial KL distance BIBREF27.
Experiments ::: Automatic Evaluation ::: Diversity.
To measure the generation diversity, we calculate Dist-1, Dist-2, and Dist-3, the ratio of the number of distinct n-grams (unigrams, bigrams, and trigrams) over the total number of n-grams. A higher distinct n-grams ratio indicates more diverse generation.
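For concreteness, the distinct n-gram ratio can be computed as in the sketch below (corpus-level counting is an assumption; some papers compute it per response and average).

```python
def distinct_n(sentences, n: int) -> float:
    """Ratio of distinct n-grams to total n-grams over a list of tokenized generations."""
    total, distinct = 0, set()
    for tokens in sentences:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        distinct.update(ngrams)
    return len(distinct) / total if total > 0 else 0.0

# Example: Dist-1 and Dist-2 over two toy responses
responses = [["i", "am", "not", "sure"], ["that", "sounds", "great"]]
print(distinct_n(responses, 1), distinct_n(responses, 2))
```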
Experiments ::: Automatic Evaluation ::: Embeddings Similarity.
This metric computes the cosine similarity between the sentence embedding of a generated sequence and that of the ground-truth response. In our experiments, we introduce two different ways to represent sentence embeddings. The first is $\textbf {EMB}_\textbf {FT}$ BIBREF28, which calculates the average of the word embeddings in a sentence using FastText BIBREF29, trained with Common Crawl and Wikipedia data. We use FastText embeddings instead of other pre-trained word embeddings because they can handle the out-of-vocabulary issue. However, representing a sentence by simply taking the average of its word embeddings ignores the context information. Therefore, we propose to use the pre-trained language model BERT BIBREF25 to compute a contextualized sentence representation. Specifically, we use a pre-trained BERT to encode a generated sentence and a ground-truth response, and average the output representations of each to obtain the sentence embeddings. We denote such contextualized sentence embeddings as $\textbf {EMB}_\textbf {BERT}$.
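A minimal sketch of the $\textbf {EMB}_\textbf {FT}$ variant; a plain lookup table stands in for the FastText model here (so, unlike FastText's subword mechanism, it does not cover out-of-vocabulary words), and the cosine similarity is computed per sentence pair.

```python
import numpy as np

def sentence_embedding(tokens, word_vectors, dim: int = 300):
    """EMB_FT-style representation: average the word vectors of a sentence."""
    vecs = [word_vectors[w] for w in tokens if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def embedding_similarity(hyp_tokens, ref_tokens, word_vectors) -> float:
    """Cosine similarity between the generated response and the ground-truth response."""
    a = sentence_embedding(hyp_tokens, word_vectors)
    b = sentence_embedding(ref_tokens, word_vectors)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0
```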
Experiments ::: Human Evaluation
In the human evaluation, we prepare multiple-choice questions for human evaluators, and the answers are the generation results from the five models (Seq2Seq, CVAE, Transformer, GVT, and SVT). We first randomly sample 100 dialogues and their corresponding responses from our models and the baselines. For each response, we assign three human annotators to select the most coherent (on topic) response to the context (multiple answers are allowed). In addition, annotators also need to choose the best response correlated to the given emoji label in MojiTalk and the most engaging response in PersonaChat and Empathetic-Dialogues. If there is no response that satisfies the evaluators, they can choose “all answers are bad", which means that none of the answers is chosen. We compute the rate at which each model is chosen to quantify generation quality with regard to the human standard.
Results ::: Quantitative Analysis
The automatic evaluation results are shown in Table TABREF35. Transformer-based models have significantly lower perplexity than RNN-based models, which indicates that the global receptive field provided by multi-head self-attention boosts the modeling capacity. However, the deterministic Seq2Seq and Transformer models tend to generate generic responses, which leads to low diversity scores. Meanwhile, incorporating a stochastic latent variable into both models (CVAE and GVT) promotes more diverse generation results and boosts the diversity scores Dist-1, Dist-2, and Dist-3.
Compared to the baseline models, the GVT achieves a relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation. Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL.
On the other hand, SVT achieves the highest score on the two semantic relevance-oriented metrics, $\textbf {EMB}_\textbf {FT}$ and $\textbf {EMB}_\textbf {BERT}$, on the MojiTalk dataset, while on the combined dataset of Persona and ED we observe a performance drop of SVT compared to the other models. This is because both Persona and ED are carefully designed and have lower entropy than MojiTalk, which was collected from Twitter. We hypothesize that sequential latent variables offer no advantage in terms of similarity to a single, fixed "gold response" when modeling low-entropy responses. Indeed, in open-domain dialogue response generation, automatic metrics are not always aligned with human judgment BIBREF28. In contrast, the human evaluation results reported in Table TABREF35 demonstrate that the generations of SVT are closer to the human standard in terms of coherence, invoked emotion, and engagingness.
Results ::: Qualitative Analysis
Table TABREF42 compares the generations of the proposed models with those of the baselines given the same contexts. We observe that Seq2Seq and the vanilla Transformer tend to generate generic and repetitive responses (e.g., i am not sure) in MojiTalk because their deterministic structure fails to capture the variability in dialogue responses. By incorporating stochastic latent variables, the CVAE and GVT can generate more diverse responses, but their responses are sometimes digressive (e.g., example 5). Interestingly, GVT and SVT generalize the topic beyond the context, which makes the dialogue more engaging (e.g., example 4). In general, SVT is able to generate more coherent and informative responses.
Conclusion
This paper introduces the Variational Transformer (VT), a variational self-attentive feed-forward sequence model that combines the global receptive field of a Transformer with the variational nature of a CVAE. We propose two types of the VT: 1) the Global Variational Transformer (GVT), which incorporates a global latent variable as additional input to the Transformer decoder; and 2) the Sequential Variational Transformer (SVT), which generates latent variables for each position during the decoding process. Quantitative and qualitative experimental results show that our models outperform baselines in terms of diversity, semantic relevance, and human judgment. In future work, we will utilize pre-trained language models BIBREF30 as the backbone to strengthen the language model of the VT for better generation. | PPL: SVT
Diversity: GVT
Embeddings Similarity: SVT
Human Evaluation: SVT |
6aed1122050b2d508dc1790c13cdbe38ff126089 | 6aed1122050b2d508dc1790c13cdbe38ff126089_0 | Q: What baselines other than standard transformers are used in experiments?
Text: Introduction
Convolutional and fully-attentional feed-forward architectures, such as Transformers BIBREF0, have emerged as effective alternatives to RNNs BIBREF1 in wide range of NLP tasks. These architectures remove the computational temporal dependency during the training and effectively address the long-standing vanishing gradients problem of recurrent models by processing all inputs simultaneously. Notably, transformers apply a fully attention strategy, where each token in the sequence is informed by other tokens via a self-attention mechanism. It acts as an effectively global receptive field across the whole sequences which absence in RNNs. Despite the powerful modeling capability of trasnformers, they often fail to model one-to-many relation in dialogue response generation tasks BIBREF2 due to their deterministic nature. As a result, they generate dull and generic response (e.g., “I am not sure"), especially with greedy and beam search, which are widely used in other sequence modeling tasks. There have been attempts to generate diverse and informative dialogue responses by incorporating latent variable(s) into the RNN encoder-decoder architecture. In particular BIBREF2 adapt a conditional variational autoencoder (CVAE) to capture discourse-level variations of dialogue, while BIBREF3 and BIBREF4 integrates latent variables in the hidden states of the RNN decoder. However, the inherently sequential computation of aforementioned models limit the efficiency for large scale training.
In this paper, we introduce the Variational Transformer (VT) a variational self-attentive feed-forward sequence model to address the aforementioned issues. The VT combine the parallelizability and global receptive field of the transformer with the variational nature of CVAE by incorporating stochastic latent variables into transformers. We explore two types of VT: 1) Global Variational Transformer (GVT), and 2) Sequential Variational Transformer. The GVT is the extension of CVAE in BIBREF2, which modeling the discourse-level diversity with a global latent variable, While SVT, inspired by variational autoregressive models BIBREF3, BIBREF4, incorporates a sequence of latent variables into decoding process by using a novel variational decoder layer. Unlike previous approaches BIBREF2, BIBREF3, BIBREF4, SVT uses Non-causal Multi-head Attention, which attend to future tokens for computing posterior latent variables instead of using an additional encoder.
The proposed VT architectures integrate stochastic latent variables into Transformers. The experimental results on a three conversation dataset demonstrate that our models can generate more informative and coherent responses.
Related work ::: Neural Conversational Models
Conversational systems has been widely studied BIBREF5, BIBREF6, BIBREF7, BIBREF8. Compare to rule-based systems BIBREF5, BIBREF6, sequence-to-sequence conversation models achieve superior performance in terms of scalable training and generalization ability BIBREF7. However, it has been pointed out that encoder-decoder models tend to generate generic and repetitive responses like “I am sorry" BIBREF9. To address this issue, there have been three main lines of work. The first is adding additional information (e.g., persona) as input to guild model generate more informative responses BIBREF10, BIBREF11. The second modifies the learning objective to promote more diverse generation BIBREF9, and the third integrates stochastic latent variables into Seq2Seq models by using the CVAE framework BIBREF12, BIBREF2. Our work comes within this third line introducing a novel model, the Variational Transformer, to improve dialogue response generation.
Related work ::: Conditional Variational Autoencoders
Many works have attempted to combine CVAEs with encoder-decoder architectures for sequence generation tasks. BIBREF13 propose a variational encoder-decoder model for neural machine translation, while BIBREF14 apply variational recurrent neural networks (VRNN) BIBREF15 for text summarization. BIBREF2 and BIBREF16 explore incorporating meta features into CVAE framework in dialogue response generation tasks. BIBREF3 and BIBREF4 propose variational autoregressive decoders which enhanced by highly multi-modal latent variables to capture the high variability in dialogue responses. BIBREF17 further augment variational autoregressive decoders with dynamic memory networks for improving generation quality. We unify the previous successful ideas of CVAE, and explore the combinations of CVAE and Transformer.
Related work ::: Fully Attentional Networks
Taking advantage of the parallel-in-time structure and global receptive field, Transformers BIBREF0 have recently been shown to achieve impressive results on various sequence modeling tasks. Based on this, several follow-up models have been presented. The Image Transformer BIBREF18 has been proposed for image generation, while the MultiModel BIBREF19 integrates convolution, attention and sparsely-gated mixture-of-expert blocks into a single deep-learning model for simultaneously learning multiple tasks from various domains. BIBREF20 proposed a fully attentional mixture-of-expert model (MoEL) for empathetic dialogue modeling. The Universal Transformer BIBREF1 incorporates the recurrent inductive bias of RNNs into the standard Transformer, and achieves better result on a wide range of algorithmic and language understanding tasks. BIBREF21 introduce the Latent Transformer (LT) for non-autoregressive machine translation. During training, the LT first autoencodes a target sequence into a shorter sequence discrete latent variables. Then a parallel decoder decodes the target using discrete latent variables and an input sequence. Different from the LT BIBREF21, the VT generates continuous latent variables during the decoding process.
Preliminaries ::: Conditional Variational Autoencoder for Dialogue Generation
The CVAE framework BIBREF22 represents a dyadic conversation via three random variables: the input condition $c$, including conversation context and meta features (meta features can be ignored when not available); a latent variable $z$; and the target response $x$. A CVAE can be efficiently trained with Stochastic Gradient Variational Bayes (SGVB) BIBREF23 by maximizing the variational lower bound of $x$ given c, according to:
The typical CVAE consists of a prior network $p_{\theta }(z | c)$, which is used to approximate $p(z | c)$, a recognition network $p_{\phi }(z | c, x)$, which is used to approximate posterior distribution $q(z | c, x)$, and a decoder $p_{\theta }(x | z, c)$, which is used to approximate $p(x | z, c)$. By assuming z follows multivariate Gaussian distribution with a diagonal co-variance matrix, the evidence lower bound (ELBO) can be written as
where $\mathcal {L}_{REC}$ denotes the reconstruction loss and $\mathcal {L}_{KL}$ denotes the Kullback-Leibler (KL) divergence between the posterior and prior.
In dialogue generation tasks, previous works BIBREF2, BIBREF16 apply RNN encoders (with GRU or LSTM cell) to encode dialogue contexts and responses separately. The condition $c$ is represented by the concatenation of the last hidden state of the context encoder and the meta features (e.g., topic, emotion), while the response $x$ is represented by the last hidden state of response encoder. Then the prior network $p_{\theta }(z | c)$ and the recognition network $p_{\phi }(z | c, x)$ parameterized by multi-layer perceptrons (MLPs) are applied to approximate the means and the log variances of the prior latent distribution $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and posterior latent distribution $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. With the reparameterization trick BIBREF23, we can obtain samples of the prior latent variable (for testing) from $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and samples of the posterior latent variable (for training) from $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. Finally, an RNN decoder use $z$ and $c$ as the initial state to predicts the response $x$.
The vanishing latent variable problem BIBREF24 is a common issue in RNN-based CVAEs. That is, the powerful autoregressive RNN decoder first learns to ignore the latent variable, and decodes the response by only condition on the previous tokens. Thus the latent variable fails to encode the meaningful information, and the CVAE deteriorates to seq2seq model. To alleviate this issue, KL annealing BIBREF24 and bag-of-word loss BIBREF2 have been proposed, and have shown effectiveness in various dialogue tasks BIBREF2, BIBREF16.
Preliminaries ::: CVAE with Transformer
The aforementioned RNN-based CVAE framework integrate the latent variable into the initial state of RNN decoder, while in transformer, it is more flexible to incorporate the latent variable embedding into the first input token of the decoder to generate the initial state.
The overall architecture of GVT is depicted in Figure FIGREF9. Different from RNNs, the Transformer encoder maps an input sequence of symbol representations to a sequence of contextualized representations BIBREF0. In order to get fixed dimension representations of the response and context, we add a special token $CLS$ at the beginning of the input sequence as in BERT BIBREF25, to compute the weighted sum of the output representations via self-attention. Thus the output representation of the token $CLS$ is considered as the representation of the whole sequence. Then we introduce a recognition network and a prior network to compute the posterior latent variable and prior latent variable as in BIBREF2, BIBREF16. We add the latent variable sample $z$ and meta features $m$ (can be ignored when not available) into $e_{SOS}$, the embedding of the start-of-sequence token $SOS$:
Finally, the transformer decoder decodes the response $x$ sequentially while attending to the new embedding $e^{\prime }_{SOS}$ of token $SOS$ with latent information.
This design enhances the CVAE framework with the global receptive field, and each position of the GVT can directly access the latent information via the multi-head self-attention mechanism. However, we still observe that the GVT suffers the vanishing latent variable problem as RNN-based CVAE because the decoder can bypass the latent information by paying less attention to the $SOS$ token. Hence, we apply the KL annealing, and bag-of-word auxiliary loss $\mathcal {L}_{bow}$ as in BIBREF2, BIBREF16 to preserve the useful information of the latent variable. Therefore, the learning objective of the GVT is defined as follows:
Sequential Variational Transformer
In order to augment the capacity of the latent variable with multi-modal distributions and to better utilize the latent information, we further explore incorporating a sequence of latent variables in decoding process. We introduce Sequential Variational Transformer (SVT) with a novel variational decoder layer which generate latent variables for each position: $z=\left(z_{1}, \dots , z_{T}\right)$. Similar to BIBREF3, we interpret the latent variables as a generation plan for the future sequence. Unlike previous CVAE models which use an extra encoder to encode the response separately BIBREF2, BIBREF16 or use a backward RNN to encode the future sequence for each time step BIBREF3, BIBREF4, SVT uses a Non-causal Multi-head Attention which leaks the future information to the recognition network for computing the posterior latent variables.
As shown in Figure FIGREF13, the SVT shares the same encoder as the standard Transformer BIBREF0, while its decoder consists of a variational decoder layer followed by a stack of $N$ standard Transformer decoder layers. The variational decoder layer has two paths for computing the posterior latent variable and prior latent variable respectively. We denote them as Posterior Path and Prior Path.
Sequential Variational Transformer ::: Prior Path
The Prior Path (solid line in Figure FIGREF13) has a masked multi-head self-attention sub-layer which performs causal attention on the shifted response, followed by a multi-head self-attention sub-layer which performs encoder-decoder multi-head attention on the context encoder. The last sub-layer is composed of a MLP prior network which approximates a sequence of prior latent variable for each position, and a Position-wise Feed-Forward Network (FFN) which fuse the latent information $z$ with the observed information representation $o^P$ before the prior network (shown in Figure FIGREF13). Specifically, we concatenate $o^P$ with $z$ as the input to the FNN, and the FNN pass the fused representation to the next layer. Same as BIBREF0, in the variational decoder layer, each sub-layer is followed by a residual connection and layer normalization. That is, the output of each sub-layer is $LayerNorm(x + Sublayer(x))$.
We decompose the response $x$ as $x = \left(x_1, \cdots , x_T\right)$ and the latent variable $z$ as $z=\left(z_{1}, \dots , z_{T}\right)$. The prior model produces latent variables at each position $z_t$ by not only conditioning on the input condition $c$ (the concatenation of context and meta features), but also conditioning on the observed response tokens $x_{1:t-1}$. By assuming $z_t$ follows a multivariate Gaussian distribution, the prior model becomes:
where
Sequential Variational Transformer ::: Posterior Path
The only difference between the Posterior Path (dash line in Figure FIGREF13) and Prior Path is that the mask is removed from the masked multi-head attention. Thus the masked (casual) multi-head attention become non-casual multi-head attention, which allows each position to attend to the subsequent positions. Then, the second multi-head attention sub-layer (shared the same weight with prior path) performs posterior attention on the encoder and passes the posterior observed information $o_R$ to the recognition network. The recognition network produces the posterior latent variable for each position $z_t$ as:
where
During the training, the posterior path guides the learning of prior path via KL divergence constraint:
In the training phase, the posterior latent variables from Equation DISPLAY_FORM17 are passed to the FFN, while in the testing phase the Posterior Path will be blocked and the posterior latent variables will be replaced with the prior latent variables from Equation DISPLAY_FORM15.
During the decoding process, each response token $x_t$ is generated by conditioning on observed response tokens $x_{1:t-1}$, latent variables $z_{1:t}$, and the input condition $c$. The decoding process of the SVT is:
Sequential Variational Transformer ::: Auxiliary Loss
As we expect the latent variables to be a generation plan for the future sequence, we inject such bias into latent variables by using an auxiliary loss: Sequential-Bag-of-Word (SBOW) which proposed by BIBREF4. The idea of the SBOW auxiliary objective is to sequentially predict the bag of succeeding target words $x_{t:T}$ by using latent variable $z_t$. In our case, the succeeding words prediction also leverages the observed information $c$ and $x_{1:t-1}$. Thus the auxiliary loss at each position is computed by:
where $f_{aux}$ is a feed-forward neural network with the softmax output.
Sequential Variational Transformer ::: Learning
The evidence lower bound (ELBO) objective of SVT is the sum of the reconstruction loss $\mathcal {L}_{REC}(t)$ and Kullback-Leibler divergence loss $\mathcal {L}_{KL}(t)$ at each position:
We regularize the ELBO learning objective with an auxiliary loss $\mathcal {L}_{sbow}$ to enhance the expressiveness of the latent variables. Therefore, the final learning objective is formulated as follows:
where,
Experiments ::: Dataset
We evaluate the proposed models on three conversationet dataset such as MojiTalk BIBREF16, PersonaChat BIBREF11, Empathetic-Dialogues BIBREF26.
Experiments ::: Dataset ::: MojiTalk
dataset consists of 596,959 post and response pairs from Twitter. Each response is labeled by one emoji which indicates the response emotion. There are 64 emoji labels in total with unbalanced distribution. We use the preprocessed data and vocabulary released from BIBREF16 and follow the same split of train/validation/test set.
Experiments ::: Dataset ::: PersonaChat & Empathetic-Dialogues
are one-to-one multi-turn conversation datasets. In PersonaChat (Persona), the conversations are revolve around personas which are established by four to six persona sentences. While in Empathetic-Dialogues (ED), the conversation are mostly about situation that happened to one of the speaker and another speaker is trying to understand the feeling and reply accordingly. Both datasets are about modeling social skills and the goal is to make user more engaging. Therefore, we combine the train/validation/test set of two datasets.
Experiments ::: Baselines
We compare the proposed models with the following baselines:
Experiments ::: Baselines ::: Seq2Seq.
An attention-based sequence-to-sequence model with the emoji vector as additional input as discribed in MojiTalk BIBREF16.
Experiments ::: Baselines ::: CVAE.
An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenate it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, early stopping strategy and bag-of-word auxiliary loss are applied during the training. We use the implementation released by BIBREF16.
Experiments ::: Baselines ::: Transformer.
A transformer BIBREF0 trained by using a Maximum Likelihood Estimation (MLE) objective and can be considered as the base model for both the GVT and SVT.
Experiments ::: Hyper-parameters and Training Setup
We use a 4-layer Transformer as our base model. The hidden size is set to be 300 everywhere, and the word embedding is initialized with the 300-dimensional pre-trained GloVe embeddings for both encoder and decoder. The multi-head attention sub-layers are made up of 4 attention heads each with embedding dimension 64. The size of latent variable is 300. The recognition network and the prior network are parameterized by 3-layer MLPs with 512 hidden dimension. Following the training setup of BIBREF16, we first train our baseline transformer model with the MLE objective and use it to initialize its counterparts in both GVT and SVT. Then the models are trained end-to-end by the Adam optimizer with the initial learning rate $2\times 10^{-4}$. KL annealing and early stopping strategy are applied as in BIBREF16. In the test time, we use greedy decoding strategy for all models.
Experiments ::: Automatic Evaluation ::: PPL & KLD.
The evaluation metrics include Perplexity (PPL) and Kullback-Leibler divergence between the posterior and prior (KLD). A well trained model should achieve a low reconstruction and small but non-trivial KL distance BIBREF27.
Experiments ::: Automatic Evaluation ::: Diversity.
To measure the generation diversity, we calculate Dist-1, Dist-2, and Dist-3, the ratio of the number of distinct n-grams (unigrams, bigrams, and trigrams) over the total number of n-grams. A higher distinct n-grams ratio indicates more diverse generation.
Experiments ::: Automatic Evaluation ::: Embeddings Similarity.
This metric computes the cosine similarity between the sentence embedding of a generated sequence and that of a ground-truth response. In our experiments, we introduce two different ways to represent sentence embeddings. The first is $\textbf {EMB}_\textbf {FT}$ BIBREF28 that calculates the average of word embeddings in a sentence using FastText BIBREF29 which is trained with Common Crawl and Wikipedia data. We use FastText embeddings instead of other pre-trained word embeddings because it can handle out-of-vocabulary issue. However, representing a sentence by simply taking the average of word embeddings ignores the context information. Therefore, we propose to use a pre-trained language model BERT BIBREF25 to compute the contextualized sentence representation. Specifically, we use a pre-trained BERT to encode a generated sentence and a ground-truth response, and average the output representation of both to obtain the sentence embeddings. We denote such contextualized sentence embedding as $\textbf {EMB}_\textbf {BERT}$.
Experiments ::: Human Evaluation
In the human evaluation, we prepare multiple-choice questions for human evaluators and the answers are the generation results from the five models (Seq2Seq, CVAE, Transformer, GVT, and SVT). we first randomly sample 100 dialogues and their corresponding responses from our models and the baselines. For each response, we assign three human annotators to select the most coherent (on topic) response to the context (multiple answers are allowed). In addition, annotators also need to choose the best response correlated to the given emoji label in Mojitalk and the most engaging response in PersonaChat and Empathetic-Dialogues. If there is no response that satisfies the evaluators, they can choose “all answers are bad", which means none of the answer is chosen. We compute the rate that each model is chosen to quantify generation quality regarding to the human standard.
Results ::: Quantitative Analysis
The automatic evaluation results are shown in Table TABREF35. Transformer-based models have significantly lower perplexity compared to RNN-based models which indicate that the global receptive field performed by multi-head self-attention boost the modeling capacity. However, deterministic Seq2Seq and Transformer models tends to generate generic responses which leads to a low diversity score. Meanwhile incorporating a stochastic latent variable into both models (CVAE and GVT) promote more diverse generation results and boost the diversity scores such as Dist-1, Dist-2, and Dist-3.
Compare to baseline models, the GVT achieves relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation. Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL.
On the other hand, SVT achieves the highest score in terms of two semantic relevance-oriented metrics such as $\textbf {EMB}_\textbf {FT}$ and $\textbf {EMB}_\textbf {BERT}$ in MojiTalk dataset, while in the combined dataset of Persona and ED, we observe performance drop of SVT compare to other models. This is because both Persona and ED are well designed and have lower entropy than MojiTalk which collected from Twitter. We hypothesize that the sequential latent variables have no advantage in term of similarity to single, fixed "gold response" when model low entropy response. Indeed, in open domain dialogue response generation, automatic metric is not always aligned with the human judgement BIBREF28. In contrast, human evaluation result reported in Table TABREF35 demonstrates the generations of SVT are closer to the human standard in terms of coherence, invoked emotion and engagedness.
Results ::: Qualitative Analysis
Table TABREF42 compares the generation of the proposed models with baselines given the same contexts. We observe that the Seq2Seq and vanilla transformer tend to generate generic and repetitive responses (e.g., i am not sure) in MojiTalk due to their deterministic structure fail to capture the variability in dialogue response. By incorporating stochastic latent variables, the CVAE and GVT can generate more diverse responses, but their responses are sometimes digressive (e.g., example 5). Interestingly, GVT and SVT generalize the topic beyong the context which make the dialogue more engaging (e.g., example 4). In general, SVT is able to generate more coherent and informative responses.
Conclusion
This paper introduces the Variational Transformer (VT), a variational self-attentive feed-forward sequence model that combines the global receptive field of a Transformer with the variational nature of a CVAE. We propose two types of the VT: 1) the Global Variational Transformer (GVT) which incorporates a global latent variable as additional input to the transformer decoder; and 2) the Sequential Variational Transformer (SVT) which generates latent variables for each position during decoding process. Quantitative and qualitative experimental results shows that our models outperform baselines in terms of diversity, semantic relevance, and human judgment. In future work, we will utilize the pre-training language models BIBREF30 as the back-bone to strengthen the language model of the VT for better generation. | attention-based sequence-to-sequence model , CVAE |
8740c3000e740ac5c0bc8f329d908309f7ffeff6 | 8740c3000e740ac5c0bc8f329d908309f7ffeff6_0 | Q: What three conversational datasets are used for evaluation?
Text: Introduction
Convolutional and fully-attentional feed-forward architectures, such as Transformers BIBREF0, have emerged as effective alternatives to RNNs BIBREF1 in wide range of NLP tasks. These architectures remove the computational temporal dependency during the training and effectively address the long-standing vanishing gradients problem of recurrent models by processing all inputs simultaneously. Notably, transformers apply a fully attention strategy, where each token in the sequence is informed by other tokens via a self-attention mechanism. It acts as an effectively global receptive field across the whole sequences which absence in RNNs. Despite the powerful modeling capability of trasnformers, they often fail to model one-to-many relation in dialogue response generation tasks BIBREF2 due to their deterministic nature. As a result, they generate dull and generic response (e.g., “I am not sure"), especially with greedy and beam search, which are widely used in other sequence modeling tasks. There have been attempts to generate diverse and informative dialogue responses by incorporating latent variable(s) into the RNN encoder-decoder architecture. In particular BIBREF2 adapt a conditional variational autoencoder (CVAE) to capture discourse-level variations of dialogue, while BIBREF3 and BIBREF4 integrates latent variables in the hidden states of the RNN decoder. However, the inherently sequential computation of aforementioned models limit the efficiency for large scale training.
In this paper, we introduce the Variational Transformer (VT) a variational self-attentive feed-forward sequence model to address the aforementioned issues. The VT combine the parallelizability and global receptive field of the transformer with the variational nature of CVAE by incorporating stochastic latent variables into transformers. We explore two types of VT: 1) Global Variational Transformer (GVT), and 2) Sequential Variational Transformer. The GVT is the extension of CVAE in BIBREF2, which modeling the discourse-level diversity with a global latent variable, While SVT, inspired by variational autoregressive models BIBREF3, BIBREF4, incorporates a sequence of latent variables into decoding process by using a novel variational decoder layer. Unlike previous approaches BIBREF2, BIBREF3, BIBREF4, SVT uses Non-causal Multi-head Attention, which attend to future tokens for computing posterior latent variables instead of using an additional encoder.
The proposed VT architectures integrate stochastic latent variables into Transformers. The experimental results on a three conversation dataset demonstrate that our models can generate more informative and coherent responses.
Related work ::: Neural Conversational Models
Conversational systems has been widely studied BIBREF5, BIBREF6, BIBREF7, BIBREF8. Compare to rule-based systems BIBREF5, BIBREF6, sequence-to-sequence conversation models achieve superior performance in terms of scalable training and generalization ability BIBREF7. However, it has been pointed out that encoder-decoder models tend to generate generic and repetitive responses like “I am sorry" BIBREF9. To address this issue, there have been three main lines of work. The first is adding additional information (e.g., persona) as input to guild model generate more informative responses BIBREF10, BIBREF11. The second modifies the learning objective to promote more diverse generation BIBREF9, and the third integrates stochastic latent variables into Seq2Seq models by using the CVAE framework BIBREF12, BIBREF2. Our work comes within this third line introducing a novel model, the Variational Transformer, to improve dialogue response generation.
Related work ::: Conditional Variational Autoencoders
Many works have attempted to combine CVAEs with encoder-decoder architectures for sequence generation tasks. BIBREF13 propose a variational encoder-decoder model for neural machine translation, while BIBREF14 apply variational recurrent neural networks (VRNN) BIBREF15 for text summarization. BIBREF2 and BIBREF16 explore incorporating meta features into CVAE framework in dialogue response generation tasks. BIBREF3 and BIBREF4 propose variational autoregressive decoders which enhanced by highly multi-modal latent variables to capture the high variability in dialogue responses. BIBREF17 further augment variational autoregressive decoders with dynamic memory networks for improving generation quality. We unify the previous successful ideas of CVAE, and explore the combinations of CVAE and Transformer.
Related work ::: Fully Attentional Networks
Taking advantage of the parallel-in-time structure and global receptive field, Transformers BIBREF0 have recently been shown to achieve impressive results on various sequence modeling tasks. Based on this, several follow-up models have been presented. The Image Transformer BIBREF18 has been proposed for image generation, while the MultiModel BIBREF19 integrates convolution, attention and sparsely-gated mixture-of-expert blocks into a single deep-learning model for simultaneously learning multiple tasks from various domains. BIBREF20 proposed a fully attentional mixture-of-expert model (MoEL) for empathetic dialogue modeling. The Universal Transformer BIBREF1 incorporates the recurrent inductive bias of RNNs into the standard Transformer, and achieves better result on a wide range of algorithmic and language understanding tasks. BIBREF21 introduce the Latent Transformer (LT) for non-autoregressive machine translation. During training, the LT first autoencodes a target sequence into a shorter sequence discrete latent variables. Then a parallel decoder decodes the target using discrete latent variables and an input sequence. Different from the LT BIBREF21, the VT generates continuous latent variables during the decoding process.
Preliminaries ::: Conditional Variational Autoencoder for Dialogue Generation
The CVAE framework BIBREF22 represents a dyadic conversation via three random variables: the input condition $c$, including conversation context and meta features (meta features can be ignored when not available); a latent variable $z$; and the target response $x$. A CVAE can be efficiently trained with Stochastic Gradient Variational Bayes (SGVB) BIBREF23 by maximizing the variational lower bound of $x$ given c, according to:
The typical CVAE consists of a prior network $p_{\theta }(z | c)$, which is used to approximate $p(z | c)$, a recognition network $p_{\phi }(z | c, x)$, which is used to approximate posterior distribution $q(z | c, x)$, and a decoder $p_{\theta }(x | z, c)$, which is used to approximate $p(x | z, c)$. By assuming z follows multivariate Gaussian distribution with a diagonal co-variance matrix, the evidence lower bound (ELBO) can be written as
where $\mathcal {L}_{REC}$ denotes the reconstruction loss and $\mathcal {L}_{KL}$ denotes the Kullback-Leibler (KL) divergence between the posterior and prior.
In dialogue generation tasks, previous works BIBREF2, BIBREF16 apply RNN encoders (with GRU or LSTM cell) to encode dialogue contexts and responses separately. The condition $c$ is represented by the concatenation of the last hidden state of the context encoder and the meta features (e.g., topic, emotion), while the response $x$ is represented by the last hidden state of response encoder. Then the prior network $p_{\theta }(z | c)$ and the recognition network $p_{\phi }(z | c, x)$ parameterized by multi-layer perceptrons (MLPs) are applied to approximate the means and the log variances of the prior latent distribution $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and posterior latent distribution $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. With the reparameterization trick BIBREF23, we can obtain samples of the prior latent variable (for testing) from $\mathcal {N}\left(z ; \mu ^{\prime }, \sigma ^{\prime 2} \mathbf {I}\right)$ and samples of the posterior latent variable (for training) from $\mathcal {N}\left(z ; \mu , \sigma ^{2} \mathbf {I}\right)$. Finally, an RNN decoder use $z$ and $c$ as the initial state to predicts the response $x$.
The vanishing latent variable problem BIBREF24 is a common issue in RNN-based CVAEs. That is, the powerful autoregressive RNN decoder first learns to ignore the latent variable, and decodes the response by only condition on the previous tokens. Thus the latent variable fails to encode the meaningful information, and the CVAE deteriorates to seq2seq model. To alleviate this issue, KL annealing BIBREF24 and bag-of-word loss BIBREF2 have been proposed, and have shown effectiveness in various dialogue tasks BIBREF2, BIBREF16.
Preliminaries ::: CVAE with Transformer
The aforementioned RNN-based CVAE framework integrate the latent variable into the initial state of RNN decoder, while in transformer, it is more flexible to incorporate the latent variable embedding into the first input token of the decoder to generate the initial state.
The overall architecture of GVT is depicted in Figure FIGREF9. Different from RNNs, the Transformer encoder maps an input sequence of symbol representations to a sequence of contextualized representations BIBREF0. In order to get fixed dimension representations of the response and context, we add a special token $CLS$ at the beginning of the input sequence as in BERT BIBREF25, to compute the weighted sum of the output representations via self-attention. Thus the output representation of the token $CLS$ is considered as the representation of the whole sequence. Then we introduce a recognition network and a prior network to compute the posterior latent variable and prior latent variable as in BIBREF2, BIBREF16. We add the latent variable sample $z$ and meta features $m$ (can be ignored when not available) into $e_{SOS}$, the embedding of the start-of-sequence token $SOS$:
Finally, the transformer decoder decodes the response $x$ sequentially while attending to the new embedding $e^{\prime }_{SOS}$ of token $SOS$ with latent information.
This design enhances the CVAE framework with the global receptive field, and each position of the GVT can directly access the latent information via the multi-head self-attention mechanism. However, we still observe that the GVT suffers the vanishing latent variable problem as RNN-based CVAE because the decoder can bypass the latent information by paying less attention to the $SOS$ token. Hence, we apply the KL annealing, and bag-of-word auxiliary loss $\mathcal {L}_{bow}$ as in BIBREF2, BIBREF16 to preserve the useful information of the latent variable. Therefore, the learning objective of the GVT is defined as follows:
Sequential Variational Transformer
In order to augment the capacity of the latent variable with multi-modal distributions and to better utilize the latent information, we further explore incorporating a sequence of latent variables in decoding process. We introduce Sequential Variational Transformer (SVT) with a novel variational decoder layer which generate latent variables for each position: $z=\left(z_{1}, \dots , z_{T}\right)$. Similar to BIBREF3, we interpret the latent variables as a generation plan for the future sequence. Unlike previous CVAE models which use an extra encoder to encode the response separately BIBREF2, BIBREF16 or use a backward RNN to encode the future sequence for each time step BIBREF3, BIBREF4, SVT uses a Non-causal Multi-head Attention which leaks the future information to the recognition network for computing the posterior latent variables.
As shown in Figure FIGREF13, the SVT shares the same encoder as the standard Transformer BIBREF0, while its decoder consists of a variational decoder layer followed by a stack of $N$ standard Transformer decoder layers. The variational decoder layer has two paths for computing the posterior latent variable and prior latent variable respectively. We denote them as Posterior Path and Prior Path.
Sequential Variational Transformer ::: Prior Path
The Prior Path (solid line in Figure FIGREF13) has a masked multi-head self-attention sub-layer which performs causal attention on the shifted response, followed by a multi-head self-attention sub-layer which performs encoder-decoder multi-head attention on the context encoder. The last sub-layer is composed of a MLP prior network which approximates a sequence of prior latent variable for each position, and a Position-wise Feed-Forward Network (FFN) which fuse the latent information $z$ with the observed information representation $o^P$ before the prior network (shown in Figure FIGREF13). Specifically, we concatenate $o^P$ with $z$ as the input to the FNN, and the FNN pass the fused representation to the next layer. Same as BIBREF0, in the variational decoder layer, each sub-layer is followed by a residual connection and layer normalization. That is, the output of each sub-layer is $LayerNorm(x + Sublayer(x))$.
We decompose the response $x$ as $x = \left(x_1, \cdots , x_T\right)$ and the latent variable $z$ as $z=\left(z_{1}, \dots , z_{T}\right)$. The prior model produces latent variables at each position $z_t$ by not only conditioning on the input condition $c$ (the concatenation of context and meta features), but also conditioning on the observed response tokens $x_{1:t-1}$. By assuming $z_t$ follows a multivariate Gaussian distribution, the prior model becomes:
where
Sequential Variational Transformer ::: Posterior Path
The only difference between the Posterior Path (dash line in Figure FIGREF13) and Prior Path is that the mask is removed from the masked multi-head attention. Thus the masked (casual) multi-head attention become non-casual multi-head attention, which allows each position to attend to the subsequent positions. Then, the second multi-head attention sub-layer (shared the same weight with prior path) performs posterior attention on the encoder and passes the posterior observed information $o_R$ to the recognition network. The recognition network produces the posterior latent variable for each position $z_t$ as:
where
During the training, the posterior path guides the learning of prior path via KL divergence constraint:
In the training phase, the posterior latent variables from Equation DISPLAY_FORM17 are passed to the FFN, while in the testing phase the Posterior Path will be blocked and the posterior latent variables will be replaced with the prior latent variables from Equation DISPLAY_FORM15.
During the decoding process, each response token $x_t$ is generated by conditioning on observed response tokens $x_{1:t-1}$, latent variables $z_{1:t}$, and the input condition $c$. The decoding process of the SVT is:
Sequential Variational Transformer ::: Auxiliary Loss
As we expect the latent variables to be a generation plan for the future sequence, we inject such bias into latent variables by using an auxiliary loss: Sequential-Bag-of-Word (SBOW) which proposed by BIBREF4. The idea of the SBOW auxiliary objective is to sequentially predict the bag of succeeding target words $x_{t:T}$ by using latent variable $z_t$. In our case, the succeeding words prediction also leverages the observed information $c$ and $x_{1:t-1}$. Thus the auxiliary loss at each position is computed by:
where $f_{aux}$ is a feed-forward neural network with the softmax output.
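A minimal sketch of how such a loss could be computed over a batch is given below (function and tensor names are ours; the real implementation may differ, e.g. in how padding and averaging are handled):

```python
import torch
import torch.nn.functional as F

def sbow_loss(aux_logits, target_tokens, pad_id=0):
    """Sequential bag-of-words auxiliary loss (sketch).
    aux_logits:    (batch, T, vocab) output of f_aux at every position t,
                   computed from z_t (and, in our case, c and x_{1:t-1}).
    target_tokens: (batch, T) gold response token ids."""
    T = aux_logits.size(1)
    log_probs = F.log_softmax(aux_logits, dim=-1)        # (batch, T, vocab)
    mask = (target_tokens != pad_id).float()              # (batch, T)
    loss = aux_logits.new_zeros(())
    for t in range(T):
        succ = target_tokens[:, t:]                        # succeeding words x_{t:T}
        token_lp = log_probs[:, t, :].gather(1, succ)      # (batch, T - t)
        loss = loss - (token_lp * mask[:, t:]).sum()
    return loss / mask.sum().clamp(min=1.0)
```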
Sequential Variational Transformer ::: Learning
The evidence lower bound (ELBO) objective of SVT is the sum of the reconstruction loss $\mathcal {L}_{REC}(t)$ and Kullback-Leibler divergence loss $\mathcal {L}_{KL}(t)$ at each position:

$$\mathcal {L}_{ELBO} = \sum _{t=1}^{T} \big (\mathcal {L}_{REC}(t) + \mathcal {L}_{KL}(t)\big ).$$
We regularize the ELBO learning objective with an auxiliary loss $\mathcal {L}_{sbow}$ to enhance the expressiveness of the latent variables. Therefore, the final learning objective is formulated as follows:

$$\mathcal {L} = \mathcal {L}_{ELBO} + \mathcal {L}_{sbow},$$

where $\mathcal {L}_{sbow} = \sum _{t=1}^{T} \mathcal {L}_{sbow}(t)$ denotes the SBOW auxiliary loss summed over all positions.
Experiments ::: Dataset
We evaluate the proposed models on three conversational datasets: MojiTalk BIBREF16, PersonaChat BIBREF11, and Empathetic-Dialogues BIBREF26.
Experiments ::: Dataset ::: MojiTalk
dataset consists of 596,959 post and response pairs from Twitter. Each response is labeled by one emoji which indicates the response emotion. There are 64 emoji labels in total with unbalanced distribution. We use the preprocessed data and vocabulary released from BIBREF16 and follow the same split of train/validation/test set.
Experiments ::: Dataset ::: PersonaChat & Empathetic-Dialogues
are one-to-one multi-turn conversation datasets. In PersonaChat (Persona), the conversations revolve around personas, which are established by four to six persona sentences. In Empathetic-Dialogues (ED), the conversations are mostly about a situation that happened to one of the speakers, while the other speaker tries to understand the feeling and reply accordingly. Both datasets are about modeling social skills, and the goal is to be more engaging to the user. Therefore, we combine the train/validation/test sets of the two datasets.
Experiments ::: Baselines
We compare the proposed models with the following baselines:
Experiments ::: Baselines ::: Seq2Seq.
An attention-based sequence-to-sequence model with the emoji vector as additional input, as described in MojiTalk BIBREF16.
Experiments ::: Baselines ::: CVAE.
An RNN-based conditional variational autoencoder for dialogue response generation BIBREF16, which uses a multivariate Gaussian latent variable to model the response and concatenates it with the last hidden state of the encoder as the initial state of the decoder. KL annealing, an early stopping strategy and a bag-of-word auxiliary loss are applied during training. We use the implementation released by BIBREF16.
Experiments ::: Baselines ::: Transformer.
A Transformer BIBREF0 trained with a Maximum Likelihood Estimation (MLE) objective, which can be considered the base model for both the GVT and the SVT.
Experiments ::: Hyper-parameters and Training Setup
We use a 4-layer Transformer as our base model. The hidden size is set to 300 everywhere, and the word embeddings are initialized with the 300-dimensional pre-trained GloVe embeddings for both the encoder and the decoder. The multi-head attention sub-layers are made up of 4 attention heads, each with embedding dimension 64. The size of the latent variable is 300. The recognition network and the prior network are parameterized by 3-layer MLPs with a hidden dimension of 512. Following the training setup of BIBREF16, we first train our baseline Transformer model with the MLE objective and use it to initialize its counterparts in both the GVT and the SVT. Then the models are trained end-to-end with the Adam optimizer with an initial learning rate of $2\times 10^{-4}$. KL annealing and an early stopping strategy are applied as in BIBREF16. At test time, we use a greedy decoding strategy for all models.
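For reference, the reported hyper-parameters can be collected into a single configuration sketch (the key names below are ours, not taken from any released code):

```python
# Values are those reported above; key names are illustrative.
config = {
    "num_layers": 4,             # 4-layer Transformer base model
    "hidden_size": 300,          # used everywhere, matches the GloVe dimension
    "word_embedding": "glove-300d",
    "num_heads": 4,
    "head_dim": 64,
    "latent_size": 300,
    "latent_mlp_layers": 3,      # recognition and prior networks
    "latent_mlp_hidden": 512,
    "optimizer": "adam",
    "learning_rate": 2e-4,
    "kl_annealing": True,
    "early_stopping": True,
    "decoding": "greedy",
}
```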
Experiments ::: Automatic Evaluation ::: PPL & KLD.
The evaluation metrics include Perplexity (PPL) and the Kullback-Leibler divergence between the posterior and the prior (KLD). A well-trained model should achieve a low reconstruction loss and a small but non-trivial KL distance BIBREF27.
Experiments ::: Automatic Evaluation ::: Diversity.
To measure the generation diversity, we calculate Dist-1, Dist-2, and Dist-3, the ratio of the number of distinct n-grams (unigrams, bigrams, and trigrams) over the total number of n-grams. A higher distinct n-grams ratio indicates more diverse generation.
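A straightforward way to compute these ratios is sketched below (the tokenization of the generated responses is assumed to be given):

```python
from collections import Counter

def distinct_n(sentences, n):
    """Dist-n: ratio of distinct n-grams to total n-grams over all
    generated responses (each response is a list of tokens)."""
    ngrams = Counter()
    for tokens in sentences:
        for i in range(len(tokens) - n + 1):
            ngrams[tuple(tokens[i:i + n])] += 1
    total = sum(ngrams.values())
    return len(ngrams) / total if total > 0 else 0.0

# Example usage over model outputs:
# outputs = [["i", "am", "fine"], ["that", "sounds", "great"]]
# scores = {f"dist-{n}": distinct_n(outputs, n) for n in (1, 2, 3)}
```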
Experiments ::: Automatic Evaluation ::: Embeddings Similarity.
This metric computes the cosine similarity between the sentence embedding of a generated sequence and that of a ground-truth response. In our experiments, we introduce two different ways to represent sentence embeddings. The first is $\textbf {EMB}_\textbf {FT}$ BIBREF28 that calculates the average of word embeddings in a sentence using FastText BIBREF29 which is trained with Common Crawl and Wikipedia data. We use FastText embeddings instead of other pre-trained word embeddings because it can handle out-of-vocabulary issue. However, representing a sentence by simply taking the average of word embeddings ignores the context information. Therefore, we propose to use a pre-trained language model BERT BIBREF25 to compute the contextualized sentence representation. Specifically, we use a pre-trained BERT to encode a generated sentence and a ground-truth response, and average the output representation of both to obtain the sentence embeddings. We denote such contextualized sentence embedding as $\textbf {EMB}_\textbf {BERT}$.
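The $\textbf {EMB}_\textbf {FT}$ variant can be sketched as follows; `word_vectors` is assumed to be a token-to-vector lookup built from the 300-dimensional FastText model, and FastText's subword handling of out-of-vocabulary tokens is simplified away (OOV tokens are skipped). $\textbf {EMB}_\textbf {BERT}$ replaces `sent_vec` with the average of a pre-trained BERT's output representations.

```python
import numpy as np

def emb_ft_similarity(generated, reference, word_vectors, dim=300):
    """Cosine similarity between the averaged word vectors of a generated
    response and of the ground-truth response (a sketch of EMB_FT)."""
    def sent_vec(tokens):
        vecs = [word_vectors[t] for t in tokens if t in word_vectors]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

    g, r = sent_vec(generated), sent_vec(reference)
    denom = np.linalg.norm(g) * np.linalg.norm(r)
    return float(np.dot(g, r) / denom) if denom > 0 else 0.0
```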
Experiments ::: Human Evaluation
In the human evaluation, we prepare multiple-choice questions for human evaluators, and the answers are the generation results from the five models (Seq2Seq, CVAE, Transformer, GVT, and SVT). We first randomly sample 100 dialogues and their corresponding responses from our models and the baselines. For each response, we assign three human annotators to select the most coherent (on-topic) response to the context (multiple answers are allowed). In addition, annotators also need to choose the best response correlated to the given emoji label in MojiTalk and the most engaging response in PersonaChat and Empathetic-Dialogues. If there is no response that satisfies the evaluators, they can choose "all answers are bad", which means none of the answers is chosen. We compute the rate at which each model is chosen to quantify generation quality with respect to the human standard.
Results ::: Quantitative Analysis
The automatic evaluation results are shown in Table TABREF35. Transformer-based models have significantly lower perplexity than RNN-based models, which indicates that the global receptive field provided by multi-head self-attention boosts the modeling capacity. However, the deterministic Seq2Seq and Transformer models tend to generate generic responses, which leads to low diversity scores. Meanwhile, incorporating a stochastic latent variable into both models (CVAE and GVT) promotes more diverse generation results and boosts the diversity scores such as Dist-1, Dist-2, and Dist-3.
Compared to the baseline models, the GVT achieves a relatively lower reconstruction PPL, which suggests that the global latent variable contains rich latent information (e.g., topic) for response generation. Meanwhile, the sequential latent variables of the SVT encode fine-grained latent information and further improve the reconstruction PPL.
On the other hand, the SVT achieves the highest scores in terms of the two semantic relevance-oriented metrics, $\textbf {EMB}_\textbf {FT}$ and $\textbf {EMB}_\textbf {BERT}$, on the MojiTalk dataset, while on the combined dataset of Persona and ED we observe a performance drop of the SVT compared to the other models. This is because both Persona and ED are well designed and have lower entropy than MojiTalk, which was collected from Twitter. We hypothesize that the sequential latent variables have no advantage in terms of similarity to a single, fixed "gold response" when modeling low-entropy responses. Indeed, in open-domain dialogue response generation, automatic metrics are not always aligned with human judgment BIBREF28. In contrast, the human evaluation results reported in Table TABREF35 demonstrate that the generations of the SVT are closer to the human standard in terms of coherence, invoked emotion and engagedness.
Results ::: Qualitative Analysis
Table TABREF42 compares the generations of the proposed models with the baselines given the same contexts. We observe that Seq2Seq and the vanilla Transformer tend to generate generic and repetitive responses (e.g., "i am not sure") in MojiTalk, because their deterministic structure fails to capture the variability in dialogue responses. By incorporating stochastic latent variables, the CVAE and GVT can generate more diverse responses, but their responses are sometimes digressive (e.g., example 5). Interestingly, the GVT and SVT generalize the topic beyond the context, which makes the dialogue more engaging (e.g., example 4). In general, the SVT is able to generate more coherent and informative responses.
Conclusion
This paper introduces the Variational Transformer (VT), a variational self-attentive feed-forward sequence model that combines the global receptive field of a Transformer with the variational nature of a CVAE. We propose two types of the VT: 1) the Global Variational Transformer (GVT), which incorporates a global latent variable as additional input to the Transformer decoder; and 2) the Sequential Variational Transformer (SVT), which generates latent variables for each position during the decoding process. Quantitative and qualitative experimental results show that our models outperform the baselines in terms of diversity, semantic relevance, and human judgment. In future work, we will utilize pre-trained language models BIBREF30 as the backbone to strengthen the language model of the VT for better generation. | MojiTalk , PersonaChat , Empathetic-Dialogues |
7772cb23b7609f1d4cfd6511ac3fcdc20f8481ba | 7772cb23b7609f1d4cfd6511ac3fcdc20f8481ba_0 | Q: What previous approaches did this method outperform?
Text: Introduction
Recently, a novel way of computing word embeddings has been proposed. Instead of computing one word embedding for each word which sums over all its occurrences, ignoring the appropriate word meaning in various contexts, the contextualized embeddings are computed for each word occurrence, taking into account the whole sentence. Three ways of computing such contextualized embeddings have been proposed: ELMo BIBREF0, BERT BIBREF1 and Flair BIBREF2, along with precomputed models.
Peters et al. (2018) BIBREF0 obtain the proposed embeddings, called ELMo, from internal states of a deep bidirectional language model, pretrained on a large corpus. Akbik et al. (2018) BIBREF2 introduced Flair, contextualized word embeddings obtained from internal states of a character-level bidirectional language model, thus significantly increasing state of the art of POS tagging, chunking and NER tasks. Last, but not least, Devlin et al. (2018) BIBREF1 employ a Transformer BIBREF3 to compute contextualized embeddings from preceding and following context at the same time, at the cost of increased processing costs. The new BERT embeddings achieved state-of-the-art results in eleven natural language tasks.
Using two of these methods, for which precomputed models for Czech are available, namely BERT and Flair, we present our models for four NLP tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). Adding the contextualized embeddings as optional inputs in strong artificial neural network baselines, we report state-of-the-art results in these four tasks.
Related Work
As for the Prague Dependency Treebank (PDT) BIBREF4, most of the previous works are non-neural systems, with one exception of BIBREF5 who hold the state of the art for Czech POS tagging and lemmatization, achieved with the recurrent neural network (RNN) using end-to-end trainable word embeddings and character-level word embeddings. Otherwise, Spoustová et al. (2009) BIBREF6 used an averaged perceptron for POS tagging. For parsing the PDT, Holan and Žabokrtský (2006) BIBREF7 and Novák and Žabokrtský (2007) BIBREF8 used a combination of non-neural parsing techniques.
In the multilingual shared task CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9, raw text is processed and the POS tagging, lemmatization and dependency parsing are evaluated on the Universal Dependencies (UD) BIBREF10. Czech is one of the 57 evaluated languages. Interestingly, all 26 participant systems employed the artificial neural networks in some way. Of these, 3 participant systems used (a slightly modified variant of) the only newly presented contextualized embeddings called ELMo BIBREF0, most notably one of the shared task winners BIBREF11. BERT and Flair were not available at the time.
For the Czech NER, Straková et al. (2016) BIBREF12 use an artificial neural network with word- and character-level word embeddings to perform NER on the Czech Named Entity Corpus (CNEC) BIBREF13, BIBREF14, BIBREF15.
Datasets ::: Prague Dependency Treebank 3.5
The Prague Dependency Treebank 3.5 BIBREF4 is a 2018 edition of the core Prague Dependency Treebank. The Prague Dependency Treebank 3.5 contains the same texts as the previous versions since 2.0, and is divided into train, dtest, and etest subparts, where dtest is used as a development set and etest as a test set. The dataset consists of several layers – the morphological m-layer is the largest and contains morphological annotations (POS tags and lemmas), the analytical a-layer contains labeled dependency trees, and the t-layer is the smallest and contains tectogrammatical trees. The statistics of PDT 3.5 sizes is presented in Table TABREF7.
A detailed description of the morphological system can be found in BIBREF16, a specification of the syntactic annotations has been presented in BIBREF17. We note that in PDT, lemmas with the same word form are disambiguated using a number suffix – for example, English lemmas for the word forms can (noun) and can (verb) would be annotated as can-1 and can-2.
In evaluation, we compute:
POS tagging accuracy,
lemmatization accuracy,
unlabeled attachment score (UAS),
labeled attachment score (LAS).
Datasets ::: Universal Dependencies
The Universal Dependencies project BIBREF10 seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. We evaluate the Czech PDT treebank of UD 2.3 BIBREF18, which is an automated conversion of PDT 3.5 a-layer to Universal Dependencies annotation. The original POS tags are used to generate UPOS (universal POS tags), XPOS (language-specific POS tags, in this case the original PDT tags), and Feats (universal morphological features). The UD lemmas are the raw textual lemmas, so the discriminative numeric suffix of PDT is dropped. The dependency trees are converted according to the UD guidelines, adapting both the unlabeled trees and the dependency labels.
To compute the evaluation scores, we use the official CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 evaluation script, which produces the following metrics:
UPOS – universal POS tags accuracy,
XPOS – language-specific POS tags accuracy,
UFeats – universal subset of morphological features accuracy,
Lemmas – lemmatization accuracy,
UAS – unlabeled attachment score, LAS – labeled attachment score,
MLAS – morphology-aware LAS, BLEX – bi-lexical dependency score.
Datasets ::: Czech Named Entity Corpus
The Czech Named Entity Corpus 1.1 BIBREF13, BIBREF14 is a corpus of $5\,868$ Czech sentences with manually annotated $33\,662$ Czech named entities, classified according to a two-level hierarchy of 62 named entities.
The Czech Named Entity Corpus 2.0 BIBREF15 contains $8\,993$ Czech sentences with manually annotated $35\,220$ Czech named entities, classified according to a two-level hierarchy of 46 named entities.
We evaluate the NER task with the official CNEC evaluation script. Similarly to previous literature BIBREF13, BIBREF12 etc., the script only evaluates the first round annotation classes for the CNEC 1.1. For the CNEC 2.0, the script evaluates all annotated classes.
Neural Architectures
All our neural architectures are recurrent neural networks (RNNs). The POS tagging, lemmatization and dependency parsing is performed with the UDPipe 2.0 (Section SECREF16) and NER is performed with our new sequence-to-sequence model (Section SECREF36).
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing
We perform POS tagging, lemmatization and dependency parsing using UDPipe 2.0 BIBREF19, one of the three winning systems of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 and an overall winner of The 2018 Shared Task on Extrinsic Parser Evaluation BIBREF20. An overview of this architecture is presented in Figure FIGREF17 and the full details of the architecture and the training procedure are available in BIBREF19.
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: POS Tagging and Lemmatization
The tagger employs a standard bi-LSTM architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are performed, followed by softmax output layers for POS tags and lemmas. While a classification output layer is natural for POS tags, we also apply it to lemmatization and generate lemmas by classifying the input words into lemma generation rules, therefore considering lemmatization as another tagging task.
We construct a lemma generation rule from a given form and lemma as follows:
We start by finding the longest continuous substring of the form and the lemma. If it is empty, we use the lemma itself as the class.
If there is a common substring of the form and the lemma, we compute the shortest edit script converting the prefix of the form into the prefix of the lemma, and the shortest edit script converting the suffix of the form to the suffix of the lemma. The edit scripts permit the operations delete_current_char and insert_char(c).
All above operations are performed case insensitively. To indicate correct casing of the lemma, we consider the lemma to be a concatenation of segments, where each segment is composed of either a sequence of lowercase characters, or a sequence of uppercase characters. We represent the lemma casing by encoding the beginning of every such segment, where the offsets in the first half of the lemma are computed relatively to the start of the lemma, and the offsets in the second half of the lemma are computed relatively to the end of the lemma.
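As an illustration of the rule construction described in the steps above, here is a simplified sketch: it approximates the shortest edit scripts by a delete-then-insert rewrite of the prefix and suffix, omits the casing encoding, and uses a purely illustrative rule-string format.

```python
from difflib import SequenceMatcher

def lemma_rule(form: str, lemma: str) -> str:
    """Build a (simplified) lemma generation rule from a form and its lemma."""
    f, l = form.lower(), lemma.lower()
    match = SequenceMatcher(None, f, l).find_longest_match(0, len(f), 0, len(l))
    if match.size == 0:
        return "absolute:" + lemma           # no common substring: store the lemma

    # Prefix/suffix rewrites: delete k characters of the form, insert the
    # corresponding characters of the lemma (a crude stand-in for the
    # shortest edit scripts used by the actual system).
    prefix_rule = f"d{match.a}+{l[:match.b]}"
    suffix_rule = f"d{len(f) - match.a - match.size}+{l[match.b + match.size:]}"
    return f"relative:{prefix_rule},{suffix_rule}"

# e.g. lemma_rule("doing", "do") -> "relative:d0+,d3+"
```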
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: Dependency Parsing
The dependency parsing is again predicted using UDPipe 2.0 architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are again performed, followed by a biaffine attention layer BIBREF22 producing labeled dependency trees.
In our evaluation we do not utilize gold POS tags and lemmas on the test set for dependency parsing. Instead, we consider three ways of employing them during parsing:
not using them at all;
adding predicted POS tags and lemmas on input;
performing joint training of POS tags, lemmatization, and dependency parsing. In this case, we share the first two bidirectional LSTM layers between the tagger and the parser.
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: Input Embeddings
In our baseline model, we use the end-to-end word embeddings and also character-level word embeddings (bidirectional GRUs, BIBREF23, BIBREF24, BIBREF25 of dimension 256) trained specifically for the task.
Our architecture can optionally employ the following additional inputs
pretrained word embeddings (WE): For the PDT experiments, we generate the word embeddings with word2vec on a concatenation of large raw Czech corpora available from the LINDAT/CLARIN repository. For UD Czech, we use FastText word embeddings BIBREF27 of dimension 300, which we pretrain on Czech Wikipedia using segmentation and tokenization trained from the UD data.
BERT BIBREF1: Pretrained contextual word embeddings of dimension 768 from the Base model. We average the last four layers of the BERT model to produce the embeddings. Because BERT utilizes word pieces, we decompose UD words into appropriate subwords and then average the generated embeddings over subwords belonging to the same word.
Flair BIBREF2: Pretrained contextual word embeddings of dimension 4096.
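The subword handling of the BERT embeddings described above can be sketched as follows, assuming a HuggingFace-style interface (the concrete model name is our assumption):

```python
import torch
from transformers import AutoTokenizer, AutoModel

def bert_word_embeddings(words, model_name="bert-base-multilingual-uncased"):
    """Average of the last four BERT layers, then averaged over the word
    pieces belonging to each input word (returns one 768-dim vector per word)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        layers = model(**enc, output_hidden_states=True).hidden_states
    subword_emb = torch.stack(layers[-4:]).mean(0).squeeze(0)   # (pieces, 768)

    word_ids = enc.word_ids()                # maps each word piece to its word
    word_emb = []
    for w in range(len(words)):
        idx = [i for i, wid in enumerate(word_ids) if wid == w]
        word_emb.append(subword_emb[idx].mean(0))
    return torch.stack(word_emb)             # (len(words), 768)
```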
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: POS Tags and Lemmas Decoding
Optionally, we employ a morphological dictionary MorfFlex BIBREF28 during decoding. If the morphological dictionary is used, it may produce analyses for an input word as (POS tag, lemma) pairs. If any are generated, we choose the pair with maximum likelihood given by both the POS tag and lemmatization model.
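A sketch of this dictionary-constrained decoding step is given below; the data structures are illustrative, and the probabilities are assumed to come from the tagger's and lemmatizer's softmax outputs.

```python
def dict_constrained_prediction(word, tag_probs, lemma_probs, morph_dict):
    """If the dictionary offers (POS tag, lemma) analyses for the word, pick
    the pair maximizing the joint likelihood under the two prediction heads;
    otherwise fall back to the unconstrained argmax of each head."""
    analyses = morph_dict.get(word)          # list of (tag, lemma) pairs or None
    if not analyses:
        return max(tag_probs, key=tag_probs.get), max(lemma_probs, key=lemma_probs.get)
    return max(analyses,
               key=lambda a: tag_probs.get(a[0], 0.0) * lemma_probs.get(a[1], 0.0))
```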
Neural Architectures ::: Named Entity Recognition
We use a novel approach BIBREF29 for nested named entity recognition (NER) to capture the nested entities in the Czech Named Entity Corpus. The nested entities are encoded in a sequence and the problem of nested NER is then viewed as a sequence-to-sequence (seq2seq) problem, in which the input sequence consists of the input tokens (forms) and the output sequence of the linearized entity labels.
The system is an encoder-decoder architecture. The encoder is a bi-directional LSTM and the decoder is an LSTM. The encoded labels are predicted one by one by the decoder, until the decoder outputs the "<eow>" (end of word) label and moves to the next token. We use a hard attention on the word whose label(s) is being predicted.
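One plausible linearization consistent with this description is sketched below; the exact label scheme of the cited system may differ, so the encoding, label names, and example are illustrative only.

```python
def linearize_nested_labels(tokens, entities):
    """Encode nested entities as a per-token label sequence: for every token,
    emit the labels of all entity spans covering it, then close with "<eow>".
    entities: list of (start, end, label) spans, end exclusive."""
    output = []
    for i, _tok in enumerate(tokens):
        for start, end, label in entities:
            if start <= i < end:
                output.append(("B-" if i == start else "I-") + label)
        output.append("<eow>")
    return output

# tokens   = ["Univerzita", "Karlova", "v", "Praze"]
# entities = [(0, 4, "io"), (3, 4, "gu")]    # an institution containing a city
# -> ["B-io", "<eow>", "I-io", "<eow>", "I-io", "<eow>", "I-io", "B-gu", "<eow>"]
```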
We train the network using the lazy variant of the Adam optimizer BIBREF30, which only updates accumulators for variables that appear in the current batch, with parameters $\beta _1=0.9$ and $\beta _2=0.98$. We use mini-batches of size 8. As a regularization, we apply dropout with rate $0.5$ and the word dropout replaces $20\%$ of words by the unknown token to force the network to rely more on context. We did not perform any complex hyperparameter search.
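The word-dropout regularization mentioned above amounts to something like the following sketch:

```python
import random

def word_dropout(token_ids, unk_id, rate=0.2):
    """Replace each input token id by the unknown-token id with probability
    `rate`, forcing the network to rely more on context."""
    return [unk_id if random.random() < rate else t for t in token_ids]
```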
In this model, we use the following word- and character-level word embeddings:
pretrained word embeddings: We use the FastText BIBREF27 word embeddings of dimension 300 from the publicly available Czech model.
end-to-end word embeddings: We embed the input forms and lemmas (256 dimensions) and POS tags (one-hot).
end-to-end character-level word embeddings: We use bidirectional GRUs BIBREF23, BIBREF24 of dimension 128 in line with BIBREF25: we represent every Unicode character with a vector of dimension 128, and concatenate GRU outputs for forward and reversed word characters.
Optionally, we add the BERT BIBREF1 and the Flair BIBREF2 contextualized embeddings in the same way as in the UDPipe 2.0 (Section SECREF16).
Results ::: POS Tagging and Lemmatization on PDT 3.5
The POS tagging and lemmatization results are presented in Table TABREF44. The word2vec word embeddings (WE) considerably increase performance compared to the baseline, especially in POS tagging. When only Flair embeddings are added to the baseline, we also observe an improvement, but not as high. We hypothesise that the lower performance (in contrast with the results reported in BIBREF2) is caused by the size of the training data, because we train the word2vec WE on considerably larger dataset than the Czech Flair model. However, when WE and Flair embeddings are combined, performance moderately increases, demonstrating that the two embedding methods produce at least partially complementary representations.
The BERT embeddings alone bring highest improvement in performance. Furthermore, combination with WE or Flair again yields performance increase. The best results are achieved by exploiting all three embedding methods, substantially exceeding state-of-the-art results.
Utilization of the morphological dictionary improves prediction accuracy. However, as the performance of a model itself increases, the gains obtained by the morphological dictionary diminish – for a model without any pretrained embeddings, the morphological dictionary improves POS tagging and lemmatization by $0.43\%$ and $0.45\%$, respectively, while the best performing model gains only $0.11\%$ and $0.23\%$.
Results ::: Dependency Parsing on PDT 3.5
The evaluation of the contextualized embeddings methods as well as various ways of POS tag utilization is presented in Table TABREF44. Without POS tags and lemmas, the Flair embeddings bring only a slight improvement in dependency parsing when added to WE. In contrast, BERT embeddings employment results in substantial gains, increasing UAS and LAS by 1.6% and 2.1%. A combination of BERT and Flair embeddings does not result in any performance improvement, demonstrating that BERT syntactic representations encompass the Flair embeddings.
When introducing POS tags and lemmas predicted by the best model from Section SECREF43 as inputs for dependency parsing, the performance increases only slightly. A better way of POS tags and lemmas exploitation is achieved in a joint model, which predicts POS tags, lemmas, and dependency trees simultaneously. Again, BERT embeddings bring significant improvements, but in contrast to syntax parsing only, adding Flair embeddings to BERT results in moderate gain – we hypothesise that the increase is due to the complementary morphological information present in Flair embeddings (cf. Section SECREF43). Note that the joint model achieves better parsing accuracy than the one given gold POS tags and lemmas on input. However, the POS tags and lemmas predicted by the joint model are of slightly lower quality compared to a standalone tagger of the best configuration from Section SECREF43.
Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation). To our best knowledge, research on PDT parsing was performed mostly in the first decade of this century, therefore even our baseline model substantially surpasses previous works. Our best model with contextualized embeddings achieves nearly 50% error reduction both in UAS and LAS.
Results ::: POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies
Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank. This treebank is derived from PDT 3.5 a-layer, with original POS tags kept in XPOS, and the dependency trees and lemmas modified according to UD guidelines.
We observe that the word2vec WEs perform similarly to Flair embeddings in this setting. Our hypothesis is that the word2vec WEs performance loss (compared to WEs in Section SECREF43) is caused by using a considerably smaller raw corpus to pretrain the WEs (Czech Wikipedia with 785M words, compared to 4G words used in Section SECREF43), due to licensing reasons. BERT embeddings once more deliver the highest improvement, especially in dependency parsing, and our best model employs all three embedding methods.
In the previous ablation experiments, we used the gold segmentation and tokenization in the Czech PDT UD 2.3 treebank. For comparison with state of the art, Czech PDT UD 2.2 treebank without gold segmentation and tokenization is used in evaluation, according to the CoNLL 2018 shared task training and evaluation protocol. Our system reuses segmentation and tokenization produced by UDPipe 2.0 in the CoNLL 2018 shared task and surpasses previous works substantially in all metrics (bottom part of Table TABREF47).
Comparing the results with a joint tagging and parsing PDT 3.5 model from Table TABREF7, we observe that the XPOS results are nearly identical as expected. Lemmatization on the UD treebank is performed without the discriminative numeric suffixes (see Section SECREF3) and therefore reaches better performance. Both UAS and LAS are also better on the UD treebank, which we assume is caused by the different annotation scheme.
Results ::: Named Entity Recognition
Table TABREF47 shows NER results (F1 score) on CNEC 1.1 and CNEC 2.0. Our sequence-to-sequence (seq2seq) model, which captures the nested entities, clearly surpasses the current Czech NER state of the art. Furthermore, a significant improvement is gained when adding the contextualized word embeddings (BERT and Flair) as optional input to the LSTM encoder. The strongest model is a combination of the sequence-to-sequence architecture with both BERT and Flair contextual word embeddings.
Conclusion
We have presented an evaluation of two contextualized embeddings methods, namely BERT and Flair. By utilizing these embeddings as input to deep neural networks, we have achieved state-of-the-art results in several Czech text processing tasks, namely in POS tagging, lemmatization, dependency parsing and named entity recognition.
Acknowledgements
The work described herein has been supported by OP VVV VI LINDAT/CLARIN project (CZ.02.1.01/0.0/0.0/16_013/0001781) and it has been supported and has been using language resources developed by the LINDAT/CLARIN project (LM2015071) of the Ministry of Education, Youth and Sports of the Czech Republic. | Table TABREF44, Table TABREF44, Table TABREF47, Table TABREF47 |
acf278679c584ae4f332f6134711602af26edfb4 | acf278679c584ae4f332f6134711602af26edfb4_0 | Q: How big is the Universal Dependencies corpus?
Text: Introduction
Recently, a novel way of computing word embeddings has been proposed. Instead of computing one word embedding for each word which sums over all its occurrences, ignoring the appropriate word meaning in various contexts, the contextualized embeddings are computed for each word occurrence, taking into account the whole sentence. Three ways of computing such contextualized embeddings have been proposed: ELMo BIBREF0, BERT BIBREF1 and Flair BIBREF2, along with precomputed models.
Peters et al. (2018) BIBREF0 obtain the proposed embeddings, called ELMo, from internal states of a deep bidirectional language model, pretrained on a large corpus. Akbik et al. (2018) BIBREF2 introduced Flair, contextualized word embeddings obtained from internal states of a character-level bidirectional language model, thus significantly increasing state of the art of POS tagging, chunking and NER tasks. Last, but not least, Devlin et al. (2018) BIBREF1 employ a Transformer BIBREF3 to compute contextualized embeddings from preceding and following context at the same time, at the cost of increased processing costs. The new BERT embeddings achieved state-of-the-art results in eleven natural language tasks.
Using two of these methods, for which precomputed models for Czech are available, namely BERT and Flair, we present our models for four NLP tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). Adding the contextualized embeddings as optional inputs in strong artificial neural network baselines, we report state-of-the-art results in these four tasks.
Related Work
As for the Prague Dependency Treebank (PDT) BIBREF4, most of the previous works are non-neural systems, with one exception of BIBREF5 who hold the state of the art for Czech POS tagging and lemmatization, achieved with the recurrent neural network (RNN) using end-to-end trainable word embeddings and character-level word embeddings. Otherwise, Spoustová et al. (2009) BIBREF6 used an averaged perceptron for POS tagging. For parsing the PDT, Holan and Žabokrtský (2006) BIBREF7 and Novák and Žabokrtský (2007) BIBREF8 used a combination of non-neural parsing techniques.
In the multilingual shared task CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9, raw text is processed and the POS tagging, lemmatization and dependency parsing are evaluated on the Universal Dependencies (UD) BIBREF10. Czech is one of the 57 evaluated languages. Interestingly, all 26 participant systems employed the artificial neural networks in some way. Of these, 3 participant systems used (a slightly modified variant of) the only newly presented contextualized embeddings called ELMo BIBREF0, most notably one of the shared task winners BIBREF11. BERT and Flair were not available at the time.
For the Czech NER, Straková et al. (2016) BIBREF12 use an artificial neural network with word- and character-level word embeddings to perform NER on the Czech Named Entity Corpus (CNEC) BIBREF13, BIBREF14, BIBREF15.
Datasets ::: Prague Dependency Treebank 3.5
The Prague Dependency Treebank 3.5 BIBREF4 is a 2018 edition of the core Prague Dependency Treebank. The Prague Dependency Treebank 3.5 contains the same texts as the previous versions since 2.0, and is divided into train, dtest, and etest subparts, where dtest is used as a development set and etest as a test set. The dataset consists of several layers – the morphological m-layer is the largest and contains morphological annotations (POS tags and lemmas), the analytical a-layer contains labeled dependency trees, and the t-layer is the smallest and contains tectogrammatical trees. The statistics of PDT 3.5 sizes is presented in Table TABREF7.
A detailed description of the morphological system can be found in BIBREF16, a specification of the syntactic annotations has been presented in BIBREF17. We note that in PDT, lemmas with the same word form are disambiguated using a number suffix – for example, English lemmas for the word forms can (noun) and can (verb) would be annotated as can-1 and can-2.
In evaluation, we compute:
POS tagging accuracy,
lemmatization accuracy,
unlabeled attachment score (UAS),
labeled attachment score (LAS).
Datasets ::: Universal Dependencies
The Universal Dependencies project BIBREF10 seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. We evaluate the Czech PDT treebank of UD 2.3 BIBREF18, which is an automated conversion of PDT 3.5 a-layer to Universal Dependencies annotation. The original POS tags are used to generate UPOS (universal POS tags), XPOS (language-specific POS tags, in this case the original PDT tags), and Feats (universal morphological features). The UD lemmas are the raw textual lemmas, so the discriminative numeric suffix of PDT is dropped. The dependency trees are converted according to the UD guidelines, adapting both the unlabeled trees and the dependency labels.
To compute the evaluation scores, we use the official CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 evaluation script, which produces the following metrics:
UPOS – universal POS tags accuracy,
XPOS – language-specific POS tags accuracy,
UFeats – universal subset of morphological features accuracy,
Lemmas – lemmatization accuracy,
UAS – unlabeled attachment score, LAS – labeled attachment score,
MLAS – morphology-aware LAS, BLEX – bi-lexical dependency score.
Datasets ::: Czech Named Entity Corpus
The Czech Named Entity Corpus 1.1 BIBREF13, BIBREF14 is a corpus of $5\,868$ Czech sentences with manually annotated $33\,662$ Czech named entities, classified according to a two-level hierarchy of 62 named entities.
The Czech Named Entity Corpus 2.0 BIBREF15 contains $8\,993$ Czech sentences with manually annotated $35\,220$ Czech named entities, classified according to a two-level hierarchy of 46 named entities.
We evaluate the NER task with the official CNEC evaluation script. Similarly to previous literature BIBREF13, BIBREF12 etc., the script only evaluates the first round annotation classes for the CNEC 1.1. For the CNEC 2.0, the script evaluates all annotated classes.
Neural Architectures
All our neural architectures are recurrent neural networks (RNNs). The POS tagging, lemmatization and dependency parsing is performed with the UDPipe 2.0 (Section SECREF16) and NER is performed with our new sequence-to-sequence model (Section SECREF36).
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing
We perform POS tagging, lemmatization and dependency parsing using UDPipe 2.0 BIBREF19, one of the three winning systems of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 and an overall winner of The 2018 Shared Task on Extrinsic Parser Evaluation BIBREF20. An overview of this architecture is presented in Figure FIGREF17 and the full details of the architecture and the training procedure are available in BIBREF19.
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: POS Tagging and Lemmatization
The tagger employs a standard bi-LSTM architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are performed, followed by softmax output layers for POS tags and lemmas. While a classification output layer is natural for POS tags, we also apply it to lemmatization and generate lemmas by classifying the input words into lemma generation rules, therefore considering lemmatization as another tagging task.
We construct a lemma generation rule from a given form and lemma as follows:
We start by finding the longest continuous substring of the form and the lemma. If it is empty, we use the lemma itself as the class.
If there is a common substring of the form and the lemma, we compute the shortest edit script converting the prefix of the form into the prefix of the lemma, and the shortest edit script converting the suffix of the form to the suffix of the lemma. The edit scripts permit the operations delete_current_char and insert_char(c).
All above operations are performed case insensitively. To indicate correct casing of the lemma, we consider the lemma to be a concatenation of segments, where each segment is composed of either a sequence of lowercase characters, or a sequence of uppercase characters. We represent the lemma casing by encoding the beginning of every such segment, where the offsets in the first half of the lemma are computed relatively to the start of the lemma, and the offsets in the second half of the lemma are computed relatively to the end of the lemma.
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: Dependency Parsing
The dependency parsing is again predicted using UDPipe 2.0 architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are again performed, followed by a biaffine attention layer BIBREF22 producing labeled dependency trees.
In our evaluation we do not utilize gold POS tags and lemmas on the test set for dependency parsing. Instead, we consider three ways of employing them during parsing:
not using them at all;
adding predicted POS tags and lemmas on input;
performing joint training of POS tags, lemmatization, and dependency parsing. In this case, we share the first two bidirectional LSTM layers between the tagger and the parser.
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: Input Embeddings
In our baseline model, we use the end-to-end word embeddings and also character-level word embeddings (bidirectional GRUs, BIBREF23, BIBREF24, BIBREF25 of dimension 256) trained specifically for the task.
Our architecture can optionally employ the following additional inputs
pretrained word embeddings (WE): For the PDT experiments, we generate the word embeddings with word2vec on a concatenation of large raw Czech corpora available from the LINDAT/CLARIN repository. For UD Czech, we use FastText word embeddings BIBREF27 of dimension 300, which we pretrain on Czech Wikipedia using segmentation and tokenization trained from the UD data.
BERT BIBREF1: Pretrained contextual word embeddings of dimension 768 from the Base model. We average the last four layers of the BERT model to produce the embeddings. Because BERT utilizes word pieces, we decompose UD words into appropriate subwords and then average the generated embeddings over subwords belonging to the same word.
Flair BIBREF2: Pretrained contextual word embeddings of dimension 4096.
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: POS Tags and Lemmas Decoding
Optionally, we employ a morphological dictionary MorfFlex BIBREF28 during decoding. If the morphological dictionary is used, it may produce analyses for an input word as (POS tag, lemma) pairs. If any are generated, we choose the pair with maximum likelihood given by both the POS tag and lemmatization model.
Neural Architectures ::: Named Entity Recognition
We use a novel approach BIBREF29 for nested named entity recognition (NER) to capture the nested entities in the Czech Named Entity Corpus. The nested entities are encoded in a sequence and the problem of nested NER is then viewed as a sequence-to-sequence (seq2seq) problem, in which the input sequence consists of the input tokens (forms) and the output sequence of the linearized entity labels.
The system is an encoder-decoder architecture. The encoder is a bi-directional LSTM and the decoder is an LSTM. The encoded labels are predicted one by one by the decoder, until the decoder outputs the "<eow>" (end of word) label and moves to the next token. We use a hard attention on the word whose label(s) is being predicted.
We train the network using the lazy variant of the Adam optimizer BIBREF30, which only updates accumulators for variables that appear in the current batch, with parameters $\beta _1=0.9$ and $\beta _2=0.98$. We use mini-batches of size 8. As a regularization, we apply dropout with rate $0.5$ and the word dropout replaces $20\%$ of words by the unknown token to force the network to rely more on context. We did not perform any complex hyperparameter search.
In this model, we use the following word- and character-level word embeddings:
pretrained word embeddings: We use the FastText BIBREF27 word embeddings of dimension 300 from the publicly available Czech model.
end-to-end word embeddings: We embed the input forms and lemmas (256 dimensions) and POS tags (one-hot).
end-to-end character-level word embeddings: We use bidirectional GRUs BIBREF23, BIBREF24 of dimension 128 in line with BIBREF25: we represent every Unicode character with a vector of dimension 128, and concatenate GRU outputs for forward and reversed word characters.
Optionally, we add the BERT BIBREF1 and the Flair BIBREF2 contextualized embeddings in the same way as in the UDPipe 2.0 (Section SECREF16).
Results ::: POS Tagging and Lemmatization on PDT 3.5
The POS tagging and lemmatization results are presented in Table TABREF44. The word2vec word embeddings (WE) considerably increase performance compared to the baseline, especially in POS tagging. When only Flair embeddings are added to the baseline, we also observe an improvement, but not as high. We hypothesise that the lower performance (in contrast with the results reported in BIBREF2) is caused by the size of the training data, because we train the word2vec WE on considerably larger dataset than the Czech Flair model. However, when WE and Flair embeddings are combined, performance moderately increases, demonstrating that the two embedding methods produce at least partially complementary representations.
The BERT embeddings alone bring highest improvement in performance. Furthermore, combination with WE or Flair again yields performance increase. The best results are achieved by exploiting all three embedding methods, substantially exceeding state-of-the-art results.
Utilization of the morphological dictionary improves prediction accuracy. However, as the performance of a model itself increases, the gains obtained by the morphological dictionary diminish – for a model without any pretrained embeddings, the morphological dictionary improves POS tagging and lemmatization by $0.43\%$ and $0.45\%$, respectively, while the best performing model gains only $0.11\%$ and $0.23\%$.
Results ::: Dependency Parsing on PDT 3.5
The evaluation of the contextualized embeddings methods as well as various ways of POS tag utilization is presented in Table TABREF44. Without POS tags and lemmas, the Flair embeddings bring only a slight improvement in dependency parsing when added to WE. In contrast, BERT embeddings employment results in substantial gains, increasing UAS and LAS by 1.6% and 2.1%. A combination of BERT and Flair embeddings does not result in any performance improvement, demonstrating that BERT syntactic representations encompass the Flair embeddings.
When introducing POS tags and lemmas predicted by the best model from Section SECREF43 as inputs for dependency parsing, the performance increases only slightly. A better way of POS tags and lemmas exploitation is achieved in a joint model, which predicts POS tags, lemmas, and dependency trees simultaneously. Again, BERT embeddings bring significant improvements, but in contrast to syntax parsing only, adding Flair embeddings to BERT results in moderate gain – we hypothesise that the increase is due to the complementary morphological information present in Flair embeddings (cf. Section SECREF43). Note that the joint model achieves better parsing accuracy than the one given gold POS tags and lemmas on input. However, the POS tags and lemmas predicted by the joint model are of slightly lower quality compared to a standalone tagger of the best configuration from Section SECREF43.
Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation). To our best knowledge, research on PDT parsing was performed mostly in the first decade of this century, therefore even our baseline model substantially surpasses previous works. Our best model with contextualized embeddings achieves nearly 50% error reduction both in UAS and LAS.
Results ::: POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies
Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank. This treebank is derived from PDT 3.5 a-layer, with original POS tags kept in XPOS, and the dependency trees and lemmas modified according to UD guidelines.
We observe that the word2vec WEs perform similarly to Flair embeddings in this setting. Our hypothesis is that the word2vec WEs performance loss (compared to WEs in Section SECREF43) is caused by using a considerably smaller raw corpus to pretrain the WEs (Czech Wikipedia with 785M words, compared to 4G words used in Section SECREF43), due to licensing reasons. BERT embeddings once more deliver the highest improvement, especially in dependency parsing, and our best model employs all three embedding methods.
In the previous ablation experiments, we used the gold segmentation and tokenization in the Czech PDT UD 2.3 treebank. For comparison with state of the art, Czech PDT UD 2.2 treebank without gold segmentation and tokenization is used in evaluation, according to the CoNLL 2018 shared task training and evaluation protocol. Our system reuses segmentation and tokenization produced by UDPipe 2.0 in the CoNLL 2018 shared task and surpasses previous works substantially in all metrics (bottom part of Table TABREF47).
Comparing the results with a joint tagging and parsing PDT 3.5 model from Table TABREF7, we observe that the XPOS results are nearly identical as expected. Lemmatization on the UD treebank is performed without the discriminative numeric suffixes (see Section SECREF3) and therefore reaches better performance. Both UAS and LAS are also better on the UD treebank, which we assume is caused by the different annotation scheme.
Results ::: Named Entity Recognition
Table TABREF47 shows NER results (F1 score) on CNEC 1.1 and CNEC 2.0. Our sequence-to-sequence (seq2seq) model, which captures the nested entities, clearly surpasses the current Czech NER state of the art. Furthermore, a significant improvement is gained when adding the contextualized word embeddings (BERT and Flair) as optional input to the LSTM encoder. The strongest model is a combination of the sequence-to-sequence architecture with both BERT and Flair contextual word embeddings.
Conclusion
We have presented an evaluation of two contextualized embeddings methods, namely BERT and Flair. By utilizing these embeddings as input to deep neural networks, we have achieved state-of-the-art results in several Czech text processing tasks, namely in POS tagging, lemmatization, dependency parsing and named entity recognition.
Acknowledgements
The work described herein has been supported by OP VVV VI LINDAT/CLARIN project (CZ.02.1.01/0.0/0.0/16_013/0001781) and it has been supported and has been using language resources developed by the LINDAT/CLARIN project (LM2015071) of the Ministry of Education, Youth and Sports of the Czech Republic. | Unanswerable |
13f7d50b3b8b0b97d90401eeb0a4e97c9eab3a76 | 13f7d50b3b8b0b97d90401eeb0a4e97c9eab3a76_0 | Q: What data is the Prague Dependency Treebank built on?
Text: Introduction
Recently, a novel way of computing word embeddings has been proposed. Instead of computing one word embedding for each word which sums over all its occurrences, ignoring the appropriate word meaning in various contexts, the contextualized embeddings are computed for each word occurrence, taking into account the whole sentence. Three ways of computing such contextualized embeddings have been proposed: ELMo BIBREF0, BERT BIBREF1 and Flair BIBREF2, along with precomputed models.
Peters et al. (2018) BIBREF0 obtain the proposed embeddings, called ELMo, from internal states of a deep bidirectional language model, pretrained on a large corpus. Akbik et al. (2018) BIBREF2 introduced Flair, contextualized word embeddings obtained from internal states of a character-level bidirectional language model, thus significantly increasing state of the art of POS tagging, chunking and NER tasks. Last, but not least, Devlin et al. (2018) BIBREF1 employ a Transformer BIBREF3 to compute contextualized embeddings from preceding and following context at the same time, at the cost of increased processing costs. The new BERT embeddings achieved state-of-the-art results in eleven natural language tasks.
Using two of these methods, for which precomputed models for Czech are available, namely BERT and Flair, we present our models for four NLP tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). Adding the contextualized embeddings as optional inputs in strong artificial neural network baselines, we report state-of-the-art results in these four tasks.
Related Work
As for the Prague Dependency Treebank (PDT) BIBREF4, most of the previous works are non-neural systems, with one exception of BIBREF5 who hold the state of the art for Czech POS tagging and lemmatization, achieved with the recurrent neural network (RNN) using end-to-end trainable word embeddings and character-level word embeddings. Otherwise, Spoustová et al. (2009) BIBREF6 used an averaged perceptron for POS tagging. For parsing the PDT, Holan and Žabokrtský (2006) BIBREF7 and Novák and Žabokrtský (2007) BIBREF8 used a combination of non-neural parsing techniques.
In the multilingual shared task CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9, raw text is processed and the POS tagging, lemmatization and dependency parsing are evaluated on the Universal Dependencies (UD) BIBREF10. Czech is one of the 57 evaluated languages. Interestingly, all 26 participant systems employed the artificial neural networks in some way. Of these, 3 participant systems used (a slightly modified variant of) the only newly presented contextualized embeddings called ELMo BIBREF0, most notably one of the shared task winners BIBREF11. BERT and Flair were not available at the time.
For the Czech NER, Straková et al. (2016) BIBREF12 use an artificial neural network with word- and character-level word embeddings to perform NER on the Czech Named Entity Corpus (CNEC) BIBREF13, BIBREF14, BIBREF15.
Datasets ::: Prague Dependency Treebank 3.5
The Prague Dependency Treebank 3.5 BIBREF4 is a 2018 edition of the core Prague Dependency Treebank. The Prague Dependency Treebank 3.5 contains the same texts as the previous versions since 2.0, and is divided into train, dtest, and etest subparts, where dtest is used as a development set and etest as a test set. The dataset consists of several layers – the morphological m-layer is the largest and contains morphological annotations (POS tags and lemmas), the analytical a-layer contains labeled dependency trees, and the t-layer is the smallest and contains tectogrammatical trees. The statistics of PDT 3.5 sizes is presented in Table TABREF7.
A detailed description of the morphological system can be found in BIBREF16, a specification of the syntactic annotations has been presented in BIBREF17. We note that in PDT, lemmas with the same word form are disambiguated using a number suffix – for example, English lemmas for the word forms can (noun) and can (verb) would be annotated as can-1 and can-2.
In evaluation, we compute:
POS tagging accuracy,
lemmatization accuracy,
unlabeled attachment score (UAS),
labeled attachment score (LAS).
Datasets ::: Universal Dependencies
The Universal Dependencies project BIBREF10 seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. We evaluate the Czech PDT treebank of UD 2.3 BIBREF18, which is an automated conversion of PDT 3.5 a-layer to Universal Dependencies annotation. The original POS tags are used to generate UPOS (universal POS tags), XPOS (language-specific POS tags, in this case the original PDT tags), and Feats (universal morphological features). The UD lemmas are the raw textual lemmas, so the discriminative numeric suffix of PDT is dropped. The dependency trees are converted according to the UD guidelines, adapting both the unlabeled trees and the dependency labels.
To compute the evaluation scores, we use the official CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 evaluation script, which produces the following metrics:
UPOS – universal POS tags accuracy,
XPOS – language-specific POS tags accuracy,
UFeats – universal subset of morphological features accuracy,
Lemmas – lemmatization accuracy,
UAS – unlabeled attachment score, LAS – labeled attachment score,
MLAS – morphology-aware LAS, BLEX – bi-lexical dependency score.
Datasets ::: Czech Named Entity Corpus
The Czech Named Entity Corpus 1.1 BIBREF13, BIBREF14 is a corpus of $5\,868$ Czech sentences with manually annotated $33\,662$ Czech named entities, classified according to a two-level hierarchy of 62 named entities.
The Czech Named Entity Corpus 2.0 BIBREF15 contains $8\,993$ Czech sentences with manually annotated $35\,220$ Czech named entities, classified according to a two-level hierarchy of 46 named entities.
We evaluate the NER task with the official CNEC evaluation script. Similarly to previous literature BIBREF13, BIBREF12 etc., the script only evaluates the first round annotation classes for the CNEC 1.1. For the CNEC 2.0, the script evaluates all annotated classes.
Neural Architectures
All our neural architectures are recurrent neural networks (RNNs). The POS tagging, lemmatization and dependency parsing is performed with the UDPipe 2.0 (Section SECREF16) and NER is performed with our new sequence-to-sequence model (Section SECREF36).
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing
We perform POS tagging, lemmatization and dependency parsing using UDPipe 2.0 BIBREF19, one of the three winning systems of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 and an overall winner of The 2018 Shared Task on Extrinsic Parser Evaluation BIBREF20. An overview of this architecture is presented in Figure FIGREF17 and the full details of the architecture and the training procedure are available in BIBREF19.
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: POS Tagging and Lemmatization
The tagger employs a standard bi-LSTM architecture. After embedding input words, three bidirectional LSTM BIBREF21 layers are performed, followed by softmax output layers for POS tags and lemmas. While a classification output layer is natural for POS tags, we also apply it to lemmatization and generate lemmas by classifying the input words into lemma generation rules, therefore considering lemmatization as another tagging task.
We construct a lemma generation rule from a given form and lemma as follows:
We start by finding the longest common continuous substring of the form and the lemma. If it is empty, we use the lemma itself as the class.
If there is a common substring of the form and the lemma, we compute the shortest edit script converting the prefix of the form into the prefix of the lemma, and the shortest edit script converting the suffix of the form to the suffix of the lemma. The edit scripts permit the operations delete_current_char and insert_char(c).
All above operations are performed case insensitively. To indicate correct casing of the lemma, we consider the lemma to be a concatenation of segments, where each segment is composed of either a sequence of lowercase characters, or a sequence of uppercase characters. We represent the lemma casing by encoding the beginning of every such segment, where the offsets in the first half of the lemma are computed relatively to the start of the lemma, and the offsets in the second half of the lemma are computed relatively to the end of the lemma.
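As an illustration, such a rule can be derived along the following lines; this is a simplified sketch (the edit scripts are computed greedily with difflib and the rule encoding is invented here for readability), not the exact format used by UDPipe 2.0.

```python
from difflib import SequenceMatcher

def edit_script(src: str, dst: str) -> list:
    """Edit script turning src into dst with delete_current_char,
    insert_char(c) and keep operations (greedy difflib approximation
    of the shortest edit script)."""
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, src, dst).get_opcodes():
        if tag in ("delete", "replace"):
            ops += ["delete_current_char"] * (i2 - i1)
        if tag in ("insert", "replace"):
            ops += [f"insert_char({c})" for c in dst[j1:j2]]
        if tag == "equal":
            ops += ["keep"] * (i2 - i1)
    return ops

def lemma_rule(form: str, lemma: str) -> str:
    """Encode the form -> lemma transformation as a single class label."""
    form_l, lemma_l = form.lower(), lemma.lower()
    match = SequenceMatcher(None, form_l, lemma_l).find_longest_match(
        0, len(form_l), 0, len(lemma_l))
    if match.size == 0:
        return "absolute:" + lemma      # no common substring: use the lemma itself
    # Separate edit scripts for the prefix and the suffix around the match.
    prefix = edit_script(form_l[:match.a], lemma_l[:match.b])
    suffix = edit_script(form_l[match.a + match.size:],
                         lemma_l[match.b + match.size:])
    return "relative:" + ",".join(prefix) + "|" + ",".join(suffix)

print(lemma_rule("koček", "kočka"))   # shared stem, suffix rewritten
print(lemma_rule("šel", "jít"))       # no overlap, falls back to the lemma class
```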
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: Dependency Parsing
Dependency parsing is likewise performed with the UDPipe 2.0 architecture. After embedding the input words, three bidirectional LSTM BIBREF21 layers are again applied, followed by a biaffine attention layer BIBREF22 producing labeled dependency trees.
In our evaluation we do not utilize gold POS tags and lemmas on the test set for dependency parsing. Instead, we consider three ways of employing them during parsing:
[noitemsep,topsep=0pt]
not using them at all;
adding predicted POS tags and lemmas on input;
performing joint training of POS tagging, lemmatization, and dependency parsing. In this case, we share the first two bidirectional LSTM layers between the tagger and the parser.
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: Input Embeddings
In our baseline model, we use the end-to-end word embeddings and also character-level word embeddings (bidirectional GRUs, BIBREF23, BIBREF24, BIBREF25 of dimension 256) trained specifically for the task.
Our architecture can optionally employ the following additional inputs:
[noitemsep,topsep=0pt]
pretrained word embeddings (WE): For the PDT experiments, we generate the word embeddings with word2vec on a concatenation of large raw Czech corpora available from the LINDAT/CLARIN repository. For UD Czech, we use FastText word embeddings BIBREF27 of dimension 300, which we pretrain on Czech Wikipedia using segmentation and tokenization trained from the UD data.
BERT BIBREF1: Pretrained contextual word embeddings of dimension 768 from the Base model. We average the last four layers of the BERT model to produce the embeddings. Because BERT utilizes word pieces, we decompose UD words into appropriate subwords and then average the generated embeddings over subwords belonging to the same word.
Flair BIBREF2: Pretrained contextual word embeddings of dimension 4096.
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: POS Tags and Lemmas Decoding
Optionally, we employ a morphological dictionary MorfFlex BIBREF28 during decoding. If the morphological dictionary is used, it may produce analyses for an input word as (POS tag, lemma) pairs. If any are generated, we choose the pair with maximum likelihood given by both the POS tag and lemmatization model.
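A minimal sketch of this decoding step is given below; the dictionary lookup and model interfaces are illustrative placeholders rather than the actual MorfFlex API.

```python
import math

def decode_with_dictionary(word, tag_logprobs, rule_logprobs, analyses, rule_for):
    """Choose a (POS tag, lemma) pair for `word`.

    tag_logprobs / rule_logprobs: log-probabilities predicted by the tagging
        and lemmatization softmax layers, keyed by class.
    analyses: list of admissible (tag, lemma) pairs from the dictionary
        (empty if the word is not covered).
    rule_for: function mapping (form, lemma) to its lemma generation rule.
    """
    if not analyses:
        # No dictionary entry: fall back to unconstrained argmax decoding.
        return (max(tag_logprobs, key=tag_logprobs.get),
                max(rule_logprobs, key=rule_logprobs.get))
    # Otherwise pick the analysis with maximum joint likelihood given
    # by both the POS tagging and the lemmatization model.
    def joint_score(analysis):
        tag, lemma = analysis
        return (tag_logprobs.get(tag, -math.inf)
                + rule_logprobs.get(rule_for(word, lemma), -math.inf))
    return max(analyses, key=joint_score)
```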
Neural Architectures ::: Named Entity Recognition
We use a novel approach BIBREF29 for nested named entity recognition (NER) to capture the nested entities in the Czech Named Entity Corpus. The nested entities are encoded in a sequence and the problem of nested NER is then viewed as a sequence-to-sequence (seq2seq) problem, in which the input sequence consists of the input tokens (forms) and the output sequence of the linearized entity labels.
The system is an encoder-decoder architecture. The encoder is a bidirectional LSTM and the decoder is an LSTM. The encoded labels are predicted one by one by the decoder, until the decoder outputs the "<eow>" (end of word) label and moves to the next token. We use hard attention on the word whose label(s) are being predicted.
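The decoding regime can be summarised by the following sketch; `decoder_step` is a placeholder standing in for the trained LSTM decoder with hard attention, so only the control flow is meant literally.

```python
def decode_nested_labels(encoder_states, decoder_step, eow="<eow>", max_labels=8):
    """Greedily decode linearized nested NE labels, one input token at a time.

    encoder_states: sequence of encoder outputs, one per input token.
    decoder_step(token_state, previous_label, decoder_state)
        -> (label, new_decoder_state): one step of the label decoder.
    """
    all_labels = []
    decoder_state = None
    for token_state in encoder_states:        # hard attention: attend to this token only
        token_labels, previous = [], eow
        for _ in range(max_labels):           # safety cap on labels per token
            label, decoder_state = decoder_step(token_state, previous, decoder_state)
            if label == eow:                  # done with this token, move to the next
                break
            token_labels.append(label)
            previous = label
        all_labels.append(token_labels)
    return all_labels
```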
We train the network using the lazy variant of the Adam optimizer BIBREF30, which only updates accumulators for variables that appear in the current batch, with parameters $\beta _1=0.9$ and $\beta _2=0.98$. We use mini-batches of size 8. As a regularization, we apply dropout with rate $0.5$ and the word dropout replaces $20\%$ of words by the unknown token to force the network to rely more on context. We did not perform any complex hyperparameter search.
In this model, we use the following word- and character-level word embeddings:
[noitemsep,topsep=0pt]
pretrained word embeddings: We use the FastText BIBREF27 word embeddings of dimension 300 from the publicly available Czech model.
end-to-end word embeddings: We embed the input forms and lemmas (256 dimensions) and POS tags (one-hot).
end-to-end character-level word embeddings: We use bidirectional GRUs BIBREF23, BIBREF24 of dimension 128 in line with BIBREF25: we represent every Unicode character with a vector of dimension 128, and concatenate GRU outputs for forward and reversed word characters.
Optionally, we add the BERT BIBREF1 and the Flair BIBREF2 contextualized embeddings in the same way as in the UDPipe 2.0 (Section SECREF16).
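To make the BERT input concrete, the subword-to-word pooling used here and in UDPipe 2.0 (average of the last four layers, then averaging the word pieces of each word) can be sketched as follows with the transformers library; the multilingual checkpoint name is an assumption.

```python
import torch
from transformers import AutoModel, AutoTokenizer

NAME = "bert-base-multilingual-uncased"   # assumed checkpoint, dimension 768
tokenizer = AutoTokenizer.from_pretrained(NAME)
bert = AutoModel.from_pretrained(NAME, output_hidden_states=True)

def word_embeddings(words):
    """One 768-dim vector per word: mean of the last four BERT layers,
    averaged over the word pieces belonging to each word."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).hidden_states            # tuple of (1, T, 768) tensors
    pieces = torch.stack(hidden[-4:]).mean(dim=0)[0]  # (T, 768), last four layers
    word_ids = enc.word_ids()                         # word piece -> word index (None for specials)
    vectors = []
    for i in range(len(words)):
        idx = [j for j, w in enumerate(word_ids) if w == i]
        vectors.append(pieces[idx].mean(dim=0))
    return torch.stack(vectors)                       # (len(words), 768)

print(word_embeddings(["Ahoj", "světe", "!"]).shape)
```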
Results ::: POS Tagging and Lemmatization on PDT 3.5
The POS tagging and lemmatization results are presented in Table TABREF44. The word2vec word embeddings (WE) considerably increase performance compared to the baseline, especially in POS tagging. When only Flair embeddings are added to the baseline, we also observe an improvement, but not as large. We hypothesise that the lower performance (in contrast with the results reported in BIBREF2) is caused by the size of the training data, because we train the word2vec WE on a considerably larger dataset than the one used for the Czech Flair model. However, when WE and Flair embeddings are combined, performance moderately increases, demonstrating that the two embedding methods produce at least partially complementary representations.
The BERT embeddings alone bring the highest improvement in performance. Furthermore, combining them with WE or Flair yields a further performance increase. The best results are achieved by exploiting all three embedding methods, substantially exceeding state-of-the-art results.
Utilization of the morphological dictionary improves prediction accuracy. However, as the performance of the model itself increases, the gains obtained from the morphological dictionary diminish – for a model without any pretrained embeddings, the morphological dictionary improves POS tagging and lemmatization by $0.43\%$ and $0.45\%$, respectively, while the best performing model gains only $0.11\%$ and $0.23\%$.
Results ::: Dependency Parsing on PDT 3.5
The evaluation of the contextualized embedding methods, as well as of various ways of utilizing POS tags, is presented in Table TABREF44. Without POS tags and lemmas, the Flair embeddings bring only a slight improvement in dependency parsing when added to WE. In contrast, employing BERT embeddings results in substantial gains, increasing UAS and LAS by 1.6% and 2.1%. A combination of BERT and Flair embeddings does not result in any performance improvement, demonstrating that BERT's syntactic representations encompass the Flair embeddings.
When introducing POS tags and lemmas predicted by the best model from Section SECREF43 as inputs for dependency parsing, the performance increases only slightly. POS tags and lemmas are exploited more effectively in a joint model, which predicts POS tags, lemmas, and dependency trees simultaneously. Again, BERT embeddings bring significant improvements, but in contrast to the parsing-only setting, adding Flair embeddings to BERT results in a moderate gain – we hypothesise that the increase is due to the complementary morphological information present in the Flair embeddings (cf. Section SECREF43). Note that the joint model achieves better parsing accuracy than the one given gold POS tags and lemmas on input. However, the POS tags and lemmas predicted by the joint model are of slightly lower quality compared to a standalone tagger with the best configuration from Section SECREF43.
Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation). To the best of our knowledge, research on PDT parsing was performed mostly in the first decade of this century, so even our baseline model substantially surpasses previous works. Our best model with contextualized embeddings achieves nearly 50% error reduction in both UAS and LAS.
Results ::: POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies
Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank. This treebank is derived from PDT 3.5 a-layer, with original POS tags kept in XPOS, and the dependency trees and lemmas modified according to UD guidelines.
We observe that the word2vec WEs perform similarly to Flair embeddings in this setting. Our hypothesis is that the word2vec WEs performance loss (compared to WEs in Section SECREF43) is caused by using a considerably smaller raw corpus to pretrain the WEs (Czech Wikipedia with 785M words, compared to 4G words used in Section SECREF43), due to licensing reasons. BERT embeddings once more deliver the highest improvement, especially in dependency parsing, and our best model employs all three embedding methods.
In the previous ablation experiments, we used the gold segmentation and tokenization in the Czech PDT UD 2.3 treebank. For comparison with state of the art, Czech PDT UD 2.2 treebank without gold segmentation and tokenization is used in evaluation, according to the CoNLL 2018 shared task training and evaluation protocol. Our system reuses segmentation and tokenization produced by UDPipe 2.0 in the CoNLL 2018 shared task and surpasses previous works substantially in all metrics (bottom part of Table TABREF47).
Comparing the results with a joint tagging and parsing PDT 3.5 model from Table TABREF7, we observe that the XPOS results are nearly identical as expected. Lemmatization on the UD treebank is performed without the discriminative numeric suffixes (see Section SECREF3) and therefore reaches better performance. Both UAS and LAS are also better on the UD treebank, which we assume is caused by the different annotation scheme.
Results ::: Named Entity Recognition
Table TABREF47 shows NER results (F1 score) on CNEC 1.1 and CNEC 2.0. Our sequence-to-sequence (seq2seq) model, which captures the nested entities, clearly surpasses the current Czech NER state of the art. Furthermore, a significant improvement is gained when adding the contextualized word embeddings (BERT and Flair) as optional inputs to the LSTM encoder. The strongest model is a combination of the sequence-to-sequence architecture with both BERT and Flair contextual word embeddings.
Conclusion
We have presented an evaluation of two contextualized embeddings methods, namely BERT and Flair. By utilizing these embeddings as input to deep neural networks, we have achieved state-of-the-art results in several Czech text processing tasks, namely in POS tagging, lemmatization, dependency parsing and named entity recognition.
Acknowledgements
The work described herein has been supported by OP VVV VI LINDAT/CLARIN project (CZ.02.1.01/0.0/0.0/16_013/0001781) and it has been supported and has been using language resources developed by the LINDAT/CLARIN project (LM2015071) of the Ministry of Education, Youth and Sports of the Czech Republic. | Unanswerable |
48fb76ae9921c9d181f65afc63a42af8ba3bc519 | 48fb76ae9921c9d181f65afc63a42af8ba3bc519_0 | Q: What data is used to build the embeddings?
Text: Introduction
Recently, a novel way of computing word embeddings has been proposed. Instead of computing one word embedding for each word, which aggregates over all its occurrences and ignores the word's meaning in particular contexts, contextualized embeddings are computed for each word occurrence, taking the whole sentence into account. Three ways of computing such contextualized embeddings have been proposed: ELMo BIBREF0, BERT BIBREF1 and Flair BIBREF2, along with precomputed models.
Peters et al. (2018) BIBREF0 obtain the proposed embeddings, called ELMo, from the internal states of a deep bidirectional language model pretrained on a large corpus. Akbik et al. (2018) BIBREF2 introduced Flair, contextualized word embeddings obtained from the internal states of a character-level bidirectional language model, thus significantly advancing the state of the art in POS tagging, chunking and NER tasks. Last, but not least, Devlin et al. (2018) BIBREF1 employ a Transformer BIBREF3 to compute contextualized embeddings from preceding and following context at the same time, at the cost of increased processing requirements. The new BERT embeddings achieved state-of-the-art results in eleven natural language tasks.
Using two of these methods, for which precomputed models for Czech are available, namely BERT and Flair, we present our models for four NLP tasks: part-of-speech (POS) tagging, lemmatization, dependency parsing and named entity recognition (NER). Adding the contextualized embeddings as optional inputs in strong artificial neural network baselines, we report state-of-the-art results in these four tasks.
Related Work
As for the Prague Dependency Treebank (PDT) BIBREF4, most of the previous works are non-neural systems, with the exception of BIBREF5, who hold the state of the art for Czech POS tagging and lemmatization, achieved with a recurrent neural network (RNN) using end-to-end trainable word embeddings and character-level word embeddings. Otherwise, Spoustová et al. (2009) BIBREF6 used an averaged perceptron for POS tagging. For parsing the PDT, Holan and Žabokrtský (2006) BIBREF7 and Novák and Žabokrtský (2007) BIBREF8 used a combination of non-neural parsing techniques.
In the multilingual CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9, raw text is processed and POS tagging, lemmatization and dependency parsing are evaluated on the Universal Dependencies (UD) BIBREF10. Czech is one of the 57 evaluated languages. Interestingly, all 26 participating systems employed artificial neural networks in some way. Of these, 3 systems used (a slightly modified variant of) the only contextualized embeddings available at the time, ELMo BIBREF0, most notably one of the shared task winners BIBREF11. BERT and Flair were not available at the time.
For the Czech NER, Straková et al. (2016) BIBREF12 use an artificial neural network with word- and character-level word embeddings to perform NER on the Czech Named Entity Corpus (CNEC) BIBREF13, BIBREF14, BIBREF15.
Datasets ::: Prague Dependency Treebank 3.5
The Prague Dependency Treebank 3.5 BIBREF4 is a 2018 edition of the core Prague Dependency Treebank. It contains the same texts as the previous versions since 2.0, and is divided into train, dtest, and etest subparts, where dtest is used as a development set and etest as a test set. The dataset consists of several layers – the morphological m-layer is the largest and contains morphological annotations (POS tags and lemmas), the analytical a-layer contains labeled dependency trees, and the t-layer is the smallest and contains tectogrammatical trees. The statistics of PDT 3.5 sizes are presented in Table TABREF7.
A detailed description of the morphological system can be found in BIBREF16, a specification of the syntactic annotations has been presented in BIBREF17. We note that in PDT, lemmas with the same word form are disambiguated using a number suffix – for example, English lemmas for the word forms can (noun) and can (verb) would be annotated as can-1 and can-2.
In evaluation, we compute:
[noitemsep,topsep=0pt]
POS tagging accuracy,
lemmatization accuracy,
unlabeled attachment score (UAS),
labeled attachment score (LAS).
Datasets ::: Universal Dependencies
The Universal Dependencies project BIBREF10 seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. We evaluate the Czech PDT treebank of UD 2.3 BIBREF18, which is an automated conversion of PDT 3.5 a-layer to Universal Dependencies annotation. The original POS tags are used to generate UPOS (universal POS tags), XPOS (language-specific POS tags, in this case the original PDT tags), and Feats (universal morphological features). The UD lemmas are the raw textual lemmas, so the discriminative numeric suffix of PDT is dropped. The dependency trees are converted according to the UD guidelines, adapting both the unlabeled trees and the dependency labels.
To compute the evaluation scores, we use the official CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 evaluation script, which produces the following metrics:
[noitemsep,topsep=0pt]
UPOS – universal POS tags accuracy,
XPOS – language-specific POS tags accuracy,
UFeats – universal subset of morphological features accuracy,
Lemmas – lemmatization accuracy,
UAS – unlabeled attachment score, LAS – labeled attachment score,
MLAS – morphology-aware LAS, BLEX – bi-lexical dependency score.
Datasets ::: Czech Named Entity Corpus
The Czech Named Entity Corpus 1.1 BIBREF13, BIBREF14 is a corpus of $5\,868$ Czech sentences with manually annotated $33\,662$ Czech named entities, classified according to a two-level hierarchy of 62 named entities.
The Czech Named Entity Corpus 2.0 BIBREF15 contains $8\,993$ Czech sentences with manually annotated $35\,220$ Czech named entities, classified according to a two-level hierarchy of 46 named entities.
We evaluate the NER task with the official CNEC evaluation script. Similarly to previous literature (BIBREF13, BIBREF12, etc.), the script evaluates only the first-round annotation classes for CNEC 1.1. For CNEC 2.0, the script evaluates all annotated classes.
Neural Architectures
All our neural architectures are recurrent neural networks (RNNs). POS tagging, lemmatization and dependency parsing are performed with UDPipe 2.0 (Section SECREF16), and NER is performed with our new sequence-to-sequence model (Section SECREF36).
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing
We perform POS tagging, lemmatization and dependency parsing using UDPipe 2.0 BIBREF19, one of the three winning systems of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies BIBREF9 and an overall winner of The 2018 Shared Task on Extrinsic Parser Evaluation BIBREF20. An overview of this architecture is presented in Figure FIGREF17 and the full details of the architecture and the training procedure are available in BIBREF19.
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: POS Tagging and Lemmatization
The tagger employs a standard bi-LSTM architecture. After embedding the input words, three bidirectional LSTM BIBREF21 layers are applied, followed by softmax output layers for POS tags and lemmas. While a classification output layer is natural for POS tags, we also apply it to lemmatization and generate lemmas by classifying the input words into lemma generation rules, thereby treating lemmatization as another tagging task.
We construct a lemma generation rule from a given form and lemma as follows:
[noitemsep,topsep=0pt]
We start by finding the longest common continuous substring of the form and the lemma. If it is empty, we use the lemma itself as the class.
If there is a common substring of the form and the lemma, we compute the shortest edit script converting the prefix of the form into the prefix of the lemma, and the shortest edit script converting the suffix of the form to the suffix of the lemma. The edit scripts permit the operations delete_current_char and insert_char(c).
All above operations are performed case insensitively. To indicate correct casing of the lemma, we consider the lemma to be a concatenation of segments, where each segment is composed of either a sequence of lowercase characters, or a sequence of uppercase characters. We represent the lemma casing by encoding the beginning of every such segment, where the offsets in the first half of the lemma are computed relatively to the start of the lemma, and the offsets in the second half of the lemma are computed relatively to the end of the lemma.
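A sketch of one possible casing encoding follows; the concrete label format is invented here for illustration and may differ from the one actually used in UDPipe 2.0.

```python
def encode_casing(lemma: str) -> str:
    """Mark where uppercase (U) and lowercase (L) segments of the lemma begin.

    Segment starts in the first half of the lemma are encoded relative to its
    start ("+offset"), starts in the second half relative to its end
    ("-offset"), so one label generalizes across lemma lengths.
    """
    marks = []
    for i, ch in enumerate(lemma):
        is_upper = ch.isupper()
        if i > 0 and lemma[i - 1].isupper() == is_upper:
            continue                     # still inside the current segment
        case = "U" if is_upper else "L"
        if i < len(lemma) / 2:
            marks.append(f"{case}+{i}")               # offset from the start
        else:
            marks.append(f"{case}-{len(lemma) - i}")  # offset from the end
    return ";".join(marks)

print(encode_casing("Praha"))   # U+0;L+1  -> capitalized word
print(encode_casing("ČEZ"))     # U+0      -> all-uppercase word
print(encode_casing("pes"))     # L+0      -> all-lowercase word
```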
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: Dependency Parsing
Dependency parsing is likewise performed with the UDPipe 2.0 architecture. After embedding the input words, three bidirectional LSTM BIBREF21 layers are again applied, followed by a biaffine attention layer BIBREF22 producing labeled dependency trees.
In our evaluation we do not utilize gold POS tags and lemmas on the test set for dependency parsing. Instead, we consider three ways of employing them during parsing:
[noitemsep,topsep=0pt]
not using them at all;
adding predicted POS tags and lemmas on input;
performing joint training of POS tagging, lemmatization, and dependency parsing. In this case, we share the first two bidirectional LSTM layers between the tagger and the parser.
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: Input Embeddings
In our baseline model, we use the end-to-end word embeddings and also character-level word embeddings (bidirectional GRUs, BIBREF23, BIBREF24, BIBREF25 of dimension 256) trained specifically for the task.
Our architecture can optionally employ the following additional inputs:
[noitemsep,topsep=0pt]
pretrained word embeddings (WE): For the PDT experiments, we generate the word embeddings with word2vec on a concatenation of large raw Czech corpora available from the LINDAT/CLARIN repository. For UD Czech, we use FastText word embeddings BIBREF27 of dimension 300, which we pretrain on Czech Wikipedia using segmentation and tokenization trained from the UD data.
BERT BIBREF1: Pretrained contextual word embeddings of dimension 768 from the Base model. We average the last four layers of the BERT model to produce the embeddings. Because BERT utilizes word pieces, we decompose UD words into appropriate subwords and then average the generated embeddings over subwords belonging to the same word.
Flair BIBREF2: Pretrained contextual word embeddings of dimension 4096.
Neural Architectures ::: POS Tagging, Lemmatization, and Dependency Parsing ::: POS Tags and Lemmas Decoding
Optionally, we employ a morphological dictionary MorfFlex BIBREF28 during decoding. If the morphological dictionary is used, it may produce analyses for an input word as (POS tag, lemma) pairs. If any are generated, we choose the pair with maximum likelihood given by both the POS tag and lemmatization model.
Neural Architectures ::: Named Entity Recognition
We use a novel approach BIBREF29 for nested named entity recognition (NER) to capture the nested entities in the Czech Named Entity Corpus. The nested entities are encoded in a sequence and the problem of nested NER is then viewed as a sequence-to-sequence (seq2seq) problem, in which the input sequence consists of the input tokens (forms) and the output sequence of the linearized entity labels.
The system is an encoder-decoder architecture. The encoder is a bidirectional LSTM and the decoder is an LSTM. The encoded labels are predicted one by one by the decoder, until the decoder outputs the "<eow>" (end of word) label and moves to the next token. We use hard attention on the word whose label(s) are being predicted.
We train the network using the lazy variant of the Adam optimizer BIBREF30, which only updates accumulators for variables that appear in the current batch, with parameters $\beta _1=0.9$ and $\beta _2=0.98$. We use mini-batches of size 8. As a regularization, we apply dropout with rate $0.5$ and the word dropout replaces $20\%$ of words by the unknown token to force the network to rely more on context. We did not perform any complex hyperparameter search.
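The word dropout mentioned above can be implemented in a few lines; the unknown-token id below is an assumed placeholder.

```python
import random

UNK_ID = 1           # assumed vocabulary id of the unknown token
WORD_DROPOUT = 0.2   # replace 20% of words to force reliance on context

def apply_word_dropout(word_ids, rate=WORD_DROPOUT, unk_id=UNK_ID):
    """Randomly replace word ids with the unknown token (training time only)."""
    return [unk_id if random.random() < rate else w for w in word_ids]

sentence = [17, 92, 3, 45, 260]
print(apply_word_dropout(sentence))
```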
In this model, we use the following word- and character-level word embeddings:
[noitemsep,topsep=0pt]
pretrained word embeddings: We use the FastText BIBREF27 word embeddings of dimension 300 from the publicly available Czech model.
end-to-end word embeddings: We embed the input forms and lemmas (256 dimensions) and POS tags (one-hot).
end-to-end character-level word embeddings: We use bidirectional GRUs BIBREF23, BIBREF24 of dimension 128 in line with BIBREF25: we represent every Unicode character with a vector of dimension 128, and concatenate GRU outputs for forward and reversed word characters.
Optionally, we add the BERT BIBREF1 and the Flair BIBREF2 contextualized embeddings in the same way as in the UDPipe 2.0 (Section SECREF16).
Results ::: POS Tagging and Lemmatization on PDT 3.5
The POS tagging and lemmatization results are presented in Table TABREF44. The word2vec word embeddings (WE) considerably increase performance compared to the baseline, especially in POS tagging. When only Flair embeddings are added to the baseline, we also observe an improvement, but not as large. We hypothesise that the lower performance (in contrast with the results reported in BIBREF2) is caused by the size of the training data, because we train the word2vec WE on a considerably larger dataset than the one used for the Czech Flair model. However, when WE and Flair embeddings are combined, performance moderately increases, demonstrating that the two embedding methods produce at least partially complementary representations.
The BERT embeddings alone bring the highest improvement in performance. Furthermore, combining them with WE or Flair yields a further performance increase. The best results are achieved by exploiting all three embedding methods, substantially exceeding state-of-the-art results.
Utilization of the morphological dictionary improves prediction accuracy. However, as the performance of the model itself increases, the gains obtained from the morphological dictionary diminish – for a model without any pretrained embeddings, the morphological dictionary improves POS tagging and lemmatization by $0.43\%$ and $0.45\%$, respectively, while the best performing model gains only $0.11\%$ and $0.23\%$.
Results ::: Dependency Parsing on PDT 3.5
The evaluation of the contextualized embedding methods, as well as of various ways of utilizing POS tags, is presented in Table TABREF44. Without POS tags and lemmas, the Flair embeddings bring only a slight improvement in dependency parsing when added to WE. In contrast, employing BERT embeddings results in substantial gains, increasing UAS and LAS by 1.6% and 2.1%. A combination of BERT and Flair embeddings does not result in any performance improvement, demonstrating that BERT's syntactic representations encompass the Flair embeddings.
When introducing POS tags and lemmas predicted by the best model from Section SECREF43 as inputs for dependency parsing, the performance increases only slightly. POS tags and lemmas are exploited more effectively in a joint model, which predicts POS tags, lemmas, and dependency trees simultaneously. Again, BERT embeddings bring significant improvements, but in contrast to the parsing-only setting, adding Flair embeddings to BERT results in a moderate gain – we hypothesise that the increase is due to the complementary morphological information present in the Flair embeddings (cf. Section SECREF43). Note that the joint model achieves better parsing accuracy than the one given gold POS tags and lemmas on input. However, the POS tags and lemmas predicted by the joint model are of slightly lower quality compared to a standalone tagger with the best configuration from Section SECREF43.
Table TABREF44 compares our best model with state-of-the-art results on PDT 2.0 (note that some of the related work used only a subset of PDT 2.0 and/or utilized gold morphological annotation). To the best of our knowledge, research on PDT parsing was performed mostly in the first decade of this century, so even our baseline model substantially surpasses previous works. Our best model with contextualized embeddings achieves nearly 50% error reduction in both UAS and LAS.
Results ::: POS Tagging, Lemmatization and Dependency Parsing on Universal Dependencies
Table TABREF47 shows the performance of analyzed embedding methods in a joint model performing POS tagging, lemmatization, and dependency parsing on Czech PDT UD 2.3 treebank. This treebank is derived from PDT 3.5 a-layer, with original POS tags kept in XPOS, and the dependency trees and lemmas modified according to UD guidelines.
We observe that the word2vec WEs perform similarly to Flair embeddings in this setting. Our hypothesis is that the word2vec WEs performance loss (compared to WEs in Section SECREF43) is caused by using a considerably smaller raw corpus to pretrain the WEs (Czech Wikipedia with 785M words, compared to 4G words used in Section SECREF43), due to licensing reasons. BERT embeddings once more deliver the highest improvement, especially in dependency parsing, and our best model employs all three embedding methods.
In the previous ablation experiments, we used the gold segmentation and tokenization in the Czech PDT UD 2.3 treebank. For comparison with state of the art, Czech PDT UD 2.2 treebank without gold segmentation and tokenization is used in evaluation, according to the CoNLL 2018 shared task training and evaluation protocol. Our system reuses segmentation and tokenization produced by UDPipe 2.0 in the CoNLL 2018 shared task and surpasses previous works substantially in all metrics (bottom part of Table TABREF47).
Comparing the results with a joint tagging and parsing PDT 3.5 model from Table TABREF7, we observe that the XPOS results are nearly identical as expected. Lemmatization on the UD treebank is performed without the discriminative numeric suffixes (see Section SECREF3) and therefore reaches better performance. Both UAS and LAS are also better on the UD treebank, which we assume is caused by the different annotation scheme.
Results ::: Named Entity Recognition
Table TABREF47 shows NER results (F1 score) on CNEC 1.1 and CNEC 2.0. Our sequence-to-sequence (seq2seq) model, which captures the nested entities, clearly surpasses the current Czech NER state of the art. Furthermore, a significant improvement is gained when adding the contextualized word embeddings (BERT and Flair) as optional inputs to the LSTM encoder. The strongest model is a combination of the sequence-to-sequence architecture with both BERT and Flair contextual word embeddings.
Conclusion
We have presented an evaluation of two contextualized embeddings methods, namely BERT and Flair. By utilizing these embeddings as input to deep neural networks, we have achieved state-of-the-art results in several Czech text processing tasks, namely in POS tagging, lemmatization, dependency parsing and named entity recognition.
Acknowledgements
The work described herein has been supported by OP VVV VI LINDAT/CLARIN project (CZ.02.1.01/0.0/0.0/16_013/0001781) and it has been supported and has been using language resources developed by the LINDAT/CLARIN project (LM2015071) of the Ministry of Education, Youth and Sports of the Czech Republic. | large raw Czech corpora available from the LINDAT/CLARIN repository, Czech Wikipedia |
13149342ccbb7a6df9b4b1bed890cfbdc1331c1f | 13149342ccbb7a6df9b4b1bed890cfbdc1331c1f_0 | Q: How big is dataset used for fine-tuning model for detection of red flag medical symptoms in individual statements?
Text: Introduction
The spread of influenza is a major health concern. Without appropriate preventative measures, this can escalate to an epidemic, causing high levels of mortality. A potential route to early detection is to analyse statements on social media platforms to identify individuals who have reported experiencing symptoms of the illness. These numbers can be used as a proxy to monitor the spread of the virus.
Since disease does not respect cultural borders and may spread between populations speaking different languages, we would like to build models for several languages without going through the difficult, expensive and time-consuming process of generating task-specific labelled data for each language. In this paper we explore ways of taking data and models generated in one language and transferring to other languages for which there is little or no data.
Related Work
Previously, authors have created multilingual models which should allow transfer between languages by aligning models BIBREF0 or embedding spaces BIBREF1, BIBREF2. An alternative is translation of a high-resource language into the target low-resource language; for instance, BIBREF3 combined translation with subsequent selective correction by active learning of uncertain words and phrases believed to describe entities, to create a labelled dataset for named entity recognition.
MedWeb Dataset
We use the MedWeb (“Medical Natural Language Processing for Web Document”) dataset BIBREF4 that was provided as part of a subtask at the NTCIR-13 Conference BIBREF5. The data is summarised in Table TABREF1. There are a total of 2,560 pseudo-tweets in three different languages: Japanese (ja), English (en) and Chinese (zh). These were created in Japanese and then manually translated into English and Chinese (see Figure FIGREF2). Each pseudo-tweet is labelled with a subset of the following 8 labels: influenza, diarrhoea/stomach ache, hay fever, cough/sore throat, headache, fever, runny nose, and cold. A positive label is assigned if the author (or someone they live with) has the symptom in question. As such it is more than a named entity recognition task, as can be seen in pseudo-tweet #3 in Figure FIGREF2 where the term “flu” is mentioned but the label is negative.
Methods ::: Bidirectional Encoder Representations from Transformers (BERT):
The BERT model BIBREF6 base version is a 12-layer Transformer model trained on two self-supervised tasks using a large corpus of text. In the first (denoising autoencoding) task, the model must map input sentences with some words replaced with a special “MASK” token back to the original unmasked sentences. In the second (binary classification) task, the model is given two sentences and must predict whether or not the second sentence immediately follows the first in the corpus. The output of the final Transformer layer is passed through a logistic output layer for classification. We have used the original (English) BERT-base, trained on Wikipedia and books corpus BIBREF7, and a Japanese BERT (jBERT) BIBREF8 trained on Japanese Wikipedia. The original BERT model and jBERT use a standard sentence piece tokeniser with roughly 30,000 tokens.
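For the MedWeb task this classification head becomes a multi-label output over the 8 symptom labels; a minimal PyTorch sketch of such a fine-tuning head is given below, with the checkpoint name and the [CLS] pooling choice as assumptions rather than a description of the exact system.

```python
import torch
from torch import nn
from transformers import AutoModel

class SymptomClassifier(nn.Module):
    """Pretrained encoder with a logistic output layer for 8 symptom labels."""

    def __init__(self, name="bert-base-multilingual-cased", n_labels=8):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.output = nn.Linear(self.encoder.config.hidden_size, n_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        pooled = hidden[:, 0]           # representation of the first ([CLS]) token
        return self.output(pooled)      # one logit per symptom label

# Multi-label training: sigmoid + binary cross-entropy for each label.
loss_fn = nn.BCEWithLogitsLoss()
```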
Methods ::: Multilingual BERT:
Multilingual BERT (mBERT) is a BERT model simultaneously trained on Wikipedia in 100 different languages. It makes use of a shared sentence piece tokeniser with roughly 100,000 tokens trained on the same data. This model provides state-of-the-art zero-shot transfer results on natural language inference and part-of-speech tagging tasks BIBREF9.
Methods ::: Translation:
We use two publicly available machine translation systems to provide two possible translations for each original sentence: Google's neural translation system BIBREF10 via Google Cloud, and Amazon Translate. We experiment using the translations singly and together.
Methods ::: Training procedure:
Models are trained for 20 epochs, using the Adam optimiser BIBREF11 and a cyclical learning rate BIBREF12 varied linearly between $5 \times 10^{-6}$ and $3 \times 10^{-5}$.
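A sketch of this optimisation setup in PyTorch follows; only the learning-rate bounds, the optimiser and the epoch count come from the text, while the half-cycle length and the per-batch scheduler step are assumptions.

```python
import torch

model = torch.nn.Linear(768, 8)                    # stand-in for the fine-tuned classifier
optimizer = torch.optim.Adam(model.parameters(), lr=5e-6)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=5e-6, max_lr=3e-5,          # bounds quoted in the text
    step_size_up=500,                              # assumed half-cycle length in batches
    mode="triangular",                             # linear ramp up and down
    cycle_momentum=False)                          # required when using Adam

# Tiny synthetic batches so the loop is runnable end to end.
batches = [(torch.randn(4, 768), torch.randint(0, 2, (4, 8)).float()) for _ in range(3)]
for epoch in range(20):
    for features, labels in batches:
        optimizer.zero_grad()
        loss = torch.nn.functional.binary_cross_entropy_with_logits(model(features), labels)
        loss.backward()
        optimizer.step()
        scheduler.step()                           # learning rate changes every batch
```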
Experiments
Using the multilingual BERT model, we run three experiments as described below. The “exact match” metric from the original MedWeb challenge is reported, which means that all labels must be predicted correctly for a given pseudo-tweet to be considered correct; macro-averaged F1 is also reported. Each experiment is run 5 times (with different random seeds) and the mean performance is shown in Table TABREF11. Our experiments are focused around using Japanese as the low-resource target language, with English and Chinese as the more readily available source languages.
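The two reported metrics can be computed as in the following sketch, assuming 0/1 label matrices of shape (n_examples, 8):

```python
import numpy as np
from sklearn.metrics import f1_score

def exact_match(y_true, y_pred):
    """Fraction of pseudo-tweets whose 8 labels are all predicted correctly."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).all(axis=1).mean())

def macro_f1(y_true, y_pred):
    """F1 computed per label, then averaged with equal weight per label."""
    return f1_score(y_true, y_pred, average="macro", zero_division=0)

y_true = [[1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 1, 0]]
y_pred = [[1, 0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0]]
print(exact_match(y_true, y_pred), macro_f1(y_true, y_pred))
```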
Experiments ::: Baselines
To establish a target for our transfer techniques we train and test models on a single language, i.e. English to English, Japanese to Japanese, and Chinese to Chinese. For English we use the uncased base-BERT, for Japanese we use jBERT, and for Chinese we use mBERT (since there is no Chinese-specific model available in the public domain). This last choice seems reasonable since mBERT performed similarly to the single-language models when trained and tested on the same language.
For comparison, we show the results of BIBREF13 who created the most successful model for the MedWeb challenge. Their final system was an ensemble of 120 trained models, using two architectures: a hierarchical attention network and a convolutional neural network. They exploited the fact that parallel data is available in three languages by ensuring consistency between outputs of the models in each language, giving a final exact match score of 0.880. However, for the purpose of demonstrating language transfer we report their highest single-model scores to show that our single-language models are competitive with the released results. We also show results for a majority class classifier (predicting all negative labels, see Table TABREF1) and a random classifier that uses the label frequencies from the training set to randomly predict labels.
Experiments ::: Zero-shot transfer with multilingual pre-training
Our first experiment investigates the zero-shot transfer ability of multilingual BERT. If mBERT has learned a shared embedding space for all languages, we would expect that if the model is fine-tuned on the English training dataset, then it should be applicable also to the Japanese dataset. To test this we have run this with both the English and Chinese training data, results are shown in Table TABREF11. We ran additional experiments where we froze layers within BERT, but observed no improvement.
The results indicate poor transfer, especially between English and Japanese. To investigate why the model does not perform well, we visualise the output vectors of mBERT using t-SNE BIBREF14 in Figure FIGREF14. We can see that the language representations occupy separate parts of the representation space, with only small amounts of overlap. Further, no clear correlation can be observed between sentence pairs.
The better transfer between Chinese and Japanese likely reflects the fact that these languages share tokens; one of the Japanese alphabets (the Kanji logographic alphabet) consists of Chinese characters. There is 21% vocabulary overlap for the training data and 19% for the test data, whereas there is no token overlap between English and Japanese. Our finding is consistent with previous claims that token overlap impacts mBERT's transfer capability BIBREF9.
Experiments ::: Training on machine translated data
Our second experiment investigates the use of machine translated data for training a model. We train on the machine translated source data and test on the target test set. Results are shown in Table TABREF11. Augmenting the data by using two sets of translations rather than one proves beneficial. In the end, the difference between training on real Japanese and training on translations from English is around 9% while training on translations from Chinese is around 4%.
Experiments ::: Mixing translated data with original data
Whilst the results for translated data are promising, we would like to bridge the gap to the performance of the original target data. Our premise is that we start with a fixed-size dataset in the source language, and we have a limited annotation budget to manually translate a proportion of this data into the target language. For this experiment we mix all the translated data with different portions of original Japanese data, varying the amount between 1% and 100%. The results of these experiments are shown in Figure FIGREF17. Using the translated data with just 10% of the original Japanese data, we close the gap by half; with 50% we match the single-language model, and with 100% we even appear to achieve a small improvement (for English), likely through the data augmentation provided by the translations.
Discussion and Conclusions
Zero-shot transfer using multilingual BERT performs poorly when transferring to Japanese on the MedWeb data. However, training on machine translations gives promising performance, and this performance can be increased by adding small amounts of original target data. On inspection, the drop in performance between translated and original Japanese was often a result of translations that were reasonable but not consistent with the labels. For example, when translating the first example in Figure FIGREF2, both machine translations map “風邪”, which means cold (the illness), into “寒さ”, which means cold (low temperature). Another example is where the Japanese pseudo-tweet “花粉症の時期はすごい疲れる。” was provided alongside an English pseudo-tweet “Allergy season is so exhausting.” Here, the Japanese word for hay fever “花粉症” has been manually mapped to the less specific word “allergies” in English; the machine translation maps back to Japanese using the word for “allergies”, i.e. “アレルギー” in the katakana alphabet (katakana is used to express words derived from foreign languages), since there is no kanji character for the concept of allergies. In future work, it would be interesting to understand how to detect such ambiguities in order to best deploy our annotation budget. | a total of 2,560 pseudo-tweets in three different languages: Japanese (ja), English (en) and Chinese (zh)
fbe149bd76863575b98fafb3679f411d3d21b4a3 | fbe149bd76863575b98fafb3679f411d3d21b4a3_0 | Q: Is there any explanation why some choice of language pair is better than the other?
Text: Introduction
The spread of influenza is a major health concern. Without appropriate preventative measures, this can escalate to an epidemic, causing high levels of mortality. A potential route to early detection is to analyse statements on social media platforms to identify individuals who have reported experiencing symptoms of the illness. These numbers can be used as a proxy to monitor the spread of the virus.
Since disease does not respect cultural borders and may spread between populations speaking different languages, we would like to build models for several languages without going through the difficult, expensive and time-consuming process of generating task-specific labelled data for each language. In this paper we explore ways of taking data and models generated in one language and transferring to other languages for which there is little or no data.
Related Work
Previously, authors have created multilingual models which should allow transfer between languages by aligning models BIBREF0 or embedding spaces BIBREF1, BIBREF2. An alternative is translation of a high-resource language into the target low-resource language; for instance, BIBREF3 combined translation with subsequent selective correction by active learning of uncertain words and phrases believed to describe entities, to create a labelled dataset for named entity recognition.
MedWeb Dataset
We use the MedWeb (“Medical Natural Language Processing for Web Document”) dataset BIBREF4 that was provided as part of a subtask at the NTCIR-13 Conference BIBREF5. The data is summarised in Table TABREF1. There are a total of 2,560 pseudo-tweets in three different languages: Japanese (ja), English (en) and Chinese (zh). These were created in Japanese and then manually translated into English and Chinese (see Figure FIGREF2). Each pseudo-tweet is labelled with a subset of the following 8 labels: influenza, diarrhoea/stomach ache, hay fever, cough/sore throat, headache, fever, runny nose, and cold. A positive label is assigned if the author (or someone they live with) has the symptom in question. As such it is more than a named entity recognition task, as can be seen in pseudo-tweet #3 in Figure FIGREF2 where the term “flu” is mentioned but the label is negative.
Methods ::: Bidirectional Encoder Representations from Transformers (BERT):
The BERT model BIBREF6 base version is a 12-layer Transformer model trained on two self-supervised tasks using a large corpus of text. In the first (denoising autoencoding) task, the model must map input sentences with some words replaced with a special “MASK” token back to the original unmasked sentences. In the second (binary classification) task, the model is given two sentences and must predict whether or not the second sentence immediately follows the first in the corpus. The output of the final Transformer layer is passed through a logistic output layer for classification. We have used the original (English) BERT-base, trained on Wikipedia and books corpus BIBREF7, and a Japanese BERT (jBERT) BIBREF8 trained on Japanese Wikipedia. The original BERT model and jBERT use a standard sentence piece tokeniser with roughly 30,000 tokens.
Methods ::: Multilingual BERT:
Multilingual BERT (mBERT) is a BERT model simultaneously trained on Wikipedia in 100 different languages. It makes use of a shared sentence piece tokeniser with roughly 100,000 tokens trained on the same data. This model provides state-of-the-art zero-shot transfer results on natural language inference and part-of-speech tagging tasks BIBREF9.
Methods ::: Translation:
We use two publicly available machine translation systems to provide two possible translations for each original sentence: Google's neural translation system BIBREF10 via Google Cloud, and Amazon Translate. We experiment using the translations singly and together.
Methods ::: Training procedure:
Models are trained for 20 epochs, using the Adam optimiser BIBREF11 and a cyclical learning rate BIBREF12 varied linearly between $5 \times 10^{-6}$ and $3 \times 10^{-5}$.
Experiments
Using the multilingual BERT model, we run three experiments as described below. The “exact match” metric from the original MedWeb challenge is reported, which means that all labels must be predicted correctly for a given pseudo-tweet to be considered correct; macro-averaged F1 is also reported. Each experiment is run 5 times (with different random seeds) and the mean performance is shown in Table TABREF11. Our experiments are focused around using Japanese as the low-resource target language, with English and Chinese as the more readily available source languages.
Experiments ::: Baselines
To establish a target for our transfer techniques we train and test models on a single language, i.e. English to English, Japanese to Japanese, and Chinese to Chinese. For English we use the uncased base-BERT, for Japanese we use jBERT, and for Chinese we use mBERT (since there is no Chinese-specific model available in the public domain). This last choice seems reasonable since mBERT performed similarly to the single-language models when trained and tested on the same language.
For comparison, we show the results of BIBREF13 who created the most successful model for the MedWeb challenge. Their final system was an ensemble of 120 trained models, using two architectures: a hierarchical attention network and a convolutional neural network. They exploited the fact that parallel data is available in three languages by ensuring consistency between outputs of the models in each language, giving a final exact match score of 0.880. However, for the purpose of demonstrating language transfer we report their highest single-model scores to show that our single-language models are competitive with the released results. We also show results for a majority class classifier (predicting all negative labels, see Table TABREF1) and a random classifier that uses the label frequencies from the training set to randomly predict labels.
Experiments ::: Zero-shot transfer with multilingual pre-training
Our first experiment investigates the zero-shot transfer ability of multilingual BERT. If mBERT has learned a shared embedding space for all languages, we would expect that if the model is fine-tuned on the English training dataset, then it should be applicable also to the Japanese dataset. To test this we have run this with both the English and Chinese training data, results are shown in Table TABREF11. We ran additional experiments where we froze layers within BERT, but observed no improvement.
The results indicate poor transfer, especially between English and Japanese. To investigate why the model does not perform well, we visualise the output vectors of mBERT using t-SNE BIBREF14 in Figure FIGREF14. We can see that the language representations occupy separate parts of the representation space, with only small amounts of overlap. Further, no clear correlation can be observed between sentence pairs.
The better transfer between Chinese and Japanese likely reflects the fact that these languages share tokens; one of the Japanese alphabets (the Kanji logographic alphabet) consists of Chinese characters. There is 21% vocabulary overlap for the training data and 19% for the test data, whereas there is no token overlap between English and Japanese. Our finding is consistent with previous claims that token overlap impacts mBERT's transfer capability BIBREF9.
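The overlap figures can be reproduced with a simple vocabulary intersection over tokenized sentences; a sketch follows, with the tokenizer left abstract and the direction of the ratio an assumption.

```python
def vocabulary_overlap(corpus_a, corpus_b):
    """Share of corpus_a's vocabulary that also occurs in corpus_b.

    Both corpora are iterables of already tokenized sentences.
    """
    vocab_a = {tok for sent in corpus_a for tok in sent}
    vocab_b = {tok for sent in corpus_b for tok in sent}
    return len(vocab_a & vocab_b) / len(vocab_a)

ja_train = [["風邪", "を", "ひい", "た"]]
zh_train = [["我", "感冒", "了"]]
print(f"{vocabulary_overlap(ja_train, zh_train):.0%}")   # 0% for this toy example
```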
Experiments ::: Training on machine translated data
Our second experiment investigates the use of machine translated data for training a model. We train on the machine translated source data and test on the target test set. Results are shown in Table TABREF11. Augmenting the data by using two sets of translations rather than one proves beneficial. In the end, the difference between training on real Japanese and training on translations from English is around 9% while training on translations from Chinese is around 4%.
Experiments ::: Mixing translated data with original data
Whilst the results for translated data are promising, we would like to bridge the gap to the performance of the original target data. Our premise is that we start with a fixed-size dataset in the source language, and we have a limited annotation budget to manually translate a proportion of this data into the target language. For this experiment we mix all the translated data with different portions of original Japanese data, varying the amount between 1% and 100%. The results of these experiments are shown in Figure FIGREF17. Using the translated data with just 10% of the original Japanese data, we close the gap by half; with 50% we match the single-language model, and with 100% we even appear to achieve a small improvement (for English), likely through the data augmentation provided by the translations.
Discussion and Conclusions
Zero-shot transfer using multilingual BERT performs poorly when transferring to Japanese on the MedWeb data. However, training on machine translations gives promising performance, and this performance can be increased by adding small amounts of original target data. On inspection, the drop in performance between translated and original Japanese was often a result of translations that were reasonable but not consistent with the labels. For example, when translating the first example in Figure FIGREF2, both machine translations map “風邪”, which means cold (the illness), into “寒さ”, which means cold (low temperature). Another example is where the Japanese pseudo-tweet “花粉症の時期はすごい疲れる。” was provided alongside an English pseudo-tweet “Allergy season is so exhausting.” Here, the Japanese word for hay fever “花粉症” has been manually mapped to the less specific word “allergies” in English; the machine translation maps back to Japanese using the word for “allergies”, i.e. “アレルギー” in the katakana alphabet (katakana is used to express words derived from foreign languages), since there is no kanji character for the concept of allergies. In future work, it would be interesting to understand how to detect such ambiguities in order to best deploy our annotation budget. | translations that were reasonable but not consistent with the labels
6992f8e5a33f0af0f2206769484c72fecc14700b | 6992f8e5a33f0af0f2206769484c72fecc14700b_0 | Q: Is the new model evaluated on the tasks that BERT and ELMo are evaluated on?
Text: Introduction
It has become clear over the last year that pretraining sentence encoder neural networks on unsupervised tasks, such as language modeling, then fine-tuning them on individual target tasks, can yield significantly better target task performance than could be achieved using target task training data alone BIBREF1 , BIBREF0 , BIBREF2 . Large-scale unsupervised pretraining in experiments like these seems to produce pretrained sentence encoders with substantial knowledge of the target language (which, so far, is generally English). These works have shown that a mostly task-agnostic, one-size-fits-all approach to fine-tuning a large pretrained model with a thin output layer for a given task can achieve results superior to individually optimized models.
However, it is not obvious that the model parameters obtained during unsupervised pretraining should be ideally suited to supporting this kind of transfer learning. Especially when only a small amount of training data is available for the target task, experiments of this kind are potentially brittle, and rely on the pretrained encoder parameters to be reasonably close to an optimal setting for the target task. During target task training, the encoder must learn and adapt enough to be able to solve the target task—potentially involving a very different input distribution and output label space than was seen in pretraining—but it must avoid adapting so much that it overfits and ceases to take advantage of what was learned during pretraining.
This work explores the possibility that the use of a second stage of pretraining with data-rich intermediate supervised tasks might mitigate this brittleness and improve both the robustness and effectiveness of the resulting target task model. We name this approach Supplementary Training on Intermediate Labeled-data Tasks (STILTs).
Experiments with sentence encoders on STILTs take the following form: (i) A model is first trained on an unlabeled-data task like language modeling that can teach it to handle data in the target language; (ii) The model is then further trained on an intermediate, labeled-data task for which ample labeled data is available; (iii) The model is finally fine-tuned further on the target task and evaluated. Our experiments evaluate STILTs as a means of improving target task performance on the GLUE benchmark suite BIBREF3 —a collection of target tasks drawn from the NLP literature—using the publicly-distributed OpenAI generatively-pretrained (GPT) Transformer language model BIBREF0 as our pretrained encoder. We follow Radford et al. in our basic mechanism for fine-tuning both for the intermediate and final tasks, and use the following intermediate tasks: the Multi-Genre NLI Corpus BIBREF4 , the Stanford NLI Corpus BIBREF5 , the Quora Question Pairs (QQP) dataset, and a custom fake-sentence-detection task based on the BooksCorpus dataset BIBREF6 using a method adapted from BIBREF7 . We show that using STILTs yields significant gains across most of the GLUE tasks.
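The three stages can be summarised as a simple training pipeline; the `train` and `evaluate` callables in the sketch below are hypothetical placeholders for the actual fine-tuning and evaluation code rather than the authors' implementation.

```python
def stilts_pipeline(pretrained_encoder, intermediate_task, target_task,
                    train, evaluate):
    """Supplementary Training on Intermediate Labeled-data Tasks (STILTs).

    pretrained_encoder: encoder from stage (i), e.g. a language-model-pretrained
        Transformer.
    intermediate_task:  data-rich labeled task for stage (ii), e.g. MNLI.
    target_task:        final target task for stage (iii), e.g. one GLUE task.
    train(model, task) -> model and evaluate(model, task) -> score are placeholders.
    """
    encoder = train(pretrained_encoder, intermediate_task)   # stage (ii): supplementary training
    model = train(encoder, target_task)                      # stage (iii): target fine-tuning
    return evaluate(model, target_task)
```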
As we expect that any kind of pretraining will be most valuable in a limited training data regime, we also conduct a set of fine-tuning experiments where the model is fine-tuned on only a 1k- or 5k-example sample of the target task training set. The results show that STILTs substantially improve model performance across most tasks in this downsampled data setting. For target tasks such as MRPC, using STILTs is critical to obtaining good performance.
Related Work
BIBREF8 compare several pretraining tasks for syntactic target tasks, and find that language model pretraining reliably performs well. BIBREF9 investigate the architectural choices behind ELMo-style pretraining with a fixed encoder, and find that the precise choice of encoder architecture strongly influences training speed, but has a relatively small impact on performance. In a publicly-available ICLR 2019 submission, BIBREF10 compare a variety of tasks for pretraining in an ELMo-style setting with no encoder fine-tuning. They conclude that language modeling generally works best among candidate single tasks for pretraining, but show some cases in which a cascade of a model pretrained on language modeling followed by another model pretrained on tasks like MNLI can work well. The paper introducing BERT BIBREF2 briefly mentions encouraging results in a direction similar to ours: one footnote notes that unpublished experiments show “substantial improvements on RTE from multi-task training with MNLI”.
In the area of sentence-to-vector sentence encoding, BIBREF11 offer one of the most comprehensive suites of diagnostic tasks, and highlight the importance of ensuring that these models preserve lexical content information.
In earlier work less closely tied to the unsupervised pretraining setup used here, BIBREF12 investigate the conditions under which task combinations can be productively combined in multi-task learning, and show that the success of a task combination can be determined by the shape of the learning curve during training for each task. In their words: “Multi-task gains are more likely for target tasks that quickly plateau with non-plateauing auxiliary tasks”.
In word representations, this work shares motivations with work on embedding space retrofitting BIBREF13 , in which a labeled dataset like WordNet is used to refine representations learned by an unsupervised embedding learning algorithm before those representations can then be used in a target task.
Results
Table 1 shows our results on GLUE with and without STILTs. Our addition of supplementary training boosts performance across many of the two sentence tasks. On each of our models trained with STILTs, we show improved overall average GLUE scores on the development set. For MNLI and QNLI target tasks, we observe marginal or no gains, likely owing to the two tasks already having large training sets. For the two single sentence tasks—the syntax-oriented CoLA task and the SST sentiment task—we find somewhat deteriorated performance. For CoLA, this mirrors results reported in BIBREF10 , who show that few pretraining tasks other than language modeling offer any advantage for CoLA. The Overall Best score is computed based on taking the best score for each task.
On the test set, we show similar performance gains across most tasks. Here, we compute Best based on Dev, which shows scores based on choosing the best supplementary training scheme for each task based on corresponding development set score. This is a more realistic estimate of test set performance, attaining a GLUE score of 76.9, a 2.3 point gain over the score of our baseline system adapted from Radford et al. This significantly closes the gap between Radford et al.'s model and the BERT model BIBREF2 variant with a similar number of parameters and layers, which attains a GLUE score of 78.3.
We perform the same experiment on the development set without the auxiliary language modeling objective. The results are shown in Table 3 in the Appendix. We similarly find improvements across many tasks by applying STILTs, showing that the benefits of supplementary training do not require language modeling at either the supplementary training or the fine-tuning stage.
Discussion
We find that sentence pair tasks seem to benefit more from supplementary training than single-sentence ones. This is true even for the case of supplementary training on the single-sentence fake-sentence-detection task, so the benefits cannot be wholly attributed to task similarity. We also find that data-constrained tasks benefit much more from supplementary training. Indeed, when applied to RTE, supplementary training on MNLI leads to an eleven-point increase in test set score, pushing the performance of Radford et al.'s GPT model with supplementary training above the BERT model of similar size, which achieves a test set score of 66.4. Based on the improvements seen from applying supplementary training on the fake-sentence-detection task, which is built on the same BooksCorpus dataset that the GPT model was trained on, it is also clear that the benefits from supplementary training do not entirely stem from the trained model being exposed to different textual domains.
Applying STILTs also comes with little complexity or computational overhead. The same infrastructure used to fine-tune the GPT model can be used to implement the supplementary training. The computational cost of the supplementary training phase is another phase of fine-tuning, which is small compared to the cost of training the original model.
However, using STILTs is not always beneficial. In particular, we show that most of our intermediate tasks were actually detrimental to the single-sentence tasks in GLUE. The interaction between the intermediate task, the target task, and the use of the auxiliary language modeling objective is a subject due for further investigation. Therefore, for best target task performance, we recommend experimenting with supplementary training with several closely-related data-rich tasks and use the development set to select the most promising approach for each task, as in the Best based on Dev formulation shown in Table 1 .
Conclusion
This work represents only an initial investigation into the benefits of supplementary supervised pretraining. More work remains to be done to firmly establish when methods like STILTs can be productively applied and what criteria can be used to predict which combinations of intermediate and target tasks should work well. Nevertheless, in our initial work with four example intermediate training tasks, GPT on STILTs achieves a test set GLUE score of 76.9, which markedly improves on our strong pretrained Transformer baseline. We also show that in data-constrained regimes, the benefits of using STILTs are even more pronounced.
Acknowledgments
We would like to thank Nikita Nangia for her helpful feedback. | Yes |
a91abc7983fffa6b2e1e46133f559cec3d7d9438 | a91abc7983fffa6b2e1e46133f559cec3d7d9438_0 | Q: Does the additional training on supervised tasks hurt performance in some tasks?
Text: Introduction
It has become clear over the last year that pretraining sentence encoder neural networks on unsupervised tasks, such as language modeling, then fine-tuning them on individual target tasks, can yield significantly better target task performance than could be achieved using target task training data alone BIBREF1 , BIBREF0 , BIBREF2 . Large-scale unsupervised pretraining in experiments like these seems to produce pretrained sentence encoders with substantial knowledge of the target language (which, so far, is generally English). These works have shown that a mostly task-agnostic, one-size-fits-all approach to fine-tuning a large pretrained model with a thin output layer for a given task can achieve results superior to individually optimized models.
However, it is not obvious that the model parameters obtained during unsupervised pretraining should be ideally suited to supporting this kind of transfer learning. Especially when only a small amount of training data is available for the target task, experiments of this kind are potentially brittle, and rely on the pretrained encoder parameters to be reasonably close to an optimal setting for the target task. During target task training, the encoder must learn and adapt enough to be able to solve the target task—potentially involving a very different input distribution and output label space than was seen in pretraining—but it must avoid adapting so much that it overfits and ceases to take advantage of what was learned during pretraining.
This work explores the possibility that the use of a second stage of pretraining with data-rich intermediate supervised tasks might mitigate this brittleness and improve both the robustness and effectiveness of the resulting target task model. We name this approach Supplementary Training on Intermediate Labeled-data Tasks (STILTs).
Experiments with sentence encoders on STILTs take the following form: (i) A model is first trained on an unlabeled-data task like language modeling that can teach it to handle data in the target language; (ii) The model is then further trained on an intermediate, labeled-data task for which ample labeled data is available; (iii) The model is finally fine-tuned further on the target task and evaluated. Our experiments evaluate STILTs as a means of improving target task performance on the GLUE benchmark suite BIBREF3 —a collection of target tasks drawn from the NLP literature—using the publicly-distributed OpenAI generatively-pretrained (GPT) Transformer language model BIBREF0 as our pretrained encoder. We follow Radford et al. in our basic mechanism for fine-tuning both for the intermediate and final tasks, and use the following intermediate tasks: the Multi-Genre NLI Corpus BIBREF4 , the Stanford NLI Corpus BIBREF5 , the Quora Question Pairs (QQP) dataset, and a custom fake-sentence-detection task based on the BooksCorpus dataset BIBREF6 using a method adapted from BIBREF7 . We show that using STILTs yields significant gains across most of the GLUE tasks.
As we expect that any kind of pretraining will be most valuable in a limited training data regime, we also conduct a set of fine-tuning experiments where the model is fine-tuned on only a 1k- or 5k-example sample of the target task training set. The results show that STILTs substantially improve model performance across most tasks in this downsampled data setting. For target tasks such as MRPC, using STILTs is critical to obtaining good performance.
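The downsampling step itself is straightforward; a sketch is given below, where `train_set` is a hypothetical list of labeled target-task examples and the fixed random seed is an assumption, since the sampling procedure is not spelled out in the text.

```python
# Sketch of building the 1k/5k downsampled training sets; `train_set` is a toy
# stand-in for the target task's labeled examples, and the fixed seed is an assumption.
import random

train_set = [(f"example sentence {i}", i % 2) for i in range(10000)]

def subsample(examples, k, seed=42):
    rng = random.Random(seed)
    return rng.sample(examples, min(k, len(examples)))

train_1k = subsample(train_set, 1000)
train_5k = subsample(train_set, 5000)
```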
Related Work
BIBREF8 compare several pretraining tasks for syntactic target tasks, and find that language model pretraining reliably performs well. BIBREF9 investigate the architectural choices behind ELMo-style pretraining with a fixed encoder, and find that the precise choice of encoder architecture strongly influences training speed, but has a relatively small impact on performance. In a publicly-available ICLR 2019 submission, BIBREF10 compare a variety of tasks for pretraining in an ELMo-style setting with no encoder fine-tuning. They conclude that language modeling generally works best among candidate single tasks for pretraining, but show some cases in which a cascade of a model pretrained on language modeling followed by another model pretrained on tasks like MNLI can work well. The paper introducing BERT BIBREF2 briefly mentions encouraging results in a direction similar to ours: One footnote notes that unpublished experiments show “substantial improvements on RTE from multi-task training with MNLI”.
In the area of sentence-to-vector sentence encoding, BIBREF11 offer one of the most comprehensive suites of diagnostic tasks, and highlight the importance of ensuring that these models preserve lexical content information.
In earlier work less closely tied to the unsupervised pretraining setup used here, BIBREF12 investigate the conditions under which task combinations can be productively combined in multi-task learning, and show that the success of a task combination can be determined by the shape of the learning curve during training for each task. In their words: “Multi-task gains are more likely for target tasks that quickly plateau with non-plateauing auxiliary tasks”.
In word representations, this work shares motivations with work on embedding space retrofitting BIBREF13 , in which a labeled dataset like WordNet is used to refine representations learned by an unsupervised embedding learning algorithm before those representations can then be used in a target task.
Results
Table 1 shows our results on GLUE with and without STILTs. Our addition of supplementary training boosts performance across many of the two sentence tasks. On each of our models trained with STILTs, we show improved overall average GLUE scores on the development set. For MNLI and QNLI target tasks, we observe marginal or no gains, likely owing to the two tasks already having large training sets. For the two single sentence tasks—the syntax-oriented CoLA task and the SST sentiment task—we find somewhat deteriorated performance. For CoLA, this mirrors results reported in BIBREF10 , who show that few pretraining tasks other than language modeling offer any advantage for CoLA. The Overall Best score is computed based on taking the best score for each task.
On the test set, we show similar performance gains across most tasks. Here, we compute Best based on Dev, which shows scores based on choosing the best supplementary training scheme for each task based on corresponding development set score. This is a more realistic estimate of test set performance, attaining a GLUE score of 76.9, a 2.3 point gain over the score of our baseline system adapted from Radford et al. This significantly closes the gap between Radford et al.'s model and the BERT model BIBREF2 variant with a similar number of parameters and layers, which attains a GLUE score of 78.3.
We perform the same experiment on the development set without the auxiliary language modeling objective. The results are shown in Table 3 in the Appendix. We similarly find improvements across many tasks by applying STILTs, showing that the benefits of supplementary training do not require language modeling at either the supplementary training or the fine-tuning stage.
Discussion
We find that sentence pair tasks seem to benefit more from supplementary training than single-sentence ones. This is true even for the case of supplementary training on the single-sentence fake-sentence-detection task, so the benefits cannot be wholly attributed to task similarity. We also find that data-constrained tasks benefit much more from supplementary training. Indeed, when applied to RTE, supplementary training on MNLI leads to an eleven-point increase in test set score, pushing the performance of Radford et al.'s GPT model with supplementary training above the BERT model of similar size, which achieves a test set score of 66.4. Based on the improvements seen from applying supplementary training on the fake-sentence-detection task, which is built on the same BooksCorpus dataset that the GPT model was trained on, it is also clear that the benefits from supplementary training do not entirely stem from the trained model being exposed to different textual domains.
Applying STILTs also comes with little complexity or computational overhead. The same infrastructure used to fine-tune the GPT model can be used to implement the supplementary training. The computational cost of the supplementary training phase is another phase of fine-tuning, which is small compared to the cost of training the original model.
However, using STILTs is not always beneficial. In particular, we show that most of our intermediate tasks were actually detrimental to the single-sentence tasks in GLUE. The interaction between the intermediate task, the target task, and the use of the auxiliary language modeling objective is a subject due for further investigation. Therefore, for best target task performance, we recommend experimenting with supplementary training with several closely-related data-rich tasks and use the development set to select the most promising approach for each task, as in the Best based on Dev formulation shown in Table 1 .
Conclusion
This work represents only an initial investigation into the benefits of supplementary supervised pretraining. More work remains to be done to firmly establish when methods like STILTs can be productively applied and what criteria can be used to predict which combinations of intermediate and target tasks should work well. Nevertheless, in our initial work with four example intermediate training tasks, GPT on STILTs achieves a test set GLUE score of 76.9, which markedly improves on our strong pretrained Transformer baseline. We also show that in data-constrained regimes, the benefits of using STILTs are even more pronounced.
Acknowledgments
We would like to thank Nikita Nangia for her helpful feedback. | Yes |
c45feda62f23245f53e855706e2d8ea733b7fd03 | c45feda62f23245f53e855706e2d8ea733b7fd03_0 | Q: Which translation system do they use to translate to English?
Text: Introduction
Named entity recognition (NER) is a sequence tagging task that extracts continuous spans of tokens and assigns them to specified classes, such as person names, organizations and locations. Current state-of-the-art approaches for NER usually base themselves on long short-term memory recurrent neural networks (LSTM RNNs) and a subsequent conditional random field (CRF) to predict the sequence labels BIBREF0 . The performance of neural NER methods is compromised if the training data are insufficient BIBREF1 . This problem is severe for many languages due to a lack of labeled datasets, e.g., German and Spanish. In comparison, NER on English is well developed and there exist abundant labeled data for training purposes. Therefore, in this work, we regard English as a high-resource language, and other languages, even Chinese, as low-resource languages.
There is an intractable problem when leveraging an English NER system for other languages. Sentences with the same meaning in different languages may have different lengths, and the positions of words in these sentences usually do not correspond. Previous work such as BIBREF2 used single-word translation information to enrich the monolingual word embeddings. To our knowledge, there is no approach that employs the whole translation information to improve the performance of a monolingual NER system.
To address the above problem, we introduce an extension to the BiLSTM-CRF model, which can obtain transferred knowledge from a pre-trained English NER system. First, we translate other languages into English. Since the models proposed by BIBREF3 and BIBREF4 , the performance of attention-based machine translation systems has been close to the human level. The attention mechanism can make the translation results more accurate. Furthermore, this mechanism has another useful property: the attention weights can represent the alignment information. After translating the low-resource language into English, we utilize the pre-trained English NER model to predict the sentences and record the output states of the BiLSTM in this model. The states contain the semantic and task-specific information of the sentences. By using the soft alignment attention weights as a transformation matrix, we manage to transfer the knowledge of the high-resource language, English, to other languages. Finally, using both word vectors and the transfer knowledge, we obtain new state-of-the-art results on four datasets.
Model
In this section, we introduce the back attention network (BAN) in three parts. Our model is based on the mainstream NER model BIBREF5 , using BiLSTM-CRF as the basic network structure. We are given a sentence INLINEFORM0 and corresponding labels INLINEFORM1 , where INLINEFORM2 denotes the INLINEFORM3 th token and INLINEFORM4 denotes the INLINEFORM5 th label. The NER task is to estimate the probability INLINEFORM6 . Figure FIGREF1 shows the main architecture of our model.
Pre-trained Translation and NER Model
Attention-based translation model We use the system of BIBREF6 , a convolutional sequence-to-sequence model. It divides the translation process into two steps. First, in the encoder step, given an input sentence INLINEFORM0 of length INLINEFORM1 , INLINEFORM2 represents each word as a word embedding INLINEFORM3 . After that, we obtain the absolute positions of the input elements INLINEFORM4 . Both vectors are concatenated to get the input sentence representations INLINEFORM5 . Similarly, the output elements INLINEFORM6 generated by the decoder network have the same structure. A convolutional neural network (CNN) is used to get the hidden state of the sentence representation from left to right. Second, in the decoder step, an attention mechanism is used in each CNN layer. In order to acquire the attention value, we combine the current decoder state INLINEFORM7 with the embedding of the previous decoder output value INLINEFORM8 : DISPLAYFORM0
For INLINEFORM0 th layer, the attention INLINEFORM1 of the INLINEFORM2 th source element and INLINEFORM3 th state is computed as a dot-product between the decoder state summary INLINEFORM4 and each output INLINEFORM5 of the last encoder layer: DISPLAYFORM0
Then we follow the normal decoder implementation and get target sentence INLINEFORM0 by beam search algorithm.
Pre-trained English NER model We construct the English NER system following BIBREF7 . This system uses a bidirectional LSTM as a character-level language model to capture context information for word embedding generation. The hidden states of the character language model (CharLM) are used to create contextualized word embeddings. The final embedding INLINEFORM0 is the concatenation of the CharLM embedding INLINEFORM1 and the GloVe embedding INLINEFORM2 BIBREF8 . A standard BiLSTM-CRF named entity recognition model BIBREF0 takes INLINEFORM3 as input to address the NER task.
Back Attention Knowledge Transfer
The sentences in the low-resource language are used as input to the model. Given an input sentence INLINEFORM0 in the low-resource language, we use the pre-trained translation model to translate INLINEFORM1 into English, and the output is INLINEFORM2 . Simultaneously, we record the average of the attention values over all INLINEFORM3 attention layers: DISPLAYFORM0
After that, we use the pre-trained English NER model to predict the translated sentence INLINEFORM0 . Then, we have the BiLSTM output states: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 denote the INLINEFORM2 th forward and backward outputs, respectively. INLINEFORM3 contains the semantic and task-specific information of the translated sentence, and the INLINEFORM4 th row of the attention weight matrix INLINEFORM5 represents the correlation between source word INLINEFORM6 and all words in the target sentence INLINEFORM7 . Thereafter, to obtain the transfer information INLINEFORM8 of each source word, we apply the attention weights in reverse: DISPLAYFORM0
where INLINEFORM0 represents the whole output of the BiLSTM, and INLINEFORM1 , INLINEFORM2 . INLINEFORM3 denotes the transfer information of the INLINEFORM4 th word in the low-resource language and has the same dimensions as INLINEFORM5 .
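As a rough numerical illustration of this step (the tensor names and shapes below are assumptions, since the exact equation symbols are not shown in this text), the transfer vector of each source word is simply an attention-weighted combination of the English BiLSTM states:

```python
# Toy sketch of the back attention transfer: each source word receives an
# attention-weighted sum of the English NER BiLSTM states (all names/shapes assumed).
import torch

src_len, en_len, hidden = 7, 9, 512          # hidden = 2 x LSTM size (fw + bw concat)
attn = torch.rand(src_len, en_len)           # averaged attention, row i = source word i
attn = attn / attn.sum(dim=1, keepdim=True)  # normalize each row into a soft alignment
h_en = torch.randn(en_len, hidden)           # BiLSTM output states of the translation

transfer = attn @ h_en                       # (src_len, hidden) transfer information
```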
Named Entity Recognition Architecture
The low-resource language named entity recognition architecture is based on BIBREF5 . The word embeddings of low-resource language are passed into a BiLSTM-CRF sequence labeling network. The embeddings INLINEFORM0 are used as inputs to the BiLSTM. Then we have: DISPLAYFORM0
Before passing the forward and backward output states INLINEFORM0 into CRF, we concatenate INLINEFORM1 and INLINEFORM2 as a new representation: DISPLAYFORM0
CRF model uses INLINEFORM0 to give the final sequence probability on the possible sequence label INLINEFORM1 : DISPLAYFORM0
At last, the named entity labels are predicted by: DISPLAYFORM0
Experiments
We conduct experiments on three different low-resource languages to evaluate the effectiveness of our back attention mechanism on the NER task. Four datasets are used in our work, including CoNLL 2003 German BIBREF9 , CoNLL 2002 Spanish BIBREF10 , OntoNotes 4 BIBREF11 and Weibo NER BIBREF12 . All the annotations are mapped to the BIOES format. Table TABREF14 shows the detailed statistics of the datasets.
Experimental Setup
We implement the basic BiLSTM-CRF model using the PyTorch framework. FASTTEXT embeddings are used for generating word embeddings. Translation models are trained on the United Nations Parallel Corpus. For the pre-trained English NER system, we use the default NER model of Flair.
Settings
We train our NER model using vanilla SGD with no momentum for 150 epochs, with an initial learning rate of 0.1 and a learning rate annealing scheme that is triggered when the training loss does not fall for 3 consecutive epochs. The hidden size of the BiLSTM model is set to 256 and the mini-batch size is set to 16. Dropout is applied to word embeddings with a rate of 0.1 and to the BiLSTM with a rate of 0.5. We repeat each experiment 5 times under different random seeds and report the average test set score as the final performance.
German and Spanish NER
Experimental results of German and Spanish are shown in table TABREF20 . Evaluation metric is F1-score. We can find that our method CharLM+BiLSTM-CRF+BAN yields the best performance on two languages. And after adding our network to each of the basic models, the performance of each model has been improved. This suggests that the transfer information, obtained from BAN, is helpful for low-resource NER.
Chinese NER
Chinese is distinct from Latin-script languages, so processing a Chinese corpus usually involves additional tricks such as word segmentation. Since we only aim to verify the validity of our method, we simply use character-level embeddings.
Table TABREF22 shows the results on Chinese OntoNotes 4.0. Adding BAN to baseline model leads to an increase from 63.25% to 72.15% F1-score. In order to further improve the performance, we use the BERT model BIBREF20 to produce word embeddings. With no segmentation, we surpass the previous state-of-the-art approach by 6.33% F1-score. For Weibo dataset, the experiment results are shown in Table TABREF23 , where NE, NM and Overall denote named entities, nominal entities and both. The baseline model gives a 33.18% F1-score. Using the transfer knowledge by BAN, the baseline model achieves an immense improvement in F1-score, rising by 10.39%. We find that BAN still gets consistent improvement on a strong model. With BAN, the F1-score of BERT+BiLSTM+CRF increases to 70.76%.
Task-Specific Information from Back Attention Network
BIBREF21 indicates that the representations from higher-level layers of NLP models are more task-specific. Although we do the same task among different languages, the target domains of different datasets are slightly different. So, to prove that back attention knowledge generated by BAN could capture valuable task-specific information between different languages, we use the back attention knowledge alone as word embedding to predict Weibo dataset. We compare three different word embeddings on the baseline model. Experimental results are shown in Table TABREF25 and illustrate that back attention knowledge from BAN has inherent semantic information.
Analysis
Our proposed approach is the first to leverage the hidden states of an NER model from another language to improve monolingual NER performance. The training time with or without BAN is almost the same because the translation module and the English NER module are pre-trained.
On large datasets, our model makes a small improvement because some of the transfer knowledge obtained from our method overlaps with the information learned by the monolingual models. On small datasets, e.g., the Weibo dataset, a great improvement is achieved after adding transfer knowledge to the baseline model. The reason may be that these datasets are too small for the models to be fully trained, and the test sets contain many characters that do not appear in the training set, even some unrecognized characters. Therefore, some tags labeled incorrectly by monolingual models can be labeled correctly with the additional transfer knowledge, which contains task-specific information obtained from BAN. So, the transfer information plays an important role in this dataset.
Conclusion
In this paper, we seek to improve the performance of NER on low-resource languages by leveraging a well-trained English NER system. This is achieved by way of BAN, which is a simple but extensible approach. It can transfer information between different languages. Empirical experiments show that, on small datasets, our approach can lead to significant improvements in performance. This property is of great practical importance for low-resource languages. In future work, we plan to extend our method to other NLP tasks, e.g., relation extraction and coreference resolution. | Attention-based translation model with convolution sequence to sequence model
9785ecf1107090c84c57112d01a8e83418a913c1 | 9785ecf1107090c84c57112d01a8e83418a913c1_0 | Q: Which languages do they work with?
Text: Introduction
Named entity recognition (NER) is a sequence tagging task that extracts the continuous tokens into specified classes, such as person names, organizations and locations. Current state-of-the-art approaches for NER usually base themselves on long short-term memory recurrent neural networks (LSTM RNNs) and a subsequent conditional random field (CRF) to predict the sequence labels BIBREF0 . Performances of neural NER methods are compromised if the training data are not enough BIBREF1 . This problem is severe for many languages due to a lack of labeled datasets, e.g., German and Spanish. In comparison, NER on English is well developed and there exist abundant labeled data for training purpose. Therefore, in this work, we regard English as a high-resource language, while other languages, even Chinese, as low-resource languages.
There is an intractable problem when leveraging English NER system for other languages. The sentences with the same meaning in different languages may have different lengths and the positions of words in these sentences usually do not correspond. Previous work such as BIBREF2 used each single word translation information to enrich the monolingual word embedding. To our knowledge, there is no approach that employs the whole translation information to improve the performance of the monolingual NER system.
To address above problem, we introduce an extension to the BiLSTM-CRF model, which could obtain transferred knowledge from a pre-trained English NER system. First, we translate other languages into English. Since the proposed models of BIBREF3 and BIBREF4 , the performance of attention-based machine translation systems is close to the human level. The attention mechanism can make the translation results more accurate. Furthermore, this mechanism has another useful property: the attention weights can represent the alignment information. After translating the low-resource language into English, we utilize the pre-trained English NER model to predict the sentences and record the output states of BiLSTM in this model. The states contain the semantic and task-specific information of the sentences. By using soft alignment attention weights as a transformation matrix, we manage to transfer the knowledge of high resource language — English to other languages. Finally, using both word vectors and the transfer knowledge, we obtain new state-of-the-art results on four datasets.
Model
In this section, we will introduce the BAN in three parts. Our model is based on the mainstream NER model BIBREF5 , using BiLSTM-CRF as the basic network structure. Given a sentence INLINEFORM0 and corresponding labels INLINEFORM1 , where INLINEFORM2 denotes the INLINEFORM3 th token and INLINEFORM4 denotes the INLINEFORM5 th label. The NER task is to estimate the probability INLINEFORM6 . Figure FIGREF1 shows the main architecture of our model.
Pre-trained Translation and NER Model
Attention-based translation model We use the system of BIBREF6 , a convolutional sequence-to-sequence model. It divides the translation process into two steps. First, in the encoder step, given an input sentence INLINEFORM0 of length INLINEFORM1 , INLINEFORM2 represents each word as a word embedding INLINEFORM3 . After that, we obtain the absolute positions of the input elements INLINEFORM4 . Both vectors are concatenated to get the input sentence representations INLINEFORM5 . Similarly, the output elements INLINEFORM6 generated by the decoder network have the same structure. A convolutional neural network (CNN) is used to get the hidden state of the sentence representation from left to right. Second, in the decoder step, an attention mechanism is used in each CNN layer. In order to acquire the attention value, we combine the current decoder state INLINEFORM7 with the embedding of the previous decoder output value INLINEFORM8 : DISPLAYFORM0
For INLINEFORM0 th layer, the attention INLINEFORM1 of the INLINEFORM2 th source element and INLINEFORM3 th state is computed as a dot-product between the decoder state summary INLINEFORM4 and each output INLINEFORM5 of the last encoder layer: DISPLAYFORM0
Then we follow the normal decoder implementation and get target sentence INLINEFORM0 by beam search algorithm.
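A toy sketch of the attention computation described above is given below; the tensor names (h, g, z), their sizes, and the softmax normalization follow the original convolutional sequence-to-sequence model and are assumptions on our part, since the exact symbols are not reproduced in this text.

```python
# Toy sketch of the decoder-side attention: combine the decoder state with the previous
# output embedding, then dot-product against the last encoder layer's outputs.
import torch
import torch.nn.functional as F

tgt_len, src_len, dim = 6, 8, 256
h = torch.randn(tgt_len, dim)          # decoder states of the current layer
g = torch.randn(tgt_len, dim)          # embeddings of the previous decoder outputs
z = torch.randn(src_len, dim)          # outputs of the last encoder layer
W = torch.nn.Linear(dim, dim)          # per-layer projection (weights + bias)

d = W(h) + g                           # decoder state summary
attn = F.softmax(d @ z.t(), dim=-1)    # (tgt_len, src_len) attention weights
```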
Pre-trained English NER model We construct the English NER system following BIBREF7 . This system uses a bidirectional LSTM as a character-level language model to take context information for word embedding generation. The hidden states of the character language model (CharLM) are used to create contextualized word embeddings. The final embedding INLINEFORM0 is concatenated by the CharLM embedding INLINEFORM1 and GLOVE embedding INLINEFORM2 BIBREF8 . A standard BiLSTM-CRF named entity recognition model BIBREF0 takes INLINEFORM3 to address the NER task.
Back Attention Knowledge Transfer
The sentences in the low-resource language are used as input to the model. Given an input sentence INLINEFORM0 in the low-resource language, we use the pre-trained translation model to translate INLINEFORM1 into English, and the output is INLINEFORM2 . Simultaneously, we record the average of the attention values over all INLINEFORM3 attention layers: DISPLAYFORM0
After that, we use the pre-trained English NER model to predict the translated sentence INLINEFORM0 . Then, we have the BiLSTM output states: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 denote the INLINEFORM2 th forward and backward outputs, respectively. INLINEFORM3 contains the semantic and task-specific information of the translated sentence. And the INLINEFORM4 th row of attention weights matrix INLINEFORM5 represents the correlation between source word INLINEFORM6 with all words in target sentence INLINEFORM7 . Thereafter, to obtain the transfer information INLINEFORM8 of source word, we reversely use the attention weights: DISPLAYFORM0
where INLINEFORM0 represent the whole outputs of BiLSTM, and INLINEFORM1 , INLINEFORM2 . INLINEFORM3 denotes the transfer information of INLINEFORM4 th word in low-resource language and has the same dimensions with INLINEFORM5 .
Named Entity Recognition Architecture
The low-resource language named entity recognition architecture is based on BIBREF5 . The word embeddings of low-resource language are passed into a BiLSTM-CRF sequence labeling network. The embeddings INLINEFORM0 are used as inputs to the BiLSTM. Then we have: DISPLAYFORM0
Before passing the forward and backward output states INLINEFORM0 into CRF, we concatenate INLINEFORM1 and INLINEFORM2 as a new representation: DISPLAYFORM0
CRF model uses INLINEFORM0 to give the final sequence probability on the possible sequence label INLINEFORM1 : DISPLAYFORM0
At last, the named entity labels are predicted by: DISPLAYFORM0
Experiments
We conduct experiments on three different low-resource languages to evaluate the effectiveness of our back attention mechanism on the NER task. Four datasets are used in our work, including CoNLL 2003 German BIBREF9 , CoNLL 2002 Spanish BIBREF10 , OntoNotes 4 BIBREF11 and Weibo NER BIBREF12 . All the annotations are mapped to the BIOES format. Table TABREF14 shows the detailed statistics of the datasets.
Experimental Setup
We implement the basic BiLSTM-CRF model using the PyTorch framework. FASTTEXT embeddings are used for generating word embeddings. Translation models are trained on the United Nations Parallel Corpus. For the pre-trained English NER system, we use the default NER model of Flair.
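For reference, Flair's default English NER model can be loaded as sketched below; the "ner" model identifier and the exact output format are assumptions that may differ across Flair versions.

```python
# Sketch of loading Flair's default English NER model; identifier and output
# format may vary by Flair version.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("ner")                 # default English NER model
sentence = Sentence("George Washington went to Washington .")
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):            # predicted entity spans
    print(entity)
```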
Settings
We train our NER model using vanilla SGD with no momentum for 150 epochs, with an initial learning rate of 0.1 and a learning rate annealing scheme that is triggered when the training loss does not fall for 3 consecutive epochs. The hidden size of the BiLSTM model is set to 256 and the mini-batch size is set to 16. Dropout is applied to word embeddings with a rate of 0.1 and to the BiLSTM with a rate of 0.5. We repeat each experiment 5 times under different random seeds and report the average test set score as the final performance.
German and Spanish NER
Experimental results of German and Spanish are shown in table TABREF20 . Evaluation metric is F1-score. We can find that our method CharLM+BiLSTM-CRF+BAN yields the best performance on two languages. And after adding our network to each of the basic models, the performance of each model has been improved. This suggests that the transfer information, obtained from BAN, is helpful for low-resource NER.
Chinese NER
Chinese is distinct from Latin-script languages, so processing a Chinese corpus usually involves additional tricks such as word segmentation. Since we only aim to verify the validity of our method, we simply use character-level embeddings.
Table TABREF22 shows the results on Chinese OntoNotes 4.0. Adding BAN to baseline model leads to an increase from 63.25% to 72.15% F1-score. In order to further improve the performance, we use the BERT model BIBREF20 to produce word embeddings. With no segmentation, we surpass the previous state-of-the-art approach by 6.33% F1-score. For Weibo dataset, the experiment results are shown in Table TABREF23 , where NE, NM and Overall denote named entities, nominal entities and both. The baseline model gives a 33.18% F1-score. Using the transfer knowledge by BAN, the baseline model achieves an immense improvement in F1-score, rising by 10.39%. We find that BAN still gets consistent improvement on a strong model. With BAN, the F1-score of BERT+BiLSTM+CRF increases to 70.76%.
Task-Specific Information from Back Attention Network
BIBREF21 indicates that the representations from higher-level layers of NLP models are more task-specific. Although we do the same task among different languages, the target domains of different datasets are slightly different. So, to prove that back attention knowledge generated by BAN could capture valuable task-specific information between different languages, we use the back attention knowledge alone as word embedding to predict Weibo dataset. We compare three different word embeddings on the baseline model. Experimental results are shown in Table TABREF25 and illustrate that back attention knowledge from BAN has inherent semantic information.
Analysis
Our proposed approach is the first to leverage the hidden states of an NER model from another language to improve monolingual NER performance. The training time with or without BAN is almost the same because the translation module and the English NER module are pre-trained.
On large datasets, our model makes a small improvement because some of the transfer knowledge obtained from our method overlaps with the information learned by the monolingual models. On small datasets, e.g., the Weibo dataset, a great improvement is achieved after adding transfer knowledge to the baseline model. The reason may be that these datasets are too small for the models to be fully trained, and the test sets contain many characters that do not appear in the training set, even some unrecognized characters. Therefore, some tags labeled incorrectly by monolingual models can be labeled correctly with the additional transfer knowledge, which contains task-specific information obtained from BAN. So, the transfer information plays an important role in this dataset.
Conclusion
In this paper, we seek to improve the performance of NER on low-resource languages by leveraging a well-trained English NER system. This is achieved by way of BAN, which is a simple but extensible approach. It can transfer information between different languages. Empirical experiments show that, on small datasets, our approach can lead to significant improvements in performance. This property is of great practical importance for low-resource languages. In future work, we plan to extend our method to other NLP tasks, e.g., relation extraction and coreference resolution. | German, Spanish, Chinese
e051d68a7932f700e6c3f48da57d3e2519936c6d | e051d68a7932f700e6c3f48da57d3e2519936c6d_0 | Q: Which pre-trained English NER model do they use?
Text: Introduction
Named entity recognition (NER) is a sequence tagging task that extracts the continuous tokens into specified classes, such as person names, organizations and locations. Current state-of-the-art approaches for NER usually base themselves on long short-term memory recurrent neural networks (LSTM RNNs) and a subsequent conditional random field (CRF) to predict the sequence labels BIBREF0 . Performances of neural NER methods are compromised if the training data are not enough BIBREF1 . This problem is severe for many languages due to a lack of labeled datasets, e.g., German and Spanish. In comparison, NER on English is well developed and there exist abundant labeled data for training purpose. Therefore, in this work, we regard English as a high-resource language, while other languages, even Chinese, as low-resource languages.
There is an intractable problem when leveraging English NER system for other languages. The sentences with the same meaning in different languages may have different lengths and the positions of words in these sentences usually do not correspond. Previous work such as BIBREF2 used each single word translation information to enrich the monolingual word embedding. To our knowledge, there is no approach that employs the whole translation information to improve the performance of the monolingual NER system.
To address above problem, we introduce an extension to the BiLSTM-CRF model, which could obtain transferred knowledge from a pre-trained English NER system. First, we translate other languages into English. Since the proposed models of BIBREF3 and BIBREF4 , the performance of attention-based machine translation systems is close to the human level. The attention mechanism can make the translation results more accurate. Furthermore, this mechanism has another useful property: the attention weights can represent the alignment information. After translating the low-resource language into English, we utilize the pre-trained English NER model to predict the sentences and record the output states of BiLSTM in this model. The states contain the semantic and task-specific information of the sentences. By using soft alignment attention weights as a transformation matrix, we manage to transfer the knowledge of high resource language — English to other languages. Finally, using both word vectors and the transfer knowledge, we obtain new state-of-the-art results on four datasets.
Model
In this section, we will introduce the BAN in three parts. Our model is based on the mainstream NER model BIBREF5 , using BiLSTM-CRF as the basic network structure. Given a sentence INLINEFORM0 and corresponding labels INLINEFORM1 , where INLINEFORM2 denotes the INLINEFORM3 th token and INLINEFORM4 denotes the INLINEFORM5 th label. The NER task is to estimate the probability INLINEFORM6 . Figure FIGREF1 shows the main architecture of our model.
Pre-trained Translation and NER Model
Attention-based translation model We use the system of BIBREF6 , a convolutional sequence-to-sequence model. It divides the translation process into two steps. First, in the encoder step, given an input sentence INLINEFORM0 of length INLINEFORM1 , INLINEFORM2 represents each word as a word embedding INLINEFORM3 . After that, we obtain the absolute positions of the input elements INLINEFORM4 . Both vectors are concatenated to get the input sentence representations INLINEFORM5 . Similarly, the output elements INLINEFORM6 generated by the decoder network have the same structure. A convolutional neural network (CNN) is used to get the hidden state of the sentence representation from left to right. Second, in the decoder step, an attention mechanism is used in each CNN layer. In order to acquire the attention value, we combine the current decoder state INLINEFORM7 with the embedding of the previous decoder output value INLINEFORM8 : DISPLAYFORM0
For INLINEFORM0 th layer, the attention INLINEFORM1 of the INLINEFORM2 th source element and INLINEFORM3 th state is computed as a dot-product between the decoder state summary INLINEFORM4 and each output INLINEFORM5 of the last encoder layer: DISPLAYFORM0
Then we follow the normal decoder implementation and get target sentence INLINEFORM0 by beam search algorithm.
Pre-trained English NER model We construct the English NER system following BIBREF7 . This system uses a bidirectional LSTM as a character-level language model to take context information for word embedding generation. The hidden states of the character language model (CharLM) are used to create contextualized word embeddings. The final embedding INLINEFORM0 is concatenated by the CharLM embedding INLINEFORM1 and GLOVE embedding INLINEFORM2 BIBREF8 . A standard BiLSTM-CRF named entity recognition model BIBREF0 takes INLINEFORM3 to address the NER task.
Back Attention Knowledge Transfer
The sentences in the low-resource language are used as input to the model. Given an input sentence INLINEFORM0 in the low-resource language, we use the pre-trained translation model to translate INLINEFORM1 into English, and the output is INLINEFORM2 . Simultaneously, we record the average of the attention values over all INLINEFORM3 attention layers: DISPLAYFORM0
After that, we use the pre-trained English NER model to predict the translated sentence INLINEFORM0 . Then, we have the BiLSTM output states: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 denote the INLINEFORM2 th forward and backward outputs, respectively. INLINEFORM3 contains the semantic and task-specific information of the translated sentence. And the INLINEFORM4 th row of attention weights matrix INLINEFORM5 represents the correlation between source word INLINEFORM6 with all words in target sentence INLINEFORM7 . Thereafter, to obtain the transfer information INLINEFORM8 of source word, we reversely use the attention weights: DISPLAYFORM0
where INLINEFORM0 represent the whole outputs of BiLSTM, and INLINEFORM1 , INLINEFORM2 . INLINEFORM3 denotes the transfer information of INLINEFORM4 th word in low-resource language and has the same dimensions with INLINEFORM5 .
Named Entity Recognition Architecture
The low-resource language named entity recognition architecture is based on BIBREF5 . The word embeddings of low-resource language are passed into a BiLSTM-CRF sequence labeling network. The embeddings INLINEFORM0 are used as inputs to the BiLSTM. Then we have: DISPLAYFORM0
Before passing the forward and backward output states INLINEFORM0 into CRF, we concatenate INLINEFORM1 and INLINEFORM2 as a new representation: DISPLAYFORM0
CRF model uses INLINEFORM0 to give the final sequence probability on the possible sequence label INLINEFORM1 : DISPLAYFORM0
At last, the named entity labels are predicted by: DISPLAYFORM0
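A compact sketch of this tagging architecture is shown below; the vocabulary size, tag count, embedding and transfer dimensions are illustrative assumptions, and the CRF layer is replaced by plain per-token emission scores for brevity.

```python
# Minimal sketch of the tagger: word embedding + BAN transfer vector -> BiLSTM ->
# concatenated fw/bw states -> emission scores (a CRF sits on top in the full model).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=20000, num_tags=17, emb_dim=300,
                 transfer_dim=512, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim + transfer_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.emissions = nn.Linear(2 * hidden, num_tags)

    def forward(self, tokens, transfer):
        x = torch.cat([self.emb(tokens), transfer], dim=-1)
        states, _ = self.bilstm(x)            # (batch, seq_len, 2 * hidden)
        return self.emissions(states)         # scores the CRF layer would consume

model = BiLSTMTagger()
scores = model(torch.randint(0, 20000, (2, 10)), torch.randn(2, 10, 512))
```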
Experiments
We conduct experiments on three different low-resource languages to evaluate the effectiveness of our back attention mechanism on the NER task. Four datasets are used in our work, including CoNLL 2003 German BIBREF9 , CoNLL 2002 Spanish BIBREF10 , OntoNotes 4 BIBREF11 and Weibo NER BIBREF12 . All the annotations are mapped to the BIOES format. Table TABREF14 shows the detailed statistics of the datasets.
Experimental Setup
We implement the basic BiLSTM-CRF model using the PyTorch framework. FASTTEXT embeddings are used for generating word embeddings. Translation models are trained on the United Nations Parallel Corpus. For the pre-trained English NER system, we use the default NER model of Flair.
Settings
We train our NER model using vanilla SGD with no momentum for 150 epochs, with an initial learning rate of 0.1 and a learning rate annealing scheme that is triggered when the training loss does not fall for 3 consecutive epochs. The hidden size of the BiLSTM model is set to 256 and the mini-batch size is set to 16. Dropout is applied to word embeddings with a rate of 0.1 and to the BiLSTM with a rate of 0.5. We repeat each experiment 5 times under different random seeds and report the average test set score as the final performance.
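These settings map onto a standard PyTorch training loop, as in the sketch below; the model and data are dummies, and the annealing factor of 0.5 is an assumption since the text does not specify it.

```python
# Sketch of the optimization setup: plain SGD (lr 0.1, no momentum), mini-batches of
# 16, and LR annealing when the training loss stalls; model and data are dummies here.
import torch
import torch.nn as nn

model = nn.Linear(10, 5)                      # stand-in for the BiLSTM-CRF tagger
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.0)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=3)   # factor 0.5 is an assumption
loss_fn = nn.CrossEntropyLoss()

for epoch in range(150):
    x, y = torch.randn(16, 10), torch.randint(0, 5, (16,))   # one toy mini-batch
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())               # anneal on the training loss
```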
German and Spanish NER
Experimental results of German and Spanish are shown in table TABREF20 . Evaluation metric is F1-score. We can find that our method CharLM+BiLSTM-CRF+BAN yields the best performance on two languages. And after adding our network to each of the basic models, the performance of each model has been improved. This suggests that the transfer information, obtained from BAN, is helpful for low-resource NER.
Chinese NER
Chinese is distinct from Latin-script languages, so processing a Chinese corpus usually involves additional tricks such as word segmentation. Since we only aim to verify the validity of our method, we simply use character-level embeddings.
Table TABREF22 shows the results on Chinese OntoNotes 4.0. Adding BAN to baseline model leads to an increase from 63.25% to 72.15% F1-score. In order to further improve the performance, we use the BERT model BIBREF20 to produce word embeddings. With no segmentation, we surpass the previous state-of-the-art approach by 6.33% F1-score. For Weibo dataset, the experiment results are shown in Table TABREF23 , where NE, NM and Overall denote named entities, nominal entities and both. The baseline model gives a 33.18% F1-score. Using the transfer knowledge by BAN, the baseline model achieves an immense improvement in F1-score, rising by 10.39%. We find that BAN still gets consistent improvement on a strong model. With BAN, the F1-score of BERT+BiLSTM+CRF increases to 70.76%.
Task-Specific Information from Back Attention Network
BIBREF21 indicates that the representations from higher-level layers of NLP models are more task-specific. Although we do the same task among different languages, the target domains of different datasets are slightly different. So, to prove that back attention knowledge generated by BAN could capture valuable task-specific information between different languages, we use the back attention knowledge alone as word embedding to predict Weibo dataset. We compare three different word embeddings on the baseline model. Experimental results are shown in Table TABREF25 and illustrate that back attention knowledge from BAN has inherent semantic information.
Analysis
Our proposed approach is the first to leverage the hidden states of an NER model from another language to improve monolingual NER performance. The training time with or without BAN is almost the same because the translation module and the English NER module are pre-trained.
On large datasets, our model makes a small improvement because some of the transfer knowledge obtained from our method overlaps with the information learned by the monolingual models. On small datasets, e.g., the Weibo dataset, a great improvement is achieved after adding transfer knowledge to the baseline model. The reason may be that these datasets are too small for the models to be fully trained, and the test sets contain many characters that do not appear in the training set, even some unrecognized characters. Therefore, some tags labeled incorrectly by monolingual models can be labeled correctly with the additional transfer knowledge, which contains task-specific information obtained from BAN. So, the transfer information plays an important role in this dataset.
Conclusion
In this paper, we seek to improve the performance of NER on low-resource languages by leveraging a well-trained English NER system. This is achieved by way of BAN, which is a simple but extensible approach. It can transfer information between different languages. Empirical experiments show that, on small datasets, our approach can lead to significant improvements in performance. This property is of great practical importance for low-resource languages. In future work, we plan to extend our method to other NLP tasks, e.g., relation extraction and coreference resolution. | Bidirectional LSTM based NER model of Flair
9e2e5918608a2911b341d4887f58a4595d7d1429 | 9e2e5918608a2911b341d4887f58a4595d7d1429_0 | Q: How much training data is required for each low-resource language?
Text: Introduction
It can be challenging to build high-accuracy automatic speech recognition (ASR) systems in the real world due to the vast language diversity and the requirement of extensive manual annotations on which ASR algorithms are typically built. A series of research efforts has thus far focused on guiding the ASR of a target language by using supervised data from multiple languages.
Consider the standard hidden Markov models (HMM) based ASR system with a phonemic lexicon, where the vocabulary is specified by a pronunciation lexicon. One popular strategy is to make all languages share the same phonemic representations through a universal phonetic alphabet such as International Phonetic Alphabet (IPA) phone set BIBREF0, BIBREF1, BIBREF2, BIBREF3, or X-SAMPA phone set BIBREF4, BIBREF5, BIBREF6, BIBREF7. In this case, multilingual joint training can be directly applied. Given the effective neural network based acoustic modeling, another line of research is to share the hidden layers across multiple languages while the softmax layers are language dependent BIBREF8, BIBREF9; such multitask learning procedure can improve ASR accuracies for both within training set languages, and also unseen languages after language-specific adaptation, i.e., cross-lingual transfer learning. Different nodes in hidden layers have been shown in response to distinct phonetic features BIBREF10, and hidden layers can be potentially transferable across languages. Note that the above works all assume the test language identity to be known at decoding time, and the language specific lexicon and language model applied.
In the absence of a phonetic lexicon, building graphemic systems has shown comparable performance to phonetic lexicon-based approaches in extensive monolingual evaluations BIBREF11, BIBREF12, BIBREF13. Recent advances in end-to-end ASR models have attempted to take the union of multiple language-specific grapheme (i.e. orthographic character) sets, and use such union as a universal grapheme set for a single sequence-to-sequence ASR model BIBREF14, BIBREF15, BIBREF16. It allows for learning a grapheme-based model jointly on data from multiple languages, and performing ASR on within training set languages. In various cases it can produce performance gains over monolingual modeling that uses in-language data only.
In our work, we aim to examine the same approach of building a multilingual graphemic lexicon, while using a standard hybrid ASR system – based on Bidirectional Long Short-Term Memory (BLSTM) and HMM – learned with lattice-free maximum mutual information (MMI) objective BIBREF17. Our initial attempt is on building a single cascade of an acoustic model, a phonetic decision tree, a graphemic lexicon and a language model – for 7 geographically proximal languages that have little overlap in their character sets. We evaluate it in a low resource context where each language has around 160 hours training data. We find that, despite the lack of explicit language identification (ID) guidance, our multilingual model can accurately produce ASR transcripts in the correct test language scripts, and provide higher ASR accuracies than each language-specific ASR model. We further examine if using a subset of closely related languages – along language family or orthography – can achieve the same performance improvements as using all 7 languages.
We proceed with our investigation of various data augmentation techniques to overcome the lack of training data in the above low-resource setting. Given the highly scalable neural network acoustic modeling, extensive alternatives for increasing the amount or diversity of existing training data have been explored in prior works, e.g., applying vocal tract length perturbation and speed perturbation BIBREF18, volume perturbation and normalization BIBREF19, additive noises BIBREF20, reverberation BIBREF19, BIBREF21, BIBREF22, and SpecAugment BIBREF23. In this work we focus particularly on techniques that mostly apply to our video datasets collected in the wild. In comparing their individual and complementary effects, we aim to answer: (i) whether there is benefit in scaling the model training to significantly larger quantities, e.g., up to 9 times greater than the original training set size, and (ii) if so, whether the data augmentation efficacy is comparable to or complementary with the above multilingual modeling.
Improving accessibility to videos “in the wild” such as automatic captioning on YouTube has been studied in BIBREF24, BIBREF25. While allowing for applications like video captions, indexing and retrieval, transcribing the heterogeneous Facebook videos of extensively diverse languages is highly challenging for ASR systems. On the whole, we present empirical studies in building a single multilingual ASR model capable of language-independent decoding on multiple languages, and in effective data augmentation techniques for video datasets.
Multilingual ASR
In this section we first briefly describe our deployed ASR architecture based on the weighted finite-state transducers (WFSTs) outlined in BIBREF26. Then we present its extension to multilingual training. Lastly, we discuss its language-independent decoding and language-specific decoding.
Multilingual ASR ::: Graphemic ASR with WFST
In the ASR framework of a hybrid BLSTM-HMM, the decoding graph can be interpreted as a composed WFST of cascade $H \circ C \circ L \circ G$. Acoustic models, i.e. BLSTMs, produce acoustic scores over context-dependent HMM (i.e. triphone) states. A WFST $H$, which represents the HMM set, maps the triphone states to context-dependent phones.
While in graphemic ASR, the notion of phone is turned to grapheme, and we typically create the grapheme set via modeling each orthographic character as a separate grapheme. Then a WFST $C$ maps each context-dependent grapheme, i.e. tri-grapheme, to an orthographic character. The lexicon $L$ is specified where each word is mapped to a sequence of characters forming that word. $G$ encodes either the transcript during training, or a language model during decoding.
Multilingual ASR ::: A single multilingual ASR model using lattice-free MMI
To build a single grapheme-based acoustic model for multiple languages, a multilingual grapheme set is obtained by taking the union of the grapheme sets of all languages considered; these per-language sets may or may not overlap with one another. In the multilingual graphemic lexicon, each word in any language is mapped to the sequence of characters forming that word in that language.
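To make the construction concrete, the following short Python sketch builds a multilingual grapheme set and a graphemic lexicon from per-language word lists. The file names and layout (one word per line, UTF-8) are illustrative assumptions, not details taken from the paper:

def load_words(path):
    # One word per line, UTF-8 encoded (hypothetical file layout).
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

word_lists = {lang: load_words(f"{lang}_words.txt")
              for lang in ["hindi", "marathi", "tamil"]}

# Multilingual grapheme set: union of the per-language character sets.
grapheme_set = set()
for words in word_lists.values():
    for word in words:
        grapheme_set.update(word)

# Graphemic lexicon: each word maps to the character sequence forming it.
lexicon = {word: list(word)
           for words in word_lists.values()
           for word in words}

print(len(grapheme_set), "graphemes,", len(lexicon), "lexicon entries")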
A context-dependent acoustic model is constructed using the decision tree clustering of tri-grapheme states, in the same fashion as the context dependent triphone state tying BIBREF27. The graphemic-context decision tree is constructed over all the multilingual acoustic data including each language of interest. The optimal number of leaves for the multilingual model tends to be larger than for a monolingual neural network.
The acoustic model is a BLSTM network, using sequence discriminative training with lattice-free MMI objective BIBREF17. The BLSTM model is bootstrapped from a standard Gaussian mixture model (GMM)-HMM system. A multilingual $n$-gram language model is learned over the combined transcripts including each language considered.
Multilingual ASR ::: Language-independent and language-specific decoding in the WFST framework
Given the multilingual lexicon and language model, the multilingual ASR above can decode any within training set language, even though not explicitly given any information about language identity. We refer to it as language-independent decoding or multilingual decoding. Note that such ASR can thus far produce any word in the multilingual lexicon, and the hypothesized word can either be in the vocabulary of the considered test language, or out of test language vocabulary as a mismatched-language error.
We further consider applying language-specific decoding, assuming the test language identity to be known at decoding time. Again consider the decoding graph $H \circ C \circ L \circ G$, and $H$ & $C$ are thus multilingual while the lexicon $L$ and language model $G$ can include only the words in test language vocabulary. The multilingual acoustic model can therefore make use of multilingual training data, while its language-specific decoding operation only produces monolingual words matched with test language identity.
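The restriction of $L$ and $G$ to the test language can be pictured with a toy dictionary filter. In practice the lexicon and language model are rebuilt as WFSTs, so the snippet below is only a conceptual sketch with made-up entries:

# Toy multilingual lexicon: word -> grapheme sequence (illustrative entries only).
multilingual_lexicon = {
    "नमस्ते": list("नमस्ते"),        # Hindi
    "வணக்கம்": list("வணக்கம்"),      # Tamil
    "ನಮಸ್ಕಾರ": list("ನಮಸ್ಕಾರ"),      # Kannada
}

# Language-specific decoding keeps only the test-language vocabulary,
# while the acoustic model H and context transducer C stay multilingual.
hindi_vocab = {"नमस्ते"}
hindi_lexicon = {w: g for w, g in multilingual_lexicon.items() if w in hindi_vocab}

print(sorted(hindi_lexicon))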
Data augmentation
In this section, we consider 3 categories of data augmentation techniques that are effectively applicable to video datasets.
Data augmentation ::: Speed and volume perturbation
Both speed and volume perturbation emulate mean shifts in spectrum BIBREF18, BIBREF19. To perform speed perturbation of the training data, we produce three versions of each audio with speed factors $0.9$, $1.0$, and $1.1$. The training data size is thus tripled. For volume perturbation, each audio is scaled with a random variable drawn from a uniform distribution $[0.125, 2]$.
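Speed perturbation is commonly implemented by resampling the waveform (for example with sox); the NumPy-only sketch below reproduces the idea for both perturbations and is a simplified stand-in rather than the paper's exact recipe:

import numpy as np

def speed_perturb(samples, factor):
    # Resample so the audio plays `factor` times faster (tempo and pitch shift).
    new_len = int(round(len(samples) / factor))
    new_idx = np.linspace(0, len(samples) - 1, new_len)
    return np.interp(new_idx, np.arange(len(samples)), samples)

def volume_perturb(samples, rng):
    # Scale by a random gain drawn from a uniform distribution [0.125, 2].
    return samples * rng.uniform(0.125, 2.0)

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000).astype(np.float32)   # 1 s of synthetic 16 kHz audio
augmented = [volume_perturb(speed_perturb(audio, f), rng) for f in (0.9, 1.0, 1.1)]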
Data augmentation ::: Additive noise
To further increase training data size and diversity, we can create new audios via superimposing each original audio with additional noisy audios in time domain. To obtain diverse noisy audios, we use AudioSet, which consists of 632 audio event classes and a collection of over 2 million manually-annotated 10-second sound clips from YouTube videos BIBREF28.
Note that in our video datasets, video lengths vary between 10 seconds and 5 minutes, with an average duration of about 2 minutes. Rather than constantly repeating the 10-second sound clip to match the original minute-long audio, we superpose each sound clip on the short utterances via audio segmentation. Specifically, we first use an initial bootstrap model to align each original long audio, and segment each audio into around 10-second utterances via word boundaries.
Then for each utterance in the original train set, we can create a new noisy utterance by the steps:
Sample a sound clip from AudioSet.
Trim or repeat the sound clip as necessary to match the duration of the original utterance.
Sample a signal-to-noise ratio (SNR) from a Gaussian distribution with mean 10, and clip the sampled value to the range 0-20 dB if it falls outside it. Then scale the sound clip signal to obtain the target SNR.
Superimpose the original utterance signal with the scaled sound clip signal in time domain to create the resulting utterance.
Thus for each original utterance, we can create a variable number of new noisy utterances via sampling sound clips. We use a 3-fold augmentation that combines the original train set with two noisy copies.
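A minimal NumPy sketch of the trimming, SNR scaling and superimposing steps above is given below. The paper does not state the standard deviation of the Gaussian SNR distribution, so the 5 dB used here is an assumption, as are the synthetic signals standing in for real utterances and AudioSet clips:

import numpy as np

def mix_at_snr(utterance, clip, snr_db):
    # Trim or tile the sound clip to the utterance length.
    if len(clip) < len(utterance):
        clip = np.tile(clip, int(np.ceil(len(utterance) / len(clip))))
    clip = clip[:len(utterance)]
    # Scale the clip so that 10*log10(P_utt / P_clip) equals the target SNR.
    p_utt = np.mean(utterance ** 2) + 1e-12
    p_clip = np.mean(clip ** 2) + 1e-12
    clip = clip * np.sqrt(p_utt / (p_clip * 10.0 ** (snr_db / 10.0)))
    # Superimpose the two signals in the time domain.
    return utterance + clip

rng = np.random.default_rng(0)
utterance = rng.standard_normal(160000).astype(np.float32)  # ~10 s at 16 kHz
clip = rng.standard_normal(160000).astype(np.float32)       # stand-in AudioSet clip
snr = float(np.clip(rng.normal(10.0, 5.0), 0.0, 20.0))       # clip to 0-20 dB
noisy = mix_at_snr(utterance, clip, snr)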
Data augmentation ::: SpecAugment
We consider applying the frequency and time masking techniques – which are shown to greatly improve the performance of end-to-end ASR models BIBREF23 – to our hybrid systems. Similarly, they can be applied online during each epoch of LF-MMI training, without the need for realignment.
For each utterance (i.e., after the audio segmentation in Section SECREF5), we compute its log mel spectrogram with $\nu $ frequency dimensions and $\tau $ time steps:
Frequency masking is applied $m_F$ times, and each time the frequency bands $[f_0$, $f_0+ f)$ are masked, where $f$ is sampled from $[0, F]$ and $f_0$ is sampled from $[0, \nu - f)$.
Time masking is optionally applied $m_T$ times, and each time the time steps $[t_0$, $t_0+ t)$ are masked, where $t$ is sampled from $[0, T]$ and $t_0$ is sampled from $[0, \tau - t)$.
As in BIBREF23, we increase the training schedule accordingly, i.e., number of epochs.
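The masking itself is a few lines of array manipulation. The sketch below operates on a (time, frequency) log-mel matrix and masks with zeros, which is a simplification; the values $m_F = 2$ and $F = 15$ match the setting reported later in the experiments, while the time-masking parameters are placeholders:

import numpy as np

def spec_augment(log_mel, m_F=2, F=15, m_T=0, T=100, rng=None):
    rng = rng or np.random.default_rng()
    spec = log_mel.copy()
    tau, nu = spec.shape                      # time steps x mel channels
    for _ in range(m_F):                      # frequency masking
        f = int(rng.integers(0, F + 1))       # f sampled from [0, F]
        if 0 < f < nu:
            f0 = int(rng.integers(0, nu - f)) # f0 sampled from [0, nu - f)
            spec[:, f0:f0 + f] = 0.0
    for _ in range(m_T):                      # optional time masking
        t = int(rng.integers(0, T + 1))
        if 0 < t < tau:
            t0 = int(rng.integers(0, tau - t))
            spec[t0:t0 + t, :] = 0.0
    return spec

masked = spec_augment(np.random.randn(300, 80))   # 300 frames x 80 mel bins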
Experiments
Experiments ::: Data
Our multilingual ASR attempt was on 7 geographically proximal languages: Kannada, Malayalam, Sinhala, Tamil, Bengali, Hindi and Marathi. The datasets were a set of public Facebook videos, which were wildly collected and anonymized. We categorized them into four video types:
Ads: any video content where the publisher paid for a promo on it.
Pages: content published by a page that was not paid content promoted to users.
UserLive: live streams from users.
UserVOD (video on demand): was-live videos.
For each language, the train and test set size are described in Table TABREF10, and most training data were Pages. On each language we also had a small validation set for model parameter tuning. Each monolingual ASR baseline was trained on language-specific data only.
The character sets of these 7 languages have little overlap except that (i) they all include common basic Latin alphabet, and (ii) both Hindi and Marathi use Devanagari script. We took the union of 7 character sets therein as the multilingual grapheme set (Section SECREF2), which contained 432 characters. In addition, we deliberately split 7 languages into two groups, such that the languages within each group were more closely related in terms of language family, orthography or phonology. We thus built 3 multilingual ASR models trained on:
all 7 languages, for 1059 training hours in total,
4 languages – Kannada, Malayalam, Sinhala and Tamil – for 590 training hours,
3 languages – Bengali, Hindi and Marathi – for 469 training hours,
which are referred to as 7lang, 4lang, and 3lang respectively. Note that Kannada, Malayalam and Tamil are Dravidian languages, which have rich agglutinative inflectional morphology BIBREF2 and resulted in around 10% OOV token rates on test sets (Hindi had the lowest OOV rate as 2-3%). Such experimental setup was designed to answer the questions:
If a single graphemic ASR model could scale its language-independent recognition up to all 7 languages.
If including all 7 languages could yield better ASR performance than using a small subset of closely related languages.
Experiments ::: Model configurations
Each bootstrap model was a GMM-HMM based system with speaker adaptive training, implemented with Kaldi BIBREF29. Each neural network acoustic model was a latency-controlled BLSTM BIBREF30, learned with lattice-free MMI objective and Adam optimizer BIBREF31. All neural networks were implemented with Caffe2 BIBREF32. Due to the production real time factor (RTF) requirements, we used the same model size in all cases – a 4 layer BLSTM network with 600 cells in each layer and direction – except that, the softmax dimensions, i.e. the optimal decision tree leaves, were determined through experiments on validation sets, varying within 7-30k. Input acoustic features were 80-dimensional log-mel filterbank coefficients. We used standard 5-gram language models. After lattice-free MMI training, the model with the best accuracy on validation set was used for evaluation on test set.
Experiments ::: Results with multilingual ASR
ASR word error rate (WER%) results are shown in Table TABREF11. We found that, although not explicitly given any information on test language identities, multilingual ASR with language-independent decoding (Section SECREF3) - trained on 3, 4, or 7 languages - substantially outperformed each monolingual ASR in all cases, and on average led to relative WER reductions between 4.6% (Sinhala) and 10.3% (Hindi).
Note that the word hypotheses from language-independent decoding could be language mismatched, e.g., part of a Kannada utterance was decoded into Marathi words. So we counted how many word tokens in the decoding transcripts were not in the lexicon of the corresponding test language. We found that in general only 1-3% of word tokens were language mismatched, indicating that the multilingual model was very effective in identifying the language implicitly and jointly recognizing the speech.
Consider the scenario where test language identities are known, as in each monolingual ASR, and we proceed with language-specific decoding (Section SECREF3) on Kannada and Hindi, via a language-specific lexicon and language model at decoding time. We found that language-specific decoding provided only moderate gains, presumably because, as discussed above, language-independent decoding had already kept the mismatched-language word token rate as low as 1-3%.
Additionally, the multilingual ASR of 4lang and 3lang (Section SECREF15) achieved the same, or even slightly better performance as compared to the ASR of 7lang, suggesting that incorporating closely related languages into multilingual training is most useful for improving ASR performance. However, the 7lang ASR by itself still yields the advantage in language-independent recognition of more languages.
Experiments ::: Results with data augmentation
First, we experimented with monolingual ASR on Kannada and Hindi, and performed comprehensive evaluations of the data augmentation techniques described in Section SECREF3. As in Table TABREF11, the performance gains of using frequency masking were substantial and comparable to those of using speed perturbation, where $m_F = 2$ and $F=15$ (Section SECREF12) worked best. In addition, combining both frequency masking and speed perturbation could provide further improvements. However, applying additional volume perturbation (Section SECREF4) or time masking (Section SECREF12) was not helpful in our preliminary experimentation.
Note that after speed perturbation, the training data tripled, to which we could apply another 3-fold augmentation based on additive noise (Section SECREF5), and the final train set was thus 9 times the size of original train set. We found that all 3 techniques were complementary, and in combination led to large fusion gains over each monolingual baseline – relative WER reductions of 8.7% on Kannada, and 14.8% on Hindi.
Secondly, we applied the 3 data augmentation techniques to the multilingual ASR of 7lang, and tested their additive effects. We show the resulting WERs on Kannada and Hindi in Table TABREF11. Note that on Kannada, we found around a 7% OOV token rate on Ads but around 10-11% on the other 3 video types, and we observed more gains on Ads, presumably because the improved acoustic model could only correct in-vocabulary word errors, so lower OOV rates left more room for improvement. Hindi had around 2.5% OOV rates on each video type, and we found that incorporating data augmentation into multilingual ASR led to on average 9.0% relative WER reductions.
Overall, we demonstrated the multilingual ASR with massive data augmentation – via a single graphemic model even without the use of explicit language ID – allowed for relative WER reductions of 11.0% on Kannada and 18.4% on Hindi.
Conclusions
We have presented a multilingual grapheme-based ASR model that can effectively perform language-independent recognition on any within-training-set language, and substantially outperform each monolingual ASR alternative. Various data augmentation techniques can yield further complementary improvements. Such a single multilingual model can not only provide better ASR performance, but also serve as an alternative to the standard production deployment that typically includes extensive monolingual ASR systems and a separate language ID model.
Future work will expand the language coverage to include both geographically proximal and distant languages. Additionally, given the identity of a target test language, we will consider the hidden layers of such multilingual acoustic model as a pre-trained model, and thus perform subsequent monolingual fine-tuning, as compared to the multitask learning procedure in BIBREF8, BIBREF9. | Unanswerable |
0ec4143a4f1a8f597b435f83c0451145be2ab95b | 0ec4143a4f1a8f597b435f83c0451145be2ab95b_0 | Q: What are the best within-language data augmentation methods?
Text: Introduction
It can be challenging to build high-accuracy automatic speech recognition (ASR) systems in real world due to the vast language diversity and the requirement of extensive manual annotations on which the ASR algorithms are typically built. Series of research efforts have thus far been focused on guiding the ASR of a target language by using the supervised data from multiple languages.
Consider the standard hidden Markov models (HMM) based ASR system with a phonemic lexicon, where the vocabulary is specified by a pronunciation lexicon. One popular strategy is to make all languages share the same phonemic representations through a universal phonetic alphabet such as International Phonetic Alphabet (IPA) phone set BIBREF0, BIBREF1, BIBREF2, BIBREF3, or X-SAMPA phone set BIBREF4, BIBREF5, BIBREF6, BIBREF7. In this case, multilingual joint training can be directly applied. Given the effective neural network based acoustic modeling, another line of research is to share the hidden layers across multiple languages while the softmax layers are language dependent BIBREF8, BIBREF9; such multitask learning procedure can improve ASR accuracies for both within training set languages, and also unseen languages after language-specific adaptation, i.e., cross-lingual transfer learning. Different nodes in hidden layers have been shown in response to distinct phonetic features BIBREF10, and hidden layers can be potentially transferable across languages. Note that the above works all assume the test language identity to be known at decoding time, and the language specific lexicon and language model applied.
In the absence of a phonetic lexicon, building graphemic systems has shown comparable performance to phonetic lexicon-based approaches in extensive monolingual evaluations BIBREF11, BIBREF12, BIBREF13. Recent advances in end-to-end ASR models have attempted to take the union of multiple language-specific grapheme (i.e. orthographic character) sets, and use such union as a universal grapheme set for a single sequence-to-sequence ASR model BIBREF14, BIBREF15, BIBREF16. It allows for learning a grapheme-based model jointly on data from multiple languages, and performing ASR on within training set languages. In various cases it can produce performance gains over monolingual modeling that uses in-language data only.
In our work, we aim to examine the same approach of building a multilingual graphemic lexicon, while using a standard hybrid ASR system – based on Bidirectional Long Short-Term Memory (BLSTM) and HMM – learned with lattice-free maximum mutual information (MMI) objective BIBREF17. Our initial attempt is on building a single cascade of an acoustic model, a phonetic decision tree, a graphemic lexicon and a language model – for 7 geographically proximal languages that have little overlap in their character sets. We evaluate it in a low resource context where each language has around 160 hours training data. We find that, despite the lack of explicit language identification (ID) guidance, our multilingual model can accurately produce ASR transcripts in the correct test language scripts, and provide higher ASR accuracies than each language-specific ASR model. We further examine if using a subset of closely related languages – along language family or orthography – can achieve the same performance improvements as using all 7 languages.
We proceed with our investigation on various data augmentation techniques to overcome the lack of training data in the above low-resource setting. Given the highly scalable neural network acoustic modeling, extensive alternatives to increasing the amount or diversity of existing training data have been explored in prior works, e.g., applying vocal tract length perturbation and speed perturbation BIBREF18, volume perturbation and normalization BIBREF19, additive noises BIBREF20, reverberation BIBREF19, BIBREF21, BIBREF22, and SpecAugment BIBREF23. In this work we focus particularly on techniques that mostly apply to our wildly collected video datasets. In comparing their individual and complementary effects, we aim to answer: (i) if there is benefit in scaling the model training to significantly larger quantities, e.g., up to 9 times greater than the original training set size, and (ii) if any, is the data augmentation efficacy comparable or complementary with the above multilingual modeling.
Improving accessibility to videos “in the wild” such as automatic captioning on YouTube has been studied in BIBREF24, BIBREF25. While allowing for applications like video captions, indexing and retrieval, transcribing the heterogeneous Facebook videos of extensively diverse languages is highly challenging for ASR systems. On the whole, we present empirical studies in building a single multilingual ASR model capable of language-independent decoding on multiple languages, and in effective data augmentation techniques for video datasets.
Multilingual ASR
In this section we first briefly describe our deployed ASR architecture based on the weighted finite-state transducers (WFSTs) outlined in BIBREF26. Then we present its extension to multilingual training. Lastly, we discuss its language-independent decoding and language-specific decoding.
Multilingual ASR ::: Graphemic ASR with WFST
In the ASR framework of a hybrid BLSTM-HMM, the decoding graph can be interpreted as a composed WFST of cascade $H \circ C \circ L \circ G$. Acoustic models, i.e. BLSTMs, produce acoustic scores over context-dependent HMM (i.e. triphone) states. A WFST $H$, which represents the HMM set, maps the triphone states to context-dependent phones.
While in graphemic ASR, the notion of phone is turned to grapheme, and we typically create the grapheme set via modeling each orthographic character as a separate grapheme. Then a WFST $C$ maps each context-dependent grapheme, i.e. tri-grapheme, to an orthographic character. The lexicon $L$ is specified where each word is mapped to a sequence of characters forming that word. $G$ encodes either the transcript during training, or a language model during decoding.
Multilingual ASR ::: A single multilingual ASR model using lattice-free MMI
To build a single grapheme-based acoustic model for multiple languages, a multilingual grapheme set is obtained by taking the union of the grapheme sets of all languages considered; these per-language sets may or may not overlap with one another. In the multilingual graphemic lexicon, each word in any language is mapped to the sequence of characters forming that word in that language.
A context-dependent acoustic model is constructed using the decision tree clustering of tri-grapheme states, in the same fashion as the context dependent triphone state tying BIBREF27. The graphemic-context decision tree is constructed over all the multilingual acoustic data including each language of interest. The optimal number of leaves for the multilingual model tends to be larger than for a monolingual neural network.
The acoustic model is a BLSTM network, using sequence discriminative training with lattice-free MMI objective BIBREF17. The BLSTM model is bootstrapped from a standard Gaussian mixture model (GMM)-HMM system. A multilingual $n$-gram language model is learned over the combined transcripts including each language considered.
Multilingual ASR ::: Language-independent and language-specific decoding in the WFST framework
Given the multilingual lexicon and language model, the multilingual ASR above can decode any within training set language, even though not explicitly given any information about language identity. We refer to it as language-independent decoding or multilingual decoding. Note that such ASR can thus far produce any word in the multilingual lexicon, and the hypothesized word can either be in the vocabulary of the considered test language, or out of test language vocabulary as a mismatched-language error.
We further consider applying language-specific decoding, assuming the test language identity to be known at decoding time. Again consider the decoding graph $H \circ C \circ L \circ G$, and $H$ & $C$ are thus multilingual while the lexicon $L$ and language model $G$ can include only the words in test language vocabulary. The multilingual acoustic model can therefore make use of multilingual training data, while its language-specific decoding operation only produces monolingual words matched with test language identity.
Data augmentation
In this section, we consider 3 categories of data augmentation techniques that are effectively applicable to video datasets.
Data augmentation ::: Speed and volume perturbation
Both speed and volume perturbation emulate mean shifts in spectrum BIBREF18, BIBREF19. To perform speed perturbation of the training data, we produce three versions of each audio with speed factors $0.9$, $1.0$, and $1.1$. The training data size is thus tripled. For volume perturbation, each audio is scaled with a random variable drawn from a uniform distribution $[0.125, 2]$.
Data augmentation ::: Additive noise
To further increase training data size and diversity, we can create new audios via superimposing each original audio with additional noisy audios in time domain. To obtain diverse noisy audios, we use AudioSet, which consists of 632 audio event classes and a collection of over 2 million manually-annotated 10-second sound clips from YouTube videos BIBREF28.
Note that in our video datasets, video lengths vary between 10 seconds and 5 minutes, with an average duration of about 2 minutes. Rather than constantly repeating the 10-second sound clip to match the original minute-long audio, we superpose each sound clip on the short utterances via audio segmentation. Specifically, we first use an initial bootstrap model to align each original long audio, and segment each audio into around 10-second utterances via word boundaries.
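This segmentation step can be sketched as follows, assuming the bootstrap alignment is available as a list of (word, start, end) times in seconds; that format, and the greedy 10-second grouping, are illustrative choices rather than the paper's exact implementation:

def segment_at_word_boundaries(alignment, target_len=10.0):
    # alignment: list of (word, start_sec, end_sec) tuples from a bootstrap aligner.
    segments, words, seg_start = [], [], None
    for word, start, end in alignment:
        if seg_start is None:
            seg_start = start
        words.append(word)
        if end - seg_start >= target_len:     # close the utterance at a word boundary
            segments.append((seg_start, end, words))
            words, seg_start = [], None
    if words:                                 # leftover tail utterance
        segments.append((seg_start, alignment[-1][2], words))
    return segments

fake_alignment = [(f"w{i}", 0.5 * i, 0.5 * i + 0.4) for i in range(240)]  # ~2 min audio
utterances = segment_at_word_boundaries(fake_alignment)
print(len(utterances), "utterances")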
Then for each utterance in the original train set, we can create a new noisy utterance by the steps:
Sample a sound clip from AudioSet.
Trim or repeat the sound clip as necessary to match the duration of the original utterance.
Sample a signal-to-noise ratio (SNR) from a Gaussian distribution with mean 10, and clip the sampled value to the range 0-20 dB if it falls outside it. Then scale the sound clip signal to obtain the target SNR.
Superimpose the original utterance signal with the scaled sound clip signal in time domain to create the resulting utterance.
Thus for each original utterance, we can create a variable number of new noisy utterances via sampling sound clips. We use a 3-fold augmentation that combines the original train set with two noisy copies.
Data augmentation ::: SpecAugment
We consider applying the frequency and time masking techniques – which are shown to greatly improve the performance of end-to-end ASR models BIBREF23 – to our hybrid systems. Similarly, they can be applied online during each epoch of LF-MMI training, without the need for realignment.
For each utterance (i.e., after the audio segmentation in Section SECREF5), we compute its log mel spectrogram with $\nu $ frequency dimensions and $\tau $ time steps:
Frequency masking is applied $m_F$ times, and each time the frequency bands $[f_0$, $f_0+ f)$ are masked, where $f$ is sampled from $[0, F]$ and $f_0$ is sampled from $[0, \nu - f)$.
Time masking is optionally applied $m_T$ times, and each time the time steps $[t_0$, $t_0+ t)$ are masked, where $t$ is sampled from $[0, T]$ and $t_0$ is sampled from $[0, \tau - t)$.
As in BIBREF23, we increase the training schedule accordingly, i.e., number of epochs.
Experiments
Experiments ::: Data
Our multilingual ASR attempt was on 7 geographically proximal languages: Kannada, Malayalam, Sinhala, Tamil, Bengali, Hindi and Marathi. The datasets were a set of public Facebook videos, which were wildly collected and anonymized. We categorized them into four video types:
Ads: any video content where the publisher paid for a promo on it.
Pages: content published by a page that was not paid content promoted to users.
UserLive: live streams from users.
UserVOD (video on demand): was-live videos.
For each language, the train and test set size are described in Table TABREF10, and most training data were Pages. On each language we also had a small validation set for model parameter tuning. Each monolingual ASR baseline was trained on language-specific data only.
The character sets of these 7 languages have little overlap except that (i) they all include common basic Latin alphabet, and (ii) both Hindi and Marathi use Devanagari script. We took the union of 7 character sets therein as the multilingual grapheme set (Section SECREF2), which contained 432 characters. In addition, we deliberately split 7 languages into two groups, such that the languages within each group were more closely related in terms of language family, orthography or phonology. We thus built 3 multilingual ASR models trained on:
all 7 languages, for 1059 training hours in total,
4 languages – Kannada, Malayalam, Sinhala and Tamil – for 590 training hours,
3 languages – Bengali, Hindi and Marathi – for 469 training hours,
which are referred to as 7lang, 4lang, and 3lang respectively. Note that Kannada, Malayalam and Tamil are Dravidian languages, which have rich agglutinative inflectional morphology BIBREF2 and resulted in around 10% OOV token rates on test sets (Hindi had the lowest OOV rate as 2-3%). Such experimental setup was designed to answer the questions:
If a single graphemic ASR model could scale its language-independent recognition up to all 7 languages.
If including all 7 languages could yield better ASR performance than using a small subset of closely related languages.
Experiments ::: Model configurations
Each bootstrap model was a GMM-HMM based system with speaker adaptive training, implemented with Kaldi BIBREF29. Each neural network acoustic model was a latency-controlled BLSTM BIBREF30, learned with lattice-free MMI objective and Adam optimizer BIBREF31. All neural networks were implemented with Caffe2 BIBREF32. Due to the production real time factor (RTF) requirements, we used the same model size in all cases – a 4 layer BLSTM network with 600 cells in each layer and direction – except that, the softmax dimensions, i.e. the optimal decision tree leaves, were determined through experiments on validation sets, varying within 7-30k. Input acoustic features were 80-dimensional log-mel filterbank coefficients. We used standard 5-gram language models. After lattice-free MMI training, the model with the best accuracy on validation set was used for evaluation on test set.
Experiments ::: Results with multilingual ASR
ASR word error rate (WER%) results are shown in Table TABREF11. We found that, although not explicitly given any information on test language identities, multilingual ASR with language-independent decoding (Section SECREF3) - trained on 3, 4, or 7 languages - substantially outperformed each monolingual ASR in all cases, and on average led to relative WER reductions between 4.6% (Sinhala) and 10.3% (Hindi).
Note that the word hypotheses from language-independent decoding could be language mismatched, e.g., part of a Kannada utterance was decoded into Marathi words. So we counted how many word tokens in the decoding transcripts were not in the lexicon of the corresponding test language. We found that in general only 1-3% of word tokens were language mismatched, indicating that the multilingual model was very effective in identifying the language implicitly and jointly recognizing the speech.
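The mismatched-language token rate can be computed with a few lines over the decoded transcripts; the toy hypotheses and vocabulary below are invented purely to illustrate the bookkeeping:

def mismatched_token_rate(hypotheses, test_lang_vocab):
    total = mismatched = 0
    for hyp in hypotheses:
        for token in hyp.split():
            total += 1
            if token not in test_lang_vocab:
                mismatched += 1
    return mismatched / max(total, 1)

kannada_vocab = {"ನಮಸ್ಕಾರ", "ಧನ್ಯವಾದ"}
hyps = ["ನಮಸ್ಕಾರ ಧನ್ಯವಾದ", "ಧನ್ಯವಾದ नमस्ते"]        # second hypothesis contains a Devanagari word
rate = mismatched_token_rate(hyps, kannada_vocab)
print(f"{100 * rate:.1f}% mismatched tokens")      # 25.0% in this toy example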
Consider the scenario where test language identities are known, as in each monolingual ASR, and we proceed with language-specific decoding (Section SECREF3) on Kannada and Hindi, via a language-specific lexicon and language model at decoding time. We found that language-specific decoding provided only moderate gains, presumably because, as discussed above, language-independent decoding had already kept the mismatched-language word token rate as low as 1-3%.
Additionally, the multilingual ASR of 4lang and 3lang (Section SECREF15) achieved the same, or even slightly better performance as compared to the ASR of 7lang, suggesting that incorporating closely related languages into multilingual training is most useful for improving ASR performance. However, the 7lang ASR by itself still yields the advantage in language-independent recognition of more languages.
Experiments ::: Results with data augmentation
First, we experimented with monolingual ASR on Kannada and Hindi, and performed comprehensive evaluations of the data augmentation techniques described in Section SECREF3. As in Table TABREF11, the performance gains of using frequency masking were substantial and comparable to those of using speed perturbation, where $m_F = 2$ and $F=15$ (Section SECREF12) worked best. In addition, combining both frequency masking and speed perturbation could provide further improvements. However, applying additional volume perturbation (Section SECREF4) or time masking (Section SECREF12) was not helpful in our preliminary experimentation.
Note that after speed perturbation, the training data tripled, to which we could apply another 3-fold augmentation based on additive noise (Section SECREF5), and the final train set was thus 9 times the size of original train set. We found that all 3 techniques were complementary, and in combination led to large fusion gains over each monolingual baseline – relative WER reductions of 8.7% on Kannada, and 14.8% on Hindi.
Secondly, we applied the 3 data augmentation techniques to the multilingual ASR of 7lang, and tested their additive effects. We show the resulting WERs on Kannada and Hindi in Table TABREF11. Note that on Kannada, we found around a 7% OOV token rate on Ads but around 10-11% on the other 3 video types, and we observed more gains on Ads, presumably because the improved acoustic model could only correct in-vocabulary word errors, so lower OOV rates left more room for improvement. Hindi had around 2.5% OOV rates on each video type, and we found that incorporating data augmentation into multilingual ASR led to on average 9.0% relative WER reductions.
Overall, we demonstrated the multilingual ASR with massive data augmentation – via a single graphemic model even without the use of explicit language ID – allowed for relative WER reductions of 11.0% on Kannada and 18.4% on Hindi.
Conclusions
We have presented a multilingual grapheme-based ASR model that can effectively perform language-independent recognition on any within-training-set language, and substantially outperform each monolingual ASR alternative. Various data augmentation techniques can yield further complementary improvements. Such a single multilingual model can not only provide better ASR performance, but also serve as an alternative to the standard production deployment that typically includes extensive monolingual ASR systems and a separate language ID model.
Future work will expand the language coverage to include both geographically proximal and distant languages. Additionally, given the identity of a target test language, we will consider the hidden layers of such multilingual acoustic model as a pre-trained model, and thus perform subsequent monolingual fine-tuning, as compared to the multitask learning procedure in BIBREF8, BIBREF9. | Frequency masking, Time masking, Additive noise, Speed and volume perturbation |
90159e143487505ddc026f879ecd864b7f4f479e | 90159e143487505ddc026f879ecd864b7f4f479e_0 | Q: How much of the ASR grapheme set is shared between languages?
Text: Introduction
It can be challenging to build high-accuracy automatic speech recognition (ASR) systems in real world due to the vast language diversity and the requirement of extensive manual annotations on which the ASR algorithms are typically built. Series of research efforts have thus far been focused on guiding the ASR of a target language by using the supervised data from multiple languages.
Consider the standard hidden Markov models (HMM) based ASR system with a phonemic lexicon, where the vocabulary is specified by a pronunciation lexicon. One popular strategy is to make all languages share the same phonemic representations through a universal phonetic alphabet such as International Phonetic Alphabet (IPA) phone set BIBREF0, BIBREF1, BIBREF2, BIBREF3, or X-SAMPA phone set BIBREF4, BIBREF5, BIBREF6, BIBREF7. In this case, multilingual joint training can be directly applied. Given the effective neural network based acoustic modeling, another line of research is to share the hidden layers across multiple languages while the softmax layers are language dependent BIBREF8, BIBREF9; such multitask learning procedure can improve ASR accuracies for both within training set languages, and also unseen languages after language-specific adaptation, i.e., cross-lingual transfer learning. Different nodes in hidden layers have been shown in response to distinct phonetic features BIBREF10, and hidden layers can be potentially transferable across languages. Note that the above works all assume the test language identity to be known at decoding time, and the language specific lexicon and language model applied.
In the absence of a phonetic lexicon, building graphemic systems has shown comparable performance to phonetic lexicon-based approaches in extensive monolingual evaluations BIBREF11, BIBREF12, BIBREF13. Recent advances in end-to-end ASR models have attempted to take the union of multiple language-specific grapheme (i.e. orthographic character) sets, and use such union as a universal grapheme set for a single sequence-to-sequence ASR model BIBREF14, BIBREF15, BIBREF16. It allows for learning a grapheme-based model jointly on data from multiple languages, and performing ASR on within training set languages. In various cases it can produce performance gains over monolingual modeling that uses in-language data only.
In our work, we aim to examine the same approach of building a multilingual graphemic lexicon, while using a standard hybrid ASR system – based on Bidirectional Long Short-Term Memory (BLSTM) and HMM – learned with lattice-free maximum mutual information (MMI) objective BIBREF17. Our initial attempt is on building a single cascade of an acoustic model, a phonetic decision tree, a graphemic lexicon and a language model – for 7 geographically proximal languages that have little overlap in their character sets. We evaluate it in a low resource context where each language has around 160 hours training data. We find that, despite the lack of explicit language identification (ID) guidance, our multilingual model can accurately produce ASR transcripts in the correct test language scripts, and provide higher ASR accuracies than each language-specific ASR model. We further examine if using a subset of closely related languages – along language family or orthography – can achieve the same performance improvements as using all 7 languages.
We proceed with our investigation on various data augmentation techniques to overcome the lack of training data in the above low-resource setting. Given the highly scalable neural network acoustic modeling, extensive alternatives to increasing the amount or diversity of existing training data have been explored in prior works, e.g., applying vocal tract length perturbation and speed perturbation BIBREF18, volume perturbation and normalization BIBREF19, additive noises BIBREF20, reverberation BIBREF19, BIBREF21, BIBREF22, and SpecAugment BIBREF23. In this work we focus particularly on techniques that mostly apply to our wildly collected video datasets. In comparing their individual and complementary effects, we aim to answer: (i) if there is benefit in scaling the model training to significantly larger quantities, e.g., up to 9 times greater than the original training set size, and (ii) if any, is the data augmentation efficacy comparable or complementary with the above multilingual modeling.
Improving accessibility to videos “in the wild” such as automatic captioning on YouTube has been studied in BIBREF24, BIBREF25. While allowing for applications like video captions, indexing and retrieval, transcribing the heterogeneous Facebook videos of extensively diverse languages is highly challenging for ASR systems. On the whole, we present empirical studies in building a single multilingual ASR model capable of language-independent decoding on multiple languages, and in effective data augmentation techniques for video datasets.
Multilingual ASR
In this section we first briefly describe our deployed ASR architecture based on the weighted finite-state transducers (WFSTs) outlined in BIBREF26. Then we present its extension to multilingual training. Lastly, we discuss its language-independent decoding and language-specific decoding.
Multilingual ASR ::: Graphemic ASR with WFST
In the ASR framework of a hybrid BLSTM-HMM, the decoding graph can be interpreted as a composed WFST of cascade $H \circ C \circ L \circ G$. Acoustic models, i.e. BLSTMs, produce acoustic scores over context-dependent HMM (i.e. triphone) states. A WFST $H$, which represents the HMM set, maps the triphone states to context-dependent phones.
While in graphemic ASR, the notion of phone is turned to grapheme, and we typically create the grapheme set via modeling each orthographic character as a separate grapheme. Then a WFST $C$ maps each context-dependent grapheme, i.e. tri-grapheme, to an orthographic character. The lexicon $L$ is specified where each word is mapped to a sequence of characters forming that word. $G$ encodes either the transcript during training, or a language model during decoding.
Multilingual ASR ::: A single multilingual ASR model using lattice-free MMI
To build a single grapheme-based acoustic model for multiple languages, a multilingual grapheme set is obtained by taking the union of the grapheme sets of all languages considered; these per-language sets may or may not overlap with one another. In the multilingual graphemic lexicon, each word in any language is mapped to the sequence of characters forming that word in that language.
A context-dependent acoustic model is constructed using the decision tree clustering of tri-grapheme states, in the same fashion as the context dependent triphone state tying BIBREF27. The graphemic-context decision tree is constructed over all the multilingual acoustic data including each language of interest. The optimal number of leaves for the multilingual model tends to be larger than for a monolingual neural network.
The acoustic model is a BLSTM network, using sequence discriminative training with lattice-free MMI objective BIBREF17. The BLSTM model is bootstrapped from a standard Gaussian mixture model (GMM)-HMM system. A multilingual $n$-gram language model is learned over the combined transcripts including each language considered.
Multilingual ASR ::: Language-independent and language-specific decoding in the WFST framework
Given the multilingual lexicon and language model, the multilingual ASR above can decode any within training set language, even though not explicitly given any information about language identity. We refer to it as language-independent decoding or multilingual decoding. Note that such ASR can thus far produce any word in the multilingual lexicon, and the hypothesized word can either be in the vocabulary of the considered test language, or out of test language vocabulary as a mismatched-language error.
We further consider applying language-specific decoding, assuming the test language identity to be known at decoding time. Again consider the decoding graph $H \circ C \circ L \circ G$, and $H$ & $C$ are thus multilingual while the lexicon $L$ and language model $G$ can include only the words in test language vocabulary. The multilingual acoustic model can therefore make use of multilingual training data, while its language-specific decoding operation only produces monolingual words matched with test language identity.
Data augmentation
In this section, we consider 3 categories of data augmentation techniques that are effectively applicable to video datasets.
Data augmentation ::: Speed and volume perturbation
Both speed and volume perturbation emulate mean shifts in spectrum BIBREF18, BIBREF19. To perform speed perturbation of the training data, we produce three versions of each audio with speed factors $0.9$, $1.0$, and $1.1$. The training data size is thus tripled. For volume perturbation, each audio is scaled with a random variable drawn from a uniform distribution $[0.125, 2]$.
Data augmentation ::: Additive noise
To further increase training data size and diversity, we can create new audios via superimposing each original audio with additional noisy audios in time domain. To obtain diverse noisy audios, we use AudioSet, which consists of 632 audio event classes and a collection of over 2 million manually-annotated 10-second sound clips from YouTube videos BIBREF28.
Note that in our video datasets, video lengths vary between 10 seconds and 5 minutes, with an average duration of about 2 minutes. Rather than constantly repeating the 10-second sound clip to match the original minute-long audio, we superpose each sound clip on the short utterances via audio segmentation. Specifically, we first use an initial bootstrap model to align each original long audio, and segment each audio into around 10-second utterances via word boundaries.
Then for each utterance in the original train set, we can create a new noisy utterance by the steps:
Sample a sound clip from AudioSet.
Trim or repeat the sound clip as necessary to match the duration of the original utterance.
Sample a signal-to-noise ratio (SNR) from a Gaussian distribution with mean 10, and clip the sampled value to the range 0-20 dB if it falls outside it. Then scale the sound clip signal to obtain the target SNR.
Superimpose the original utterance signal with the scaled sound clip signal in time domain to create the resulting utterance.
Thus for each original utterance, we can create a variable number of new noisy utterances via sampling sound clips. We use a 3-fold augmentation that combines the original train set with two noisy copies.
Data augmentation ::: SpecAugment
We consider applying the frequency and time masking techniques – which are shown to greatly improve the performance of end-to-end ASR models BIBREF23 – to our hybrid systems. Similarly, they can be applied online during each epoch of LF-MMI training, without the need for realignment.
For each utterance (i.e., after the audio segmentation in Section SECREF5), we compute its log mel spectrogram with $\nu $ frequency dimensions and $\tau $ time steps:
Frequency masking is applied $m_F$ times, and each time the frequency bands $[f_0$, $f_0+ f)$ are masked, where $f$ is sampled from $[0, F]$ and $f_0$ is sampled from $[0, \nu - f)$.
Time masking is optionally applied $m_T$ times, and each time the time steps $[t_0$, $t_0+ t)$ are masked, where $t$ is sampled from $[0, T]$ and $t_0$ is sampled from $[0, \tau - t)$.
As in BIBREF23, we increase the training schedule accordingly, i.e., number of epochs.
Experiments
Experiments ::: Data
Our multilingual ASR attempt was on 7 geographically proximal languages: Kannada, Malayalam, Sinhala, Tamil, Bengali, Hindi and Marathi. The datasets were a set of public Facebook videos, which were wildly collected and anonymized. We categorized them into four video types:
Ads: any video content where the publisher paid for a promo on it.
Pages: content published by a page that was not paid content promoted to users.
UserLive: live streams from users.
UserVOD (video on demand): was-live videos.
For each language, the train and test set size are described in Table TABREF10, and most training data were Pages. On each language we also had a small validation set for model parameter tuning. Each monolingual ASR baseline was trained on language-specific data only.
The character sets of these 7 languages have little overlap except that (i) they all include common basic Latin alphabet, and (ii) both Hindi and Marathi use Devanagari script. We took the union of 7 character sets therein as the multilingual grapheme set (Section SECREF2), which contained 432 characters. In addition, we deliberately split 7 languages into two groups, such that the languages within each group were more closely related in terms of language family, orthography or phonology. We thus built 3 multilingual ASR models trained on:
all 7 languages, for 1059 training hours in total,
4 languages – Kannada, Malayalam, Sinhala and Tamil – for 590 training hours,
3 languages – Bengali, Hindi and Marathi – for 469 training hours,
which are referred to as 7lang, 4lang, and 3lang respectively. Note that Kannada, Malayalam and Tamil are Dravidian languages, which have rich agglutinative inflectional morphology BIBREF2 and resulted in around 10% OOV token rates on test sets (Hindi had the lowest OOV rate as 2-3%). Such experimental setup was designed to answer the questions:
If a single graphemic ASR model could scale its language-independent recognition up to all 7 languages.
If including all 7 languages could yield better ASR performance than using a small subset of closely related languages.
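Returning to the character-set overlap described above (the 432-character union), it is easy to verify once per-language training transcripts are at hand; the file naming below is a hypothetical convention, not the paper's:

def charset(path):
    with open(path, encoding="utf-8") as f:
        return set(f.read()) - set(" \t\n")

langs = ["kannada", "malayalam", "sinhala", "tamil", "bengali", "hindi", "marathi"]
charsets = {lang: charset(f"{lang}_train_text.txt") for lang in langs}

union = set().union(*charsets.values())
print("multilingual grapheme set size:", len(union))

for i, a in enumerate(langs):
    for b in langs[i + 1:]:
        print(a, b, "share", len(charsets[a] & charsets[b]), "characters")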
Experiments ::: Model configurations
Each bootstrap model was a GMM-HMM based system with speaker adaptive training, implemented with Kaldi BIBREF29. Each neural network acoustic model was a latency-controlled BLSTM BIBREF30, learned with lattice-free MMI objective and Adam optimizer BIBREF31. All neural networks were implemented with Caffe2 BIBREF32. Due to the production real time factor (RTF) requirements, we used the same model size in all cases – a 4 layer BLSTM network with 600 cells in each layer and direction – except that, the softmax dimensions, i.e. the optimal decision tree leaves, were determined through experiments on validation sets, varying within 7-30k. Input acoustic features were 80-dimensional log-mel filterbank coefficients. We used standard 5-gram language models. After lattice-free MMI training, the model with the best accuracy on validation set was used for evaluation on test set.
Experiments ::: Results with multilingual ASR
ASR word error rate (WER%) results are shown in Table TABREF11. We found that, although not explicitly given any information on test language identities, multilingual ASR with language-independent decoding (Section SECREF3) - trained on 3, 4, or 7 languages - substantially outperformed each monolingual ASR in all cases, and on average led to relative WER reductions between 4.6% (Sinhala) and 10.3% (Hindi).
Note that the word hypotheses from language-independent decoding could be language mismatched, e.g., part of a Kannada utterance was decoded into Marathi words. So we counted how many word tokens in the decoding transcripts were not in the lexicon of the corresponding test language. We found that in general only 1-3% of word tokens were language mismatched, indicating that the multilingual model was very effective in identifying the language implicitly and jointly recognizing the speech.
Consider the scenario where test language identities are known, as in each monolingual ASR, and we proceed with language-specific decoding (Section SECREF3) on Kannada and Hindi, via a language-specific lexicon and language model at decoding time. We found that language-specific decoding provided only moderate gains, presumably because, as discussed above, language-independent decoding had already kept the mismatched-language word token rate as low as 1-3%.
Additionally, the multilingual ASR of 4lang and 3lang (Section SECREF15) achieved the same, or even slightly better performance as compared to the ASR of 7lang, suggesting that incorporating closely related languages into multilingual training is most useful for improving ASR performance. However, the 7lang ASR by itself still yields the advantage in language-independent recognition of more languages.
Experiments ::: Results with data augmentation
First, we experimented with monolingual ASR on Kannada and Hindi, and performed comprehensive evaluations of the data augmentation techniques described in Section SECREF3. As in Table TABREF11, the performance gains of using frequency masking were substantial and comparable to those of using speed perturbation, where $m_F = 2$ and $F=15$ (Section SECREF12) worked best. In addition, combining both frequency masking and speed perturbation could provide further improvements. However, applying additional volume perturbation (Section SECREF4) or time masking (Section SECREF12) was not helpful in our preliminary experimentation.
Note that after speed perturbation, the training data tripled, to which we could apply another 3-fold augmentation based on additive noise (Section SECREF5), and the final train set was thus 9 times the size of original train set. We found that all 3 techniques were complementary, and in combination led to large fusion gains over each monolingual baseline – relative WER reductions of 8.7% on Kannada, and 14.8% on Hindi.
Secondly, we applied the 3 data augmentation techniques to the multilingual ASR of 7lang, and tested their additive effects. We show the resulting WERs on Kannada and Hindi in Table TABREF11. Note that on Kannada, we found around a 7% OOV token rate on Ads but around 10-11% on the other 3 video types, and we observed more gains on Ads, presumably because the improved acoustic model could only correct in-vocabulary word errors, so lower OOV rates left more room for improvement. Hindi had around 2.5% OOV rates on each video type, and we found that incorporating data augmentation into multilingual ASR led to on average 9.0% relative WER reductions.
Overall, we demonstrated the multilingual ASR with massive data augmentation – via a single graphemic model even without the use of explicit language ID – allowed for relative WER reductions of 11.0% on Kannada and 18.4% on Hindi.
Conclusions
We have presented a multilingual grapheme-based ASR model that can effectively perform language-independent recognition on any within-training-set language, and substantially outperform each monolingual ASR alternative. Various data augmentation techniques can yield further complementary improvements. Such a single multilingual model can not only provide better ASR performance, but also serve as an alternative to the standard production deployment that typically includes extensive monolingual ASR systems and a separate language ID model.
Future work will expand the language coverage to include both geographically proximal and distant languages. Additionally, given the identity of a target test language, we will consider the hidden layers of such multilingual acoustic model as a pre-trained model, and thus perform subsequent monolingual fine-tuning, as compared to the multitask learning procedure in BIBREF8, BIBREF9. | Little overlap except common basic Latin alphabet and that Hindi and Marathi languages use same script. |
d10e256f2f724ad611fd3ff82ce88f7a78bad7f7 | d10e256f2f724ad611fd3ff82ce88f7a78bad7f7_0 | Q: What is the performance of the model for the German sub-task A?
Text: Introduction
In social media, abusive language denotes text which contains any form of unacceptable language in a post or a comment. Abusive language can be divided into hate speech, offensive language and profanity. Hate speech is a derogatory comment that hurts an entire group in terms of ethnicity, race or gender. Offensive language is similar to a derogatory comment, but it is targeted towards an individual. Profanity refers to any use of unacceptable language without a specific target. While profanity is the least threatening, hate speech has the most detrimental effect on society.
Social media moderators are having a hard time combating the rampant spread of hate speech, as it is closely related to the other forms of abusive language. The evolution of new slang and multilingualism further add to the complexity.
Recently, there has been a sharp rise in hate speech related incidents in India, with lynchings being a clear indication BIBREF1. Arun et al. BIBREF1 suggest that hate speech in India is very complicated, as people are not directly spreading hate but are spreading misinformation against a particular community. Hence, it has become imperative to study hate speech in Indian languages.
For the first time, a shared task on abusive content detection has been released for the Hindi language at HASOC 2019. This will fuel hate speech and offensive language research for Indian languages. The inclusion of datasets for the English and German languages will allow a performance comparison for the detection of abusive content in high- and low-resource languages.
In this paper, we focus on multilingual hate speech detection for posts written in Hindi, English, and German, and describe our submission (HateMonitors) for the HASOC at FIRE 2019 competition. Our system concatenates two types of sentence embeddings to represent each tweet and uses machine learning models for classification.
Related works
Analyzing abusive language in social media is a daunting task. Waseem et al. BIBREF2 categorize abusive language into two sub-classes: hate speech and offensive language. In their analysis, classifying abusive language into these two subtypes is challenging due to the correlation between offensive language and hate speech BIBREF3. Nobata et al. BIBREF4 use predefined language elements and embeddings to train a regression model. With the introduction of better classification models BIBREF5, BIBREF6 and newer features BIBREF7, BIBREF3, BIBREF8, research in hate and offensive speech detection has gained momentum.
Silva et al. BIBREF9 performed a large-scale study to understand the targets of such hate speech on two social media platforms: Twitter and Whisper. These targets could be refugees and immigrants BIBREF10, Jews BIBREF11, BIBREF12 and Muslims BIBREF13, BIBREF14. People could become the target of hate speech based on nationality BIBREF15, sex BIBREF16, BIBREF17, and gender BIBREF18, BIBREF19 as well. Public expressions of hate speech contribute to the devaluation of minority members BIBREF20 and the exclusion of minorities from society BIBREF21, and tend to diffuse through the network at a faster rate BIBREF22.
One of the key issues with the current state of hate and offensive language research is that the majority of the research is dedicated to the English language BIBREF23. Few researchers have tried to solve the problem of abusive language in other languages BIBREF10, BIBREF24, but the works are mostly monolingual. Any online social media platform contains people of different ethnicities, which results in the spread of information in multiple languages. Hence, a robust classifier is needed which can deal with abusive language in the multilingual domain. Several shared tasks like HASOC BIBREF0, HaSpeeDe BIBREF25, GermEval BIBREF26, AMI BIBREF27, and HatEval BIBREF28 have recently focused on the detection of abusive text in multiple languages.
Dataset and Task description
The datasets at HASOC 2019 were given in three languages: Hindi, English, and German. The Hindi and English datasets had three subtasks each, while German had only two subtasks. We participated in all the tasks provided by the organisers and decided to develop a single model that would be language agnostic. We used the same model architecture for all three languages.
Dataset and Task description ::: Datasets
We present the statistics for the HASOC dataset in Table TABREF5. From the table, we can observe that the dataset for the German language is highly unbalanced, while the English and Hindi datasets are more or less balanced for sub-task A. For sub-task B, the German dataset is balanced but the others are unbalanced. For sub-task C, both datasets are highly unbalanced.
Dataset and Task description ::: Tasks
Sub-task A consists of building a binary classification model which can predict whether a given piece of text is hateful and offensive (HOF) or not (NOT). A data point is annotated as HOF if it contains any form of non-acceptable language such as hate speech, aggression, or profanity. Each of the three languages has this subtask.
Sub-task B consists of building a multi-class classification model which can predict the three different classes in the data points annotated as HOF: Hate speech (HATE), Offensive language (OFFN), and Profane (PRFN). Again all three languages have this sub-task.
Sub-task C consists of building a binary classification model which can predict the type of offense: Targeted (TIN) and Untargeted (UNT). Sub-task C was not conducted for the German dataset.
System Description
In this section, we explain the details of our system, which comprises two sub-parts: feature generation and model selection. Figure FIGREF15 shows the architecture of our system.
System Description ::: Feature Generation ::: Preprocessing:
We preprocess the tweets before performing feature extraction, following the steps below; a minimal code sketch is given after the list.
We remove all URLs.
We convert text to lowercase. This step is not applied to Hindi, since the Devanagari script does not have lowercase and uppercase characters.
We do not normalize the mentions in the text, as they could potentially reveal important information to the embedding encoders.
Any numerical figure is normalized to the string `number'.
We do not remove any punctuation or stop-words, since the context of the sentence might get lost in such a process. As we are using sentence embeddings, it is essential to keep the context of the sentence intact.
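A minimal sketch of these preprocessing steps, written with plain Python regular expressions, is shown below; the exact patterns and the `lang` flag are illustrative assumptions rather than the submission's actual code.

```python
import re

def preprocess(text: str, lang: str) -> str:
    """Minimal preprocessing sketch: drop URLs, lowercase (except Hindi),
    map numbers to the token 'number', keep mentions and punctuation."""
    # Remove URLs.
    text = re.sub(r"https?://\S+|www\.\S+", "", text)
    # Lowercasing is skipped for Hindi (Devanagari has no letter case).
    if lang != "hi":
        text = text.lower()
    # Normalize any numerical figure to the string 'number'.
    text = re.sub(r"\d+(?:[.,]\d+)?", "number", text)
    # Mentions, punctuation and stop-words are intentionally left untouched
    # so that the sentence-level context stays intact for the encoders.
    return re.sub(r"\s+", " ", text).strip()

print(preprocess("Check https://example.com, 2 tweets by @user!", lang="en"))
# -> "check number tweets by @user!"
```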
System Description ::: Feature Generation ::: Feature vectors:
The preprocessed posts are then used to generate features for the classifier. For our model, we decided to generate two types of feature vectors: BERT embeddings and LASER embeddings. For each post, we generate the BERT and LASER embeddings, which are then concatenated and fed as input to the final classifier.
Multilingual BERT embeddings: Bidirectional Encoder Representations from Transformers (BERT) BIBREF29 has played a key role in the advancement of natural language processing (NLP). BERT is a language model which is trained to predict the masked words in a sentence. To generate the sentence embedding for a post, we take the mean of the last 11 layers (out of 12) to get a sentence vector of length 768.
LASER embeddings: Researchers at Facebook released language-agnostic sentence embedding representations (LASER) BIBREF30, where the model is jointly learned on 93 languages. The model takes a sentence as input and produces a vector representation of length 1024. The model is able to handle code mixing as well BIBREF31.
We pass the preprocessed sentences through each of these embedding models and obtain two separate sentence representations. We then concatenate the embeddings into a single feature vector of length 1792, which is passed to the final classification model.
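The feature-generation step can be sketched as follows, assuming the `transformers` and `laserembeddings` packages as stand-ins for the BERT and LASER encoders; the model names and layer-averaging details are assumptions and may differ from the authors' setup.

```python
import numpy as np
import torch
from transformers import BertTokenizer, BertModel
from laserembeddings import Laser

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = BertModel.from_pretrained("bert-base-multilingual-cased",
                                 output_hidden_states=True).eval()
laser = Laser()

def embed(posts, lang):
    feats = []
    for post in posts:
        enc = tokenizer(post, return_tensors="pt", truncation=True, max_length=128)
        with torch.no_grad():
            hidden = bert(**enc).hidden_states           # tuple: embeddings + 12 layers
        # Mean over the last 11 transformer layers, then over tokens -> 768 dims.
        bert_vec = torch.stack(hidden[-11:]).mean(dim=0).mean(dim=1).squeeze(0).numpy()
        feats.append(bert_vec)
    bert_mat = np.vstack(feats)                           # (n, 768)
    laser_mat = laser.embed_sentences(posts, lang=lang)   # (n, 1024)
    return np.hstack([bert_mat, laser_mat])               # (n, 1792)

X = embed(["this is a sample post"], lang="en")
print(X.shape)  # (1, 1792)
```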
System Description ::: Our Model
The amount of data in each category was insufficient to train a deep learning model, and building such deep models would lead to overfitting. So, we resorted to simpler models such as SVMs and gradient boosted trees. Gradient boosted trees BIBREF32 are often the choice for systems where features are pre-extracted from the raw data. In the category of gradient boosted trees, Light Gradient Boosting Machine (LGBM) BIBREF33 is considered one of the most efficient in terms of memory footprint. Moreover, it has been part of the winning solutions of many competitions. Hence, we used LGBM as the model for the downstream tasks in this competition.
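A sketch of the resulting classifier for one binary sub-task is given below; the LGBM hyperparameters and the placeholder features are illustrative only, not the tuned values used in the submission.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# X: (n, 1792) concatenated BERT+LASER features, y: 0 = NOT, 1 = HOF.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 1792))          # placeholder features
y = rng.integers(0, 2, size=1000)          # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LGBMClassifier(n_estimators=500, learning_rate=0.05, num_leaves=31)
clf.fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```

The same architecture is reused for every language and sub-task, only the training data changes.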
Results
The performance of our models across the different languages for sub-task A is shown in Table TABREF19. Our model got the first position in the German sub-task with a macro F1 score of 0.62. The results of sub-task B and sub-task C are shown in Tables TABREF20 and TABREF21, respectively.
Discussion
In the results of sub-task A, the models are mainly affected by the imbalance of the datasets. The Hindi training dataset was more balanced than the English or German datasets; hence, the results were around 0.78. As the German dataset was highly imbalanced, the result drops to 0.62. In sub-task B, the highest F1 score for each language was reached by the profane class (Table TABREF20). The model got confused between the OFFN, HATE, and PRFN labels, which suggests that these models are not able to capture the context in the sentence. Sub-task C was again a case of an imbalanced dataset, as the targeted (TIN) label gets the highest F1 score in Table TABREF21.
Conclusion
In this shared task, we experimented with zero-shot transfer learning on abusive text detection with pre-trained BERT and LASER sentence embeddings. We trained an LGBM model on these embeddings to perform the downstream tasks. Our model for the German language got the first position. The results provide a strong baseline for further research in multilingual hate speech. We have also made the models public for use by other researchers. | macro F1 score of 0.62 |
c691b47c0380c9529e34e8ca6c1805f98288affa | c691b47c0380c9529e34e8ca6c1805f98288affa_0 | Q: Is the model tested for language identification?
Text: (same paper text as the entry above) | No |
892e42137b14d9fabd34084b3016cf3f12cac68a | 892e42137b14d9fabd34084b3016cf3f12cac68a_0 | Q: Is the model compared to a baseline model?
Text: (same paper text as the entry above) | No |
dc69256bdfe76fa30ce4404b697f1bedfd6125fe | dc69256bdfe76fa30ce4404b697f1bedfd6125fe_0 | Q: What are the languages used to test the model?
Text: (same paper text as the entry above) | Hindi, English and German (German task won) |
097ab15f58cb1fce5b5ffb5082b8d7bbee720659 | 097ab15f58cb1fce5b5ffb5082b8d7bbee720659_0 | Q: Which language has the lowest error rate reduction?
Text: Introduction
In this paper we discuss online handwriting recognition: Given a user input in the form of an ink, i.e. a list of touch or pen strokes, output the textual interpretation of this input. A stroke is a sequence of points INLINEFORM0 with position INLINEFORM1 and timestamp INLINEFORM2 .
Figure FIGREF1 illustrates example inputs to our online handwriting recognition system in different languages and scripts. The left column shows examples in English with different writing styles, with different types of content, and that may be written on one or multiple lines. The center column shows examples from five different alphabetic languages similar in structure to English: German, Russian, Vietnamese, Greek, and Georgian. The right column shows scripts that are significantly different from English: Chinese has a much larger set of more complex characters, and users often overlap characters with one another. Korean, while an alphabetic language, groups letters in syllables leading to a large “alphabet” of syllables. Hindi writing often contains a connecting ‘Shirorekha’ line and characters can form larger structures (grapheme clusters) which influence the written shape of the components. Arabic is written right-to-left (with embedded left-to-right sequences used for numbers or English names) and characters change shape depending on their position within a word. Emoji are non-text Unicode symbols that we also recognize.
Online handwriting recognition has recently been gaining importance for multiple reasons: (a) An increasing number of people in emerging markets are obtaining access to computing devices, many exclusively using mobile devices with touchscreens. Many of these users have native languages and scripts that are not as easily typed as English, e.g. due to the size of the alphabet or the use of grapheme clusters which make it difficult to design an intuitive keyboard layout BIBREF0 . (b) More and more large mobile devices with styluses are becoming available, such as the iPad Pro, Microsoft Surface devices, and Chromebooks with styluses.
Early work in online handwriting recognition looked at segment-and-decode classifiers, such as the Newton BIBREF1 . Another line of work BIBREF2 focused on solving online handwriting recognition by making use of Hidden Markov Models (HMMs) BIBREF3 or hybrid approaches combining HMMs and Feed-forward Neural Networks BIBREF4 . The first HMM-free models were based on Time Delay Neural Networks (TDNNs) BIBREF5 , BIBREF6 , BIBREF7 , and more recent work focuses on Recurrent Neural Network (RNN) variants such as Long-Short-Term-Memory networks (LSTMs) BIBREF8 , BIBREF9 .
How to represent online handwriting data has been a research topic for a long time. Early approaches were feature-based, where each point is represented using a set of features BIBREF6 , BIBREF10 , BIBREF1 , or using global features to represent entire characters BIBREF6 . More recently, the deep learning revolution has swept away most feature engineering efforts and replaced them with learned representations in many domains, e.g. speech BIBREF11 , computer vision BIBREF12 , and natural language processing BIBREF13 .
Together with architecture changes, training methodologies also changed, moving from relying on explicit segmentation BIBREF7 , BIBREF1 , BIBREF14 to implicit segmentation using the Connectionist Temporal Classification (CTC) loss BIBREF15 , or Encoder-Decoder approaches trained with Maximum Likelihood Estimation BIBREF16 . Further recent work is also described in BIBREF17 .
The transition to more complex network architectures and end-to-end training can be associated with breakthroughs in related fields focused on sequence understanding where deep learning methods have outperformed “traditional” pattern recognition methods, e.g. in speech recognition BIBREF18 , BIBREF19 , OCR BIBREF20 , BIBREF21 , offline handwriting recognition BIBREF22 , and computer vision BIBREF23 .
In this paper we describe our new online handwriting recognition system based on deep learning methods. It replaces our previous segment-and-decode system BIBREF14 , which first over-segments the ink, then groups the segments into character hypotheses, and computes features for each character hypothesis which are then classified as characters using a rather shallow neural network. The recognition result is then obtained using a best path search decoding algorithm on the lattice of hypotheses incorporating additional knowledge sources such as language models. This system relies on numerous pre-processing, segmentation, and feature extraction heuristics which are no longer present in our new system. The new system reduces the amount of customization required, and consists of a simple stack of bidirectional LSTMs (BLSTMs), a single Logits layer, and the CTC loss BIBREF24 (Sec. SECREF2 ) trained for each script (Sec. SECREF3 ). To support potentially many languages per script (see Table TABREF5 ), language-specific language models and feature functions are used during decoding (Sec. SECREF38 ). E.g. we have a single recognition model for Arabic script which is combined with specific language models and feature functions for our Arabic, Persian, and Urdu language recognizers. Table TABREF5 shows the full list of scripts and languages that we currently support.
The new models are more accurate (Sec. SECREF4 ), smaller, and faster (Table TABREF68 ) than our previous segment-and-decode models and eliminate the need for a large number of engineered features and heuristics.
We present an extensive comparison of the differences in recognition accuracy for eight languages (Sec. SECREF5 ) and compare the accuracy of models trained on publicly available datasets where available (Sec. SECREF4 ). In addition, we propose a new standard experimental protocol for the IBM-UB-1 dataset BIBREF25 (Sec. SECREF50 ) to enable easier comparison between approaches in the future.
The main contributions of our paper are as follows:
End-to-end Model Architecture
Our handwriting recognition model draws its inspiration from research aimed at building end-to-end transcription models in the context of handwriting recognition BIBREF24 , optical character recognition BIBREF21 , and acoustic modeling in speech recognition BIBREF18 . The model architecture is constructed from common neural network blocks, i.e. bidirectional LSTMs and fully-connected layers (Figure FIGREF12 ). It is trained in an end-to-end manner using the CTC loss BIBREF24 .
Our architecture is similar to what is often used in the context of acoustic modeling for speech recognition BIBREF19 , in which it is referred to as a CLDNN (Convolutions, LSTMs, and DNNs), yet we differ from it in four points. Firstly, we do not use convolution layers, which in our own experience do not add value for large networks trained on large datasets of relatively short (compared to speech input) sequences typically seen in handwriting recognition. Secondly, we use bidirectional LSTMs, which due to latency constraints is not feasible in speech recognition systems. Thirdly, our architecture does not make use of additional fully-connected layers before and after the bidirectional LSTM layers. And finally, we train our system using the CTC loss, as opposed to the HMMs used in BIBREF19 .
This structure makes many components of our previous system BIBREF14 unnecessary, e.g. for feature extraction and segmentation. The heuristics that were hard-coded into our previous system, e.g. stroke-reordering and character hypothesis building, are now implicitly learned from the training data.
The model takes as input a time series INLINEFORM0 of length INLINEFORM1 encoding the user input (Sec. SECREF13 ) and passes it through several bidirectional LSTM layers BIBREF26 which learn the structure of characters (Sec. SECREF34 ).
The output of the final LSTM layer is passed through a softmax layer (Sec. SECREF35 ) leading to a sequence of probability distributions over characters for each time step.
For CTC decoding (Sec. SECREF44 ) we use beam search to combine the softmax outputs with character-based language models, word-based language models, and information about language-specific characters as in our previous system BIBREF14 .
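As a rough illustration of this stack, the sketch below builds a stacked bidirectional-LSTM recognizer with a per-timestep logits layer in Keras; the depth, width, dropout rate, and number of character classes are placeholders, not the script-specific production configurations.

```python
import tensorflow as tf

NUM_CLASSES = 100   # characters in the script (placeholder); +1 below for the CTC blank
INPUT_DIM = 5       # raw touch points; 10 when using Bézier curve inputs

def build_recognizer(num_layers=3, units=64, dropout=0.5):
    inputs = tf.keras.Input(shape=(None, INPUT_DIM))    # variable-length sequences
    x = inputs
    for _ in range(num_layers):
        x = tf.keras.layers.Bidirectional(
            tf.keras.layers.LSTM(units, return_sequences=True))(x)
        x = tf.keras.layers.Dropout(dropout)(x)
    # Per-timestep logits over characters plus one blank label for CTC.
    logits = tf.keras.layers.Dense(NUM_CLASSES + 1)(x)
    return tf.keras.Model(inputs, logits)

model = build_recognizer()
model.summary()
```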
Input Representation
In our earlier paper BIBREF14 we presented results on our datasets with a model similar to the one proposed in BIBREF24 . In that model we used 23 per-point features (similarly to BIBREF6 ) as described in our segment-and-decode system to represent the input. In further experimentation we found that in substantially deeper and wider models, engineered features are unnecessary and their removal leads to better results. This confirms the observation that learned representations often outperform handcrafted features in scenarios in which sufficient training data is available, e.g. in computer vision BIBREF27 and in speech recognition BIBREF28 . In the experiments presented here, we use two representations:
The simplest representation of stroke data is as a sequence of touch points. In our current system, we use a sequence of 5-dimensional points INLINEFORM0 where INLINEFORM1 are the coordinates of the INLINEFORM2 th touchpoint, INLINEFORM3 is the timestamp of the touchpoint since the first touch point in the current observation in seconds, INLINEFORM4 indicates whether the point corresponds to a pen-up ( INLINEFORM5 ) or pen-down ( INLINEFORM6 ) stroke, and INLINEFORM7 indicates the start of a new stroke ( INLINEFORM8 otherwise).
In order to keep the system as flexible as possible with respect to differences in the writing surface, e.g. area shape, size, spatial resolution, and sampling rate, we perform some minimal preprocessing:
Normalization of INLINEFORM0 and INLINEFORM1 coordinates, by shifting in INLINEFORM2 such that INLINEFORM3 , and shifting and scaling the writing area isometrically such that the INLINEFORM4 coordinate spans the range between 0 and 1. In cases where the bounding box of the writing area is unknown we use a surrogate area 20% larger than the observed range of touch points.
Equidistant linear resampling along the strokes with INLINEFORM0 , i.e. a line of length 1 will have 20 points.
We do not assume that words are written on a fixed baseline or that the input is horizontal. As in BIBREF24 , we use the differences between consecutive points for the INLINEFORM0 coordinates and the time INLINEFORM1 such that our input sequence is INLINEFORM2 for INLINEFORM3 , and INLINEFORM4 for INLINEFORM5 .
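A simplified sketch of this raw-point preparation is shown below; the feature ordering, the handling of the pen-up flag, and the delta encoding of the first point are assumptions, with only the resampling density (20 points per unit length) taken from the text.

```python
import numpy as np

def resample_stroke(pts, step=0.05):
    """Equidistant linear resampling of one stroke given as (n, 3) [x, y, t];
    step = 1/20 so a line of length 1 yields 20 points."""
    seg = np.linalg.norm(np.diff(pts[:, :2], axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])
    if dist[-1] == 0.0:
        return pts[:1]
    grid = np.arange(0.0, dist[-1] + 1e-9, step)
    return np.stack([np.interp(grid, dist, pts[:, k]) for k in range(3)], axis=1)

def ink_to_sequence(strokes):
    """strokes: list of (n_i, 3) arrays [x, y, t]. Returns a (T, 5) model input."""
    all_pts = np.concatenate(strokes)
    # Shift/scale isometrically so that y spans [0, 1] and min x / min t are 0.
    x0, y0, t0 = all_pts.min(axis=0)
    scale = (all_pts[:, 1].max() - y0) or 1.0
    rows = []
    for s_idx, s in enumerate(strokes):
        s = s - [x0, y0, t0]
        s[:, :2] /= scale
        s = resample_stroke(s)
        for p_idx, (x, y, t) in enumerate(s):
            n = 1.0 if (s_idx > 0 and p_idx == 0) else 0.0   # new-stroke flag
            rows.append([x, y, t, 1.0, n])                    # all points treated as pen-down here
    pts = np.array(rows)
    # Differences between consecutive points for x, y, t; flags are kept as-is.
    deltas = np.diff(pts[:, :3], axis=0)
    pts[1:, :3] = deltas
    pts[0, :3] = 0.0
    return pts
```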
However simple, the raw input data has some drawbacks, i.e.
Resolution: Not all input devices sample inputs at the same rate, resulting in different point densities along the input strokes, requiring resampling which may inadvertently normalize-out details in the input.
Length: We choose the (re-)sampling rate such as to represent the smallest features well, which leads to over-sampling in less interesting parts of the stroke, e.g. in straight lines.
Model complexity: The model has to learn to map small consecutive steps to larger global features.
Bézier curves are a natural way to describe trajectories in space, and have been used to represent online handwriting data in the past, yet mostly as a means of removing outliers in the input data BIBREF29 , up-sampling sparse data BIBREF6 , or for rendering handwriting data smoothly on a screen BIBREF30 . Since a sequence of Bézier curves can represent a potentially long point sequence compactly, irrespective of the original sampling rate, we experiment with representing a sequence of input points as a sequence of parametric cubic polynomials, and using these as inputs to the recognition model.
These Bézier curves for INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are cubic polynomials in INLINEFORM3 , i.e.: DISPLAYFORM0
We start by normalizing the size of the entire ink such that the INLINEFORM0 values are within the range INLINEFORM1 , similar to how we process it for raw points. The time values are scaled linearly to match the length of the ink such that DISPLAYFORM0
in order to obtain values in the same numerical range as INLINEFORM0 and INLINEFORM1 . This sets the time difference between the first and last point of the stroke to be equal to the total spatial length of the stroke.
For each stroke in an ink, the coefficients INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are computed by minimizing the sum of squared errors (SSE) between each observed point INLINEFORM3 and its corresponding closest point (defined by INLINEFORM4 ) on the Bézier curve: DISPLAYFORM0
Where INLINEFORM0 is the number of points in the stroke. Given a set of coordinates INLINEFORM1 , computing the coefficients corresponds to solving the following linear system of equations: DISPLAYFORM0
which can be solved exactly for INLINEFORM0 , and in the least-squares sense otherwise, e.g. by solving the normal equations DISPLAYFORM0
for the coefficients INLINEFORM0 . We alternate between minimizing the SSE in eq. ( EQREF24 ) and finding the corresponding points INLINEFORM1 , until convergence. The coordinates INLINEFORM2 are updated using a Newton step on DISPLAYFORM0
which is zero when INLINEFORM0 is orthogonal to the direction of the curve INLINEFORM1 .
If (a) the curve cannot fit the points well (the SSE is too large) or if (b) the curve has too sharp bends (arc length longer than 3 times the endpoint distance), we split the curve into two parts. We determine the split point in case (a) by finding the triplet of consecutive points with the smallest angle, and in case (b) as the point closest to the maximum local curvature along the entire Bézier curve. This heuristic is applied recursively until both curve-matching criteria are met.
As a final step, to remove spurious breakpoints, consecutive curves that can be represented by a single curve are stitched back together, resulting in a compact set of Bézier curves representing the data within the above constraints. For each consecutive pair of curves, we try to fit a single curve using the combined set of underlying points. If the fit agrees with the above criteria, we replace the two curves by the new one. This is applied repeatedly until no merging happens anymore.
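The sketch below illustrates only the least-squares core of this fitting procedure for a single stroke, with the curve parameter fixed to normalized arc length; the alternating Newton refinement of the parameters and the split/merge heuristics described above are omitted.

```python
import numpy as np

def fit_cubic(points):
    """Least-squares fit of cubic polynomials x(s), y(s), t(s) to one stroke.

    points: (n, 3) array of [x, y, t]. Returns (coeffs, sse) where coeffs is a
    (4, 3) matrix of polynomial coefficients (rows: s^0..s^3; cols: x, y, t).
    Simplification: s is fixed to normalized arc length instead of being
    refined with Newton steps as in the full procedure.
    """
    seg = np.linalg.norm(np.diff(points[:, :2], axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s = s / s[-1] if s[-1] > 0 else np.linspace(0.0, 1.0, len(points))
    V = np.vander(s, N=4, increasing=True)          # columns [1, s, s^2, s^3]
    coeffs, *_ = np.linalg.lstsq(V, points, rcond=None)
    sse = float(np.sum((V @ coeffs - points) ** 2))
    return coeffs, sse

# Toy stroke: a quarter arc sampled at 20 points, time proportional to arc position.
theta = np.linspace(0.0, np.pi / 2, 20)
stroke = np.stack([np.cos(theta), np.sin(theta), np.linspace(0, 1, 20)], axis=1)
coeffs, sse = fit_cubic(stroke)
print("SSE:", sse)
```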
Since the Bézier coefficients INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 may vary significantly in range, each curve is fed to the network as a 10-dimensional vector consisting of:
the vector between the endpoints (Figure FIGREF28 , blue vector, 2 values),
the distance between the control points and the endpoints relative to the distance between the endpoints (green dashed lines, 2 values),
the two angles between each control point and the endpoints (green arcs, 2 values),
the time coefficients INLINEFORM0 , INLINEFORM1 and INLINEFORM2 (not shown),
a boolean value indicating whether this is a pen-up or pen-down curve (not shown).
Due to the normalization of the INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 coordinates, as well as the constraints on the curves themselves, most of the resulting values are in the range INLINEFORM3 .
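Assuming the fitted curve is available through its four control points, one possible construction of this 10-dimensional vector looks as follows; the reference directions for the two angles follow Figure FIGREF28, which is not reproduced here, so measuring both angles against the endpoint vector is an assumption of this sketch.

```python
import numpy as np

def angle_between(u, v):
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def curve_features(P, time_coeffs, pen_down):
    """P: (4, 2) control points of a cubic Bézier (x, y);
    time_coeffs: the three time coefficients of t(s); pen_down: bool.
    Returns a 10-dimensional vector in the spirit of the description above."""
    P0, P1, P2, P3 = P
    d = P3 - P0                                   # vector between endpoints (2 values)
    d_len = np.linalg.norm(d) + 1e-9
    rel1 = np.linalg.norm(P1 - P0) / d_len        # control-point distances (2 values)
    rel2 = np.linalg.norm(P2 - P3) / d_len
    a1 = angle_between(P1 - P0, d)                # angles (2 values, assumption: vs. endpoint vector)
    a2 = angle_between(P2 - P3, -d)
    return np.concatenate([d, [rel1, rel2, a1, a2], time_coeffs, [float(pen_down)]])

feat = curve_features(np.array([[0, 0], [0.2, 0.4], [0.8, 0.5], [1.0, 0.0]]),
                      time_coeffs=[0.9, 0.05, 0.02], pen_down=True)
print(feat.shape)   # (10,)
```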
The input data is of higher dimension than the raw inputs described in Sec. UID14 , i.e. 10 vs. 5 dimensional, but the input sequence itself is roughly INLINEFORM0 shorter, making curves a good choice for latency-sensitive models.
In most cases, as highlighted in the experimental sections of this paper, the curve representation contributes to better recognition accuracy and speed of our models. However, there are also situations where the curve representation introduces mistakes: punctuation marks become more similar to each other and are sometimes wrongly recognized, capitalization errors appear from time to time, and in some cases candidate recognitions with higher language model scores are preferred.
Bidirectional Long-Short-Term-Memory Recurrent Neural Networks
LSTMs BIBREF31 have become one of the most commonly used RNN cells because they are easy to train and give good results BIBREF32 . In all experiments we use bidirectional LSTMs, i.e. we process the input sequence forward and backward and merge the output states of each layer before feeding them to the next layer. The exact number of layers and nodes is determined empirically for each script. We give an overview of the impact of the number of nodes and layers in section SECREF4 . We also list the configurations for several scripts in our production system, as of this writing.
Softmax Layer
The output of the LSTM layers at each timestep is fed into a softmax layer to get a probability distribution over the INLINEFORM0 possible characters in the script (including spaces, punctuation marks, numbers or other special characters), plus a blank label required by the CTC loss and decoder.
Decoding
The output of the softmax layer is a sequence of INLINEFORM0 time steps of INLINEFORM1 classes that we decode using CTC decoding BIBREF15 . The logits from the softmax layer are combined with language-specific prior knowledge (cp. Sec. SECREF38 ). For each of these additional knowledge sources we learn a weight (called “decoder weight” in the following) and combine them linearly (cp. Sec. SECREF3 ). The learned combination is used as described in BIBREF33 to guide the beam search during decoding.
This combination of different knowledge sources allows us to train one recognition model per script (e.g. Latin script, or Cyrillic script) and then use it to serve multiple languages (see Table TABREF5 ).
Feature Functions: Language Models and Character Classes
Similarly to our previous work BIBREF14 , we define several scoring functions, which we refer to as feature functions. The goal of these feature functions is to introduce prior knowledge about the underlying language into the system. The introduction of recurrent neural networks has reduced the need for many of them and we now use only the following three:
Character Language Models: For each language we support, we build a 7-gram language model over Unicode codepoints from a large web-mined text corpus using Stupid back-off BIBREF35 . The final files are pruned to 10 million 7-grams each. Compared to our previous system BIBREF14 , we found that language model size has a smaller impact on the recognition accuracy, which is likely due to the capability of recurrent neural networks to capture dependencies between consecutive characters. We therefore use smaller language models over shorter contexts.
Word Language Models: For languages using spaces to separate words, we also use a word-based language model trained on a similar corpus as the character language models BIBREF36 , BIBREF37 , using 3-grams pruned to between 1.25 million and 1.5 million entries.
Character Classes: We add a scoring heuristic which boosts the score of characters from the language's alphabet. This feature function provides a strong signal for rare characters that may not be recognized confidently by the LSTM, and which the other language models might not weigh heavily enough to be recognized. This feature function was inspired by our previous system BIBREF14 .
In Section SECREF4 we provide an experimental evaluation of how much each of these feature functions contributes to the final result for several languages.
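The way such feature functions enter the beam search can be sketched as a log-linear combination of scores when a hypothesis is extended by one character; the toy dictionary language model and the weights below are stand-ins for the pruned 7-gram stupid back-off model, the word language model (omitted here), and the tuned decoder weights.

```python
import numpy as np

def combined_score(logp_char, hypothesis, next_char, weights, char_lm, alphabet):
    """Log-linear score used to extend one beam hypothesis by one character.

    logp_char : log-probability of next_char from the softmax at this step
    hypothesis: characters decoded so far
    weights   : decoder weights, one per knowledge source (tuned separately)
    char_lm   : dict mapping a context string to {char: log-prob}, a toy
                stand-in for the pruned character n-gram model
    alphabet  : set of characters belonging to the language
    """
    context = "".join(hypothesis[-6:])                       # up to 6 characters of context
    lm_score = char_lm.get(context, {}).get(next_char, -10.0)
    class_score = 0.0 if next_char in alphabet else -1.0     # character-class boost
    return (logp_char
            + weights["char_lm"] * lm_score
            + weights["char_class"] * class_score)

w = {"char_lm": 0.3, "char_class": 0.5}                      # illustrative weights
lm = {"hell": {"o": -0.1}}
print(combined_score(np.log(0.7), list("hell"), "o", w, lm,
                     set("abcdefghijklmnopqrstuvwxyz")))
```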
Training
The training of our system happens in two stages, on two different datasets: we first train the neural network weights end-to-end using the CTC loss, and then tune the decoder weights that combine the feature functions used during decoding.
Using separate datasets is important because the neural network learns the local appearance as well as an implicit language model from the training data. It will be overconfident on its training data and thus learning the decoder weights on the same dataset could result in weights biased towards the neural network model.
Connectionist Temporal Classification Loss
As our training data does not contain frame-aligned labels, we rely on the CTC loss BIBREF15 for training which treats the alignment between inputs and labels as a hidden variable. CTC training introduces an additional blank label which is used internally for learning alignments jointly with character hypotheses, as described in BIBREF15 .
We train all neural network weights jointly using the standard TensorFlow BIBREF34 implementation of CTC training using the Adam Optimizer BIBREF39 with a batch size of 8, a learning rate of INLINEFORM0 , and gradient clipping such that the gradient INLINEFORM1 -norm is INLINEFORM2 . Additionally, to improve the robustness of our models and prevent overfitting, we train our models using random dropout BIBREF40 , BIBREF41 after each LSTM layer with a dropout rate of INLINEFORM3 . We train until the error rate on the evaluation dataset no longer improves for 5 million steps.
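A minimal TensorFlow sketch of this training step, reusing the `build_recognizer` model sketched earlier, is given below; the learning rate and the clipping threshold are placeholders since the exact values are elided above.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)   # placeholder learning rate
CLIP_NORM = 9.0                                            # placeholder global-norm threshold

@tf.function
def train_step(model, inputs, labels, input_lengths, label_lengths):
    with tf.GradientTape() as tape:
        logits = model(inputs, training=True)        # (batch, T, num_classes + 1); dropout active
        loss = tf.reduce_mean(tf.nn.ctc_loss(
            labels=labels,                            # dense int32 labels, padded
            logits=logits,
            label_length=label_lengths,
            logit_length=input_lengths,
            logits_time_major=False,
            blank_index=-1))                          # last class is the CTC blank
    grads = tape.gradient(loss, model.trainable_variables)
    grads, _ = tf.clip_by_global_norm(grads, CLIP_NORM)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```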
Bayesian Optimization for Tuning Decoder Weights
To optimize the decoder weights, we rely on the Google Vizier service and its default algorithm, specifically batched Gaussian process bandits, and expected improvement as the acquisition function BIBREF38 .
For each recognizer training we start 7 Vizier studies, each performing 500 individual trials, and then we pick the configuration that performed best across all of these trials. We experimentally found that using 7 separate studies with different random initializations regularly leads to better results than running a single study once. We found that using more than 500 trials per study does not lead to any additional improvement.
For each script we train these weights on a subset of the languages for which we have sufficient data, and transfer the weights to all the other languages. E.g. for the Latin-script languages, we train the decoder weights on English and German, and use the resulting weights for all languages in the first row of Table TABREF5 .
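As a rough, publicly reproducible stand-in for the Vizier-based tuning, the sketch below runs a plain random search over decoder weights against a user-supplied evaluation function; the Gaussian-process bandit strategy itself is not reproduced here.

```python
import random

def random_search(evaluate, num_weights, num_trials=500, seed=0):
    """`evaluate(weights) -> score on the tuning set`; returns the best weights."""
    rng = random.Random(seed)
    best_w, best_score = None, float("-inf")
    for _ in range(num_trials):
        w = [rng.uniform(0.0, 2.0) for _ in range(num_weights)]
        score = evaluate(w)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score

# Toy objective standing in for "decode the tuning set and measure accuracy".
best_w, best_score = random_search(
    lambda w: -sum((x - 1.0) ** 2 for x in w), num_weights=3)
print([round(x, 2) for x in best_w], round(best_score, 3))
```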
Experimental Evaluation
In the following, where possible, we present results for public datasets in a closed data scenario, i.e. training and testing models on the public dataset using a standard protocol. In addition we present evaluation results for public datasets in an open data scenario against our production setup, i.e. in which the model is trained on our own data. Finally, we show experimental results for some of the major languages on our internal datasets. Whenever possible we compare these results to the state of the art and to our previous system BIBREF14 .
IAM-OnDB
The IAM-OnDB dataset BIBREF42 is probably the most used evaluation dataset for online handwriting recognition. It consists of 298 523 characters in 86 272 word instances from a dictionary of 11 059 words written by 221 writers. We use the standard IAM-OnDB dataset separation: one training set, two validations sets and a test set containing 5 363, 1 438, 1 518 and 3 859 written lines, respectively. We tune the decoder weights using the validation set with 1 438 items and report error rates on the test set.
We perform a more extensive study of the number of layers and nodes per layer for both the raw and curve input formats to determine the optimal size of the bidirectional LSTM network (see Figure FIGREF48 , Table TABREF47 ). We first run experiments without additional feature functions (Figure FIGREF48 , solid lines), then re-compute the results with tuned weights for language models and character classes (Figure FIGREF48 , dashed lines). We observe that for both input formats, using 3 or 5 layers outperforms more shallow networks, and using more layers gives hardly any improvement. Furthermore, using 64 nodes per layer is sufficient, as wider networks give only small improvements, if at all.
Finally, we show a comparison of our old and new systems with the literature on the IAM-OnDB dataset in Table TABREF49 . Our method establishes a new state of the art result when relying on closed data using IAM-OnDB, as well as when relying on our in-house data that we use for our production system, which was not tuned for the IAM-OnDB data and for which none of the IAM-OnDB data was used for training.
To better understand where the improvements come from, we discuss the differences between the previous state-of-the-art system (Graves et al. BLSTM BIBREF24 ) and this work across four dimensions: input pre-processing and feature extraction, neural network architecture, CTC training and decoding, and model training methodology.
Our input pre-processing (Sec SECREF13 ) differs only in minor ways: the INLINEFORM0 -coordinate used is not first transformed using a high-pass filter, we don't split text-lines using gaps and we don't remove delayed strokes, nor do we do any skew and slant correction or other pre-processing.
The major difference comes from feature extraction. In contrast to the 25 features per point used in BIBREF24 , we use either 5 features (raw) or 10 features (curves). While the 25 features included both temporal (position in the time series) and spatial features (offline representation), our work uses only the temporal structure. In contrast also to our previous system BIBREF14 , using a more compact representation (and reducing the number of points for curves) allows a feature representation, including spatial structure, to be learned in the first or upper layers of the neural network.
The neural network architecture differs both in internal structure of the LSTM cell as well as in the architecture configuration. Our internal structure differs only in that we do not use peephole connections BIBREF44 .
As opposed to relying on a single bidirectional LSTM layer of width 100, we experiment with a number of configuration variants as detailed in Figure FIGREF48 . We note that it is particularly important to have more than one layer in order to learn a meaningful representation without feature extraction.
We use the CTC forward-backward training algorithm as described in BIBREF24 , and implemented in TensorFlow. The training hyperparameters are described in Section SECREF44 .
The CTC decoding algorithm incorporates feature functions similarly to how the dictionary is incorporated in the previous state-of-the-art system. However, we use more feature functions, our language models are trained on a different corpus, and the combination weights are optimized separately as described in Sec SECREF45 .
IBM-UB-1
Another publicly-accessible English-language dataset is the IBM-UB-1 dataset BIBREF25 . From the available datasets therein, we use the English query dataset, which consists of 63 268 handwritten English words. As this dataset has not been used often in the academic literature, we propose an evaluation protocol. We split this dataset into 4 parts with non-overlapping writer IDs: 47 108 items for training, 4 690 for decoder weight tuning, 6 134 for validation and 5 336 for testing.
We perform a similar set of experiments as we did for IAM-OnDB to determine the right depth and width of our neural network architecture. The results of these experiments are shown in Figure FIGREF52 . The conclusion for this dataset is similar to the conclusions we drew for the IAM-OnDB: using networks with 5 layers of bidirectional LSTMs with 64 cells each is sufficient for good accuracy. Less deep and less wide networks perform substantially worse, but larger networks only give small improvements. This is true regardless of the input processing method chosen.
We give some exemplary results and a comparison with our current production system as well as results for our previous system in Table TABREF53 . We note that our current system is about 38% and 32% better (relative) in CER and WER, respectively, when compared to the previous segment-and-decode approach. The lack of improvement in error rate when evaluating on our production system is due to the fact that our datasets contain spaces while the same setup trained solely on IBM-UB-1 does not.
Additional public datasets
We provide an evaluation of our production system trained on our in-house datasets applied to a number of publicly available benchmark datasets from the literature. Note that for all experiments presented in this section we evaluate our current live system without any tuning specific to the tasks at hand.
The ICDAR-2013 Competition for Online Handwriting Chinese Character Recognition BIBREF45 introduced a dataset for classifying the most common Chinese characters. We report the error rates in comparison to published results from the competition and more recent work done by others in Table TABREF56 .
We evaluate our live production system on this dataset. Our system was not tuned to the task at hand and was trained as a multi-character recognizer, thus it is not even aware that each sample only contains a single character. Further, our system supports 12 363 different characters while the competition data only contains 3 755 characters. Note that our system did not have access to the training data for this task at all.
Whenever our system returns more than one character for a sample, we count this as an error (this happened twice on the entire test set of 224 590 samples). Despite supporting almost four times as many characters as needed for the CASIA data and not having been tuned to the task, the accuracy of our system is still competitive with systems that were tuned for this data specifically.
In the ICFHR2018 Competition on Vietnamese Online Handwritten Text Recognition using VNOnDB BIBREF50 , our production system was evaluated against other systems. The system used in the competition is the one reported and described in this paper. Due to licensing restrictions we were unable to do any experiments on the competition training data, or specific tuning for the competition, which was not the case for the other systems mentioned here.
We participated in the two tasks that best suited the purpose of our system, specifically the "Word" (ref. table TABREF58 ) and the "Text line" (ref. table TABREF59 ) recognition levels. Even though we can technically process paragraph level inputs, our system was not built with this goal in mind.
In contrast to us, the other teams used the training and validation sets to tune their systems:
The IVTOV team's system is very similar to our system. It makes use of bidirectional LSTM layers trained end-to-end with the CTC loss. The inputs used are delta INLINEFORM0 and INLINEFORM1 coordinates, together with pen-up strokes (boolean feature quantifying whether a stroke has ended or not). They report using a two-layer network of 100 cells each and additional preprocessing for better handling the dataset.
The MyScript team submitted two systems. The first system has an explicit segmentation component along with a feed-forward network for recognizing character hypotheses, similar in formulation to our previous system BIBREF14 . In addition, they also make use of a bidirectional LSTM system trained end-to-end with the CTC loss. They do not provide additional details on which system is which.
We note that the modeling stacks of the systems out-performing ours in this competition are not fundamentally different (to the best of our knowledge, according to released descriptions). We therefore believe that our system might perform comparably if trained on the competition training dataset as well.
On our internal test set of Vietnamese data, our new system obtains a CER of 3.3% which is 54% better (relative) than the old Segment-and-Decode system which had a CER of 7.2% (see also Figure FIGREF69 ).
Tuning neural network parameters on our internal data
Our in-house datasets consist of various types of training data, the amount of which varies by script. Sources of training data include data collected through prompting, commercially available data, artificially inflated data, and labeled/self-labeled anonymized recognition requests (see BIBREF14 for a more detailed description). The number of training samples varies from tens of thousands to several million per script, depending on the complexity and usage.
The best configurations for our production systems were identified by running multiple experiments over a range of layer depths and widths on our Latin script datasets. For the Latin script experiments shown in Figure FIGREF63 , the training set we used was a mixture of data from all the Latin-script languages we support and evaluation is done on an English validation dataset, also used for the English evaluation in Table TABREF68 .
Similarly to experiments depicted in Figure FIGREF48 and Figure FIGREF52 , increasing the depth and width of the network architecture brings diminishing returns fairly quickly. However, overfitting is less pronounced, particularly when relying on Bézier curve inputs, highlighting that our datasets are more complex in nature.
In all our experiments using our production datasets, the Bézier curve inputs outperformed the raw inputs both in terms of accuracy and recognition latency, and are thus used throughout in our production models. We hypothesize that this is due to the implicit normalization of sampling rates and thus line smoothness of the input data. The input data of our production datasets come from a wide variety of data sources including data collection and crowd sourcing from many different types of devices, unlike academic datasets such as IBM-UB-1 or IAM-OnDB which were collected under standardized conditions.
System Performance and Discussion
The setup described throughout this paper that obtained the best results relies on input processing with Bézier spline interpolation (Sec. UID18 ), followed by 4–5 layers of varying width bidirectional LSTMs, followed by a final softmax layer. For each script, we experimentally determined the best configuration through multiple training runs.
We performed an ablation study with the best configurations for each of the eight most important scripts by number of users and compare the results with our previous work BIBREF14 (Table TABREF68 ). The largest relative improvement comes from the overall network architecture stack, followed by the use of the character language model and the other feature functions.
In addition, we show the relative improvement in error rates on the languages for which we have evaluation datasets of more than 2 000 items (Figure FIGREF69 ). The new architecture performs between 20%–40% (relative) better over almost all languages.
Differences Between IAM-OnDB, IBM-UB-1 and our internal datasets
To understand how the different datasets relate to each other, we performed a set of experiments and evaluations with the goal of better characterizing the differences between the datasets.
We trained a recognizer on each of the three training sets separately, then evaluated each system on all three test sets (Table TABREF65 ). The neural network architecture is the same as the one we determined earlier (5 layers bidirectional LSTMs of 64 cells each) with the same feature functions, with weights tuned on the corresponding tuning dataset. The inputs are processed using Bézier curves.
To better understand the source of discrepancy when training on IAM-OnDB and evaluating on IBM-UB-1, we note the different characteristics of the datasets:
IBM-UB-1 has predominantly cursive writing, while IAM-OnDB has mostly printed writing
IBM-UB-1 contains single words, while IAM-OnDB has lines of space-separated words
This results in models trained on the IBM-UB-1 dataset not being able to predict spaces as they are not present in the dataset's alphabet. In addition, the printed writing style of IAM-OnDB makes recognition harder when evaluating cursive writing from IBM-UB-1. It is likely that the lack of structure through words-only data makes recognizing IAM-OnDB on a system trained on IBM-UB-1 harder than vice-versa.
Systems trained on IBM-UB-1 or IAM-OnDB alone perform significantly worse on our internal datasets, as our data distribution covers a wide range of use-cases not necessarily relevant to, or present in, the two academic datasets: sloppy handwriting, overlapping characters for handling writing on small input surfaces, non-uniform sampling rates, and partially rotated inputs.
The network trained on the internal dataset performs well on all three datasets. It performs better on IAM-OnDB than the system trained only thereon, but worse for IBM-UB-1. We believe that using only cursive words when training allows the network to better learn the sample characteristics, than when learning about space separation and other structure properties not present in IBM-UB-1.
Conclusion
We describe the online handwriting recognition system that we currently use at Google for 102 languages in 26 scripts. The system is based on an end-to-end trained neural network and replaces our old Segment-and-Decode system. Recognition accuracy of the new system improves by 20% to 40% relative depending on the language while using smaller and faster models. We encode the touch inputs using a Bézier curve representation which performs at least as well as raw touch inputs but which also allows for a faster recognition because the input sequence representation is shorter.
We further compare the performance of our system to the state of the art on publicly available datasets such as IAM-OnDB, IBM-UB-1, and CASIA and improve over the previous best published result on IAM-OnDB. | thai |
b8d5e9fa08247cb4eea835b19377262d86107a9d | b8d5e9fa08247cb4eea835b19377262d86107a9d_0 | Q: What datasets did they use?
Text: Introduction
In this paper we discuss online handwriting recognition: Given a user input in the form of an ink, i.e. a list of touch or pen strokes, output the textual interpretation of this input. A stroke is a sequence of points INLINEFORM0 with position INLINEFORM1 and timestamp INLINEFORM2 .
Figure FIGREF1 illustrates example inputs to our online handwriting recognition system in different languages and scripts. The left column shows examples in English with different writing styles, with different types of content, and that may be written on one or multiple lines. The center column shows examples from five different alphabetic languages similar in structure to English: German, Russian, Vietnamese, Greek, and Georgian. The right column shows scripts that are significantly different from English: Chinese has a much larger set of more complex characters, and users often overlap characters with one another. Korean, while an alphabetic language, groups letters in syllables leading to a large “alphabet” of syllables. Hindi writing often contains a connecting ‘Shirorekha’ line and characters can form larger structures (grapheme clusters) which influence the written shape of the components. Arabic is written right-to-left (with embedded left-to-right sequences used for numbers or English names) and characters change shape depending on their position within a word. Emoji are non-text Unicode symbols that we also recognize.
Online handwriting recognition has recently been gaining importance for multiple reasons: (a) An increasing number of people in emerging markets are obtaining access to computing devices, many exclusively using mobile devices with touchscreens. Many of these users have native languages and scripts that are not as easily typed as English, e.g. due to the size of the alphabet or the use of grapheme clusters which make it difficult to design an intuitive keyboard layout BIBREF0 . (b) More and more large mobile devices with styluses are becoming available, such as the iPad Pro, Microsoft Surface devices, and Chromebooks with styluses.
Early work in online handwriting recognition looked at segment-and-decode classifiers, such as the Newton BIBREF1 . Another line of work BIBREF2 focused on solving online handwriting recognition by making use of Hidden Markov Models (HMMs) BIBREF3 or hybrid approaches combining HMMs and Feed-forward Neural Networks BIBREF4 . The first HMM-free models were based on Time Delay Neural Networks (TDNNs) BIBREF5 , BIBREF6 , BIBREF7 , and more recent work focuses on Recurrent Neural Network (RNN) variants such as Long-Short-Term-Memory networks (LSTMs) BIBREF8 , BIBREF9 .
How to represent online handwriting data has been a research topic for a long time. Early approaches were feature-based, where each point is represented using a set of features BIBREF6 , BIBREF10 , BIBREF1 , or using global features to represent entire characters BIBREF6 . More recently, the deep learning revolution has swept away most feature engineering efforts and replaced them with learned representations in many domains, e.g. speech BIBREF11 , computer vision BIBREF12 , and natural language processing BIBREF13 .
Together with architecture changes, training methodologies also changed, moving from relying on explicit segmentation BIBREF7 , BIBREF1 , BIBREF14 to implicit segmentation using the Connectionist Temporal Classification (CTC) loss BIBREF15 , or Encoder-Decoder approaches trained with Maximum Likelihood Estimation BIBREF16 . Further recent work is also described in BIBREF17 .
The transition to more complex network architectures and end-to-end training can be associated with breakthroughs in related fields focused on sequence understanding where deep learning methods have outperformed “traditional” pattern recognition methods, e.g. in speech recognition BIBREF18 , BIBREF19 , OCR BIBREF20 , BIBREF21 , offline handwriting recognition BIBREF22 , and computer vision BIBREF23 .
In this paper we describe our new online handwriting recognition system based on deep learning methods. It replaces our previous segment-and-decode system BIBREF14 , which first over-segments the ink, then groups the segments into character hypotheses, and computes features for each character hypothesis which are then classified as characters using a rather shallow neural network. The recognition result is then obtained using a best path search decoding algorithm on the lattice of hypotheses incorporating additional knowledge sources such as language models. This system relies on numerous pre-processing, segmentation, and feature extraction heuristics which are no longer present in our new system. The new system reduces the amount of customization required, and consists of a simple stack of bidirectional LSTMs (BLSTMs), a single Logits layer, and the CTC loss BIBREF24 (Sec. SECREF2 ) trained for each script (Sec. SECREF3 ). To support potentially many languages per script (see Table TABREF5 ), language-specific language models and feature functions are used during decoding (Sec. SECREF38 ). E.g. we have a single recognition model for Arabic script which is combined with specific language models and feature functions for our Arabic, Persian, and Urdu language recognizers. Table TABREF5 shows the full list of scripts and languages that we currently support.
The new models are more accurate (Sec. SECREF4 ), smaller, and faster (Table TABREF68 ) than our previous segment-and-decode models and eliminate the need for a large number of engineered features and heuristics.
We present an extensive comparison of the differences in recognition accuracy for eight languages (Sec. SECREF5 ) and compare the accuracy of models trained on publicly available datasets where available (Sec. SECREF4 ). In addition, we propose a new standard experimental protocol for the IBM-UB-1 dataset BIBREF25 (Sec. SECREF50 ) to enable easier comparison between approaches in the future.
The main contributions of our paper are as follows:
End-to-end Model Architecture
Our handwriting recognition model draws its inspiration from research aimed at building end-to-end transcription models in the context of handwriting recognition BIBREF24 , optical character recognition BIBREF21 , and acoustic modeling in speech recognition BIBREF18 . The model architecture is constructed from common neural network blocks, i.e. bidirectional LSTMs and fully-connected layers (Figure FIGREF12 ). It is trained in an end-to-end manner using the CTC loss BIBREF24 .
Our architecture is similar to what is often used in the context of acoustic modeling for speech recognition BIBREF19 , in which it is referred to as a CLDNN (Convolutions, LSTMs, and DNNs), yet we differ from it in four points. Firstly, we do not use convolution layers, which in our own experience do not add value for large networks trained on large datasets of relatively short (compared to speech input) sequences typically seen in handwriting recognition. Secondly, we use bidirectional LSTMs, which due to latency constraints is not feasible in speech recognition systems. Thirdly, our architecture does not make use of additional fully-connected layers before and after the bidirectional LSTM layers. And finally, we train our system using the CTC loss, as opposed to the HMMs used in BIBREF19 .
This structure makes many components of our previous system BIBREF14 unnecessary, e.g. for feature extraction and segmentation. The heuristics that were hard-coded into our previous system, e.g. stroke-reordering and character hypothesis building, are now implicitly learned from the training data.
The model takes as input a time series INLINEFORM0 of length INLINEFORM1 encoding the user input (Sec. SECREF13 ) and passes it through several bidirectional LSTM layers BIBREF26 which learn the structure of characters (Sec. SECREF34 ).
The output of the final LSTM layer is passed through a softmax layer (Sec. SECREF35 ) leading to a sequence of probability distributions over characters for each time step.
For CTC decoding (Sec. SECREF44 ) we use beam search to combine the softmax outputs with character-based language models, word-based language models, and information about language-specific characters as in our previous system BIBREF14 .
Input Representation
In our earlier paper BIBREF14 we presented results on our datasets with a model similar to the one proposed in BIBREF24 . In that model we used 23 per-point features (similarly to BIBREF6 ) as described in our segment-and-decode system to represent the input. In further experimentation we found that in substantially deeper and wider models, engineered features are unnecessary and their removal leads to better results. This confirms the observation that learned representations often outperform handcrafted features in scenarios in which sufficient training data is available, e.g. in computer vision BIBREF27 and in speech recognition BIBREF28 . In the experiments presented here, we use two representations:
The simplest representation of stroke data is as a sequence of touch points. In our current system, we use a sequence of 5-dimensional points INLINEFORM0 where INLINEFORM1 are the coordinates of the INLINEFORM2 th touchpoint, INLINEFORM3 is the timestamp of the touchpoint since the first touch point in the current observation in seconds, INLINEFORM4 indicates whether the point corresponds to a pen-up ( INLINEFORM5 ) or pen-down ( INLINEFORM6 ) stroke, and INLINEFORM7 indicates the start of a new stroke ( INLINEFORM8 otherwise).
In order to keep the system as flexible as possible with respect to differences in the writing surface, e.g. area shape, size, spatial resolution, and sampling rate, we perform some minimal preprocessing:
Normalization of INLINEFORM0 and INLINEFORM1 coordinates, by shifting in INLINEFORM2 such that INLINEFORM3 , and shifting and scaling the writing area isometrically such that the INLINEFORM4 coordinate spans the range between 0 and 1. In cases where the bounding box of the writing area is unknown we use a surrogate area 20% larger than the observed range of touch points.
Equidistant linear resampling along the strokes with INLINEFORM0 , i.e. a line of length 1 will have 20 points.
We do not assume that words are written on a fixed baseline or that the input is horizontal. As in BIBREF24 , we use the differences between consecutive points for the INLINEFORM0 coordinates and the time INLINEFORM1 such that our input sequence is INLINEFORM2 for INLINEFORM3 , and INLINEFORM4 for INLINEFORM5 .
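A minimal sketch of building this raw 5-dimensional delta representation from strokes is given below; the pen-up convention (marking the last point of each stroke) and the normalization details are simplifying assumptions for illustration, and the resampling step is omitted.

```python
import numpy as np

def strokes_to_sequence(strokes):
    """strokes: list of lists of (x, y, t) tuples; returns an (N, 5) array."""
    pts = []
    for stroke in strokes:
        for i, (x, y, t) in enumerate(stroke):
            pen_up = 1.0 if i == len(stroke) - 1 else 0.0   # simplification
            new_stroke = 1.0 if i == 0 else 0.0
            pts.append([x, y, t, pen_up, new_stroke])
    pts = np.array(pts, dtype=np.float32)

    # Shift time to start at 0; scale so the y coordinate spans [0, 1].
    pts[:, 2] -= pts[0, 2]
    y_min, y_max = pts[:, 1].min(), pts[:, 1].max()
    scale = (y_max - y_min) or 1.0
    pts[:, 0] = (pts[:, 0] - pts[:, 0].min()) / scale
    pts[:, 1] = (pts[:, 1] - y_min) / scale

    # Use differences between consecutive points for x, y and t.
    seq = pts.copy()
    seq[1:, :3] -= pts[:-1, :3]
    seq[0, :3] = 0.0
    return seq

seq = strokes_to_sequence([[(0, 0, 0.00), (1, 2, 0.05), (2, 3, 0.10)],
                           [(3, 3, 0.20), (4, 4, 0.25)]])
print(seq.shape)  # (5, 5)
```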
However simple, the raw input data has some drawbacks, i.e.
Resolution: Not all input devices sample inputs at the same rate, resulting in different point densities along the input strokes, requiring resampling which may inadvertently normalize-out details in the input.
Length: We choose the (re-)sampling rate such as to represent the smallest features well, which leads to over-sampling in less interesting parts of the stroke, e.g. in straight lines.
Model complexity: The model has to learn to map small consecutive steps to larger global features.
Bézier curves are a natural way to describe trajectories in space, and have been used to represent online handwriting data in the past, yet mostly as a means of removing outliers in the input data BIBREF29 , up-sampling sparse data BIBREF6 , or for rendering handwriting data smoothly on a screen BIBREF30 . Since a sequence of Bézier curves can represent a potentially long point sequence compactly, irrespective of the original sampling rate, we experiment with representing a sequence of input points as a sequence of parametric cubic polynomials, and using these as inputs to the recognition model.
These Bézier curves for INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are cubic polynomials in INLINEFORM3 , i.e.: DISPLAYFORM0
We start by normalizing the size of the entire ink such that the INLINEFORM0 values are within the range INLINEFORM1 , similar to how we process it for raw points. The time values are scaled linearly to match the length of the ink such that DISPLAYFORM0
in order to obtain values in the same numerical range as INLINEFORM0 and INLINEFORM1 . This sets the time difference between the first and last point of the stroke to be equal to the total spatial length of the stroke.
For each stroke in an ink, the coefficients INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are computed by minimizing the sum of squared errors (SSE) between each observed point INLINEFORM3 and its corresponding closest point (defined by INLINEFORM4 ) on the Bézier curve: DISPLAYFORM0
Where INLINEFORM0 is the number of points in the stroke. Given a set of coordinates INLINEFORM1 , computing the coefficients corresponds to solving the following linear system of equations: DISPLAYFORM0
which can be solved exactly for INLINEFORM0 , and in the least-squares sense otherwise, e.g. by solving the normal equations DISPLAYFORM0
for the coefficients INLINEFORM0 . We alternate between minimizing the SSE in eq. ( EQREF24 ) and finding the corresponding points INLINEFORM1 , until convergence. The coordinates INLINEFORM2 are updated using a Newton step on DISPLAYFORM0
which is zero when INLINEFORM0 is orthogonal to the direction of the curve INLINEFORM1 .
If (a) the curve cannot fit the points well (SSE error is too large) or if (b) the curve has too sharp bends (arc length longer than 3 times the endpoint distance) we split the curve into two parts. We determine the split point in case (a) by finding the triplet of consecutive points with the smallest angle, and in case (b) as the point closest to the maximum local curvature along the entire Bézier curve. This heuristic is applied recursively until both the curve matching criteria are met.
As a final step, to remove spurious breakpoints, consecutive curves that can be represented by a single curve are stitched back together, resulting in a compact set of Bézier curves representing the data within the above constraints. For each consecutive pair of curves, we try to fit a single curve using the combined set of underlying points. If the fit agrees with the above criteria, we replace the two curves by the new one. This is applied repeatedly until no merging happens anymore.
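The sketch below fits a single cubic Bézier segment to a stroke by linear least squares using a fixed chord-length parameterization of the points; the alternating Newton refinement of the parameters and the splitting/merging heuristics described above are omitted, and the curve is written in control-point (Bernstein) form rather than as raw polynomial coefficients.

```python
import numpy as np

def bernstein_matrix(s):
    # Cubic Bernstein basis evaluated at parameters s in [0, 1].
    s = np.asarray(s)[:, None]
    return np.hstack([(1 - s) ** 3, 3 * s * (1 - s) ** 2,
                      3 * s ** 2 * (1 - s), s ** 3])

def fit_cubic_bezier(points):
    """points: (n, 2) array of x, y; returns 4 control points of shape (4, 2)."""
    points = np.asarray(points, dtype=np.float64)
    chord = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(chord)])
    s /= s[-1] if s[-1] > 0 else 1.0
    A = bernstein_matrix(s)
    ctrl, *_ = np.linalg.lstsq(A, points, rcond=None)
    return ctrl

pts = np.stack([np.linspace(0, 1, 30), np.sin(np.linspace(0, 3, 30))], axis=1)
print(fit_cubic_bezier(pts).round(3))
```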
Since the Bézier coefficients INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 may vary significantly in range, each curve is fed to the network as a 10-dimensional vector consisting of:
the vector between the endpoints (Figure FIGREF28 , blue vector, 2 values),
the distance between the control points and the endpoints relative to the distance between the endpoints (green dashed lines, 2 values),
the two angles between each control point and the endpoints (green arcs, 2 values),
the time coefficients INLINEFORM0 , INLINEFORM1 and INLINEFORM2 (not shown),
a boolean value indicating whether this is a pen-up or pen-down curve (not shown).
Due to the normalization of the INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 coordinates, as well as the constraints on the curves themselves, most of the resulting values are in the range INLINEFORM3 .
The input data is of higher dimension than the raw inputs described in Sec. UID14 , i.e. 10 vs. 5 dimensional, but the input sequence itself is roughly INLINEFORM0 shorter, making them a good choice for latency-sensitive models.
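A hypothetical assembly of the 10-dimensional per-curve vector from a fitted segment (in control-point form) is sketched below; the exact ordering, angle conventions, and time-coefficient handling are assumptions for illustration.

```python
import numpy as np

def curve_features(ctrl, time_coeffs, pen_down):
    """ctrl: (4, 2) control points P0..P3; time_coeffs: 3 time-polynomial values."""
    p0, p1, p2, p3 = ctrl
    end_vec = p3 - p0                                  # vector between endpoints (2)
    end_len = np.linalg.norm(end_vec) or 1.0
    d1 = np.linalg.norm(p1 - p0) / end_len             # relative control distances (2)
    d2 = np.linalg.norm(p2 - p3) / end_len
    angle = lambda v: np.arctan2(v[1], v[0])
    a1 = angle(p1 - p0) - angle(end_vec)               # angles vs. the endpoints (2)
    a2 = angle(p2 - p3) - angle(-end_vec)
    return np.array([end_vec[0], end_vec[1], d1, d2, a1, a2,
                     *time_coeffs, float(pen_down)])   # + time (3) + pen state (1)

ctrl = np.array([[0.0, 0.0], [0.2, 0.4], [0.6, 0.5], [1.0, 0.2]])
print(curve_features(ctrl, (0.1, 0.2, 0.3), pen_down=True).shape)  # (10,)
```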
In most of the cases, as highlighted through the experimental sections in this paper, the curve representations contribute to better recognition accuracy and speed of our models. However, there are also situations where the curve representation introduces mistakes: punctuation marks become more similar to each other and sometimes are wrongly recognized, capitalization errors appear from time to time and in some cases the candidate recognitions corresponding to higher language model scores are preferred.
Bidirectional Long-Short-Term-Memory Recurrent Neural Networks
LSTMs BIBREF31 have become one of the most commonly used RNN cells because they are easy to train and give good results BIBREF32 . In all experiments we use bidirectional LSTMs, i.e. we process the input sequence forward and backward and merge the output states of each layer before feeding them to the next layer. The exact number of layers and nodes is determined empirically for each script. We give an overview of the impact of the number of nodes and layers in section SECREF4 . We also list the configurations for several scripts in our production system, as of this writing.
Softmax Layer
The output of the LSTM layers at each timestep is fed into a softmax layer to get a probability distribution over the INLINEFORM0 possible characters in the script (including spaces, punctuation marks, numbers or other special characters), plus a blank label required by the CTC loss and decoder.
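For illustration, a toy greedy CTC decode of such per-timestep distributions is shown below: take the argmax class at each step, collapse repeats, and drop the blank label. The production system instead uses beam search together with the feature functions described in the following sections.

```python
import numpy as np

def greedy_ctc_decode(probs, blank_id):
    """probs: (T, num_classes) softmax outputs; returns the collapsed label sequence."""
    best = np.argmax(probs, axis=1)
    out, prev = [], None
    for c in best:
        if c != prev and c != blank_id:
            out.append(int(c))
        prev = c
    return out

# Perfectly peaked toy outputs over 3 characters plus a blank (id 3).
probs = np.eye(4)[[0, 0, 3, 1, 1, 2]]
print(greedy_ctc_decode(probs, blank_id=3))  # [0, 1, 2]
```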
Decoding
The output of the softmax layer is a sequence of INLINEFORM0 time steps of INLINEFORM1 classes that we decode using CTC decoding BIBREF15 . The logits from the softmax layer are combined with language-specific prior knowledge (cp. Sec. SECREF38 ). For each of these additional knowledge sources we learn a weight (called “decoder weight” in the following) and combine them linearly (cp. Sec. SECREF3 ). The learned combination is used as described in BIBREF33 to guide the beam search during decoding.
This combination of different knowledge sources allows us to train one recognition model per script (e.g. Latin script, or Cyrillic script) and then use it to serve multiple languages (see Table TABREF5 ).
Feature Functions: Language Models and Character Classes
Similarly to our previous work BIBREF14 , we define several scoring functions, which we refer to as feature functions. The goal of these feature functions is to introduce prior knowledge about the underlying language into the system. The introduction of recurrent neural networks has reduced the need for many of them and we now use only the following three:
Character Language Models: For each language we support, we build a 7-gram language model over Unicode codepoints from a large web-mined text corpus using Stupid back-off BIBREF35 . The final files are pruned to 10 million 7-grams each. Compared to our previous system BIBREF14 , we found that language model size has a smaller impact on the recognition accuracy, which is likely due to the capability of recurrent neural networks to capture dependencies between consecutive characters. We therefore use smaller language models over shorter contexts.
Word Language Models: For languages using spaces to separate words, we also use a word-based language model trained on a similar corpus as the character language models BIBREF36 , BIBREF37 , using 3-grams pruned to between 1.25 million and 1.5 million entries.
Character Classes: We add a scoring heuristic which boosts the score of characters from the language's alphabet. This feature function provides a strong signal for rare characters that may not be recognized confidently by the LSTM, and which the other language models might not weigh heavily enough to be recognized. This feature function was inspired by our previous system BIBREF14 .
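A toy version of this character-class boost could look as follows; the alphabet, the boost value, and the per-character scoring are placeholders for illustration.

```python
def char_class_score(hypothesis, alphabet, boost=1.0):
    # Reward each character that belongs to the language's alphabet.
    return boost * sum(1.0 for ch in hypothesis if ch in alphabet)

latin = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
print(char_class_score("hello", latin))   # 5.0
print(char_class_score("he7lo", latin))   # 4.0, the digit is not boosted
```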
In Section SECREF4 we provide an experimental evaluation of how much each of these feature functions contributes to the final result for several languages.
Training
The training of our system happens in two stages, on two different datasets:
Using separate datasets is important because the neural network learns the local appearance as well as an implicit language model from the training data. It will be overconfident on its training data and thus learning the decoder weights on the same dataset could result in weights biased towards the neural network model.
Connectionist Temporal Classification Loss
As our training data does not contain frame-aligned labels, we rely on the CTC loss BIBREF15 for training which treats the alignment between inputs and labels as a hidden variable. CTC training introduces an additional blank label which is used internally for learning alignments jointly with character hypotheses, as described in BIBREF15 .
We train all neural network weights jointly using the standard TensorFlow BIBREF34 implementation of CTC training using the Adam Optimizer BIBREF39 with a batch size of 8, a learning rate of INLINEFORM0 , and gradient clipping such that the gradient INLINEFORM1 -norm is INLINEFORM2 . Additionally, to improve the robustness of our models and prevent overfitting, we train our models using random dropout BIBREF40 , BIBREF41 after each LSTM layer with a dropout rate of INLINEFORM3 . We train until the error rate on the evaluation dataset no longer improves for 5 million steps.
Bayesian Optimization for Tuning Decoder Weights
To optimize the decoder weights, we rely on the Google Vizier service and its default algorithm, specifically batched Gaussian process bandits, and expected improvement as the acquisition function BIBREF38 .
For each recognizer training we start 7 Vizier studies, each performing 500 individual trials, and then we pick the configuration that performed best across all of these trials. We experimentally found that using 7 separate studies with different random initializations regularly leads to better results than running a single study once. We found that using more than 500 trials per study does not lead to any additional improvement.
For each script we train these weights on a subset of the languages for which we have sufficient data, and transfer the weights to all the other languages. E.g. for the Latin-script languages, we train the decoder weights on English and German, and use the resulting weights for all languages in the first row of Table TABREF5 .
Experimental Evaluation
In the following, where possible, we present results for public datasets in a closed data scenario, i.e. training and testing models on the public dataset using a standard protocol. In addition we present evaluation results for public datasets in an open data scenario against our production setup, i.e. in which the model is trained on our own data. Finally, we show experimental results for some of the major languages on our internal datasets. Whenever possible we compare these results to the state of the art and to our previous system BIBREF14 .
IAM-OnDB
The IAM-OnDB dataset BIBREF42 is probably the most used evaluation dataset for online handwriting recognition. It consists of 298 523 characters in 86 272 word instances from a dictionary of 11 059 words written by 221 writers. We use the standard IAM-OnDB dataset separation: one training set, two validations sets and a test set containing 5 363, 1 438, 1 518 and 3 859 written lines, respectively. We tune the decoder weights using the validation set with 1 438 items and report error rates on the test set.
We perform a more extensive study of the number of layers and nodes per layer for both the raw and curve input formats to determine the optimal size of the bidirectional LSTM network (see Figure FIGREF48 , Table TABREF47 ). We first run experiments without additional feature functions (Figure FIGREF48 , solid lines), then re-compute the results with tuned weights for language models and character classes (Figure FIGREF48 , dashed lines). We observe that for both input formats, using 3 or 5 layers outperforms more shallow networks, and using more layers gives hardly any improvement. Furthermore, using 64 nodes per layer is sufficient, as wider networks give only small improvements, if at all.
Finally, we show a comparison of our old and new systems with the literature on the IAM-OnDB dataset in Table TABREF49 . Our method establishes a new state of the art result when relying on closed data using IAM-OnDB, as well as when relying on our in-house data that we use for our production system, which was not tuned for the IAM-OnDB data and for which none of the IAM-OnDB data was used for training.
To better understand where the improvements come from, we discuss the differences between the previous state-of-the-art system (Graves et al. BLSTM BIBREF24 ) and this work across four dimensions: input pre-processing and feature extraction, neural network architecture, CTC training and decoding, and model training methodology.
Our input pre-processing (Sec SECREF13 ) differs only in minor ways: the INLINEFORM0 -coordinate used is not first transformed using a high-pass filter, we don't split text-lines using gaps and we don't remove delayed strokes, nor do we do any skew and slant correction or other pre-processing.
The major difference comes from feature extraction. In contrast to the 25 features per point used in BIBREF24 , we use either 5 features (raw) or 10 features (curves). While the 25 features included both temporal (position in the time series) and spatial features (offline representation), our work uses only the temporal structure. In contrast also to our previous system BIBREF14 , using a more compact representation (and reducing the number of points for curves) allows a feature representation, including spatial structure, to be learned in the first or upper layers of the neural network.
The neural network architecture differs both in internal structure of the LSTM cell as well as in the architecture configuration. Our internal structure differs only in that we do not use peephole connections BIBREF44 .
As opposed to relying on a single bidirectional LSTM layer of width 100, we experiment with a number of configuration variants as detailed in Figure FIGREF48 . We note that it is particularly important to have more than one layer in order to learn a meaningful representation without feature extraction.
We use the CTC forward-backward training algorithm as described in BIBREF24 , and implemented in TensorFlow. The training hyperparameters are described in Section SECREF44 .
The CTC decoding algorithm incorporates feature functions similarly to how the dictionary is incorporated in the previous state-of-the-art system. However, we use more feature functions, our language models are trained on a different corpus, and the combination weights are optimized separately as described in Sec SECREF45 .
IBM-UB-1
Another publicly-accessible English-language dataset is the IBM-UB-1 dataset BIBREF25 . From the available datasets therein, we use the English query dataset, which consists of 63 268 handwritten English words. As this dataset has not been used often in the academic literature, we propose an evaluation protocol. We split this dataset into 4 parts with non-overlapping writer IDs: 47 108 items for training, 4 690 for decoder weight tuning, 6 134 for validation and 5 336 for testing.
We perform a similar set of experiments as we did for IAM-OnDB to determine the right depth and width of our neural network architecture. The results of these experiments are shown in Figure FIGREF52 . The conclusion for this dataset is similar to the conclusions we drew for the IAM-OnDB: using networks with 5 layers of bidirectional LSTMs with 64 cells each is sufficient for good accuracy. Less deep and less wide networks perform substantially worse, but larger networks only give small improvements. This is true regardless of the input processing method chosen.
We give some exemplary results and a comparison with our current production system as well as results for our previous system in Table TABREF53 . We note that our current system is about 38% and 32% better (relative) in CER and WER, respectively, when compared to the previous segment-and-decode approach. The lack of improvement in error rate when evaluating on our production system is due to the fact that our datasets contain spaces while the same setup trained solely on IBM-UB-1 does not.
Additional public datasets
We provide an evaluation of our production system trained on our in-house datasets applied to a number of publicly available benchmark datasets from the literature. Note that for all experiments presented in this section we evaluate our current live system without any tuning specific to the tasks at hand.
The ICDAR-2013 Competition for Online Handwriting Chinese Character Recognition BIBREF45 introduced a dataset for classifying the most common Chinese characters. We report the error rates in comparison to published results from the competition and more recent work done by others in Table TABREF56 .
We evaluate our live production system on this dataset. Our system was not tuned to the task at hand and was trained as a multi-character recognizer, thus it is not even aware that each sample only contains a single character. Further, our system supports 12 363 different characters while the competition data only contains 3 755 characters. Note that our system did not have access to the training data for this task at all.
Whenever our system returns more than one character for a sample, we count this as an error (this happened twice on the entire test set of 224 590 samples). Despite supporting almost four times as many characters as needed for the CASIA data and not having been tuned to the task, the accuracy of our system is still competitive with systems that were tuned for this data specifically.
In the ICFHR2018 Competition on Vietnamese Online Handwritten Text Recognition using VNOnDB BIBREF50 , our production system was evaluated against other systems. The system used in the competition is the one reported and described in this paper. Due to licensing restrictions we were unable to do any experiments on the competition training data, or specific tuning for the competition, which was not the case for the other systems mentioned here.
We participated in the two tasks that best suited the purpose of our system, specifically the "Word" (ref. table TABREF58 ) and the "Text line" (ref. table TABREF59 ) recognition levels. Even though we can technically process paragraph level inputs, our system was not built with this goal in mind.
In contrast to us, the other teams used the training and validation sets to tune their systems:
The IVTOV team's system is very similar to our system. It makes use of bidirectional LSTM layers trained end-to-end with the CTC loss. The inputs used are delta INLINEFORM0 and INLINEFORM1 coordinates, together with pen-up strokes (boolean feature quantifying whether a stroke has ended or not). They report using a two-layer network of 100 cells each and additional preprocessing for better handling the dataset.
The MyScript team submitted two systems. The first system has an explicit segmentation component along with a feed-forward network for recognizing character hypotheses, similar in formulation to our previous system BIBREF14 . In addition, they also make use of a bidirectional LSTM system trained end-to-end with the CTC loss. They do not provide additional details on which system is which.
We note that the modeling stacks of the systems out-performing ours in this competition are not fundamentally different (to the best of our knowledge, according to released descriptions). We therefore believe that our system might perform comparably if trained on the competition training dataset as well.
On our internal test set of Vietnamese data, our new system obtains a CER of 3.3% which is 54% better (relative) than the old Segment-and-Decode system which had a CER of 7.2% (see also Figure FIGREF69 ).
Tuning neural network parameters on our internal data
Our in-house datasets consist of various types of training data, the amount of which varies by script. Sources of training data include data collected through prompting, commercially available data, artificially inflated data, and labeled/self-labeled anonymized recognition requests (see BIBREF14 for a more detailed description). The number of training samples varies from tens of thousands to several million per script, depending on the complexity and usage.
The best configurations for our production systems were identified by running multiple experiments over a range of layer depths and widths on our Latin script datasets. For the Latin script experiments shown in Figure FIGREF63 , the training set we used was a mixture of data from all the Latin-script languages we support and evaluation is done on an English validation dataset, also used for the English evaluation in Table TABREF68 .
Similarly to experiments depicted in Figure FIGREF48 and Figure FIGREF52 , increasing the depth and width of the network architecture brings diminishing returns fairly quickly. However, overfitting is less pronounced, particularly when relying on Bézier curve inputs, highlighting that our datasets are more complex in nature.
In all our experiments using our production datasets, the Bézier curve inputs outperformed the raw inputs both in terms of accuracy and recognition latency, and are thus used throughout in our production models. We hypothesize that this is due to the implicit normalization of sampling rates and thus line smoothness of the input data. The input data of our production datasets come from a wide variety of data sources including data collection and crowd sourcing from many different types of devices, unlike academic datasets such as IBM-UB-1 or IAM-OnDB which were collected under standardized conditions.
System Performance and Discussion
The setup described throughout this paper that obtained the best results relies on input processing with Bézier spline interpolation (Sec. UID18 ), followed by 4–5 layers of varying width bidirectional LSTMs, followed by a final softmax layer. For each script, we experimentally determined the best configuration through multiple training runs.
We performed an ablation study with the best configurations for each of the eight most important scripts by number of users and compare the results with our previous work BIBREF14 (Table TABREF68 ). The largest relative improvement comes from the overall network architecture stack, followed by the use of the character language model and the other feature functions.
In addition, we show the relative improvement in error rates on the languages for which we have evaluation datasets of more than 2 000 items (Figure FIGREF69 ). The new architecture performs between 20%–40% (relative) better over almost all languages.
Differences Between IAM-OnDB, IBM-UB-1 and our internal datasets
To understand how the different datasets relate to each other, we performed a set of experiments and evaluations with the goal of better characterizing the differences between the datasets.
We trained a recognizer on each of the three training sets separately, then evaluated each system on all three test sets (Table TABREF65 ). The neural network architecture is the same as the one we determined earlier (5 layers bidirectional LSTMs of 64 cells each) with the same feature functions, with weights tuned on the corresponding tuning dataset. The inputs are processed using Bézier curves.
To better understand the source of discrepancy when training on IAM-OnDB and evaluating on IBM-UB-1, we note the different characteristics of the datasets:
IBM-UB-1 has predominantly cursive writing, while IAM-OnDB has mostly printed writing
IBM-UB-1 contains single words, while IAM-OnDB has lines of space-separated words
This results in models trained on the IBM-UB-1 dataset not being able to predict spaces as they are not present in the dataset's alphabet. In addition, the printed writing style of IAM-OnDB makes recognition harder when evaluating cursive writing from IBM-UB-1. It is likely that the lack of structure through words-only data makes recognizing IAM-OnDB on a system trained on IBM-UB-1 harder than vice-versa.
Systems trained on IBM-UB-1 or IAM-OnDB alone perform significantly worse on our internal datasets, as our data distribution covers a wide range of use-cases not necessarily relevant to, or present in, the two academic datasets: sloppy handwriting, overlapping characters for handling writing on small input surfaces, non-uniform sampling rates, and partially rotated inputs.
The network trained on the internal dataset performs well on all three datasets. It performs better on IAM-OnDB than the system trained only thereon, but worse for IBM-UB-1. We believe that using only cursive words when training allows the network to better learn the sample characteristics, than when learning about space separation and other structure properties not present in IBM-UB-1.
Conclusion
We describe the online handwriting recognition system that we currently use at Google for 102 languages in 26 scripts. The system is based on an end-to-end trained neural network and replaces our old Segment-and-Decode system. Recognition accuracy of the new system improves by 20% to 40% relative depending on the language while using smaller and faster models. We encode the touch inputs using a Bézier curve representation which performs at least as well as raw touch inputs but which also allows for a faster recognition because the input sequence representation is shorter.
We further compare the performance of our system to the state of the art on publicly available datasets such as IAM-OnDB, IBM-UB-1, and CASIA and improve over the previous best published result on IAM-OnDB. | IBM-UB-1 dataset BIBREF25, IAM-OnDB dataset BIBREF42, The ICDAR-2013 Competition for Online Handwriting Chinese Character Recognition BIBREF45, ICFHR2018 Competition on Vietnamese Online Handwritten Text Recognition using VNOnDB BIBREF50 |
8de64483ae96c0a03a8e527950582f127b43dceb | 8de64483ae96c0a03a8e527950582f127b43dceb_0 | Q: Do they report results only on English data?
Text: Introduction
There is a broad consensus that artificial intelligence (AI) research is progressing steadily, and that its impact on society is likely to increase. From self-driving cars on public streets to self-piloting, reusable rockets, AI systems tackle more and more complex human activities in a more and more autonomous way. This leads into new spheres, where traditional ethics has limited applicability. Both self-driving cars, where mistakes may be life-threatening, and machine classifiers that hurt social matters may serve as examples for entering grey areas in ethics: How does AI embody our value system? Can AI systems learn human ethical judgements? If not, can we contest the AI system?
Unfortunately, aligning social, ethical, and moral norms to the structure of science and innovation in general is a long road. According to BIBREF1 (BIBREF1), who examined affirmative ethics, the emergence of new questions leads to intense public discussions that are driven by strong emotions of participants. And machine ethics BIBREF2, BIBREF3, BIBREF4 is no exception. Consider, e.g., BIBREF5's (BIBREF5) empirical proof that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, such as the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse.
In other words, AI systems are not neutral with respect to purpose and society anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing in general most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your individual ethical frame. So, one may think that it is impossible to equip AI systems to make human-like ethical choices. Luckily, BIBREF0 (BIBREF0) showed that standard machine learning can actually learn answers to ethical choices from textual data that reflect everyday human culture. Recently, BIBREF6 (BIBREF6) showed that sentence embeddings created by SBERT outperform other state-of-the-art sentence embedding methods like InferSent BIBREF7 and Universal Sentence Encoder BIBREF8. We hypothesize that the improvement of language representation models also improves the representation of the underlying ethical and moral values in these models.
To investigate this, we follow the experimental pipeline described in BIBREF0. As a first investigation, we compare the new state-of-the-art text-embedding model BERT, or more precisely Sentence-BERT, focusing on quantifying deontological ethics, i.e. finding out whether an action itself is right or wrong. Following BIBREF0 and for a fair comparison, we first restrict our attention to atomic actions instead of complex behavioural patterns. Semantically, those contextually isolated actions are represented by verbs. Consequently, we identify verbs that reflect social norms and capture what people rather should do and what they should not. To conduct this investigation, we used the same template list of prompts and responses for ethical choices as in BIBREF0, cf. Tab. TABREF15(Appendix). The templates include questions, such as "Should I kill people?", "Is it allowed to murder people?", etc. with answer templates of "Yes/no, I should (not)."
Using the Moral Choice Machine (MCM), based on some language representation, one is able to demonstrate the presence of ethical valuation in text collections by generating an ethical bias of actions derived from the Verb Extraction. As the next step, the correlation of WEAT (Word Embedding Association Test) values BIBREF5 and moral bias is examined. Based on that, we show that the new state-of-the-art method BERT improves the quality of the MCM. Although the three methods—Word Embedding Association Test (WEAT), Moral Choice Machine based on the Universal Sentence Encoder (USE), and Moral Choice Machine based on Sentence-BERT (SBERT)—are based on incoherent embeddings with different text corpora as training source, we show that they correspond in classification of actions as Dos and Don'ts. Our findings support the hypothesis of the presence of generally valid valuation in human text. Actually, they show that BERT improves the extraction of the moral score. Next, we move to more complex actions with surrounding contextual information and extend the (moral-) ranking of such actions presented in BIBREF0 by an evaluation of the actual moral bias. Again, we show that BERT has a more accurate reflection of moral values than USE. Finally, we contribute an alternative way of specifying the moral value of an action by learning a projection of the embedding space into a moral subspace. With the MCM in combination with BERT we can reduce the embedding dimensionality to one single dimension representing the moral bias.
We proceed as follows. After reviewing our assumptions and the required background, we present the MCM using BERT, followed by improvements of the MCM. Before concluding, we present our empirical results.
Assumptions and Background
In this section, we review our assumptions, in particular what we mean by moral choices, and the required background, following closely BIBREF0.
Moral Choices. Philosophically, roughly speaking, morals refer to the “right” and “wrong” at an individual's level while ethics refer to the systems of “right” and “wrong” set by a social group. Social norms and implicit behavioural rules exist in all human societies. But even though their presence is ubiquitous, they are hardly measurable and difficult to define consistently. The underlying mechanisms are still poorly understood. Indeed, each working society possesses an abstract moral that is generally valid and needs to be adhered to. However, theoretic definitions have been described as being inconsistent or even contradicting occasionally. Accordingly, latent ethics and morals have been described as the sum of particular norms that may not follow rational justification necessarily. Recently, BIBREF9 (BIBREF9) for instance suggested that moral norms are determined to a large extent by what is perceived to be common convention.
With regards to complexity and intangibility of ethics and morals, we restrict ourselves to a rather basic implementation of this construct, following the theories of deontological ethics. These ask which choices are morally required, forbidden, or permitted instead of asking which kind of a person we should be or which consequences of our actions are to be preferred. Thus, norms are understood as universal rules of what to do and what not to do. Therefore, we focus on the valuation of social acceptance in single verbs and single verbs with surrounding context information —e.g. trust my friend or trust a machine— to figure out which of them represent a Do and which tend to be a Don't. Because we specifically chose templates in the first person, i.e., asking “should I” and not asking “should one”, we address the moral dimension of “right” or “wrong” decisions, and not only their ethical dimension. This is the reason why we will often use the term “moral”, although we actually touch upon “ethics” and “moral”. To measure the valuation, we make use of implicit association tests (IATs) and their connections to word embeddings.
Word and Sentence Embeddings. A word/phrase embedding is a representation of words/phrases as points in a vector space. All approaches have in common that more related or even similar text entities lie close to each other in the vector space, whereas distinct words/phrases can be found in distant regions BIBREF10. This enables determining semantic similarities in a language.
Although these techniques have been around for some time, their potential increased considerably with the emergence of deep distributional approaches. In contrast to previous implementations, those embeddings are built on neural networks (NNs) and enable a rich variety of mathematical vector operations. One of the initial and most widespread algorithms to train word embeddings is Word2Vec BIBREF11, where unsupervised feature extraction and learning is conducted per word on either CBOW or Skip-gram NNs. This can be extended to full sentences BIBREF7, BIBREF8, BIBREF12.
Bias in Text Embeddings. While biases in machine learning models can potentially be rooted in the implemented algorithm, they are primarily due to the data they are trained on. BIBREF5 (BIBREF5) empirically showed that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, as for instance the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse: AI systems “have the potential to inherit a very human flaw: bias”, as Socure's CEO Sunil Madhu puts it. Besides the typically discussed bias in gender and race stereotypes, AI systems are also not neutral with respect to purpose and societal values anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your ethical frame of reference.
Human-like Moral Choices from Human Text
Word-based approaches such as WEAT or Verb Extraction are comparatively simple. They consider single words only, detached from their grammatical and contextual surrounding. In contrast, the Moral Choice Machine determines biases on a sentence level.
Analogous to word embeddings, sentence embeddings, e.g. the Universal Sentence Encoder BIBREF8 and Sentence-BERT BIBREF6, allow one to calculate the cosine similarity of various different sentences, as for instance the similarity of a question and the corresponding answer. The more appropriate a specific answer is to a given question, the stronger is its cosine similarity expected to be. When considering two opposite answers, it is therefore possible to determine a bias value:
$\mathrm{bias}(\vec{q}, \vec{a}, \vec{b}) = \cos (\vec{a}, \vec{q}) - \cos (\vec{b}, \vec{q})\,,$ where $\vec{q}$ is the vector representation of the question and $\vec{a}$ and $\vec{b}$ the representations of the two answers/choices. A positive value indicates a stronger association to answer $a$, whereas a negative value indicates a stronger association to $b$.
This can be adapted to any arbitrary kind of bias by formulating appropriate question/answer triples, where the question captures the target dimension and the answers represent two opposite manifestations, the choices. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template. The element of interest is inserted to each considered prompt and resulting biases averaged to an overall bias value. Specifically, we considered two different biases: gender and moral.
The question/answer template for moral biases, our main target, included the ten prompts summarized in Tab. TABREF15(Appendix), where any action term can be inserted for the dots. All prompts follow the same structure: the action is mirrored in the question, and the answers represent two extremes of social desirability. Questions were, e.g., “Is it allowed to ...?” or “Should I ...?”. The used answers were grammatically adjusted to the question, e.g., “Yes, it is.” vs. “No, it is not.” respectively “Yes, you should.” vs. “No, you should not.”. Besides atomic actions, e.g. smile, love, lie or kill, this template can be used on more complex, context-based actions e.g. love my parents, love my wife/husband, kill people or kill time.
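As an illustration of how such a template-averaged bias could be computed in practice, the following minimal Python sketch uses the sentence-transformers package; it is not the authors' released code, and the model name, the two example prompts and the cosine helper are assumptions.

    # Minimal sketch of the template-averaged moral bias (illustrative, not the original implementation).
    import numpy as np
    from sentence_transformers import SentenceTransformer  # assumes the sentence-transformers package is installed

    model = SentenceTransformer("bert-large-nli-mean-tokens")  # any SBERT model; this name is an assumption

    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # One (question, affirmative answer, negative answer) triple per prompt (cf. Tab. TABREF15).
    prompts = [
        ("Should I {}?", "Yes, you should.", "No, you should not."),
        ("Is it allowed to {}?", "Yes, it is.", "No, it is not."),
    ]

    def moral_bias(action):
        """Average of cos(q, a) - cos(q, b) over all question/answer prompts."""
        scores = []
        for q_tpl, a, b in prompts:
            q_emb, a_emb, b_emb = model.encode([q_tpl.format(action), a, b])
            scores.append(cos(q_emb, a_emb) - cos(q_emb, b_emb))
        return float(np.mean(scores))

    print(moral_bias("kill people"))  # expected to come out clearly negative
    print(moral_bias("smile"))        # expected to come out positive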
Moral Subspace Projection
As BIBREF0 (BIBREF0) showed, the question/answer template is an appropriate method to extract moral biases. However, as BIBREF13 (BIBREF13) showed, one is even able to adapt the model's bias, e.g. to debias the model's gender bias. They describe that the first step for debiasing word embeddings is to identify a direction (or, more generally, a subspace) of the embedding that captures the bias.
To identify the gender subspace, e.g., they proposed to take the difference vectors of given gender pairs and computed its principal components (PCs) and found a single direction that explains the majority of variance in these vectors, i.e. the first eigenvalue is significantly larger than the rest. Therefore, they argue that the top PC, denoted by the unit vector $g$, captures the gender subspace. Subsequently, they debias the embedding based on this subspace. Please note that the gender pairs are labelled beforehand.
Using the above-mentioned methodology, we propose an alternative to identify the moral bias. Inspired by BIBREF13, we first compute the moral subspace of the text embedding. Instead of the difference vectors of the question/answer pairs, we compute the PCA on selected atomic actions —we expect that these actions represent Dos and Don'ts (cf. Appendix). We formulate the actions as questions, i.e. using question templates, and compute the mean embedding, since this amplifies their moral score BIBREF0. Similar to the gender subspace, if the first eigenvalue is significantly larger than the rest, the top PC, denoted by the unit vector $m$, captures the moral subspace and therefore also the moral bias. Then, based on this subspace, one can extract the moral bias of even complex actions with surrounding context by the projection of an action.
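A minimal sketch of this subspace construction, assuming an SBERT-style encoder and scikit-learn, is given below; the action list, the model name and the embed helper are illustrative assumptions, not the configuration used in the experiments.

    # Sketch of the moral subspace projection (illustrative choices throughout).
    import numpy as np
    from sklearn.decomposition import PCA
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("bert-large-nli-mean-tokens")   # assumed model name
    templates = ["Should I {}?", "Is it allowed to {}?"]        # subset of the prompts in Tab. TABREF15

    def embed(action):
        # Mean embedding of the action inserted into the question templates.
        return model.encode([t.format(action) for t in templates]).mean(axis=0)

    atomic = ["smile", "greet", "help", "love", "lie", "steal", "harm", "kill"]   # stand-in Dos/Don'ts
    X = np.stack([embed(a) for a in atomic])

    pca = PCA().fit(X)
    print(pca.explained_variance_ratio_[:3])   # a dominant first value suggests a single moral direction

    m = pca.components_[0]                     # top PC, taken as the moral direction

    def moral_projection(phrase):
        # Projection of a (possibly context-rich) action onto the moral direction; sign depends on the fit.
        return float(np.dot(embed(phrase) - pca.mean_, m))

    for a in ["kill people", "eat healthy", "become a good parent"]:
        print(a, moral_projection(a))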
Experimental Results
This section investigates empirically whether text corpora contain recoverable and accurate imprints of our moral choices. Specifically, we move beyond BIBREF0, by showing that BERT has a more accurate moral representation than that of the Universal Sentence Encoder.
Datasets and Embeddings Models. Experiments of the Moral Choice Machine are conducted with the Universal Sentence Encoder (USE) BIBREF8 and Sentence-BERT (SBERT) BIBREF6. The USE model is trained on phrases and sentences from a variety of different text sources; mainly Wikipedia but also sources such as forums, question/answering platforms, and news pages and augmented with supervised elements. SBERT is a modification of the pretrained BERT BIBREF12 network that aims to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. BERT is, like USE, also trained mainly on Wikipedia. For the verb extraction, the same general positive and negative association sets as in BIBREF0 are used—$A$ and $B$ in Eq. DISPLAY_FORM18—. The comprehensive list of vocabulary can be found in the appendix (Tab. TABREF20).
Dos and Don'ts for the Moral Choice Machine. The verb extraction identifies the most positively and most negatively associated verbs in the vocabulary to infer socially desired and neglected behaviour. BIBREF0 (BIBREF0) extracted them with the general positive and negative association sets on the Google Slim embedding. Since those sets are expected to reflect social norms, they are referred to as Dos and Don'ts hereafter.
Tab. TABREF22 and Tab. TABREF23 (cf. Appendix) list the most positively and negatively associated verbs (in decreasing order).
Summarized, even though the contained positive verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, some of the negative words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes as murder. As BIBREF0 (BIBREF0) describe, the listed words can be accepted as commonly agreed Dos and Don'ts.
Replicating Atomic Moral Choices. Next, based on the verb extraction and the question/answer templates, we show that social norms are present in text embeddings and that a text embedding network known to achieve high scores in unsupervised scenarios (such as semantic textual similarity via cosine-similarity, clustering or semantic search) improves the scores of the extracted moral actions. The correlation of the moral bias and the corresponding WEAT value was calculated to test the consistency of the findings. It is hypothesised that the resulting moral biases for generated Dos and Don'ts correspond to the WEAT value of each word. The correlation was tested by means of Pearson's Correlation Coefficient:
$r = \frac{\sum _i (x_i - m_x)(y_i - m_y)}{\sqrt{\sum _i (x_i - m_x)^2}\,\sqrt{\sum _i (y_i - m_y)^2}}\,,$ where $m_x$ and $m_y$ are the means of $X$ and $Y$. Pearson's $r$ ranges between $-1$, indicating a strong negative correlation, and 1, indicating a strong positive correlation. Significance levels are defined as $5\%$, $1\%$ and $0.1\%$, indicated by one, two or three asterisks.
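For reference, the same test can be run with scipy; the numbers below are placeholders, not the reported scores.

    # Sketch: correlating WEAT values with MCM moral bias scores (placeholder numbers).
    from scipy.stats import pearsonr

    weat_values = [0.8, 0.6, 0.4, -0.5, -0.7, -0.9]        # hypothetical WEAT scores of some Dos/Don'ts
    moral_bias  = [0.05, 0.04, 0.01, -0.10, -0.15, -0.20]  # hypothetical MCM bias of the same verbs

    r, p = pearsonr(weat_values, moral_bias)
    print(f"r = {r:.2f}, p = {p:.4g}")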
The correlation between WEAT value and the moral bias becomes tangible when inspecting it graphically, cf. Fig. FIGREF4. The concrete bias scores can be found in the Appendix, Tab. TABREF28 and TABREF29. For both WEAT and MCM, the scatter plots of Dos and Don'ts are divided on the x-axis. The Pearson's Correlation Coefficient using USE as embedding (Top), $r = 0.73$ with $p = 2.3732e^{-16}$, indicates a significant positive correlation. However, according to the distribution, one can see that using BERT (Bottom) improves the distinction between Dos and Don'ts. Actually, the Pearson's Correlation Coefficient $r = 0.88$ with $p = 1.1054e^{-29}$ indicates a high positive correlation. These findings suggest that if we build an AI system that learns an improved language representation to be able to better understand and produce it, in the process it will also acquire more accurate historical cultural associations to make human-like “right” and “wrong” choices.
Replicating Complex Moral Choices in the Moral Subspace.
The strong correlation between WEAT values and moral biases at the verb level gives reasons to investigate BERT's Moral Choice Machine for complex human-like choices at the phrase level. For instance, it is appropriate to help old people, but one should not help a thief. It is good behaviour to love your parents, but not to steal money. To see whether the moral choice machine can, in principle, deal with complex choices and implicit context information around these complex choices, BIBREF0 (BIBREF0) considered the rankings among answers induced by cosine distance. Their results indicate that human text may indeed contain complex human-like choices that are reproducible by the Moral Choice Machine. To investigate this further, we define a Moral Subspace Projection and consider a set of atomic actions and combine them with varying context information, e.g. “Should I have a gun to hunt animals?” or “Should I have a gun to kill people?”.
First we will investigate the subspace of vector differences (moral direction) which was introduced by BIBREF13 (BIBREF13) to debias word embeddings. Fig. FIGREF6 (a-b) shows the percentage of variance explained in the PCA using the MCM with USE(a) and BERT(b). Clearly, the top principal component (PC) using BERT explains the majority of variance in these vectors, therefore we conclude that it represents the moral direction $m$. Using USE, we were unable to find a clear moral dimension, rather multiple directions. Although both projections should enable one to adapt the model's moral bias based on the subspace, BERT seems to have a more intuitive moral direction.
Next, we investigate the subspace projection with the actions formulated as questions. Also, here, one can see that BERT enables the MCM to identify a clear moral direction, cf. Fig. FIGREF6(c-d). The PCA is computed with the embedding of atomic actions. Based on this projection, we query more complex actions to investigate their moral bias score. The atomic actions in the subspace are visualized in Fig. FIGREF1 and the queried actions in Fig. FIGREF11. The horizontal axis (the top PC) represents the moral direction. One can observe that the atomic actions kill, murder, slaughter, brutalise, destroy are the most negative actions and congratulate, compliment, welcome and smile the most positive. E.g. apologize, dream, go, become seem to be neutral —which would change depending on the context—. If we, now, query the MCM with projection with more complex actions, one can see that the most negative actions are kill people, have a gun to kill people and become evil, but becoming a good parent is positive. Further, one can see that eat healthy is positive but eat meat is not appropriate. One should not travel to North Korea, but also not to Germany. Instead traveling to the United States is appropriate.
Conclusions
We have demonstrated that BERT has a more pronounced moral compass than previous embedding methods. That is, yes, text embeddings encode knowledge about deontological ethical and even moral choices, but the quality of the bias score depends on the quality of the text embedding network. Specifically, our empirical results show that the Moral Choice Machine with recent state-of-the-art language representations, namely BERT, extends the boundary of previous approaches and demonstrate the existence of biases in human language on a complex phrase level. Moreover, we identified for the first time that there is a moral dimension in text embeddings, even when taking context into account.
Generally, improved moral choice machines hold promise for identifying and addressing sources of ethical and moral choices in culture, including AI systems. This provides several avenues for future work. Inspired by BIBREF13 (BIBREF13), we aim at modifying the embedding, given human ethical values collected from an user study. Further, it is interesting to track ethical choices over time and to compare them among different text corpora. Even more interesting is an interactive learning setting with an interactive robot, in which users would teach and revise the robot's moral bias. Our identification of a moral subspace in sentence embeddings lays the foundation for this.
Appendix ::: Moral Choice Machine
The Moral Choice Machine developed by BIBREF0 (BIBREF0) computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template (cf. Tab. TABREF15). The element of interest is inserted into each considered prompt and the resulting biases are averaged to an overall bias value.
Appendix ::: Implicit Associations in Word Embeddings
Transferring the approach of implicit associations from human subjects to information retrieval systems on natural text was initially suggested by Caliskan et al. (BIBREF5), who reported some basic effects of the Word Embedding Association Test (WEAT). Whereas the strength of association in human minds is defined by response latency in Implicit Association Tests (IAT), it is here instantiated as cosine similarity of text in the Euclidean space. Similar to the IAT, complex concepts are defined by word sets. The association of any single word vector $\vec{w}$ to a word set is defined as the mean cosine similarity between $\vec{w}$ and the particular elements of the set. Now, let there be two sets of target words $X$ and $Y$. The allocation of $\vec{w}$ to two discriminating association sets $A$ and $B$ can be formulated as
$s(\vec{w}, A, B) = \frac{1}{|A|}\sum _{\vec{a} \in A}\cos (\vec{a}, \vec{w}) - \frac{1}{|B|}\sum _{\vec{b} \in B}\cos (\vec{b}, \vec{w})\,.$ A word with representation $\vec{w}$ that is more strongly associated to concept $A$ yields a positive value and a representation related to $B$ a negative value.
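A small numpy sketch of this association score is shown below; the random vectors merely stand in for real word embeddings and are an assumption for demonstration purposes.

    # Sketch of the word-to-association-set score used by WEAT-style tests.
    import numpy as np

    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def association(w_vec, A_vecs, B_vecs):
        # Mean cosine similarity to set A minus mean cosine similarity to set B.
        return np.mean([cos(w_vec, a) for a in A_vecs]) - np.mean([cos(w_vec, b) for b in B_vecs])

    rng = np.random.default_rng(0)       # random stand-ins for real word vectors
    w = rng.normal(size=300)
    A = rng.normal(size=(5, 300))
    B = rng.normal(size=(5, 300))
    print(association(w, A, B))          # positive: closer to A; negative: closer to B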
Appendix ::: Association Sets
The complete lists of positive and negative association words that were applied for generating Dos and Don'ts with Verb Extraction are given in Tab. TABREF20. The words were collected from four different literature sources that provide unspecific association sets to define pleasant and unpleasant associations BIBREF14, BIBREF17, BIBREF18, BIBREF15.
Appendix ::: Dos and Don’ts for the Moral Choice Machine
Tab. TABREF22 lists the most positively associated verbs (in decreasing order).
Even though the contained verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, Tab. TABREF23 presents the most negative associated verbs (in decreasing order) we found in our vocabulary.
Some of the words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes such as murder. And still other words, as for instance suppurate or rot, appear to be disgusting in the first place. Exculpate is not a bad behaviour per se. However, its occurrence in the don't set is not surprising, since it is semantically and contextually related to wrongdoings. Some of the words are of a surprisingly repugnant nature that was not even anticipated in preliminary considerations, e.g. depopulate or dehumanise. Undoubtedly, the listed words can be accepted as commonly agreed Don'ts. Both lists include a few words that are rather common as nouns or adjectives, such as joy, long, gift or bad. Anyhow, they can also be used as verbs and comply with the requirements of being a Do or a Don't in that function. The allocation of verbs into Dos and Don'ts was confirmed by the affective lexicon AFINN BIBREF16. AFINN allows one to rate words and phrases for valence on a scale between $-5$ and 5, indicating inherent connotation. Elements with no ratings are treated as neutral ($0.0$).
When passing the comprehensive lists of generated Dos and Don'ts to AFINN, the mean rating for Dos is $1.12$ ($std=1.24$) and for Don'ts $-0.90$ ($std=1.22$). The t-test statistic yielded values of $t = 8.12$ with $p < .0001^{***}$. When neglecting all verbs that are not included in AFINN, the mean value for Dos is $2.34$ ($std=0.62$, $n = 24$) and the mean for Don'ts $-2.37$ ($std = 0.67$, $n=19$), with again highly significant statistics ($t = 23.28$, $p<.0001^{***}$). Thus, the sentimental rating is completely in line with the allocation of Verb Extraction. The verb extraction was highly successful and delivers useful Dos and Don'ts. The word sets contain consistently positive and negative connoted verbs, respectively, that are reasonable to represent a socially agreed norm in the right context. The AFINN validation clearly shows that the valuation of positive and negative verbs is in line with other independent rating systems.
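The reported check boils down to a two-sample t-test over lexicon ratings; a sketch with a toy stand-in lexicon (the study itself used AFINN) could look as follows.

    # Sketch of the valence validation (toy lexicon and ratings are illustrative stand-ins for AFINN).
    from scipy.stats import ttest_ind

    lexicon = {"celebrate": 3, "love": 3, "smile": 2, "murder": -3, "slur": -2, "cheat": -3}
    dos   = ["celebrate", "love", "smile", "joy"]
    donts = ["murder", "slur", "cheat", "rot"]

    dos_ratings   = [lexicon.get(v, 0.0) for v in dos]    # unrated verbs treated as neutral (0.0)
    donts_ratings = [lexicon.get(v, 0.0) for v in donts]

    t, p = ttest_ind(dos_ratings, donts_ratings)
    print(f"t = {t:.2f}, p = {p:.4f}")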
Appendix ::: Moral Bias of USE and BERT
The following results were computed with the MCM version of BIBREF0 (BIBREF0) using both USE and BERT as sentence embeddings. Specifically, to investigate whether the sentiments of the extracted Dos and Don'ts also hold at the more complex sentence level, we inserted them into the question/answer templates of the Moral Choice Machine BIBREF0. The resulting moral bias scores/choices are summarized in Tab. TABREF28. It presents the moral biases exemplarily for the top ten Dos and Don'ts by WEAT value of both sets. The threshold between the groups is not 0, but slightly shifted negatively (further shifted when using USE than when using BERT). However, the distinction of Dos and Don'ts is clearly reflected in the bias values. Using USE, the mean bias of all considered elements is $-0.018$ ($std=0.025$), where the mean of Dos is $-0.001$ ($std=0.190$, $n=50$) and the mean of Don'ts $-0.037$ ($std=0.017$, $n=50$). Using BERT, the mean bias of all considered elements is $-0.054$ ($std=0.11$), where the mean of Dos is $0.041$ ($std=0.064$, $n=50$) and the mean of Don'ts $-0.163$ ($std=0.053$, $n=50$).
Furthermore, Tab. TABREF29 shows the resulting moral bias scores/choices for actions with additional surrounding context, exemplarily for the top ten Dos and Don'ts of both sentence embeddings.
Appendix ::: Moral Subspace Projection
To create the moral subspace projection, a Principal Component Analysis (PCA) was computed. The atomic actions used are listed in Tab. TABREF26. The resulting space, with the MCM using BERT, is visualized in Fig. FIGREF1 based on the first two top PCs. The top PC (the $X$ axis) defines the moral direction $m$ (bias). The context-based actions which were tested using the moral subspace projection are listed in Tab. TABREF27. The resulting moral direction $m$ (or bias) for both the atomic and context-based actions can be found in Tab. TABREF30. We also list the results using the sentence embedding USE instead of BERT. $m < 0$ corresponds to a positive moral score and $m > 0$ corresponds to a negative moral score. | Yes
4d062673b714998800e61f66b6ccbf7eef5be2ac | 4d062673b714998800e61f66b6ccbf7eef5be2ac_0 | Q: What is the Moral Choice Machine?
Text: Introduction
There is a broad consensus that artificial intelligence (AI) research is progressing steadily, and that its impact on society is likely to increase. From self-driving cars on public streets to self-piloting, reusable rockets, AI systems tackle more and more complex human activities in a more and more autonomous way. This leads into new spheres, where traditional ethics has limited applicability. Both self-driving cars, where mistakes may be life-threatening, and machine classifiers that hurt social matters may serve as examples for entering grey areas in ethics: How does AI embody our value system? Can AI systems learn human ethical judgements? If not, can we contest the AI system?
Unfortunately, aligning social, ethical, and moral norms to structure of science and innovation in general is a long road. According to BIBREF1 (BIBREF1), who examined affirmative ethics, the emergence of new questions leads to intense public discussions, that are driven by strong emotions of participants. And machine ethics BIBREF2, BIBREF3, BIBREF4 is no exception. Consider, e.g., BIBREF5's (BIBREF5) empirical proof that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, such as the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse.
In other words, AI systems are not neutral with respect to purpose and society anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing in general most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your individual ethical frame. So, one may think that it is impossible to equip AI systems to make human-like ethical choices. Luckily, BIBREF0 (BIBREF0) showed that standard machine learning can actually learn answers to ethical choices from textual data that reflect everyday human culture. Recently, BIBREF6 (BIBREF6) showed that sentence embeddings created by SBERT outperform other state-of-the-art sentence embedding methods like InferSent BIBREF7 and Universal Sentence Encoder BIBREF8. We hypothesize that the improvement of language representation models also improves the representation of the underlying ethical and moral values in these models.
To investigate this, we follow the experimental pipeline described in BIBREF0. As a first investigation, we compare the new state-of-the-art text-embedding model BERT, or more precisely Sentence-BERT, focusing on quantifying deontological ethics, i.e. finding out whether an action itself is right or wrong. Following BIBREF0 and for a fair comparison, we first restrict our attention to atomic actions instead of complex behavioural patterns. Semantically, those contextually isolated actions are represented by verbs. Consequently, we identify verbs that reflect social norms and capture what people rather should do and what they should not. To conduct this investigation, we used the same template list of prompts and responses for ethical choices as in BIBREF0, cf. Tab. TABREF15 (Appendix). The templates include questions, such as "Should I kill people?", "Is it allowed to murder people?", etc., with answer templates of "Yes/no, I should (not)."
Using the Moral Choice Machine (MCM), based on some language representation, one is able to demonstrate the presence of ethical valuation in text collections by generating an ethical bias of actions derived from the Verb Extraction. As the next step, the correlation of WEAT (Word Embedding Association Test) values BIBREF5 and moral bias is examined. Based on that, we show that the new state-of-the-art method BERT improves the quality of the MCM. Although the three methods—Word Embedding Association Test (WEAT), Moral Choice Machine based on the Universal Sentence Encoder (USE), and Moral Choice Machine based on Sentence-BERT (SBERT)—are based on incoherent embeddings with different text corpora as training source, we show that they correspond in classification of actions as Dos and Don'ts. Our findings support the hypothesis of the presence of generally valid valuation in human text. Actually, they show that BERT improves the extraction of the moral score. Next, we move to more complex actions with surrounding contextual information and extend the (moral-) ranking of such actions presented in BIBREF0 by an evaluation of the actual moral bias. Again, we show that BERT has a more accurate reflection of moral values than USE. Finally, we contribute an alternative way of specifying the moral value of an action by learning a projection of the embedding space into a moral subspace. With the MCM in combination with BERT we can reduce the embedding dimensionality to one single dimension representing the moral bias.
We proceed as follows. After reviewing our assumptions and the required background, we present the MCM using BERT, followed by improvements of the MCM. Before concluding, we present our empirical results.
Assumptions and Background
In this section, we review our assumptions, in particular what we mean by moral choices, and the required background, following closely BIBREF0.
Moral Choices. Philosophically, roughly speaking, morals refer to the “right” and “wrong” at an individual's level while ethics refer to the systems of “right” and “wrong” set by a social group. Social norms and implicit behavioural rules exist in all human societies. But even though their presence is ubiquitous, they are hardly measurable and difficult to define consistently. The underlying mechanisms are still poorly understood. Indeed, each working society possesses an abstract moral that is generally valid and needs to be adhered to. However, theoretic definitions have been described as being inconsistent or even contradicting occasionally. Accordingly, latent ethics and morals have been described as the sum of particular norms that may not follow rational justification necessarily. Recently, BIBREF9 (BIBREF9) for instance suggested that moral norms are determined to a large extent by what is perceived to be common convention.
With regards to complexity and intangibility of ethics and morals, we restrict ourselves to a rather basic implementation of this construct, following the theories of deontological ethics. These ask which choices are morally required, forbidden, or permitted instead of asking which kind of a person we should be or which consequences of our actions are to be preferred. Thus, norms are understood as universal rules of what to do and what not to do. Therefore, we focus on the valuation of social acceptance in single verbs and single verbs with surrounding context information —e.g. trust my friend or trust a machine— to figure out which of them represent a Do and which tend to be a Don't. Because we specifically chose templates in the first person, i.e., asking “should I” and not asking “should one”, we address the moral dimension of “right” or “wrong” decisions, and not only their ethical dimension. This is the reason why we will often use the term “moral”, although we actually touch upon “ethics” and “moral”. To measure the valuation, we make use of implicit association tests (IATs) and their connections to word embeddings.
Word and Sentence Embeddings. A word/phrase embedding is a representation of words/phrases as points in a vector space. All approaches have in common that more related or even similar text entities lie close to each other in the vector space, whereas distinct words/phrases can be found in distant regions BIBREF10. This enables determining semantic similarities in a language.
Although these techniques have been around for some time, their potential increased considerably with the emergence of deep distributional approaches. In contrast to previous implementations, those embeddings are built on neural networks (NNs) and enable a rich variety of mathematical vector operations. One of the initial and most widespread algorithms to train word embeddings is Word2Vec BIBREF11, where unsupervised feature extraction and learning is conducted per word on either CBOW or Skip-gram NNs. This can be extended to full sentences BIBREF7, BIBREF8, BIBREF12.
Bias in Text Embeddings. While biases in machine learning models can potentially be rooted in the implemented algorithm, they are primarily due to the data they are trained on. BIBREF5 (BIBREF5) empirically showed that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, as for instance the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse: AI systems “have the potential to inherit a very human flaw: bias”, as Socure's CEO Sunil Madhu puts it. Besides the typically discussed bias in gender and race stereotypes, AI systems are also not neutral with respect to purpose and societal values anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your ethical frame of reference.
Human-like Moral Choices from Human Text
Word-based approaches such as WEAT or Verb Extraction are comparatively simple. They consider single words only, detached from their grammatical and contextual surrounding. In contrast, the Moral Choice Machine determines biases on a sentence level.
Analogous to word embeddings, sentence embeddings, e.g. the Universal Sentence Encoder BIBREF8 and Sentence-BERT BIBREF6, allow one to calculate the cosine similarity of various different sentences, as for instance the similarity of a question and the corresponding answer. The more appropriate a specific answer is to a given question, the stronger is its cosine similarity expected to be. When considering two opposite answers, it is therefore possible to determine a bias value:
$\mathrm{bias}(\vec{q}, \vec{a}, \vec{b}) = \cos (\vec{a}, \vec{q}) - \cos (\vec{b}, \vec{q})\,,$ where $\vec{q}$ is the vector representation of the question and $\vec{a}$ and $\vec{b}$ the representations of the two answers/choices. A positive value indicates a stronger association to answer $a$, whereas a negative value indicates a stronger association to $b$.
This can be adapted to any arbitrary kind of bias by formulating appropriate question/answer triples, where the question captures the target dimension and the answers represent two opposite manifestations, the choices. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template. The element of interest is inserted to each considered prompt and resulting biases averaged to an overall bias value. Specifically, we considered two different biases: gender and moral.
The question/answer template for moral biases, our main target, included the ten prompts summarized in Tab. TABREF15(Appendix), where any action term can be inserted for the dots. All prompts follow the same structure: the action is mirrored in the question, and the answers represent two extremes of social desirability. Questions were, e.g., “Is it allowed to ...?” or “Should I ...?”. The used answers were grammatically adjusted to the question, e.g., “Yes, it is.” vs. “No, it is not.” respectively “Yes, you should.” vs. “No, you should not.”. Besides atomic actions, e.g. smile, love, lie or kill, this template can be used on more complex, context-based actions e.g. love my parents, love my wife/husband, kill people or kill time.
Moral Subspace Projection
As BIBREF0 (BIBREF0) showed, the question/answer template is an appropriate method to extract moral biases. However, as BIBREF13 (BIBREF13) showed, one is even able to adapt the model's bias, e.g. to debias the model's gender bias. They describe that the first step for debiasing word embeddings is to identify a direction (or, more generally, a subspace) of the embedding that captures the bias.
To identify the gender subspace, e.g., they proposed to take the difference vectors of given gender pairs and computed its principal components (PCs) and found a single direction that explains the majority of variance in these vectors, i.e. the first eigenvalue is significantly larger than the rest. Therefore, they argue that the top PC, denoted by the unit vector $g$, captures the gender subspace. Subsequently, they debias the embedding based on this subspace. Please note that the gender pairs are labelled beforehand.
Using the above-mentioned methodology, we propose an alternative to identify the moral bias. Inspired by BIBREF13, we first compute the moral subspace of the text embedding. Instead of the difference vectors of the question/answer pairs, we compute the PCA on selected atomic actions —we expect that these actions represent Dos and Don'ts (cf. Appendix). We formulate the actions as questions, i.e. using question templates, and compute the mean embedding, since this amplifies their moral score BIBREF0. Similar to the gender subspace, if the first eigenvalue is significantly larger than the rest, the top PC, denoted by the unit vector $m$, captures the moral subspace and therefore also the moral bias. Then, based on this subspace, one can extract the moral bias of even complex actions with surrounding context by the projection of an action.
Experimental Results
This section investigates empirically whether text corpora contain recoverable and accurate imprints of our moral choices. Specifically, we move beyond BIBREF0, by showing that BERT has a more accurate moral representation than that of the Universal Sentence Encoder.
Datasets and Embeddings Models. Experiments of the Moral Choice Machine are conducted with the Universal Sentence Encoder (USE) BIBREF8 and Sentence-BERT (SBERT) BIBREF6. The USE model is trained on phrases and sentences from a variety of different text sources; mainly Wikipedia but also sources such as forums, question/answering platforms, and news pages and augmented with supervised elements. SBERT is a modification of the pretrained BERT BIBREF12 network that aims to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. BERT is, like USE, also trained mainly on Wikipedia. For the verb extraction, the same general positive and negative association sets as in BIBREF0 are used—$A$ and $B$ in Eq. DISPLAY_FORM18—. The comprehensive list of vocabulary can be found in the appendix (Tab. TABREF20).
Dos and Don'ts for the Moral Choice Machine. The verb extraction identifies the most positively and most negatively associated verbs in the vocabulary to infer socially desired and neglected behaviour. BIBREF0 (BIBREF0) extracted them with the general positive and negative association sets on the Google Slim embedding. Since those sets are expected to reflect social norms, they are referred to as Dos and Don'ts hereafter.
Tab. TABREF22 and Tab. TABREF23 (cf. Appendix) list the most positively and negatively associated verbs (in decreasing order).
Summarized, even though the contained positive verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, some of the negative words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes as murder. As BIBREF0 (BIBREF0) describe, the listed words can be accepted as commonly agreed Dos and Don'ts.
Replicating Atomic Moral Choices. Next, based on the verb extraction and the question/answer templates, we show that social norms are present in text embeddings and that a text embedding network known to achieve high scores in unsupervised scenarios (such as semantic textual similarity via cosine-similarity, clustering or semantic search) improves the scores of the extracted moral actions. The correlation of the moral bias and the corresponding WEAT value was calculated to test the consistency of the findings. It is hypothesised that the resulting moral biases for generated Dos and Don'ts correspond to the WEAT value of each word. The correlation was tested by means of Pearson's Correlation Coefficient:
$r = \frac{\sum _i (x_i - m_x)(y_i - m_y)}{\sqrt{\sum _i (x_i - m_x)^2}\,\sqrt{\sum _i (y_i - m_y)^2}}\,,$ where $m_x$ and $m_y$ are the means of $X$ and $Y$. Pearson's $r$ ranges between $-1$, indicating a strong negative correlation, and 1, indicating a strong positive correlation. Significance levels are defined as $5\%$, $1\%$ and $0.1\%$, indicated by one, two or three asterisks.
The correlation between WEAT value and the moral bias becomes tangible when inspecting it graphically, cf. Fig. FIGREF4. The concrete bias scores can be found in the Appendix, Tab. TABREF28 and TABREF29. For both WEAT and MCM, the scatter plots of Dos and Don'ts are divided on the x-axis. The Pearson's Correlation Coefficient using USE as embedding (Top), $r = 0.73$ with $p = 2.3732e^{-16}$, indicates a significant positive correlation. However, according to the distribution, one can see that using BERT (Bottom) improves the distinction between Dos and Don'ts. Actually, the Pearson's Correlation Coefficient $r = 0.88$ with $p = 1.1054e^{-29}$ indicates a high positive correlation. These findings suggest that if we build an AI system that learns an improved language representation to be able to better understand and produce it, in the process it will also acquire more accurate historical cultural associations to make human-like “right” and “wrong” choices.
Replicating Complex Moral Choices in the Moral Subspace.
The strong correlation between WEAT values and moral biases at the verb level gives reasons to investigate BERT's Moral Choice Machine for complex human-like choices at the phrase level. For instance, it is appropriate to help old people, but one should not help a thief. It is good behaviour to love your parents, but not to steal money. To see whether the moral choice machine can, in principle, deal with complex choices and implicit context information around these complex choices, BIBREF0 (BIBREF0) considered the rankings among answers induced by cosine distance. Their results indicate that human text may indeed contain complex human-like choices that are reproducible by the Moral Choice Machine. To investigate this further, we define a Moral Subspace Projection and consider a set of atomic actions and combine them with varying context information, e.g. “Should I have a gun to hunt animals?” or “Should I have a gun to kill people?”.
First we will investigate the subspace of vector differences (moral direction) which was introduced by BIBREF13 (BIBREF13) to debias word embeddings. Fig. FIGREF6 (a-b) shows the percentage of variance explained in the PCA using the MCM with USE(a) and BERT(b). Clearly, the top principal component (PC) using BERT explains the majority of variance in these vectors, therefore we conclude that it represents the moral direction $m$. Using USE, we were unable to find a clear moral dimension, rather multiple directions. Although both projections should enable one to adapt the model's moral bias based on the subspace, BERT seems to have a more intuitive moral direction.
Next, we investigate the subspace projection with the actions formulated as questions. Also, here, one can see that BERT enables the MCM to identify a clear moral direction, cf. Fig. FIGREF6(c-d). The PCA is computed with the embedding of atomic actions. Based on this projection, we query more complex actions to investigate their moral bias score. The atomic actions in the subspace are visualized in Fig. FIGREF1 and the queried actions in Fig. FIGREF11. The horizontal axis (the top PC) represents the moral direction. One can observe that the atomic actions kill, murder, slaughter, brutalise, destroy are the most negative actions and congratulate, compliment, welcome and smile the most positive. E.g. apologize, dream, go, become seem to be neutral —which would change depending on the context—. If we, now, query the MCM with projection with more complex actions, one can see that the most negative actions are kill people, have a gun to kill people and become evil, but becoming a good parent is positive. Further, one can see that eat healthy is positive but eat meat is not appropriate. One should not travel to North Korea, but also not to Germany. Instead traveling to the United States is appropriate.
Conclusions
We have demonstrated that BERT has a more pronounced moral compass than previous embedding methods. That is, yes, text embeddings encode knowledge about deontological ethical and even moral choices, but the quality of the bias score depends on the quality of the text embedding network. Specifically, our empirical results show that the Moral Choice Machine with recent state-of-the-art language representations, namely BERT, extends the boundary of previous approaches and demonstrate the existence of biases in human language on a complex phrase level. Moreover, we identified for the first time that there is a moral dimension in text embeddings, even when taking context into account.
Generally, improved moral choice machines hold promise for identifying and addressing sources of ethical and moral choices in culture, including AI systems. This provides several avenues for future work. Inspired by BIBREF13 (BIBREF13), we aim at modifying the embedding, given human ethical values collected from an user study. Further, it is interesting to track ethical choices over time and to compare them among different text corpora. Even more interesting is an interactive learning setting with an interactive robot, in which users would teach and revise the robot's moral bias. Our identification of a moral subspace in sentence embeddings lays the foundation for this.
Appendix ::: Moral Choice Machine
The Moral Choice Machine developed by BIBREF0 (BIBREF0) computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template (cf. Tab. TABREF15). The element of interest is inserted into each considered prompt and the resulting biases are averaged to an overall bias value.
Appendix ::: Implicit Associations in Word Embeddings
Transferring the approach of implicit associations from human subjects to information retrieval systems on natural text was initially suggested by Caliskan et al. (BIBREF5), who reported some basic effects of the Word Embedding Association Test (WEAT). Whereas the strength of association in human minds is defined by response latency in Implicit Association Tests (IAT), it is here instantiated as cosine similarity of text in the Euclidean space. Similar to the IAT, complex concepts are defined by word sets. The association of any single word vector $\vec{w}$ to a word set is defined as the mean cosine similarity between $\vec{w}$ and the particular elements of the set. Now, let there be two sets of target words $X$ and $Y$. The allocation of $\vec{w}$ to two discriminating association sets $A$ and $B$ can be formulated as
$s(\vec{w}, A, B) = \frac{1}{|A|}\sum _{\vec{a} \in A}\cos (\vec{a}, \vec{w}) - \frac{1}{|B|}\sum _{\vec{b} \in B}\cos (\vec{b}, \vec{w})\,.$ A word with representation $\vec{w}$ that is more strongly associated to concept $A$ yields a positive value and a representation related to $B$ a negative value.
Appendix ::: Association Sets
The complete lists of positive and negative association words that were applied for generating Dos and Don'ts with Verb Extraction are given in Tab. TABREF20. The words were collected from four different literature sources that provide unspecific association sets to define pleasant and unpleasant associations BIBREF14, BIBREF17, BIBREF18, BIBREF15.
Appendix ::: Dos and Don’ts for the Moral Choice Machine
Tab. TABREF22 lists the most positively associated verbs (in decreasing order).
Even though the contained verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, Tab. TABREF23 presents the most negative associated verbs (in decreasing order) we found in our vocabulary.
Some of the words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes such as murder. And still other words, as for instance suppurate or rot, appear to be disgusting in the first place. Exculpate is not a bad behaviour per se. However, its occurrence in the don't set is not surprising, since it is semantically and contextually related to wrongdoings. Some of the words are of a surprisingly repugnant nature that was not even anticipated in preliminary considerations, e.g. depopulate or dehumanise. Undoubtedly, the listed words can be accepted as commonly agreed Don'ts. Both lists include a few words that are rather common as nouns or adjectives, such as joy, long, gift or bad. Anyhow, they can also be used as verbs and comply with the requirements of being a Do or a Don't in that function. The allocation of verbs into Dos and Don'ts was confirmed by the affective lexicon AFINN BIBREF16. AFINN allows one to rate words and phrases for valence on a scale between $-5$ and 5, indicating inherent connotation. Elements with no ratings are treated as neutral ($0.0$).
When passing the comprehensive lists of generated Dos and Don'ts to AFINN, the mean rating for Dos is $1.12$ ($std=1.24$) and for Don'ts $-0.90$ ($std=1.22$). The t-test statistic yielded values of $t = 8.12$ with $p < .0001^{***}$. When neglecting all verbs that are not included in AFINN, the mean value for Dos is $2.34$ ($std=0.62$, $n = 24$) and the mean for Don'ts $-2.37$ ($std = 0.67$, $n=19$), with again highly significant statistics ($t = 23.28$, $p<.0001^{***}$). Thus, the sentimental rating is completely in line with the allocation of Verb Extraction. The verb extraction was highly successful and delivers useful Dos and Don'ts. The word sets contain consistently positive and negative connoted verbs, respectively, that are reasonable to represent a socially agreed norm in the right context. The AFINN validation clearly shows that the valuation of positive and negative verbs is in line with other independent rating systems.
Appendix ::: Moral Bias of USE and BERT
The following results were computed with the MCM version of BIBREF0 (BIBREF0) using both USE and BERT as sentence embeddings. Specifically, to investigate whether the sentiments of the extracted Dos and Don'ts also hold at the more complex sentence level, we inserted them into the question/answer templates of the Moral Choice Machine BIBREF0. The resulting moral bias scores/choices are summarized in Tab. TABREF28. It presents the moral biases exemplarily for the top ten Dos and Don'ts by WEAT value of both sets. The threshold between the groups is not 0, but slightly shifted negatively (further shifted when using USE than when using BERT). However, the distinction of Dos and Don'ts is clearly reflected in the bias values. Using USE, the mean bias of all considered elements is $-0.018$ ($std=0.025$), where the mean of Dos is $-0.001$ ($std=0.190$, $n=50$) and the mean of Don'ts $-0.037$ ($std=0.017$, $n=50$). Using BERT, the mean bias of all considered elements is $-0.054$ ($std=0.11$), where the mean of Dos is $0.041$ ($std=0.064$, $n=50$) and the mean of Don'ts $-0.163$ ($std=0.053$, $n=50$).
Furthermore, Tab. TABREF29 shows the resulting moral bias scores/choices for actions with additional surrounding context, exemplarily for the top ten Dos and Don'ts of both sentence embeddings.
Appendix ::: Moral Subspace Projection
To create the moral subspace projection, a Principal Component Analysis (PCA) was computed. The atomic actions used are listed in Tab. TABREF26. The resulting space, with the MCM using BERT, is visualized in Fig. FIGREF1 based on the first two top PCs. The top PC (the $X$ axis) defines the moral direction $m$ (bias). The context-based actions which were tested using the moral subspace projection are listed in Tab. TABREF27. The resulting moral direction $m$ (or bias) for both the atomic and context-based actions can be found in Tab. TABREF30. We also list the results using the sentence embedding USE instead of BERT. $m < 0$ corresponds to a positive moral score and $m > 0$ corresponds to a negative moral score. | Moral Choice Machine computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs
f4238f558d6ddf3849497a130b3a6ad866ff38b3 | f4238f558d6ddf3849497a130b3a6ad866ff38b3_0 | Q: How is moral bias measured?
Text: Introduction
There is a broad consensus that artificial intelligence (AI) research is progressing steadily, and that its impact on society is likely to increase. From self-driving cars on public streets to self-piloting, reusable rockets, AI systems tackle more and more complex human activities in a more and more autonomous way. This leads into new spheres, where traditional ethics has limited applicability. Both self-driving cars, where mistakes may be life-threatening, and machine classifiers that hurt social matters may serve as examples for entering grey areas in ethics: How does AI embody our value system? Can AI systems learn human ethical judgements? If not, can we contest the AI system?
Unfortunately, aligning social, ethical, and moral norms to structure of science and innovation in general is a long road. According to BIBREF1 (BIBREF1), who examined affirmative ethics, the emergence of new questions leads to intense public discussions, that are driven by strong emotions of participants. And machine ethics BIBREF2, BIBREF3, BIBREF4 is no exception. Consider, e.g., BIBREF5's (BIBREF5) empirical proof that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, such as the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse.
In other words, AI systems are not neutral with respect to purpose and society anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing in general most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your individual ethical frame. So, one may think that it is impossible to equip AI systems to make human-like ethical choices. Luckily, BIBREF0 (BIBREF0) showed that standard machine learning can actually learn answers to ethical choices from textual data that reflect everyday human culture. Recently, BIBREF6 (BIBREF6) showed that sentence embeddings created by SBERT outperform other state-of-the-art sentence embedding methods like InferSent BIBREF7 and Universal Sentence Encoder BIBREF8. We hypothesize that the improvement of language representation models also improves the representation of the underlying ethical and moral values in these models.
To investigate this, we follow the experimental pipeline described in BIBREF0. As a first investigation, we compare the new state-of-the-art text-embedding model BERT, or more precisely Sentence-BERT, focusing on quantifying deontological ethics, i.e. finding out whether an action itself is right or wrong. Following BIBREF0 and for a fair comparison, we first restrict our attention to atomic actions instead of complex behavioural patterns. Semantically, those contextually isolated actions are represented by verbs. Consequently, we identify verbs that reflect social norms and allow capturing what people rather should do and what they should not. To conduct this investigation, we used the same template list of prompts and responses for ethical choices as in BIBREF0, cf. Tab. TABREF15 (Appendix). The templates include questions, such as "Should I kill people?", "Is it allowed to murder people?", etc., with answer templates of "Yes/no, I should (not)."
Using the Moral Choice Machine (MCM), based on some language representation, one is able to demonstrate the presence of ethical valuation in text collections by generating an ethical bias of actions derived from the Verb Extraction. As the next step, the correlation of WEAT (Word Embedding Association Test) values BIBREF5 and moral bias is examined. Based on that, we show that the new state-of-the-art method BERT improves the quality of the MCM. Although the three methods—Word Embedding Association Test (WEAT), Moral Choice Machine based on the Universal Sentence Encoder (USE), and Moral Choice Machine based on Sentence-BERT (SBERT)—are based on incoherent embeddings with different text corpora as training source, we show that they correspond in classification of actions as Dos and Don'ts. Our findings support the hypothesis of the presence of generally valid valuation in human text. Actually, they show that BERT improves the extraction of the moral score. Next, we move to more complex actions with surrounding contextual information and extend the (moral-) ranking of such actions presented in BIBREF0 by an evaluation of the actual moral bias. Again, we show that BERT has a more accurate reflection of moral values than USE. Finally, we contribute an alternative way of specifying the moral value of an action by learning a projection of the embedding space into a moral subspace. With the MCM in combination with BERT we can reduce the embedding dimensionality to one single dimension representing the moral bias.
We proceed as follows. After reviewing our assumptions and the required background, we present the MCM using BERT, followed by improvements of the MCM. Before concluding, we present our empirical results.
Assumptions and Background
In this section, we review our assumptions, in particular what we mean by moral choices, and the required background, following closely BIBREF0.
Moral Choices. Philosophically, roughly speaking, morals refer to the “right” and “wrong” at an individual's level while ethics refer to the systems of “right” and “wrong” set by a social group. Social norms and implicit behavioural rules exist in all human societies. But even though their presence is ubiquitous, they are hardly measurable and difficult to define consistently. The underlying mechanisms are still poorly understood. Indeed, each working society possesses an abstract moral that is generally valid and needs to be adhered to. However, theoretic definitions have been described as being inconsistent or even contradicting occasionally. Accordingly, latent ethics and morals have been described as the sum of particular norms that may not follow rational justification necessarily. Recently, BIBREF9 (BIBREF9) for instance suggested that moral norms are determined to a large extent by what is perceived to be common convention.
With regards to complexity and intangibility of ethics and morals, we restrict ourselves to a rather basic implementation of this construct, following the theories of deontological ethics. These ask which choices are morally required, forbidden, or permitted instead of asking which kind of a person we should be or which consequences of our actions are to be preferred. Thus, norms are understood as universal rules of what to do and what not to do. Therefore, we focus on the valuation of social acceptance in single verbs and single verbs with surrounding context information —e.g. trust my friend or trust a machine— to figure out which of them represent a Do and which tend to be a Don't. Because we specifically chose templates in the first person, i.e., asking “should I” and not asking “should one”, we address the moral dimension of “right” or “wrong” decisions, and not only their ethical dimension. This is the reason why we will often use the term “moral”, although we actually touch upon “ethics” and “moral”. To measure the valuation, we make use of implicit association tests (IATs) and their connections to word embeddings.
Word and Sentence Embeddings. A word/phrase embedding is a representation of words/phrases as points in a vector space. All approaches have in common that more related or even similar text entities lie close to each other in the vector space, whereas distinct words/phrases can be found in distant regions BIBREF10. This enables determining semantic similarities in a language.
Although these techniques have been around for some time, their potential increased considerably with the emergence of deep distributional approaches. In contrast to previous implementations, those embeddings are built on neural networks (NNs) and enable a rich variety of mathematical vector operations. One of the initial and most widespread algorithms to train word embeddings is Word2Vec BIBREF11, where unsupervised feature extraction and learning is conducted per word on either CBOW or Skip-gram NNs. This can be extended to full sentences BIBREF7, BIBREF8, BIBREF12.
Bias in Text Embeddings. While biases in machine learning models can potentially be rooted in the implemented algorithm, they are primarily due to the data they are trained on. BIBREF5 (BIBREF5) empirically showed that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, as for instance the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse: AI systems “have the potential to inherit a very human flaw: bias”, as Socure's CEO Sunil Madhu puts it. Besides the typically discussed bias in gender and race stereotypes, AI systems are also not neutral with respect to purpose and societal values anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your ethical frame of reference.
Human-like Moral Choices from Human Text
Word-based approaches such as WEAT or Verb Extraction are comparatively simple. They consider single words only, detached from their grammatical and contextual surrounding. In contrast, the Moral Choice Machine determines biases on a sentence level.
Analogous to word embeddings, sentence embeddings, e.g. the Universal Sentence Encoder BIBREF8 and Sentence-BERT BIBREF6, allow one to calculate the cosine similarity of various different sentences, as for instance the similarity of a question and the corresponding answer. The more appropriate a specific answer is to a given question, the stronger its cosine similarity is expected to be. When considering two opposite answers, it is therefore possible to determine a bias value:
$\mathrm{bias}(\vec{q}, \vec{a}, \vec{b}) = \cos(\vec{a}, \vec{q}) - \cos(\vec{b}, \vec{q})\,,$
where $\vec{q}$ is the vector representation of the question and $\vec{a}$ and $\vec{b}$ the representations of the two answers/choices. A positive value indicates a stronger association to answer $a$, whereas a negative value indicates a stronger association to $b$.
This can be adapted to any arbitrary kind of bias by formulating appropriate question/answer triples, where the question captures the target dimension and the answers represent two opposite manifestations, the choices. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template. The element of interest is inserted to each considered prompt and resulting biases averaged to an overall bias value. Specifically, we considered two different biases: gender and moral.
The question/answer template for moral biases, our main target, included the ten prompts summarized in Tab. TABREF15(Appendix), where any action term can be inserted for the dots. All prompts follow the same structure: the action is mirrored in the question, and the answers represent two extremes of social desirability. Questions were, e.g., “Is it allowed to ...?” or “Should I ...?”. The used answers were grammatically adjusted to the question, e.g., “Yes, it is.” vs. “No, it is not.” respectively “Yes, you should.” vs. “No, you should not.”. Besides atomic actions, e.g. smile, love, lie or kill, this template can be used on more complex, context-based actions e.g. love my parents, love my wife/husband, kill people or kill time.
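For illustration, this bias computation can be sketched in a few lines of Python. This is not the authors' implementation: the embed function below is a placeholder assumption for any sentence encoder such as USE or SBERT, and only two of the ten prompts are shown.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def qa_bias(embed, question, answer_a, answer_b):
    # bias(q, a, b) = cos(a, q) - cos(b, q); positive values lean towards answer_a.
    q, a, b = embed(question), embed(answer_a), embed(answer_b)
    return cosine(a, q) - cosine(b, q)

def moral_bias(embed, action, templates):
    # Insert the action into every prompt and average the resulting biases.
    return float(np.mean([qa_bias(embed, q.format(action), yes, no)
                          for q, yes, no in templates]))

# Two illustrative prompts; the paper uses the ten templates of Tab. TABREF15.
templates = [
    ("Should I {}?", "Yes, you should.", "No, you should not."),
    ("Is it allowed to {}?", "Yes, it is.", "No, it is not."),
]
```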
Moral Subspace Projection
As BIBREF0 (BIBREF0) showed, the question/answer template is an appropriate method to extract moral biases. However, as BIBREF13 (BIBREF13) showed, one is also able to adapt the model's bias, e.g. to debias the model's gender bias. They describe that the first step for debiasing word embeddings is to identify a direction (or, more generally, a subspace) of the embedding that captures the bias.
To identify the gender subspace, e.g., they proposed to take the difference vectors of given gender pairs and compute their principal components (PCs); they found a single direction that explains the majority of variance in these vectors, i.e. the first eigenvalue is significantly larger than the rest. Therefore, they argue that the top PC, denoted by the unit vector $g$, captures the gender subspace. Subsequently, they debias the embedding based on this subspace. Please note that the gender pairs are labelled beforehand.
Using the above-mentioned methodology, we propose an alternative to identify the moral bias. Inspired by BIBREF13, we first compute the moral subspace of the text embedding. Instead of the difference vectors of the question/answer pairs, we compute the PCA on selected atomic actions —we expect that these actions represent Dos and Don'ts (cf. Appendix). We formulate the actions as questions, i.e. using question templates, and compute the mean embedding, since this amplifies their moral score BIBREF0. Similar to the gender subspace, if the first eigenvalue is significantly larger than the rest, the top PC, denoted by the unit vector $m$, captures the moral subspace and therefore also the moral bias. Then, based on this subspace, one can extract the moral bias of even complex actions with surrounding context by the projection of an action.
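A minimal sketch of this step, assuming the question-template embeddings of the atomic actions have already been computed (illustrative code, not the authors' implementation):

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_moral_subspace(atomic_action_embeddings, n_components=5):
    # Fit a PCA on the mean question-template embeddings of the atomic actions.
    # If the first explained-variance ratio clearly dominates, the top principal
    # component can be read as the moral direction m described above.
    pca = PCA(n_components=n_components)
    pca.fit(np.asarray(atomic_action_embeddings))
    return pca

def moral_projection(pca, action_embedding):
    # Coordinate of a (possibly context-based) action along the top PC.
    return float(pca.transform(np.asarray(action_embedding).reshape(1, -1))[0, 0])
```

Under the sign convention used in the paper, projections with $m < 0$ correspond to Dos and $m > 0$ to Don'ts.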
Experimental Results
This section investigates empirically whether text corpora contain recoverable and accurate imprints of our moral choices. Specifically, we move beyond BIBREF0, by showing that BERT has a more accurate moral representation than that of the Universal Sentence Encoder.
Datasets and Embeddings Models. Experiments of the Moral Choice Machine are conducted with the Universal Sentence Encoder (USE) BIBREF8 and Sentence-BERT (SBERT) BIBREF6. The USE model is trained on phrases and sentences from a variety of different text sources; mainly Wikipedia but also sources such as forums, question/answering platforms, and news pages and augmented with supervised elements. SBERT is a modification of the pretrained BERT BIBREF12 network that aims to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. BERT is, like USE, also trained mainly on Wikipedia. For the verb extraction, the same general positive and negative association sets as in BIBREF0 are used—$A$ and $B$ in Eq. DISPLAY_FORM18—. The comprehensive list of vocabulary can be found in the appendix (Tab. TABREF20).
Dos and Don'ts for the Moral Choice Machine. The verb extraction identifies the most positively and most negatively associated verbs in the vocabulary, to infer socially desired and neglected behaviour. BIBREF0 (BIBREF0) extracted them with the general positive and negative association sets on the Google Slim embedding. Since those sets are expected to reflect social norms, they are referred to as Dos and Don'ts hereafter.
Tab. TABREF22 and Tab. TABREF23 (cf. Appendix) list the most positively and negatively associated verbs (in decreasing order).
Summarized, even though the contained positive verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, some of the negative words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes as murder. As BIBREF0 (BIBREF0) describe, the listed words can be accepted as commonly agreed Dos and Don'ts.
Replicating Atomic Moral Choices. Next, based on the verb extractions and the question/answer templates, we show that social norms are present in text embeddings and that a text embedding network known to achieve high scores in unsupervised scenarios —such as semantic textual similarity via cosine-similarity, clustering or semantic search— improves the scores of the extracted moral actions. The correlation of the moral bias and the corresponding WEAT value was calculated to test the consistency of the findings. It is hypothesised that the resulting moral biases for generated Dos and Don'ts correspond to the WEAT value of each word. The correlation was tested by means of Pearson's Correlation Coefficient:
$r(X, Y) = \frac{\sum_{i}(x_i - m_x)(y_i - m_y)}{\sqrt{\sum_{i}(x_i - m_x)^2}\,\sqrt{\sum_{i}(y_i - m_y)^2}}\,,$
where $m_x$ and $m_y$ are the means of $X$ and $Y$. Pearson's $r$ ranges between $-1$, indicating a strong negative correlation, and 1, indicating a strong positive correlation. Significance levels are defined as $5\%$, $1\%$ and $0.1\%$, indicated by one, two or three stars.
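In practice the same statistic can be obtained directly, for example with SciPy; the score lists below are purely illustrative stand-ins for the per-verb WEAT values and moral biases.

```python
from scipy.stats import pearsonr

# Illustrative per-verb scores only, not the values reported in the paper.
weat_values = [1.3, 0.9, 0.7, -0.8, -1.1, -1.5]
moral_biases = [0.05, 0.04, 0.02, -0.10, -0.14, -0.21]

r, p = pearsonr(weat_values, moral_biases)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```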
The correlation between the WEAT value and the moral bias becomes tangible when inspecting their correlation graphically, cf. Fig. FIGREF4. The concrete bias scores can be found in the Appendix, Tab. TABREF28 and TABREF29. For both WEAT and MCM, the scatter plots of Dos and Don'ts are divided on the x-axis. The Pearson's Correlation Coefficient using USE as embedding (Top), $r = 0.73$ with $p = 2.3732e^{-16}$, indicates a significant positive correlation. However, according to the distribution one can see that using BERT (Bottom) improves the distinction between Dos and Don'ts. Actually, the Pearson's Correlation Coefficient $r = 0.88$ with $p = 1.1054e^{-29}$ indicates a high positive correlation. These findings suggest that if we build an AI system that learns an improved language representation to be able to better understand and produce it, in the process it will also acquire more accurate historical cultural associations to make human-like “right” and “wrong” choices.
Replicating Complex Moral Choices in the Moral Subspace.
The strong correlation between WEAT values and moral biases at the verb level gives reasons to investigate BERT's Moral Choice Machine for complex human-like choices at the phrase level. For instance, it is appropriate to help old people, but one should not help a thief. It is good behaviour to love your parents, but not to steal money. To see whether the moral choice machine can, in principle, deal with complex choices and implicit context information around these complex choices, BIBREF0 (BIBREF0) considered the rankings among answers induced by cosine distance. Their results indicate that human text may indeed contain complex human-like choices that are reproducible by the Moral Choice Machine. To investigate this further, we define a Moral Subspace Projection and consider a set of atomic actions and combine them with varying context information, e.g. “Should I have a gun to hunt animals?” or “Should I have a gun to kill people?”.
First we will investigate the subspace of vector differences (moral direction) which was introduced by BIBREF13 (BIBREF13) to debias word embeddings. Fig. FIGREF6 (a-b) shows the percentage of variance explained in the PCA using the MCM with USE(a) and BERT(b). Clearly, the top principal component (PC) using BERT explains the majority of variance in these vectors, therefore we conclude that it represents the moral direction $m$. Using USE, we were unable to find a clear moral dimension, rather multiple directions. Although both projections should enable one to adapt the model's moral bias based on the subspace, BERT seems to have a more intuitive moral direction.
Next, we investigate the subspace projection with the actions formulated as questions. Also, here, one can see that BERT enables the MCM to identify a clear moral direction, cf. Fig. FIGREF6(c-d). The PCA is computed with the embedding of atomic actions. Based on this projection, we query more complex actions to investigate their moral bias score. The atomic actions in the subspace are visualized in Fig. FIGREF1 and the queried actions in Fig. FIGREF11. The horizontal axis (the top PC) represents the moral direction. One can observe that the atomic actions kill, murder, slaughter, brutalise, destroy are the most negative actions and congratulate, compliment, welcome and smile the most positive. E.g. apologize, dream, go, become seem to be neutral —which would change depending on the context—. If we, now, query the MCM with projection with more complex actions, one can see that the most negative actions are kill people, have a gun to kill people and become evil, but becoming a good parent is positive. Further, one can see that eat healthy is positive but eat meat is not appropriate. One should not travel to North Korea, but also not to Germany. Instead traveling to the United States is appropriate.
Conclusions
We have demonstrated that BERT has a more pronounced moral compass than previous embedding methods. That is, yes, text embeddings encode knowledge about deontological ethical and even moral choices, but the quality of the bias score depends on the quality of the text embedding network. Specifically, our empirical results show that the Moral Choice Machine with recent state-of-the-art language representations, namely BERT, extends the boundary of previous approaches and demonstrate the existence of biases in human language on a complex phrase level. Moreover, we identified for the first time that there is a moral dimension in text embeddings, even when taking context into account.
Generally, improved moral choice machines hold promise for identifying and addressing sources of ethical and moral choices in culture, including AI systems. This provides several avenues for future work. Inspired by BIBREF13 (BIBREF13), we aim at modifying the embedding, given human ethical values collected from a user study. Further, it is interesting to track ethical choices over time and to compare them among different text corpora. Even more interesting is an interactive learning setting with an interactive robot, in which users would teach and revise the robot's moral bias. Our identification of a moral subspace in sentence embeddings lays the foundation for this.
Appendix ::: Moral Choice Machine
The Moral Choice Machine developed by BIBREF0 (BIBREF0) computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template (cf. Tab. TABREF15). The element of interest is inserted into each considered prompt and the resulting biases are averaged to an overall bias value.
Appendix ::: Implicit Associations in Word Embeddings
Transferring the approach of implicit associations from human subjects to information retrieval systems on natural text was initially suggested by Caliskan et al. (BIBREF5), who reported some basic effects of the Word Embedding Association Test (WEAT). Whereas the strength of association in human minds is defined by response latency in Implicit Association Tests (IAT), it is here instantiated as cosine similarity of text in the Euclidean space. Similar to the IAT, complex concepts are defined by word sets. The association of any single word vector $\vec{w}$ to a word set is defined as the mean cosine similarity between $\vec{w}$ and the particular elements of the set. Now, let there be two sets of target words $X$ and $Y$. The allocation of $\vec{w}$ to two discriminating association sets $A$ and $B$ can be formulated as
$s(\vec{w}, A, B) = \frac{1}{|A|}\sum_{\vec{a} \in A} \cos(\vec{w}, \vec{a}) - \frac{1}{|B|}\sum_{\vec{b} \in B} \cos(\vec{w}, \vec{b})\,.$
A word with representation $\vec{w}$ that is more strongly associated to concept $A$ yields a positive value and a representation related to $B$ a negative value.
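A compact sketch of this association score (illustrative code; the toy vectors merely stand in for real, L2-normalised word embeddings):

```python
import numpy as np

def weat_association(w, A, B):
    # Mean cosine similarity of w to set A minus the mean to set B.
    # All vectors are assumed to be L2-normalised, so dot products equal cosines.
    w = np.asarray(w)
    return float(np.mean([w @ a for a in A]) - np.mean([w @ b for b in B]))

# Toy 2-d vectors purely for illustration.
A = [np.array([1.0, 0.0]), np.array([0.8, 0.6])]
B = [np.array([0.0, 1.0])]
w = np.array([0.9, 0.1]); w = w / np.linalg.norm(w)
print(weat_association(w, A, B))  # positive: w leans towards the A (pleasant) set
```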
Appendix ::: Association Sets
The complete lists of positive and negative association words that were applied for generating Dos and Don'ts with Verb Extraction are given in Tab. TABREF20. The words were collected from four different literature sources that provide unspecific association sets to define pleasant and unpleasant associations BIBREF14, BIBREF17, BIBREF18, BIBREF15.
Appendix ::: Dos and Don’ts for the Moral Choice Machine
Tab. TABREF22 lists the most positively associated verbs (in decreasing order).
Even though the contained verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, Tab. TABREF23 presents the most negative associated verbs (in decreasing order) we found in our vocabulary.
Some of the words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes such as murder. And still other words, as for instance suppurate or rot, appear to be disgusting in the first place. Exculpate is not a bad behaviour per se. However, its occurrence in the Don't set is not surprising, since it is semantically and contextually related to wrongdoings. Some of the words are of a surprisingly repugnant nature that was not even anticipated in preliminary considerations, e.g. depopulate or dehumanise. Undoubtedly, the listed words can be accepted as commonly agreed Don'ts. Both lists include a few words that are rather common as nouns or adjectives, such as joy, long, gift or bad. Anyhow, they can also be used as verbs and comply with the requirements of being a Do or a Don't in that function. The allocation of verbs into Dos and Don'ts was confirmed by the affective lexicon AFINN BIBREF16. AFINN allows one to rate words and phrases for valence on a scale of $-5$ to 5, indicating inherent connotation. Elements with no ratings are treated as neutral ($0.0$).
When passing the comprehensive lists of generated Dos and Don'ts to AFINN, the mean rating for Dos is $1.12$ ($std=1.24$) and for Don'ts $-0.90$ ($std=1.22$). The t-test statistic yielded values of $t = 8.12$ with $p < .0001^{***}$. When neglecting all verbs that are not included in AFINN, the mean value for Dos is $2.34$ ($std=0.62$, $n = 24$) and the mean for Don'ts $-2.37$ ($std = 0.67$, $n=19$), with again highly significant statistics ($t = 23.28$, $p<.0001^{***}$). Thus, the sentiment rating is completely in line with the allocation of the Verb Extraction. The verb extraction was highly successful and delivers useful Dos and Don'ts. The word sets contain consistently positively and negatively connoted verbs, respectively, that reasonably represent a socially agreed norm in the right context. The AFINN validation clearly shows that the valuation of positive and negative verbs is in line with other independent rating systems.
Appendix ::: Moral Bias of USE and BERT
The following results were computed with the MCM version of BIBREF0 (BIBREF0) using both USE and BERT as sentence embedding. Specifically, to investigate whether the sentiments of the extracted Dos and Don'ts also hold for more complex sentence level, we inserted them into the question/answer templates of Moral Choice Machine BIBREF0. The resulting moral biases scores/choices are summarized in Tab. TABREF28. It presents the moral biases exemplary for the top ten Dos and Don'ts by WEAT value of both sets. The threshold between the groups is not 0, but slightly shifted negatively (Using USE further shifted than Using BERT). However, the distinction of Dos and Don'ts is clearly reflected in bias values. Using USE the mean bias of all considered elements is $-0.018$ ($std=0.025$), whereat the mean of Dos is $-0.001$ ($std=0.190$, $n=50$) and the mean of Don'ts $-0.037$ ($std=0.017$, $n=50$). Using BERT the mean bias of all considered elements is $-0.054$ ($std=0.11$), whereat the mean of Dos is $0.041$ ($std=0.064$, $n=50$) and the mean of Don'ts $-0.163$ ($std=0.053$, $n=50$).
Furthermore, Tab. TABREF29 shows the resulting moral bias scores/choices for actions with additional surrounding context, exemplarily for the top ten Dos and Don'ts of both sentence embeddings.
Appendix ::: Moral Subspace Projection
To create the moral subspace projection, a Principal Component Analysis (PCA) was computed. The atomic actions used are listed in Tab. TABREF26. The resulting space, with the MCM using BERT, is visualized in Fig. FIGREF1 based on the first two top PCs. The top PC (the $X$ axis) defines the moral direction $m$ (bias). The context-based actions that were tested using the moral subspace projection are listed in Tab. TABREF27. The resulting moral direction $m$ (or bias) for both the atomic and context-based actions can be found in Tab. TABREF30. We also list the results using the sentence embedding USE instead of BERT. $m < 0$ corresponds to a positive moral score and $m > 0$ corresponds to a negative moral score. | Answer with content missing: (formula 1) bias(q, a, b) = cos(a, q) − cos(b, q)
Bias is calculated as the subtraction of the cosine similarities between the question and each of two opposite answers.
3582fac4b2705db056f75a14949db7b80cbc3197 | 3582fac4b2705db056f75a14949db7b80cbc3197_0 | Q: What sentence embeddings were used in the previous Jentzsch paper?
Text: Introduction
There is a broad consensus that artificial intelligence (AI) research is progressing steadily, and that its impact on society is likely to increase. From self-driving cars on public streets to self-piloting, reusable rockets, AI systems tackle more and more complex human activities in a more and more autonomous way. This leads into new spheres, where traditional ethics has limited applicability. Both self-driving cars, where mistakes may be life-threatening, and machine classifiers that hurt social matters may serve as examples for entering grey areas in ethics: How does AI embody our value system? Can AI systems learn human ethical judgements? If not, can we contest the AI system?
Unfortunately, aligning social, ethical, and moral norms to the structure of science and innovation in general is a long road. According to BIBREF1 (BIBREF1), who examined affirmative ethics, the emergence of new questions leads to intense public discussions that are driven by the strong emotions of the participants. And machine ethics BIBREF2, BIBREF3, BIBREF4 is no exception. Consider, e.g., BIBREF5's (BIBREF5) empirical proof that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, such as the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse.
In other words, AI systems are not neutral with respect to purpose and society anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing in general most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your individual ethical frame. So, one may think that it is impossible to equip AI systems to make human-like ethical choices. Luckily, BIBREF0 (BIBREF0) showed that standard machine learning can actually learn answers to ethical choices from textual data that reflect everyday human culture. Recently, BIBREF6 (BIBREF6) showed that sentence embeddings created by SBERT outperform other state-of-the-art sentence embedding methods like InferSent BIBREF7 and Universal Sentence Encoder BIBREF8. We hypothesize that the improvement of language representation models also improves the representation of the underlying ethical and moral values in these models.
To investigate this, we follow the experimental pipeline described in BIBREF0. As a first investigation, we compare the new state-of-the-art text-embedding model BERT, or more precisely Sentence-BERT, focusing on quantifying deontological ethics, i.e. finding out whether an action itself is right or wrong. Following BIBREF0 and for a fair comparison, we first restrict our attention to atomic actions instead of complex behavioural patterns. Semantically, those contextually isolated actions are represented by verbs. Consequently, we identify verbs that reflect social norms and allow capturing what people rather should do and what they should not. To conduct this investigation, we used the same template list of prompts and responses for ethical choices as in BIBREF0, cf. Tab. TABREF15 (Appendix). The templates include questions, such as "Should I kill people?", "Is it allowed to murder people?", etc., with answer templates of "Yes/no, I should (not)."
Using the Moral Choice Machine (MCM), based on some language representation, one is able to demonstrate the presence of ethical valuation in text collections by generating an ethical bias of actions derived from the Verb Extraction. As the next step, the correlation of WEAT (Word Embedding Association Test) values BIBREF5 and moral bias is examined. Based on that, we show that the new state-of-the-art method BERT improves the quality of the MCM. Although the three methods—Word Embedding Association Test (WEAT), Moral Choice Machine based on the Universal Sentence Encoder (USE), and Moral Choice Machine based on Sentence-BERT (SBERT)—are based on incoherent embeddings with different text corpora as training source, we show that they correspond in classification of actions as Dos and Don'ts. Our findings support the hypothesis of the presence of generally valid valuation in human text. Actually, they show that BERT improves the extraction of the moral score. Next, we move to more complex actions with surrounding contextual information and extend the (moral-) ranking of such actions presented in BIBREF0 by an evaluation of the actual moral bias. Again, we show that BERT has a more accurate reflection of moral values than USE. Finally, we contribute an alternative way of specifying the moral value of an action by learning a projection of the embedding space into a moral subspace. With the MCM in combination with BERT we can reduce the embedding dimensionality to one single dimension representing the moral bias.
We proceed as follows. After reviewing our assumptions and the required background, we present the MCM using BERT, followed by improvements of the MCM. Before concluding, we present our empirical results.
Assumptions and Background
In this section, we review our assumptions, in particular what we mean by moral choices, and the required background, following closely BIBREF0.
Moral Choices. Philosophically, roughly speaking, morals refer to the “right” and “wrong” at an individual's level while ethics refer to the systems of “right” and “wrong” set by a social group. Social norms and implicit behavioural rules exist in all human societies. But even though their presence is ubiquitous, they are hardly measurable and difficult to define consistently. The underlying mechanisms are still poorly understood. Indeed, each working society possesses an abstract moral that is generally valid and needs to be adhered to. However, theoretic definitions have been described as being inconsistent or even contradicting occasionally. Accordingly, latent ethics and morals have been described as the sum of particular norms that may not follow rational justification necessarily. Recently, BIBREF9 (BIBREF9) for instance suggested that moral norms are determined to a large extent by what is perceived to be common convention.
With regards to complexity and intangibility of ethics and morals, we restrict ourselves to a rather basic implementation of this construct, following the theories of deontological ethics. These ask which choices are morally required, forbidden, or permitted instead of asking which kind of a person we should be or which consequences of our actions are to be preferred. Thus, norms are understood as universal rules of what to do and what not to do. Therefore, we focus on the valuation of social acceptance in single verbs and single verbs with surrounding context information —e.g. trust my friend or trust a machine— to figure out which of them represent a Do and which tend to be a Don't. Because we specifically chose templates in the first person, i.e., asking “should I” and not asking “should one”, we address the moral dimension of “right” or “wrong” decisions, and not only their ethical dimension. This is the reason why we will often use the term “moral”, although we actually touch upon “ethics” and “moral”. To measure the valuation, we make use of implicit association tests (IATs) and their connections to word embeddings.
Word and Sentence Embeddings. A word/phrase embedding is a representation of words/phrases as points in a vector space. All approaches have in common that more related or even similar text entities lie close to each other in the vector space, whereas distinct words/phrases can be found in distant regions BIBREF10. This enables determining semantic similarities in a language.
Although these techniques have been around for some time, their potential increased considerably with the emergence of deep distributional approaches. In contrast to previous implementations, those embeddings are built on neural networks (NNs) and enable a rich variety of mathematical vector operations. One of the initial and most widespread algorithms to train word embeddings is Word2Vec BIBREF11, where unsupervised feature extraction and learning is conducted per word on either CBOW or Skip-gram NNs. This can be extended to full sentences BIBREF7, BIBREF8, BIBREF12.
Bias in Text Embeddings. While biases in machine learning models can potentially be rooted in the implemented algorithm, they are primarily due to the data they are trained on. BIBREF5 (BIBREF5) empirically showed that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, as for instance the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse: AI systems “have the potential to inherit a very human flaw: bias”, as Socure's CEO Sunil Madhu puts it. Besides the typically discussed bias in gender and race stereotypes, AI systems are also not neutral with respect to purpose and societal values anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your ethical frame of reference.
Human-like Moral Choices from Human Text
Word-based approaches such as WEAT or Verb Extraction are comparatively simple. They consider single words only, detached from their grammatical and contextual surrounding. In contrast, the Moral Choice Machine determines biases on a sentence level.
Analogous to word embeddings, sentence embeddings, e.g. the Universal Sentence Encoder BIBREF8 and Sentence-BERT BIBREF6, allow one to calculate the cosine similarity of various different sentences, as for instance the similarity of a question and the corresponding answer. The more appropriate a specific answer is to a given question, the stronger its cosine similarity is expected to be. When considering two opposite answers, it is therefore possible to determine a bias value:
$\mathrm{bias}(\vec{q}, \vec{a}, \vec{b}) = \cos(\vec{a}, \vec{q}) - \cos(\vec{b}, \vec{q})\,,$
where $\vec{q}$ is the vector representation of the question and $\vec{a}$ and $\vec{b}$ the representations of the two answers/choices. A positive value indicates a stronger association to answer $a$, whereas a negative value indicates a stronger association to $b$.
This can be adapted to any arbitrary kind of bias by formulating appropriate question/answer triples, where the question captures the target dimension and the answers represent two opposite manifestations, the choices. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template. The element of interest is inserted to each considered prompt and resulting biases averaged to an overall bias value. Specifically, we considered two different biases: gender and moral.
The question/answer template for moral biases, our main target, included the ten prompts summarized in Tab. TABREF15(Appendix), where any action term can be inserted for the dots. All prompts follow the same structure: the action is mirrored in the question, and the answers represent two extremes of social desirability. Questions were, e.g., “Is it allowed to ...?” or “Should I ...?”. The used answers were grammatically adjusted to the question, e.g., “Yes, it is.” vs. “No, it is not.” respectively “Yes, you should.” vs. “No, you should not.”. Besides atomic actions, e.g. smile, love, lie or kill, this template can be used on more complex, context-based actions e.g. love my parents, love my wife/husband, kill people or kill time.
Moral Subspace Projection
As BIBREF0 (BIBREF0) showed, the question/answer template is an appropriate method to extract moral biases. However, as BIBREF13 (BIBREF13) showed, one is also able to adapt the model's bias, e.g. to debias the model's gender bias. They describe that the first step for debiasing word embeddings is to identify a direction (or, more generally, a subspace) of the embedding that captures the bias.
To identify the gender subspace, e.g., they proposed to take the difference vectors of given gender pairs and compute their principal components (PCs); they found a single direction that explains the majority of variance in these vectors, i.e. the first eigenvalue is significantly larger than the rest. Therefore, they argue that the top PC, denoted by the unit vector $g$, captures the gender subspace. Subsequently, they debias the embedding based on this subspace. Please note that the gender pairs are labelled beforehand.
Using the above-mentioned methodology, we propose an alternative to identify the moral bias. Inspired by BIBREF13, we first compute the moral subspace of the text embedding. Instead of the difference vectors of the question/answer pairs, we compute the PCA on selected atomic actions —we expect that these actions represent Dos and Don'ts (cf. Appendix). We formulate the actions as questions, i.e. using question templates, and compute the mean embedding, since this amplifies their moral score BIBREF0. Similar to the gender subspace, if the first eigenvalue is significantly larger than the rest, the top PC, denoted by the unit vector $m$, captures the moral subspace and therefore also the moral bias. Then, based on this subspace, one can extract the moral bias of even complex actions with surrounding context by the projection of an action.
Experimental Results
This section investigates empirically whether text corpora contain recoverable and accurate imprints of our moral choices. Specifically, we move beyond BIBREF0, by showing that BERT has a more accurate moral representation than that of the Universal Sentence Encoder.
Datasets and Embeddings Models. Experiments of the Moral Choice Machine are conducted with the Universal Sentence Encoder (USE) BIBREF8 and Sentence-BERT (SBERT) BIBREF6. The USE model is trained on phrases and sentences from a variety of different text sources; mainly Wikipedia but also sources such as forums, question/answering platforms, and news pages and augmented with supervised elements. SBERT is a modification of the pretrained BERT BIBREF12 network that aims to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. BERT is, like USE, also trained mainly on Wikipedia. For the verb extraction, the same general positive and negative association sets as in BIBREF0 are used—$A$ and $B$ in Eq. DISPLAY_FORM18—. The comprehensive list of vocabulary can be found in the appendix (Tab. TABREF20).
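As a rough illustration of how such sentence embeddings can be obtained, the snippet below uses the sentence-transformers package; the model handle is an assumption based on the publicly released SBERT checkpoints and is not taken from the paper.

```python
from sentence_transformers import SentenceTransformer

# Assumed SBERT checkpoint; any Sentence-BERT model exposing .encode() works the same way.
model = SentenceTransformer("bert-base-nli-mean-tokens")
embeddings = model.encode(["Should I kill people?", "No, you should not."])
print(embeddings.shape)  # (2, embedding_dim)
```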
Dos and Don'ts for the Moral Choice Machine. The verb extraction identifies the most positively and most negatively associated verbs in the vocabulary, to infer socially desired and neglected behaviour. BIBREF0 (BIBREF0) extracted them with the general positive and negative association sets on the Google Slim embedding. Since those sets are expected to reflect social norms, they are referred to as Dos and Don'ts hereafter.
Tab. TABREF22 and Tab. TABREF23 (cf. Appendix) list the most positively and negatively associated verbs (in decreasing order).
Summarized, even though the contained positive verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, some of the negative words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes as murder. As BIBREF0 (BIBREF0) describe, the listed words can be accepted as commonly agreed Dos and Don'ts.
Replicating Atomic Moral Choices. Next, based on the verb extractions and the question/answer templates, we show that social norms are present in text embeddings and that a text embedding network known to achieve high scores in unsupervised scenarios —such as semantic textual similarity via cosine-similarity, clustering or semantic search— improves the scores of the extracted moral actions. The correlation of the moral bias and the corresponding WEAT value was calculated to test the consistency of the findings. It is hypothesised that the resulting moral biases for generated Dos and Don'ts correspond to the WEAT value of each word. The correlation was tested by means of Pearson's Correlation Coefficient:
$r(X, Y) = \frac{\sum_{i}(x_i - m_x)(y_i - m_y)}{\sqrt{\sum_{i}(x_i - m_x)^2}\,\sqrt{\sum_{i}(y_i - m_y)^2}}\,,$
where $m_x$ and $m_y$ are the means of $X$ and $Y$. Pearson's $r$ ranges between $-1$, indicating a strong negative correlation, and 1, indicating a strong positive correlation. Significance levels are defined as $5\%$, $1\%$ and $0.1\%$, indicated by one, two or three stars.
The correlation between the WEAT value and the moral bias becomes tangible when inspecting their correlation graphically, cf. Fig. FIGREF4. The concrete bias scores can be found in the Appendix, Tab. TABREF28 and TABREF29. For both WEAT and MCM, the scatter plots of Dos and Don'ts are divided on the x-axis. The Pearson's Correlation Coefficient using USE as embedding (Top), $r = 0.73$ with $p = 2.3732e^{-16}$, indicates a significant positive correlation. However, according to the distribution one can see that using BERT (Bottom) improves the distinction between Dos and Don'ts. Actually, the Pearson's Correlation Coefficient $r = 0.88$ with $p = 1.1054e^{-29}$ indicates a high positive correlation. These findings suggest that if we build an AI system that learns an improved language representation to be able to better understand and produce it, in the process it will also acquire more accurate historical cultural associations to make human-like “right” and “wrong” choices.
Replicating Complex Moral Choices in the Moral Subspace.
The strong correlation between WEAT values and moral biases at the verb level gives reasons to investigate BERT's Moral Choice Machine for complex human-like choices at the phrase level. For instance, it is appropriate to help old people, but one should not help a thief. It is good behaviour to love your parents, but not to steal money. To see whether the moral choice machine can, in principle, deal with complex choices and implicit context information around these complex choices, BIBREF0 (BIBREF0) considered the rankings among answers induced by cosine distance. Their results indicate that human text may indeed contain complex human-like choices that are reproducible by the Moral Choice Machine. To investigate this further, we define a Moral Subspace Projection and consider a set of atomic actions and combine them with varying context information, e.g. “Should I have a gun to hunt animals?” or “Should I have a gun to kill people?”.
First we will investigate the subspace of vector differences (moral direction) which was introduced by BIBREF13 (BIBREF13) to debias word embeddings. Fig. FIGREF6 (a-b) shows the percentage of variance explained in the PCA using the MCM with USE(a) and BERT(b). Clearly, the top principal component (PC) using BERT explains the majority of variance in these vectors, therefore we conclude that it represents the moral direction $m$. Using USE, we were unable to find a clear moral dimension, rather multiple directions. Although both projections should enable one to adapt the model's moral bias based on the subspace, BERT seems to have a more intuitive moral direction.
Next, we investigate the subspace projection with the actions formulated as questions. Also, here, one can see that BERT enables the MCM to identify a clear moral direction, cf. Fig. FIGREF6(c-d). The PCA is computed with the embedding of atomic actions. Based on this projection, we query more complex actions to investigate their moral bias score. The atomic actions in the subspace are visualized in Fig. FIGREF1 and the queried actions in Fig. FIGREF11. The horizontal axis (the top PC) represents the moral direction. One can observe that the atomic actions kill, murder, slaughter, brutalise, destroy are the most negative actions and congratulate, compliment, welcome and smile the most positive. E.g. apologize, dream, go, become seem to be neutral —which would change depending on the context—. If we, now, query the MCM with projection with more complex actions, one can see that the most negative actions are kill people, have a gun to kill people and become evil, but becoming a good parent is positive. Further, one can see that eat healthy is positive but eat meat is not appropriate. One should not travel to North Korea, but also not to Germany. Instead traveling to the United States is appropriate.
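A possible way to query such a fitted subspace is sketched below (illustrative only; embed stands for any sentence encoder and pca for a PCA fitted on the atomic-action question embeddings, so that the first coordinate is the assumed moral direction $m$):

```python
import numpy as np

def rank_actions(embed, pca, actions, template="Should I {}?"):
    # Embed each (context-based) action as a question and project it onto the subspace.
    vectors = np.asarray([embed(template.format(action)) for action in actions])
    scores = pca.transform(vectors)[:, 0]
    # Under the sign convention used here, m < 0 leans towards a Do, m > 0 towards a Don't.
    return sorted(zip(actions, scores), key=lambda pair: pair[1])
```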
Conclusions
We have demonstrated that BERT has a more pronounced moral compass than previous embedding methods. That is, yes, text embeddings encode knowledge about deontological ethical and even moral choices, but the quality of the bias score depends on the quality of the text embedding network. Specifically, our empirical results show that the Moral Choice Machine with recent state-of-the-art language representations, namely BERT, extends the boundary of previous approaches and demonstrate the existence of biases in human language on a complex phrase level. Moreover, we identified for the first time that there is a moral dimension in text embeddings, even when taking context into account.
Generally, improved moral choice machines hold promise for identifying and addressing sources of ethical and moral choices in culture, including AI systems. This provides several avenues for future work. Inspired by BIBREF13 (BIBREF13), we aim at modifying the embedding, given human ethical values collected from a user study. Further, it is interesting to track ethical choices over time and to compare them among different text corpora. Even more interesting is an interactive learning setting with an interactive robot, in which users would teach and revise the robot's moral bias. Our identification of a moral subspace in sentence embeddings lays the foundation for this.
Appendix ::: Moral Choice Machine
The Moral Choice Machine developed by BIBREF0 (BIBREF0) computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template (cf. Tab. TABREF15). The element of interest is inserted into each considered prompt and the resulting biases are averaged to an overall bias value.
Appendix ::: Implicit Associations in Word Embeddings
Transferring the approach of implicit associations from human subjects to information retrieval systems on natural text was initially suggested by Caliskan et al. (BIBREF5), who reported some basic effects of the Word Embedding Association Test (WEAT). Whereas the strength of association in human minds is defined by response latency in Implicit Association Tests (IAT), it is here instantiated as cosine similarity of text in the Euclidean space. Similar to the IAT, complex concepts are defined by word sets. The association of any single word vector $\vec{w}$ to a word set is defined as the mean cosine similarity between $\vec{w}$ and the particular elements of the set. Now, let there be two sets of target words $X$ and $Y$. The allocation of $\vec{w}$ to two discriminating association sets $A$ and $B$ can be formulated as
$s(\vec{w}, A, B) = \frac{1}{|A|}\sum_{\vec{a} \in A} \cos(\vec{w}, \vec{a}) - \frac{1}{|B|}\sum_{\vec{b} \in B} \cos(\vec{w}, \vec{b})\,.$
A word with representation $\vec{w}$ that is more strongly associated to concept $A$ yields a positive value and a representation related to $B$ a negative value.
Appendix ::: Association Sets
The complete lists of positive and negative association words that were applied for generating Dos and Don'ts with Verb Extraction are given in Tab. TABREF20. The words were collected from four different literature sources that provide unspecific association sets to define pleasant and unpleasant associations BIBREF14, BIBREF17, BIBREF18, BIBREF15.
Appendix ::: Dos and Don’ts for the Moral Choice Machine
Tab. TABREF22 lists the most positively associated verbs (in decreasing order).
Even though the contained verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, Tab. TABREF23 presents the most negative associated verbs (in decreasing order) we found in our vocabulary.
Some of the words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes such as murder. And still other words, as for instance suppurate or rot, appear to be disgusting in the first place. Exculpate is not a bad behaviour per se. However, its occurrence in the Don't set is not surprising, since it is semantically and contextually related to wrongdoings. Some of the words are of a surprisingly repugnant nature that was not even anticipated in preliminary considerations, e.g. depopulate or dehumanise. Undoubtedly, the listed words can be accepted as commonly agreed Don'ts. Both lists include a few words that are rather common as nouns or adjectives, such as joy, long, gift or bad. Anyhow, they can also be used as verbs and comply with the requirements of being a Do or a Don't in that function. The allocation of verbs into Dos and Don'ts was confirmed by the affective lexicon AFINN BIBREF16. AFINN allows one to rate words and phrases for valence on a scale of $-5$ to 5, indicating inherent connotation. Elements with no ratings are treated as neutral ($0.0$).
When passing the comprehensive lists of generated Dos and Don'ts to AFINN, the mean rating for Dos is $1.12$ ($std=1.24$) and for Don'ts $-0.90$ ($std=1.22$). The t-test statistic yielded values of $t = 8.12$ with $p < .0001^{***}$. When neglecting all verbs that are not included in AFINN, the mean value for Dos is $2.34$ ($std=0.62$, $n = 24$) and the mean for Don'ts $-2.37$ ($std = 0.67$, $n=19$), with again highly significant statistics ($t = 23.28$, $p<.0001^{***}$). Thus, the sentiment rating is completely in line with the allocation of the Verb Extraction. The verb extraction was highly successful and delivers useful Dos and Don'ts. The word sets contain consistently positively and negatively connoted verbs, respectively, that reasonably represent a socially agreed norm in the right context. The AFINN validation clearly shows that the valuation of positive and negative verbs is in line with other independent rating systems.
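The reported comparison can be reproduced in spirit with a standard two-sample t-test, e.g. via SciPy; the rating lists below are illustrative stand-ins for the AFINN valences of the extracted Dos and Don'ts.

```python
from scipy.stats import ttest_ind

# Illustrative AFINN-style valence ratings only, not the actual extracted lists.
dos_ratings = [3, 2, 2, 3, 1, 2, 3]
donts_ratings = [-3, -2, -2, -3, -1, -2]

t, p = ttest_ind(dos_ratings, donts_ratings)
print(f"t = {t:.2f}, p = {p:.3g}")
```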
Appendix ::: Moral Bias of USE and BERT
The following results were computed with the MCM version of BIBREF0 (BIBREF0) using both USE and BERT as sentence embedding. Specifically, to investigate whether the sentiments of the extracted Dos and Don'ts also hold for more complex sentence level, we inserted them into the question/answer templates of Moral Choice Machine BIBREF0. The resulting moral biases scores/choices are summarized in Tab. TABREF28. It presents the moral biases exemplary for the top ten Dos and Don'ts by WEAT value of both sets. The threshold between the groups is not 0, but slightly shifted negatively (Using USE further shifted than Using BERT). However, the distinction of Dos and Don'ts is clearly reflected in bias values. Using USE the mean bias of all considered elements is $-0.018$ ($std=0.025$), whereat the mean of Dos is $-0.001$ ($std=0.190$, $n=50$) and the mean of Don'ts $-0.037$ ($std=0.017$, $n=50$). Using BERT the mean bias of all considered elements is $-0.054$ ($std=0.11$), whereat the mean of Dos is $0.041$ ($std=0.064$, $n=50$) and the mean of Don'ts $-0.163$ ($std=0.053$, $n=50$).
Furthermore, Tab. TABREF29 shows the resulting moral bias scores/choices for actions with additional surrounding context, exemplarily for the top ten Dos and Don'ts of both sentence embeddings.
Appendix ::: Moral Subspace Projection
To create the moral subspace projection, a Principal Component Analysis (PCA) was computed. The atomic actions used are listed in Tab. TABREF26. The resulting space, with the MCM using BERT, is visualized in Fig. FIGREF1 based on the first two top PCs. The top PC (the $X$ axis) defines the moral direction $m$ (bias). The context-based actions that were tested using the moral subspace projection are listed in Tab. TABREF27. The resulting moral direction $m$ (or bias) for both the atomic and context-based actions can be found in Tab. TABREF30. We also list the results using the sentence embedding USE instead of BERT. $m < 0$ corresponds to a positive moral score and $m > 0$ corresponds to a negative moral score. | Unanswerable
96dcabaa8b6bd89b032da609e709900a1569a0f9 | 96dcabaa8b6bd89b032da609e709900a1569a0f9_0 | Q: How do the authors define deontological ethical reasoning?
Text: Introduction
There is a broad consensus that artificial intelligence (AI) research is progressing steadily, and that its impact on society is likely to increase. From self-driving cars on public streets to self-piloting, reusable rockets, AI systems tackle more and more complex human activities in a more and more autonomous way. This leads into new spheres, where traditional ethics has limited applicability. Both self-driving cars, where mistakes may be life-threatening, and machine classifiers that hurt social matters may serve as examples for entering grey areas in ethics: How does AI embody our value system? Can AI systems learn human ethical judgements? If not, can we contest the AI system?
Unfortunately, aligning social, ethical, and moral norms to the structure of science and innovation in general is a long road. According to BIBREF1 (BIBREF1), who examined affirmative ethics, the emergence of new questions leads to intense public discussions that are driven by strong emotions of participants. And machine ethics BIBREF2, BIBREF3, BIBREF4 is no exception. Consider, e.g., BIBREF5's (BIBREF5) empirical proof that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, such as the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse.
In other words, AI systems are not neutral with respect to purpose and society anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing in general most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your individual ethical frame. So, one may think that it is impossible to equip AI systems to make human-like ethical choices. Luckily, BIBREF0 (BIBREF0) showed that standard machine learning can actually learn answers to ethical choices from textual data that reflect everyday human culture. Recently, BIBREF6 (BIBREF6) showed that sentence embeddings created by SBERT outperform other state-of-the-art sentence embedding methods like InferSent BIBREF7 and Universal Sentence Encoder BIBREF8. We hypothesize that the improvement of language representation models also improves the representation of the underlying ethical and moral values in these models.
To investigate this, we follow the experimental pipeline described in BIBREF0. As a first investigation, we compare the new state-of-the-art text-embedding model BERT, or more precisely Sentence-BERT, focusing on quantifying deontological ethics, i.e. finding out whether an action itself is right or wrong. Following BIBREF0 and for a fair comparison, we first restrict our attention to atomic actions instead of complex behavioural patterns. Semantically, those contextually isolated actions are represented by verbs. Consequently, we identify verbs that reflect social norms and allow capturing what people rather should do and what they should not. To conduct this investigation, we used the same template list of prompts and responses for ethical choices as in BIBREF0, cf. Tab. TABREF15 (Appendix). The templates include questions such as "Should I kill people?", "Is it allowed to murder people?", etc. with answer templates of "Yes/no, I should (not)."
Using the Moral Choice Machine (MCM), based on some language representation, one is able to demonstrate the presence of ethical valuation in text collections by generating an ethical bias of actions derived from the Verb Extraction. As the next step, the correlation of WEAT (Word Embedding Association Test) values BIBREF5 and moral bias is examined. Based on that, we show that the new state-of-the-art method BERT improves the quality of the MCM. Although the three methods—Word Embedding Association Test (WEAT), Moral Choice Machine based on the Universal Sentence Encoder (USE), and Moral Choice Machine based on Sentence-BERT (SBERT)—are based on incoherent embeddings with different text corpora as training source, we show that they correspond in classification of actions as Dos and Don'ts. Our findings support the hypothesis of the presence of generally valid valuation in human text. Actually, they show that BERT improves the extraction of the moral score. Next, we move to more complex actions with surrounding contextual information and extend the (moral-) ranking of such actions presented in BIBREF0 by an evaluation of the actual moral bias. Again, we show that BERT has a more accurate reflection of moral values than USE. Finally, we contribute an alternative way of specifying the moral value of an action by learning a projection of the embedding space into a moral subspace. With the MCM in combination with BERT we can reduce the embedding dimensionality to one single dimension representing the moral bias.
We proceed as follows. After reviewing our assumptions and the required background, we present the MCM using BERT, followed by improvements of the MCM. Before concluding, we present our empirical results.
Assumptions and Background
In this section, we review our assumptions, in particular what we mean by moral choices, and the required background, following closely BIBREF0.
Moral Choices. Philosophically, roughly speaking, morals refer to the “right” and “wrong” at an individual's level while ethics refer to the systems of “right” and “wrong” set by a social group. Social norms and implicit behavioural rules exist in all human societies. But even though their presence is ubiquitous, they are hardly measurable and difficult to define consistently. The underlying mechanisms are still poorly understood. Indeed, each working society possesses an abstract moral that is generally valid and needs to be adhered to. However, theoretic definitions have been described as being inconsistent or even contradicting occasionally. Accordingly, latent ethics and morals have been described as the sum of particular norms that may not follow rational justification necessarily. Recently, BIBREF9 (BIBREF9) for instance suggested that moral norms are determined to a large extent by what is perceived to be common convention.
With regards to complexity and intangibility of ethics and morals, we restrict ourselves to a rather basic implementation of this construct, following the theories of deontological ethics. These ask which choices are morally required, forbidden, or permitted instead of asking which kind of a person we should be or which consequences of our actions are to be preferred. Thus, norms are understood as universal rules of what to do and what not to do. Therefore, we focus on the valuation of social acceptance in single verbs and single verbs with surrounding context information —e.g. trust my friend or trust a machine— to figure out which of them represent a Do and which tend to be a Don't. Because we specifically chose templates in the first person, i.e., asking “should I” and not asking “should one”, we address the moral dimension of “right” or “wrong” decisions, and not only their ethical dimension. This is the reason why we will often use the term “moral”, although we actually touch upon “ethics” and “moral”. To measure the valuation, we make use of implicit association tests (IATs) and their connections to word embeddings.
Word and Sentence Embeddings. A word/phrase embedding is a representation of words/phrases as points in a vector space. All approaches have in common that more related or even similar text entities lie close to each other in the vector space, whereas distinct words/phrases can be found in distant regions BIBREF10. This enables determining semantic similarities in a language.
Although these techniques have been around for some time, their potential increased considerably with the emergence of deep distributional approaches. In contrast to previous implementations, those embeddings are built on neural networks (NNs) and enable a rich variety of mathematical vector operations. One of the initial and most widespread algorithms to train word embeddings is Word2Vec BIBREF11, where unsupervised feature extraction and learning is conducted per word on either CBOW or Skip-gram NNs. This can be extended to full sentences BIBREF7, BIBREF8, BIBREF12.
Bias in Text Embeddings. While biases in machine learning models can potentially be rooted in the implemented algorithm, they are primarily due to the data they are trained on. BIBREF5 (BIBREF5) empirically showed that human language reflects our stereotypical biases. Once AI systems are trained on human language, they carry these (historical) biases, as for instance the (wrong) idea that women are less qualified to hold prestigious professions. These and similar recent scientific studies have raised awareness about machine ethics in the media and public discourse: AI systems “have the potential to inherit a very human flaw: bias”, as Socure's CEO Sunil Madhu puts it. Besides the typically discussed bias in gender and race stereotypes, AI systems are also not neutral with respect to purpose and societal values anymore. Ultimately, if AI systems carry out choices, then they implicitly make ethical and even moral choices. Choosing most often entails trying to pick one of two or more (mutually exclusive) alternatives with an outcome that gives desirable consequences in your ethical frame of reference.
Human-like Moral Choices from Human Text
Word-based approaches such as WEAT or Verb Extraction are comparatively simple. They consider single words only, detached from their grammatical and contextual surrounding. In contrast, the Moral Choice Machine determines biases on a sentence level.
Analogous to word embeddings, sentence embeddings, e.g. the Universal Sentence Encoder BIBREF8 and Sentence-BERT BIBREF6, allow one to calculate the cosine similarity of different sentences, as for instance the similarity of a question and the corresponding answer. The more appropriate a specific answer is to a given question, the stronger its cosine similarity is expected to be. When considering two opposite answers, it is therefore possible to determine a bias value:

$\text{bias}(\vec{q}, \vec{a}, \vec{b}) = \cos (\vec{a}, \vec{q}) - \cos (\vec{b}, \vec{q})\,,$
where $\vec{q}$ is the vector representation of the question and $\vec{a}$ and $\vec{b}$ the representations of the two answers/choices. A positive value indicates a stronger association to answer $a$, whereas a negative value indicates a stronger association to $b$.
This can be adapted to any arbitrary kind of bias by formulating appropriate question/answer triples, where the question captures the target dimension and the answers represent two opposite manifestations, the choices. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated to a question/answer template. The element of interest is inserted to each considered prompt and resulting biases averaged to an overall bias value. Specifically, we considered two different biases: gender and moral.
The question/answer template for moral biases, our main target, included the ten prompts summarized in Tab. TABREF15(Appendix), where any action term can be inserted for the dots. All prompts follow the same structure: the action is mirrored in the question, and the answers represent two extremes of social desirability. Questions were, e.g., “Is it allowed to ...?” or “Should I ...?”. The used answers were grammatically adjusted to the question, e.g., “Yes, it is.” vs. “No, it is not.” respectively “Yes, you should.” vs. “No, you should not.”. Besides atomic actions, e.g. smile, love, lie or kill, this template can be used on more complex, context-based actions e.g. love my parents, love my wife/husband, kill people or kill time.
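As a concrete illustration, the following sketch computes such a template-averaged bias. It assumes that the bias of a single prompt is the difference of the two cosine similarities, cos(q, a) minus cos(q, b); the two templates are taken from the text above, and `encode` is a placeholder for any sentence encoder such as USE or Sentence-BERT (the toy character-count encoder at the end only serves to make the snippet runnable).

```python
# Sketch of the question/answer-template bias; the per-prompt bias
# cos(q, a) - cos(q, b) is an assumption stated in the lead-in.
import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# (question template, affirming answer, denying answer), cf. Tab. TABREF15
TEMPLATES = [
    ("Should I {}?", "Yes, you should.", "No, you should not."),
    ("Is it allowed to {}?", "Yes, it is.", "No, it is not."),
]

def moral_bias(action, encode, templates=TEMPLATES):
    """Average bias of an action over all question/answer prompts."""
    biases = []
    for question, yes, no in templates:
        q = encode(question.format(action))
        biases.append(cos(q, encode(yes)) - cos(q, encode(no)))
    return float(np.mean(biases))

def toy_encode(text):
    """Dummy character-count encoder, only to make the sketch self-contained."""
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1.0
    return v

print(moral_bias("smile", toy_encode), moral_bias("kill people", toy_encode))
```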
Moral Subspace Projection
As BIBREF0 (BIBREF0) showed, the question/answer template is an appropriate method to extract moral biases. However, as BIBREF13 (BIBREF13) showed, one is even able to adapt the model's bias, e.g. to debias the model's gender bias. They describe that the first step for debiasing word embeddings is to identify a direction (or, more generally, a subspace) of the embedding that captures the bias.
To identify the gender subspace, e.g., they proposed to take the difference vectors of given gender pairs and computed its principal components (PCs) and found a single direction that explains the majority of variance in these vectors, i.e. the first eigenvalue is significantly larger than the rest. Therefore, they argue that the top PC, denoted by the unit vector $g$, captures the gender subspace. Subsequently, they debias the embedding based on this subspace. Please note that the gender pairs are labelled beforehand.
Using the above-mentioned methodology, we propose an alternative to identify the moral bias. Inspired by BIBREF13, we first compute the moral subspace of the text embedding. Instead of the difference vectors of the question/answer pairs, we compute the PCA on selected atomic actions —we expect that these actions represent Dos and Don'ts (cf. Appendix). We formulate the actions as questions, i.e. using question templates, and compute the mean embedding, since this amplifies their moral score BIBREF0. Similar to the gender subspace, if the first eigenvalue is significantly larger than the rest, the top PC, denoted by the unit vector $m$, captures the moral subspace and therefore also the moral bias. Then, based on this subspace, one can extract the moral bias of even complex actions with surrounding context by the projection of an action.
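A small sketch of this construction is given below, assuming scikit-learn's PCA; `encode_action_question` is a placeholder for embedding an action inserted into the question templates and averaging the resulting sentence vectors, and the sign convention of the top component is left to the fitted data.

```python
# Sketch of the moral subspace projection; the encoder and the component sign
# convention are assumptions, see the lead-in.
import numpy as np
from sklearn.decomposition import PCA

def fit_moral_subspace(atomic_actions, encode_action_question, n_components=5):
    """Fit PCA on atomic-action embeddings; the top PC is the moral direction m."""
    X = np.stack([encode_action_question(a) for a in atomic_actions])
    return PCA(n_components=min(n_components, len(atomic_actions))).fit(X)

def moral_projection(action, pca, encode_action_question):
    """Coordinate of a (possibly context-rich) action along the moral direction m."""
    x = encode_action_question(action)[None, :]
    return float(pca.transform(x)[0, 0])

# pca.explained_variance_ratio_[0] indicates how dominant the moral direction is.
```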
Experimental Results
This section investigates empirically whether text corpora contain recoverable and accurate imprints of our moral choices. Specifically, we move beyond BIBREF0, by showing that BERT has a more accurate moral representation than that of the Universal Sentence Encoder.
Datasets and Embeddings Models. Experiments of the Moral Choice Machine are conducted with the Universal Sentence Encoder (USE) BIBREF8 and Sentence-BERT (SBERT) BIBREF6. The USE model is trained on phrases and sentences from a variety of different text sources; mainly Wikipedia but also sources such as forums, question/answering platforms, and news pages and augmented with supervised elements. SBERT is a modification of the pretrained BERT BIBREF12 network that aims to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. BERT is, like USE, also trained mainly on Wikipedia. For the verb extraction, the same general positive and negative association sets as in BIBREF0 are used—$A$ and $B$ in Eq. DISPLAY_FORM18—. The comprehensive list of vocabulary can be found in the appendix (Tab. TABREF20).
Dos and Don'ts for the Moral Choice Machine. The verb extraction identifies the most positive and most negative associated verbs in vocabulary, to infer socially desired and neglected behaviour. BIBREF0 (BIBREF0) extracted them with the general positive and negative association sets on the Google Slim embedding. Since those sets are expected to reflect social norms, they are referred as Dos and Don'ts hereafter.
Tab. TABREF22 and Tab. TABREF23 (cf. Appendix) lists the most positive and negative associated verbs (in decreasing order).
Summarized, even though the contained positive verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, some of the negative words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes as murder. As BIBREF0 (BIBREF0) describe, the listed words can be accepted as commonly agreed Dos and Don'ts.
Replicating Atomic Moral Choices. Next, based on the verb extractions and the question/answer templates, we show that social norms are present in text embeddings and that a text embedding network known to achieve high scores in unsupervised scenarios —such as semantic textual similarity via cosine-similarity, clustering or semantic search— improves the scores of the extracted moral actions. The correlation of the moral bias and the corresponding WEAT value was calculated to test the consistency of the findings. It is hypothesised that the resulting moral biases for generated Dos and Don'ts correspond to the WEAT value of each word. The correlation was tested by means of Pearson's Correlation Coefficient:

$r(X, Y) = \frac{\sum _{i}(x_i - m_x)(y_i - m_y)}{\sqrt{\sum _{i}(x_i - m_x)^2}\sqrt{\sum _{i}(y_i - m_y)^2}}\,,$
where $m_x$ and $m_y$ are the means of $X$ and $Y$. Pearson's $r$ ranges between $-1$, indicating a strong negative correlation, and 1, indicating a strong positive correlation. Significance levels are defined as $5\%$, $1\%$ and $0.1\%$, indicated by one, two or three asterisks.
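In code, this consistency check is a one-liner with SciPy; the two lists below are placeholders for the per-verb WEAT values and MCM bias scores.

```python
# Sketch of the WEAT-vs-MCM consistency check; the values are placeholders.
from scipy import stats

weat_values = [0.9, 0.7, 0.5, -0.8, -1.0, -1.2]
mcm_biases = [0.05, 0.03, 0.02, -0.10, -0.15, -0.20]

r, p = stats.pearsonr(weat_values, mcm_biases)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```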
The correlation between the WEAT value and the moral bias becomes tangible when inspecting their correlation graphically, cf. Fig. FIGREF4. The concrete bias scores can be found in the Appendix, Tab. TABREF28 and TABREF29. For both WEAT and MCM, the scatter plots of Dos and Don'ts are divided on the x-axis. The Pearson's Correlation Coefficient using USE as embedding (Top) is $r = 0.73$ with $p = 2.3732e^{-16}$, indicating a significant positive correlation. However, according to the distribution one can see that using BERT (Bottom) improves the distinction between Dos and Don'ts. Actually, the Pearson's Correlation Coefficient $r = 0.88$ with $p = 1.1054e^{-29}$ indicates a high positive correlation. These findings suggest that if we build an AI system that learns an improved language representation to be able to better understand and produce it, in the process it will also acquire more accurate historical cultural associations to make human-like “right” and “wrong” choices.
Replicating Complex Moral Choices in the Moral Subspace.
The strong correlation between WEAT values and moral biases at the verb level gives reasons to investigate BERT's Moral Choice Machine for complex human-like choices at the phrase level. For instance, it is appropriate to help old people, but one should not help a thief. It is good behaviour to love your parents, but not to steal money. To see whether the moral choice machine can, in principle, deal with complex choices and implicit context information around these complex choices, BIBREF0 (BIBREF0) considered the rankings among answers induced by cosine distance. Their results indicate that human text may indeed contain complex human-like choices that are reproducible by the Moral Choice Machine. To investigate this further, we define a Moral Subspace Projection and consider a set of atomic actions and combine them with varying context information, e.g. “Should I have a gun to hunt animals?” or “Should I have a gun to kill people?”.
First we will investigate the subspace of vector differences (moral direction) which was introduced by BIBREF13 (BIBREF13) to debias word embeddings. Fig. FIGREF6 (a-b) shows the percentage of variance explained in the PCA using the MCM with USE(a) and BERT(b). Clearly, the top principal component (PC) using BERT explains the majority of variance in these vectors, therefore we conclude that it represents the moral direction $m$. Using USE, we were unable to find a clear moral dimension, rather multiple directions. Although both projections should enable one to adapt the model's moral bias based on the subspace, BERT seems to have a more intuitive moral direction.
Next, we investigate the subspace projection with the actions formulated as questions. Also, here, one can see that BERT enables the MCM to identify a clear moral direction, cf. Fig. FIGREF6(c-d). The PCA is computed with the embedding of atomic actions. Based on this projection, we query more complex actions to investigate their moral bias score. The atomic actions in the subspace are visualized in Fig. FIGREF1 and the queried actions in Fig. FIGREF11. The horizontal axis (the top PC) represents the moral direction. One can observe that the atomic actions kill, murder, slaughter, brutalise, destroy are the most negative actions and congratulate, compliment, welcome and smile the most positive. E.g. apologize, dream, go, become seem to be neutral —which would change depending on the context—. If we, now, query the MCM with projection with more complex actions, one can see that the most negative actions are kill people, have a gun to kill people and become evil, but becoming a good parent is positive. Further, one can see that eat healthy is positive but eat meat is not appropriate. One should not travel to North Korea, but also not to Germany. Instead traveling to the United States is appropriate.
Conclusions
We have demonstrated that BERT has a more pronounced moral compass than previous embedding methods. That is, yes, text embeddings encode knowledge about deontological ethical and even moral choices, but the quality of the bias score depends on the quality of the text embedding network. Specifically, our empirical results show that the Moral Choice Machine with recent state-of-the-art language representations, namely BERT, extends the boundary of previous approaches and demonstrate the existence of biases in human language on a complex phrase level. Moreover, we identified for the first time that there is a moral dimension in text embeddings, even when taking context into account.
Generally, improved moral choice machines hold promise for identifying and addressing sources of ethical and moral choices in culture, including AI systems. This provides several avenues for future work. Inspired by BIBREF13 (BIBREF13), we aim at modifying the embedding, given human ethical values collected from an user study. Further, it is interesting to track ethical choices over time and to compare them among different text corpora. Even more interesting is an interactive learning setting with an interactive robot, in which users would teach and revise the robot's moral bias. Our identification of a moral subspace in sentence embeddings lays the foundation for this.
Appendix ::: Moral Choice Machine
The Moral Choice Machine developed by BIBREF0 (BIBREF0) computes the cosine similarity in a sentence embedding space of an arbitrary action embedded in question/answer pairs. This is illustrated in Fig. FIGREF16 for the moral bias of the action murder. Since murdering is a quite destructive and generally refused behaviour, the questions are expected to lie closer to the denying response and thus to yield a negative bias. To create a more meaningful and comprehensive statistic, several question/answer prompts were conflated into a question/answer template (cf. Tab. TABREF15). The element of interest is inserted into each considered prompt and the resulting biases are averaged to an overall bias value.
Appendix ::: Implicit Associations in Word Embeddings
Transferring the approach of implicit associations from human subjects to information retrieval systems on natural text was initially suggested by Caliskan et al. (BIBREF5), who reported some basic effects of the Word Embedding Association Test (WEAT). Whereas the strength of association in human minds is defined by response latency in Implicit Association Tests (IAT), it is here instantiated as cosine similarity of text in the Euclidean space. Similar to the IAT, complex concepts are defined by word sets. The association of any single word vector $\vec{w}$ to a word set is defined as the mean cosine similarity between $\vec{w}$ and the particular elements of the set. Now, let there be two sets of target words $X$ and $Y$. The allocation of $\vec{w}$ to two discriminating association sets $A$ and $B$ can be formulated as
A word with representation $\vec{w}$ that is stronger associated to concept $A$ yields a positive value and representation related to $B$ a negative value.
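The sketch below spells this association out, assuming it is the difference of the mean cosine similarities to the two sets; `w_vec`, `A_vecs` and `B_vecs` are placeholders for the word vector and the two association sets of Tab. TABREF20.

```python
# Sketch of the word-to-set association used for Verb Extraction; the exact
# difference-of-means form is an assumption stated in the lead-in.
import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w_vec, A_vecs, B_vecs):
    """Positive if w is closer to set A (pleasant), negative if closer to B."""
    return float(np.mean([cos(w_vec, a) for a in A_vecs])
                 - np.mean([cos(w_vec, b) for b in B_vecs]))
```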
Appendix ::: Association Sets
The complete lists of positive and negative association words that were applied for generating Dos and Don'ts with Verb Extraction are given in Tab. TABREF20. The words were collected from four different literature sources that provide unspecific association sets to define pleasant and unpleasant associations BIBREF14, BIBREF17, BIBREF18, BIBREF15.
Appendix ::: Dos and Don’ts for the Moral Choice Machine
Tab. TABREF22 lists the most positive associated verbs (in decreasing order).
Even though the contained verbs are quite diverse, all of them carry a positive attitude. Some of the verbs are related to celebration or travelling, others to love matters or physical closeness. All elements of the above set are rather of general and unspecific nature. Analogously, Tab. TABREF23 presents the most negative associated verbs (in decreasing order) we found in our vocabulary.
Some of the words just describe inappropriate behaviour, like slur or misdeal, whereas others are real crimes such as murder. And still other words, as for instance suppurate or rot, appear to be disgusting in the first place. Exculpate is not a bad behaviour per se. However, its occurrence in the Don't set is not surprising, since it is semantically and contextually related to wrongdoings. Some of the words are of a surprisingly repugnant nature that was not even anticipated in preliminary considerations, e.g. depopulate or dehumanise. Undoubtedly, the listed words can be accepted as commonly agreed Don'ts. Both lists include a few words that are rather common as nouns or adjectives, such as joy, long, gift or bad. Anyhow, they can also be used as verbs and comply with the requirements of being a Do or a Don't in that function. The allocation of verbs into Dos and Don'ts was confirmed by the affective lexicon AFINN BIBREF16. AFINN allows one to rate words and phrases for valence on a scale of $-5$ to 5, indicating inherent connotation. Elements with no ratings are treated as neutral ($0.0$).
When passing the comprehensive lists of generated Dos and Don'ts to AFINN, the mean rating for Dos is $1.12$ ($std=1.24$) and for Don'ts $-0.90$ ($std=1.22$). The t-test yielded $t = 8.12$ with $p < .0001^{***}$. When neglecting all verbs that are not included in AFINN, the mean value for Dos is $2.34$ ($std=0.62$, $n = 24$) and the mean for Don'ts $-2.37$ ($std = 0.67$, $n=19$), again with highly significant statistics ($t = 23.28$, $p<.0001^{***}$). Thus, the sentiment rating is completely in line with the allocation produced by Verb Extraction. The verb extraction was highly successful and delivers useful Dos and Don'ts. The word sets contain consistently positively and negatively connoted verbs, respectively, that reasonably represent a socially agreed norm in the right context. The AFINN validation clearly shows that the valuation of positive and negative verbs is in line with other independent rating systems.
Appendix ::: Moral Bias of USE and BERT
The following results were computed with the MCM version of BIBREF0 (BIBREF0) using both USE and BERT as sentence embedding. Specifically, to investigate whether the sentiments of the extracted Dos and Don'ts also hold at the more complex sentence level, we inserted them into the question/answer templates of the Moral Choice Machine BIBREF0. The resulting moral bias scores/choices are summarized in Tab. TABREF28. It presents the moral biases of the top ten Dos and Don'ts of both sets, ranked by WEAT value. The threshold between the groups is not 0, but slightly shifted negatively (further shifted when using USE than when using BERT). However, the distinction between Dos and Don'ts is clearly reflected in the bias values. Using USE, the mean bias of all considered elements is $-0.018$ ($std=0.025$), where the mean of Dos is $-0.001$ ($std=0.190$, $n=50$) and the mean of Don'ts $-0.037$ ($std=0.017$, $n=50$). Using BERT, the mean bias of all considered elements is $-0.054$ ($std=0.11$), where the mean of Dos is $0.041$ ($std=0.064$, $n=50$) and the mean of Don'ts $-0.163$ ($std=0.053$, $n=50$).
Furthermore, Tab. TABREF29 shows the resulting moral bias scores/choices for actions with additional surrounding context, exemplified by the top ten Dos and Don'ts of both sentence embeddings.
Appendix ::: Moral Subspace Projection
To create the moral subspace projection, a Principal Component Analysis (PCA) was computed. The atomic actions used are listed in Tab. TABREF26. The resulting space, with the MCM using BERT, is visualized in Fig. FIGREF1 based on the first two top PCs. The top PC (the $X$ axis) defines the moral direction $m$ (bias). The context-based actions which were tested using the moral subspace projection are listed in Tab. TABREF27. The resulting moral direction $m$ (or bias) for both the atomic and context-based actions can be found in Tab. TABREF30. We also list the results using the sentence embedding USE instead of BERT. $m < 0$ corresponds to a positive moral score and $m > 0$ corresponds to a negative moral score. | These ask which choices are morally required, forbidden, or permitted, norms are understood as universal rules of what to do and what not to do |
f416c6818a7a8acb7ec4682ed424ecdbd7dd6df1 | f416c6818a7a8acb7ec4682ed424ecdbd7dd6df1_0 | Q: How does framework automatically chooses different curricula at the evolving learning process according to the learning status of the neural dialogue generation model?
Text: Introduction
Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7.
However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange in terms of the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models.
Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. Inspired by such human behaviors of learning to converse, in this paper, we introduce curriculum learning to provide the neural dialogue model with an easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces insurmountable obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty of the training examples with respect to the sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty regarding the value of the objective function. So far, there is no unified approach to measuring dialogue complexity. 2) Unlike the single metric of complexity in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11—the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting in five curricula accordingly.
Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Enlightened by the phenomenon that children usually adjust the learning focus of multiple curricula dynamically in order to acquire a good mark, we further propose an adaptive multi-curricula learning framework, established upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
Detailed analysis and experiments demonstrate that the proposed framework effectively increases the learning efficiency and gains better performances on five state-of-the-art dialogue generation models regarding three publicly available conversational corpora. Code for this work is available on https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog.
Curriculum Plausibility
Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively.
Curriculum Plausibility ::: Conversational Attributes ::: Specificity
A notorious problem for neural dialogue generation models is that the model is prone to generate generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using the Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1):

$\text{NIDF}(w) = \frac{\text{IDF}(w) - \text{idf}_{min}}{\text{idf}_{max} - \text{idf}_{min}}\,,$
where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. $N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$.
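A sketch of this specificity score follows; it assumes the min-max normalisation of the IDF values over the vocabulary, and `corpus` is a placeholder list of tokenised training responses.

```python
# Sketch of the NIDF-based specificity attribute; the normalisation follows
# the min-max form given above.
import math
from collections import Counter

def build_nidf(responses):
    n_r = len(responses)
    df = Counter(w for r in responses for w in set(r))          # N_w per word
    idf = {w: math.log(n_r / n_w) for w, n_w in df.items()}
    idf_min, idf_max = min(idf.values()), max(idf.values())
    span = (idf_max - idf_min) or 1.0
    return {w: (v - idf_min) / span for w, v in idf.items()}

def specificity(response, nidf):
    """Mean NIDF of the words in a response (out-of-vocabulary words skipped)."""
    vals = [nidf[w] for w in response if w in nidf]
    return sum(vals) / len(vals) if vals else 0.0

corpus = [["i", "like", "painting"], ["i", "do", "not", "know"], ["monet", "is", "great"]]
print(specificity(["i", "like", "monet"], build_nidf(corpus)))
```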
Curriculum Plausibility ::: Conversational Attributes ::: Repetitiveness
Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as:

$\text{repetition}(r) = \frac{1}{|r|}\sum _{w_i \in r} I(w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace )\,,$
where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise.
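The corresponding computation is a simple pass over the response tokens, counting how many of them already occurred earlier in the same response; the sketch below assumes that reading.

```python
# Sketch of the repetitiveness attribute: fraction of tokens that repeat an
# earlier token of the same response.
def repetitiveness(response_tokens):
    seen, repeated = set(), 0
    for w in response_tokens:
        if w in seen:
            repeated += 1
        seen.add(w)
    return repeated / max(len(response_tokens), 1)

print(repetitiveness("i am ok i am ok".split()))   # 0.5
```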
Curriculum Plausibility ::: Conversational Attributes ::: Query-relatedness
A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively.
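The sketch below mirrors this definition; `emb` is a placeholder word-embedding lookup and `p` a placeholder unigram-probability table estimated on the training corpus.

```python
# Sketch of the query-relatedness attribute with smooth-inverse-frequency
# weighted sentence embeddings; `emb` and `p` are placeholders.
import numpy as np

def sent_emb(tokens, emb, p, a=0.001):
    vecs = [a / (a + p.get(w, 0.0)) * emb[w] for w in tokens if w in emb]
    dim = len(next(iter(emb.values())))
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def query_relatedness(query_tokens, response_tokens, emb, p):
    q, r = sent_emb(query_tokens, emb, p), sent_emb(response_tokens, emb, p)
    denom = np.linalg.norm(q) * np.linalg.norm(r)
    return float(np.dot(q, r) / denom) if denom else 0.0
```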
Curriculum Plausibility ::: Conversational Attributes ::: Continuity
A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them.
Curriculum Plausibility ::: Conversational Attributes ::: Model Confidence
Despite the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability for the easy-learnt samples than the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to be generated. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model.
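A minimal sketch of this attribute is given below; `log_prob_fn` is a placeholder returning the summed token log-probabilities of a response given its query under the pretrained attention-based seq2seq model, and the per-token averaging is an assumption.

```python
# Sketch of the model-confidence attribute: the negative (per-token) loss of a
# sample under a pretrained seq2seq model; higher means easier to generate.
def model_confidence(query, response_tokens, log_prob_fn):
    n = max(len(response_tokens), 1)
    nll = -log_prob_fn(query, response_tokens) / n   # per-token cross-entropy
    return -nll                                      # negative loss value
```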
Curriculum Plausibility ::: Dialogue Analysis ::: Distributions among Attributes
The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat.
Curriculum Plausibility ::: Dialogue Analysis ::: Attributes Independence
So far, we have analyzed five dialogue attributes. A question might be raised as to how well the proposed attributes correlate with each other. To validate the correlations of these conversation attributes, we summarize the statistics of the Kendall $\tau $ correlations for each dataset in Table TABREF12. We find that these attributes, in general, show little correlation with each other. This partially validates that dialogue complexity involves multiple perspectives.
Curriculum Dialogue Learning
We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model.
Curriculum Dialogue Learning ::: Single Curriculum Dialogue Learning
We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portions of the total sorted training samples, where the progressing function $f(t)$ determines the learning rate of the curriculum. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from the samples drawing from the front part of the curriculum. As the advance of the curriculum, the difficulty gradually increases, as more complex training examples appear. After training $T$ batches, each batch of training instances is drawn from the whole training set, which is same as the conventional training procedure without a curriculum.
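The following sketch implements this single-curriculum schedule; the pacing function follows the definition above, `sorted_data` is the training set sorted from easy to hard by one attribute, and keeping the candidate pool at least one batch large is an implementation convenience.

```python
# Sketch of single-curriculum batch sampling with the pacing function
# f(t) = min(1, sqrt(t * (1 - c0^2) / T + c0^2)).
import math
import random

def pacing(t, T, c0=0.01):
    return min(1.0, math.sqrt(t * (1.0 - c0 ** 2) / T + c0 ** 2))

def sample_batch(sorted_data, t, T, batch_size=32):
    """Draw a mini-batch from the top f(t) portion of the easy-to-hard curriculum."""
    upper = max(int(pacing(t, T) * len(sorted_data)), batch_size)
    pool = sorted_data[:min(upper, len(sorted_data))]
    return random.sample(pool, min(batch_size, len(pool)))
```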
Curriculum Dialogue Learning ::: Adaptive Multi-curricula Learning
Dialogue complexity consists of multi-perspectives of attributes. We extend the naive single curriculum learning into the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, and each curriculum is prepared by ordering the training set in terms of the corresponding attribute metric accordingly. Scheduling multiple curricula in the same learning pace is obviously inappropriate. Enlightened by the phenomenon that children usually adjust the learning progress of multiple curricula dynamically in order to acquire a good mark, we further introduce an adaptive multi-curricula learning framework, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements on the model's performance, and the current mini-batch of training samples is drawn consulting with the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting with a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. Such learning process loops continuously until the performance of the neural dialogue generation model converges.
More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments:
where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$.
The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient:
where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $.
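To make the scheduler concrete, the sketch below implements a simple linear softmax policy over the five curricula and the REINFORCE update with the terminal reward $m_\Gamma$ described above; the state features, the dialogue model and the validation metrics are placeholders, and the linear parameterisation of the policy is an assumption.

```python
# Sketch of the adaptive multi-curricula scheduler; the linear softmax policy
# is an assumed parameterisation, and v_t is set to the terminal reward m_Gamma.
import numpy as np

class CurriculumPolicy:
    def __init__(self, state_dim, n_curricula=5, lr=0.01, seed=0):
        self.W = np.zeros((n_curricula, state_dim))
        self.lr = lr
        self.rng = np.random.default_rng(seed)

    def probs(self, state):
        logits = self.W @ state
        z = np.exp(logits - logits.max())
        return z / z.sum()

    def act(self, state):
        p = self.probs(state)
        return int(self.rng.choice(len(p), p=p)), p

    def update(self, trajectory, reward):
        """REINFORCE: ascend reward * grad log pi(a|s) for each step of the round."""
        for state, action, p in trajectory:
            grad = -np.outer(p, state)     # softmax part of d log pi / d W
            grad[action] += state
            self.W += self.lr * reward * grad

def terminal_reward(delta, delta_prev, eps=1e-8):
    """m_Gamma = delta_Gamma / delta_Gamma_prev - 1."""
    return delta / max(delta_prev, eps) - 1.0
```

Between two validation rounds, each trajectory entry stores the state features, the chosen curriculum and the action probabilities, and the policy is updated once per round with the terminal reward.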
Experiments ::: Experiment Settings
We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6.
Experiments ::: Implementation and Reproducibility
Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same.
Experiments ::: Overall Performance and Human Evaluation
The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all the five dialogue models regarding almost all the evaluation metrics, 2) achieves competitive performance across three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than the other two experiment datasets. We conjecture that the OpenSubtitles, with extremely uneven-complexity dialogue samples, benefits more from the multi-curricula learning paradigm.
We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects.
Experiments ::: Model Analysis ::: Single vs Multi-curricula
To further glean the insights regarding the effects of the five conversational attributes on the proposed learning framework, we conduct the ablation test using the SEQ2SEQ model by only exploiting a single attribute during the curriculum learning. Table TABREF26 reports the ablation test results on the DailyDialog. We observe that the curriculum learning leads to consistent performance improvements, even with one single conversational attribute. When applying the multi-curricula learning method to the model, we observe the nearly best performance.
Experiments ::: Model Analysis ::: Effects of Adaptive Multi-curricula Learning
Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner.
Experiments ::: Model Analysis ::: Learning Efficiency
Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases.
Experiments ::: Model Analysis ::: Multi-curricula Learning Route
To glean insights on how the proposed adaptive multi-curricula learning framework performs, we present the choosing curriculum distributions $\pi (a_t|s_t)$ during the model learning in Figure FIGREF29. We notice that the model focuses more on the curriculum of “query-relatedness” at the initial learning stage. As the learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such dynamic learning route is quite similar to the human learning behavior.
Experiments ::: Model Analysis ::: Examples with Different Learning Frequencies
As shown in Table TABREF30, the most frequently learnt examples are comprehensively far better than those seldom learnt examples, which exhibits the effectiveness of the adaptive multi-curricula learning framework.
Related Work
Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from real-world applications. Previous approaches enhancing neural dialogue generation models mainly focus on the learning systems by incorporating extra information into the dialogue models, such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, out-sourcing knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also benefit the model with more diverse response generations. In contrast with previous research, which pays most attention to the underlying dialogue models, in this work we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed the generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold needs to be carefully chosen to prevent the data size from decreasing too much. BIBREF8, BIBREF31 proposed to investigate instance weighting in dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since the dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, concentrating on different curricula at different stages of the evolving learning process according to the learning status of the underlying model, enables dialogue systems to gradually proceed from easy to more complex samples in training and thus efficiently improves the response quality.
Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 managed curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning for neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA-pairs first and then gradually learn more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning for neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum only from a single aspect, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis on five conversation attributes regarding the dialogue complexity.
Conclusion
In this paper, we propose an adaptive multi-curricula dialogue learning framework, to enable the dialogue models to gradually proceed from easy samples to more complex ones in training. We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems.
Acknowledgments
This work is supported by the National Natural Science Foundation of China-Joint Fund for Basic Research of General Technology under Grant U1836111 and U1736106. Hongshen Chen and Yonghao Song are the corresponding authors. | The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. |
a1d422cb2e428333961370496ca281a1be99fdff | a1d422cb2e428333961370496ca281a1be99fdff_0 | Q: What human judgement metrics are used?
Text: Introduction
Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7.
However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange in terms of the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models.
Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. Inspired by such human behaviors of learning to converse, in this paper, we introduce curriculum learning to provide the neural dialogue model with an easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces insurmountable obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty of the training examples with respect to the sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty regarding the value of the objective function. So far, there is no unified approach to measuring dialogue complexity. 2) Unlike the single metric of complexity in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11—the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting in five curricula accordingly.
Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Inspired by the way children dynamically adjust their learning focus across multiple curricula in order to achieve good marks, we further propose an adaptive multi-curricula learning framework, built upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
Detailed analysis and experiments demonstrate that the proposed framework effectively increases learning efficiency and improves the performance of five state-of-the-art dialogue generation models on three publicly available conversational corpora. Code for this work is available at https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog.
Curriculum Plausibility
Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively.
Curriculum Plausibility ::: Conversational Attributes ::: Specificity
A notorious problem for neural dialogue generation models is that they are prone to generating generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of a response in terms of each word $w$ using the Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1):
where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. $N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$.
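A minimal Python sketch of this computation is given below. It assumes the training responses are already tokenized into lists of words; the function and variable names are illustrative and do not correspond to the authors' released implementation.

```python
import math
from collections import Counter

def nidf_specificity(responses):
    """Mean-NIDF specificity of each tokenized response in the training set."""
    n_r = len(responses)
    df = Counter(w for r in responses for w in set(r))       # N_w for each word
    idf = {w: math.log(n_r / n_w) for w, n_w in df.items()}
    idf_min, idf_max = min(idf.values()), max(idf.values())
    span = (idf_max - idf_min) or 1.0                        # guard for tiny corpora

    def nidf(w):
        # min-max normalization of IDF into [0, 1]
        return (idf[w] - idf_min) / span

    return [sum(nidf(w) for w in r) / len(r) for r in responses]

# toy usage on a tiny corpus of tokenized responses
print(nidf_specificity([["i", "do", "not", "know"],
                        ["i", "like", "painting"],
                        ["monet", "used", "oil", "paint"]]))
```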
Curriculum Plausibility ::: Conversational Attributes ::: Repetitiveness
Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as:
where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise.
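The score can be computed with a short sketch such as the one below; since the displayed equation is not reproduced here, the sketch assumes the indicator counts are averaged over the response length, which is an assumption rather than the paper's exact normalization.

```python
def repetitiveness(response):
    """Fraction of tokens that already occurred earlier in the same response."""
    seen, repeats = set(), 0
    for w in response:
        if w in seen:              # I(w_i in {w_0, ..., w_{i-1}})
            repeats += 1
        seen.add(w)
    return repeats / max(len(response), 1)

print(repetitiveness("i really really really like it".split()))  # 2/6
```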
Curriculum Plausibility ::: Conversational Attributes ::: Query-relatedness
A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively.
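A rough NumPy sketch of this computation is shown below; the word-embedding table `emb` and the unigram probabilities `p` are assumed to be precomputed on the training corpus, and the function names are illustrative rather than taken from the released code.

```python
import numpy as np

def sent_emb(tokens, emb, p, a=0.001, dim=300):
    """Smooth-inverse-frequency weighted average of word embeddings."""
    vecs = [a / (a + p[w]) * emb[w] for w in tokens if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def query_relatedness(query, response, emb, p):
    """Cosine similarity between query and response sentence embeddings."""
    c, r = sent_emb(query, emb, p), sent_emb(response, emb, p)
    denom = np.linalg.norm(c) * np.linalg.norm(r)
    return float(np.dot(c, r) / denom) if denom else 0.0
```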
Curriculum Plausibility ::: Conversational Attributes ::: Continuity
A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them.
Curriculum Plausibility ::: Conversational Attributes ::: Model Confidence
In addition to the heuristic dialogue attributes above, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model's learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability to the easy-learnt samples than to the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to generate. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model.
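As an illustration, the confidence score of a (query, response) pair could be obtained from a pretrained seq2seq model roughly as follows; the model interface and tensor shapes are assumptions for the sketch, not the actual implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def model_confidence(model, query_ids, response_ids, pad_id=0):
    """Negative cross-entropy of a (query, response) pair under a pretrained
    attention-based seq2seq model; higher values mean the response is easier
    to generate. `model(query_ids, response_ids)` is assumed to return
    per-token logits of shape [batch, tgt_len, vocab]; the usual shifting of
    decoder inputs/targets is omitted for brevity.
    """
    logits = model(query_ids, response_ids)
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),   # [batch * tgt_len, vocab]
        response_ids.reshape(-1),              # [batch * tgt_len]
        ignore_index=pad_id,
        reduction="mean",
    )
    return -loss.item()
```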
Curriculum Plausibility ::: Dialogue Analysis ::: Distributions among Attributes
The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat.
Curriculum Plausibility ::: Dialogue Analysis ::: Attributes Independence
So far, we have analyzed five dialogue attributes. A natural question is how well the proposed attributes correlate with each other. To examine the correlations of these conversational attributes, we summarize the Kendall $\tau $ correlation statistics for each dataset in Table TABREF12. We find that these attributes, in general, show little correlation with each other, which partially validates that dialogue complexity involves multiple perspectives.
Curriculum Dialogue Learning
We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model.
Curriculum Dialogue Learning ::: Single Curriculum Dialogue Learning
We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portion of the sorted training samples, where the progressing function $f(t)$ determines the pace at which the curriculum advances. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq \min (1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from samples drawn from the front part of the curriculum. As the curriculum advances, the difficulty gradually increases and more complex training examples appear. After training for $T$ batches, each batch of training instances is drawn from the whole training set, which is the same as the conventional training procedure without a curriculum.
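The progressing function and the resulting curriculum-aware batch sampling can be sketched as follows; the data representation and sampling details are illustrative assumptions.

```python
import math
import random

def progress(t, T, c0=0.01):
    """f(t) = min(1, sqrt(t * (1 - c0^2) / T + c0^2))"""
    return min(1.0, math.sqrt(t * (1.0 - c0 ** 2) / T + c0 ** 2))

def sample_batch(sorted_data, t, T, batch_size):
    """Draw a batch from the easiest f(t) fraction of the sorted curriculum."""
    limit = max(batch_size, int(progress(t, T) * len(sorted_data)))
    return random.sample(sorted_data[:limit], batch_size)

# toy usage: 10k samples sorted from easy to hard, curriculum duration T=1000
data = list(range(10_000))
for step in (1, 500, 2_000):
    print(step, round(progress(step, T=1_000), 3), sample_batch(data, step, 1_000, 4)[:2])
```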
Curriculum Dialogue Learning ::: Adaptive Multi-curricula Learning
Dialogue complexity involves multiple perspectives of attributes. We extend the naive single-curriculum learning into the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, each prepared by ordering the training set according to the corresponding attribute metric. Scheduling multiple curricula at the same learning pace is obviously inappropriate. Inspired by the way children dynamically adjust the learning progress of multiple curricula in order to achieve good marks, we further introduce an adaptive multi-curricula learning framework to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
The adaptive multi-curricula learning framework is built upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements in the model's performance, and the current mini-batch of training samples is drawn according to the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting in a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. This learning process loops until the performance of the neural dialogue generation model converges.
More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments:
where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$.
The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient:
where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $.
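A high-level sketch of the adaptive scheduling loop is given below. The dialogue model, the per-curriculum batch samplers, the learning-status features and the validation routine are placeholders standing in for the actual components, so this is only an outline of the procedure described above.

```python
import torch
import torch.nn as nn

class CurriculumPolicy(nn.Module):
    """Small policy network Phi_theta(a|s) over k curricula."""
    def __init__(self, state_dim, k=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, k))

    def forward(self, state):
        # state: 1-D float tensor of learning-status features
        return torch.distributions.Categorical(logits=self.net(state))

def train_adaptively(policy, dialogue_model, curricula, get_state, validate,
                     policy_optimizer, gamma_steps, total_steps):
    """Illustrative scheduling loop: `curricula` is a list of k samplers, each
    drawing mini-batches from the top f(rho_i) portion of its sorted data;
    `get_state` builds the learning-status features; `validate` returns the
    scalar performance deviation delta. All of these are placeholders.
    """
    log_probs, prev_delta = [], None
    for t in range(1, total_steps + 1):
        dist = policy(get_state())
        action = dist.sample()                      # choose one curriculum
        log_probs.append(dist.log_prob(action))
        batch = curricula[action.item()].next_batch(t)
        dialogue_model.train_step(batch)            # usual supervised update
        if t % gamma_steps == 0:                    # validate every Gamma steps
            delta = validate(dialogue_model)
            if prev_delta:                          # reward m_Gamma
                reward = delta / prev_delta - 1.0
                loss = -(torch.stack(log_probs) * reward).sum()  # REINFORCE
                policy_optimizer.zero_grad()
                loss.backward()
                policy_optimizer.step()
            prev_delta, log_probs = delta, []
```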
Experiments ::: Experiment Settings
We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6.
Experiments ::: Implementation and Reproducibility
Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same.
Experiments ::: Overall Performance and Human Evaluation
The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all the five dialogue models regarding almost all the evaluation metrics, 2) achieves competitive performance across three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than the other two experiment datasets. We conjecture that the OpenSubtitles, with extremely uneven-complexity dialogue samples, benefits more from the multi-curricula learning paradigm.
We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects.
Experiments ::: Model Analysis ::: Single vs Multi-curricula
To further investigate the effects of the five conversational attributes on the proposed learning framework, we conduct an ablation test using the SEQ2SEQ model by exploiting only a single attribute during curriculum learning. Table TABREF26 reports the ablation test results on DailyDialog. We observe that curriculum learning leads to consistent performance improvements, even with a single conversational attribute. When applying the multi-curricula learning method to the model, we observe nearly the best performance.
Experiments ::: Model Analysis ::: Effects of Adaptive Multi-curricula Learning
Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner.
Experiments ::: Model Analysis ::: Learning Efficiency
Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases.
Experiments ::: Model Analysis ::: Multi-curricula Learning Route
To glean insights into how the proposed adaptive multi-curricula learning framework performs, we present the curriculum-choosing distributions $\pi (a_t|s_t)$ during model learning in Figure FIGREF29. We notice that the model focuses more on the “query-relatedness” curriculum at the initial learning stage. As learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such a dynamic learning route is quite similar to human learning behavior.
Experiments ::: Model Analysis ::: Examples with Different Learning Frequencies
As shown in Table TABREF30, the most frequently learnt examples are comprehensively far better than those seldom learnt examples, which exhibits the effectiveness of the adaptive multi-curricula learning framework.
Related Work
Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from real-world applications. Previous approaches to enhancing neural dialogue generation models mainly focus on the learning systems, incorporating extra information into the dialogue models such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, external knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also benefit the model with more diverse response generation. In contrast with previous research, which pays most attention to the underlying dialogue models, in this work we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold needs to be carefully chosen to prevent the data size from decreasing too much. BIBREF8, BIBREF31 proposed to incorporate instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, which concentrates on different curricula at different stages of the learning process according to the learning status of the underlying model, enables dialogue systems to gradually proceed from easy to more complex samples in training and thus efficiently improves response quality.
Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 managed curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning for neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA-pairs first and then gradually learn more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning for neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum only from a single aspect, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis on five conversation attributes regarding the dialogue complexity.
Conclusion
In this paper, we propose an adaptive multi-curricula dialogue learning framework, to enable the dialogue models to gradually proceed from easy samples to more complex ones in training. We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems.
Acknowledgments
This work is supported by the National Natural Science Foundation of China-Joint Fund for Basic Research of General Technology under Grant U1836111 and U1736106. Hongshen Chen and Yonghao Song are the corresponding authors. | coherence, logical consistency, fluency and diversity |
3de9bf4b0b667b3f1181da9f006da1354565bcbd | 3de9bf4b0b667b3f1181da9f006da1354565bcbd_0 | Q: What automatic evaluation metrics are used?
Text: Introduction
Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7.
However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange with respect to the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effectiveness of neural dialogue generation models.
Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. Inspired by such human behaviors of learning to converse, in this paper, we introduce curriculum learning to provide the neural dialogue model with an easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces two major obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty of training examples with respect to sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty in terms of the value of the objective function. So far, there is no unified approach for measuring dialogue complexity. 2) Unlike the single metric of complexity in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11: the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting in five curricula accordingly.
Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Inspired by the way children dynamically adjust their learning focus across multiple curricula in order to achieve good marks, we further propose an adaptive multi-curricula learning framework, built upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
Detailed analysis and experiments demonstrate that the proposed framework effectively increases learning efficiency and improves the performance of five state-of-the-art dialogue generation models on three publicly available conversational corpora. Code for this work is available at https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog.
Curriculum Plausibility
Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively.
Curriculum Plausibility ::: Conversational Attributes ::: Specificity
A notorious problem for neural dialogue generation models is that they are prone to generating generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of a response in terms of each word $w$ using the Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1):
where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. $N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$.
Curriculum Plausibility ::: Conversational Attributes ::: Repetitiveness
Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as:
where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise.
Curriculum Plausibility ::: Conversational Attributes ::: Query-relatedness
A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively.
Curriculum Plausibility ::: Conversational Attributes ::: Continuity
A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them.
Curriculum Plausibility ::: Conversational Attributes ::: Model Confidence
In addition to the heuristic dialogue attributes above, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model's learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability to the easy-learnt samples than to the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to generate. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model.
Curriculum Plausibility ::: Dialogue Analysis ::: Distributions among Attributes
The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat.
Curriculum Plausibility ::: Dialogue Analysis ::: Attributes Independence
So far, we have analyzed five dialogue attributes. A natural question is how well the proposed attributes correlate with each other. To examine the correlations of these conversational attributes, we summarize the Kendall $\tau $ correlation statistics for each dataset in Table TABREF12. We find that these attributes, in general, show little correlation with each other, which partially validates that dialogue complexity involves multiple perspectives.
Curriculum Dialogue Learning
We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model.
Curriculum Dialogue Learning ::: Single Curriculum Dialogue Learning
We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portion of the sorted training samples, where the progressing function $f(t)$ determines the pace at which the curriculum advances. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq \min (1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from samples drawn from the front part of the curriculum. As the curriculum advances, the difficulty gradually increases and more complex training examples appear. After training for $T$ batches, each batch of training instances is drawn from the whole training set, which is the same as the conventional training procedure without a curriculum.
Curriculum Dialogue Learning ::: Adaptive Multi-curricula Learning
Dialogue complexity involves multiple perspectives of attributes. We extend the naive single-curriculum learning into the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, each prepared by ordering the training set according to the corresponding attribute metric. Scheduling multiple curricula at the same learning pace is obviously inappropriate. Inspired by the way children dynamically adjust the learning progress of multiple curricula in order to achieve good marks, we further introduce an adaptive multi-curricula learning framework to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
The adaptive multi-curricula learning framework is built upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements in the model's performance, and the current mini-batch of training samples is drawn according to the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting in a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. This learning process loops until the performance of the neural dialogue generation model converges.
More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments:
where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$.
The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient:
where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $.
Experiments ::: Experiment Settings
We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6.
Experiments ::: Implementation and Reproducibility
Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same.
Experiments ::: Overall Performance and Human Evaluation
The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all the five dialogue models regarding almost all the evaluation metrics, 2) achieves competitive performance across three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than the other two experiment datasets. We conjecture that the OpenSubtitles, with extremely uneven-complexity dialogue samples, benefits more from the multi-curricula learning paradigm.
We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects.
Experiments ::: Model Analysis ::: Single vs Multi-curricula
To further investigate the effects of the five conversational attributes on the proposed learning framework, we conduct an ablation test using the SEQ2SEQ model by exploiting only a single attribute during curriculum learning. Table TABREF26 reports the ablation test results on DailyDialog. We observe that curriculum learning leads to consistent performance improvements, even with a single conversational attribute. When applying the multi-curricula learning method to the model, we observe nearly the best performance.
Experiments ::: Model Analysis ::: Effects of Adaptive Multi-curricula Learning
Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner.
Experiments ::: Model Analysis ::: Learning Efficiency
Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases.
Experiments ::: Model Analysis ::: Multi-curricula Learning Route
To glean insights into how the proposed adaptive multi-curricula learning framework performs, we present the curriculum-choosing distributions $\pi (a_t|s_t)$ during model learning in Figure FIGREF29. We notice that the model focuses more on the “query-relatedness” curriculum at the initial learning stage. As learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such a dynamic learning route is quite similar to human learning behavior.
Experiments ::: Model Analysis ::: Examples with Different Learning Frequencies
As shown in Table TABREF30, the most frequently learnt examples are comprehensively far better than those seldom learnt examples, which exhibits the effectiveness of the adaptive multi-curricula learning framework.
Related Work
Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from real-world applications. Previous approaches to enhancing neural dialogue generation models mainly focus on the learning systems, incorporating extra information into the dialogue models such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, external knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also benefit the model with more diverse response generation. In contrast with previous research, which pays most attention to the underlying dialogue models, in this work we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold needs to be carefully chosen to prevent the data size from decreasing too much. BIBREF8, BIBREF31 proposed to incorporate instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, which concentrates on different curricula at different stages of the learning process according to the learning status of the underlying model, enables dialogue systems to gradually proceed from easy to more complex samples in training and thus efficiently improves response quality.
Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 managed curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning for neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA-pairs first and then gradually learn more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning for neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum only from a single aspect, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis on five conversation attributes regarding the dialogue complexity.
Conclusion
In this paper, we propose an adaptive multi-curricula dialogue learning framework, to enable the dialogue models to gradually proceed from easy samples to more complex ones in training. We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems.
Acknowledgments
This work is supported by the National Natural Science Foundation of China-Joint Fund for Basic Research of General Technology under Grant U1836111 and U1736106. Hongshen Chen and Yonghao Song are the corresponding authors. | BLEU, embedding-based metrics (Average, Extrema, Greedy and Coherence), entropy-based metrics (Ent-{1,2}), distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) |
1a1293e24f4924064e6fb9998658f5a329879109 | 1a1293e24f4924064e6fb9998658f5a329879109_0 | Q: What state of the art models were used in experiments?
Text: Introduction
Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7.
However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange with respect to the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effectiveness of neural dialogue generation models.
Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. Inspired by such human behaviors of learning to converse, in this paper, we introduce curriculum learning to provide the neural dialogue model with an easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces two major obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty of training examples with respect to sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty in terms of the value of the objective function. So far, there is no unified approach for measuring dialogue complexity. 2) Unlike the single metric of complexity in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11: the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting in five curricula accordingly.
Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Inspired by the way children dynamically adjust their learning focus across multiple curricula in order to achieve good marks, we further propose an adaptive multi-curricula learning framework, built upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
Detailed analysis and experiments demonstrate that the proposed framework effectively increases learning efficiency and improves the performance of five state-of-the-art dialogue generation models on three publicly available conversational corpora. Code for this work is available at https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog.
Curriculum Plausibility
Intuitively, a well-organized curriculum should provide the model learning with easy dialogues first, and then gradually increase the curriculum difficulty. However, currently, there is no unified approach for dialogue complexity evaluation, where the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively.
Curriculum Plausibility ::: Conversational Attributes ::: Specificity
A notorious problem for neural dialogue generation models is that they are prone to generating generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of a response in terms of each word $w$ using the Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1):
where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. $N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$.
Curriculum Plausibility ::: Conversational Attributes ::: Repetitiveness
Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as:
where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise.
Curriculum Plausibility ::: Conversational Attributes ::: Query-relatedness
A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively.
Curriculum Plausibility ::: Conversational Attributes ::: Continuity
A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them.
Curriculum Plausibility ::: Conversational Attributes ::: Model Confidence
Besides the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model's learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability to the easy-learnt samples than to the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to generate. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model.
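As a rough sketch, the model-confidence attribute can be scored as the negative token-level cross-entropy of the response under a pretrained seq2seq model; the `model(query_ids, response_ids)` interface returning per-token logits is an assumption for illustration, as real encoder-decoder APIs differ.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def model_confidence(model, query_ids, response_ids, pad_id=0):
    """Negative cross-entropy of the response given the query; higher = easier-to-learn sample."""
    # Teacher-forced decoding: predict token t+1 from tokens up to t (assumed model interface).
    logits = model(query_ids, response_ids[:, :-1])   # (batch, len-1, vocab), assumed shape
    targets = response_ids[:, 1:]
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,
        reduction="mean",
    )
    return -loss.item()
```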
Curriculum Plausibility ::: Dialogue Analysis ::: Distributions among Attributes
The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat.
Curriculum Plausibility ::: Dialogue Analysis ::: Attributes Independence
So far, we have analyzed five dialogue attributes. A natural question is how well the proposed attributes correlate with each other. To validate the correlations of these conversation attributes, we summarize the statistics of the Kendall $\tau $ correlations for each dataset in Table TABREF12. We find that these attributes, in general, show little correlation with each other. This partially validates that dialogue complexity involves multiple perspectives.
Curriculum Dialogue Learning
We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model.
Curriculum Dialogue Learning ::: Single Curriculum Dialogue Learning
We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portions of the total sorted training samples, where the progressing function $f(t)$ determines the learning pace of the curriculum. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from samples drawn from the front part of the curriculum. As the curriculum advances, the difficulty gradually increases and more complex training examples appear. After training $T$ batches, each batch of training instances is drawn from the whole training set, which is the same as the conventional training procedure without a curriculum.
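A minimal sketch of this pacing scheme is given below, assuming `sorted_pairs` is the training set already ordered from easy to hard by one attribute; variable names are illustrative.

```python
import math
import random

def progress(t, T, c0=0.01):
    """Competence f(t) = min(1, sqrt(t * (1 - c0^2) / T + c0^2)): starts at c0, reaches 1 at t = T."""
    return min(1.0, math.sqrt(t * (1.0 - c0 ** 2) / T + c0 ** 2))

def sample_curriculum_batch(sorted_pairs, t, T, batch_size):
    """Draw a mini-batch from the easiest f(t) fraction of the attribute-sorted training set."""
    pool = sorted_pairs[: max(1, int(progress(t, T) * len(sorted_pairs)))]
    return random.sample(pool, min(batch_size, len(pool)))
```

After step $T$ the sampling pool covers the whole training set, so the procedure falls back to conventional training.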
Curriculum Dialogue Learning ::: Adaptive Multi-curricula Learning
Dialogue complexity involves multiple perspectives of attributes. We extend naive single curriculum learning to the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, and each curriculum is prepared by ordering the training set in terms of the corresponding attribute metric. Scheduling multiple curricula at the same learning pace is obviously inappropriate. Inspired by the way children dynamically adjust their learning progress across multiple curricula in order to achieve good marks, we further introduce an adaptive multi-curricula learning framework to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements in the model's performance, and the current mini-batch of training samples is drawn according to the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting in a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. This learning process loops continuously until the performance of the neural dialogue generation model converges.
More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments:
where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$.
The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient:
where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $.
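The following sketch shows how such a curriculum policy could be parameterized and updated with REINFORCE, assuming the learning-status features described above are packed into a fixed-length state vector and that every action in a $\Gamma $-round episode is credited with the terminal reward $m_\Gamma $; network sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class CurriculumPolicy(nn.Module):
    """Policy over k curricula, conditioned on the learning-status state."""
    def __init__(self, state_dim, k=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, k))

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

def reinforce_update(policy, optimizer, states, actions, terminal_reward):
    """One REINFORCE step over an episode: maximize E[log pi(a|s) * v_t] with v_t = m_Gamma."""
    dist = policy(torch.stack(states))                 # states: list of 1-D feature tensors
    log_probs = dist.log_prob(torch.tensor(actions))   # actions: list of curriculum indices
    loss = -(log_probs * terminal_reward).mean()       # negated objective for gradient ascent
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def reward(delta, delta_prev):
    """Terminal reward per validation round, as defined above: m_Gamma = delta / delta_prev - 1."""
    return delta / delta_prev - 1.0
```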
Experiments ::: Experiment Settings
We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6.
Experiments ::: Implementation and Reproducibility
Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same.
Experiments ::: Overall Performance and Human Evaluation
The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all five dialogue models on almost all the evaluation metrics, and 2) achieves competitive performance across the three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than on the other two experiment datasets. We conjecture that OpenSubtitles, with its extremely uneven dialogue complexity, benefits more from the multi-curricula learning paradigm.
We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects.
Experiments ::: Model Analysis ::: Single vs Multi-curricula
To further glean insights into the effects of the five conversational attributes on the proposed learning framework, we conduct an ablation test using the SEQ2SEQ model by exploiting only a single attribute during curriculum learning. Table TABREF26 reports the ablation test results on DailyDialog. We observe that curriculum learning leads to consistent performance improvements, even with a single conversational attribute. When applying the multi-curricula learning method to the model, we observe nearly the best performance.
Experiments ::: Model Analysis ::: Effects of Adaptive Multi-curricula Learning
Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner.
Experiments ::: Model Analysis ::: Learning Efficiency
Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases.
Experiments ::: Model Analysis ::: Multi-curricula Learning Route
To glean insights on how the proposed adaptive multi-curricula learning framework performs, we present the chosen curriculum distributions $\pi (a_t|s_t)$ during model learning in Figure FIGREF29. We notice that the model focuses more on the curriculum of “query-relatedness” at the initial learning stage. As learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such a dynamic learning route is quite similar to human learning behavior.
Experiments ::: Model Analysis ::: Examples with Different Learning Frequencies
As shown in Table TABREF30, the most frequently learnt examples are comprehensively far better than those seldom learnt examples, which exhibits the effectiveness of the adaptive multi-curricula learning framework.
Related Work
Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from real-world applications. Previous approaches to enhancing neural dialogue generation models mainly focus on the learning systems by incorporating extra information into the dialogue models, such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, out-sourced knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also benefit the model with more diverse response generation. In contrast with previous research, which pays most attention to the underlying dialogue models, in this work we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed the generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold needs to be carefully chosen to prevent the data size from decreasing too much. BIBREF8, BIBREF31 proposed to introduce instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, concentrating on different curricula as the learning process evolves according to the learning status of the underlying model, enables dialogue systems to gradually proceed from easy to more complex samples in training and thus efficiently improves the response quality.
Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 formulated curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning to neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA-pairs first and then gradually learning more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning to neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum only from a single aspect, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis on five conversation attributes regarding dialogue complexity.
Conclusion
In this paper, we propose an adaptive multi-curricula dialogue learning framework, to enable the dialogue models to gradually proceed from easy samples to more complex ones in training. We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems.
Acknowledgments
This work is supported by the National Natural Science Foundation of China-Joint Fund for Basic Research of General Technology under Grant U1836111 and U1736106. Hongshen Chen and Yonghao Song are the corresponding authors. | SEQ2SEQ, CVAE, Transformer, HRED, DialogWAE |
3ccd337f77c5d2f7294eb459ccc1770796c2eaef | 3ccd337f77c5d2f7294eb459ccc1770796c2eaef_0 | Q: What five dialogue attributes were analyzed?
Text: Introduction
Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7.
However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange in terms of the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models.
Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. Inspired by such human behaviors of learning to converse, in this paper we introduce curriculum learning to provide the neural dialogue model with an easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces considerable obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty of training examples with respect to sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty in terms of the value of the objective function. So far, there is no unified approach to measuring dialogue complexity. 2) Unlike the single complexity metric used in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11—the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting in five curricula accordingly.
Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Inspired by the way children dynamically adjust their learning focus across multiple curricula in order to achieve good marks, we further propose an adaptive multi-curricula learning framework, established upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
Detailed analysis and experiments demonstrate that the proposed framework effectively increases the learning efficiency and achieves better performance with five state-of-the-art dialogue generation models on three publicly available conversational corpora. Code for this work is available at https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog.
Curriculum Plausibility
Intuitively, a well-organized curriculum should present the model with easy dialogues first and then gradually increase the curriculum difficulty. However, there is currently no unified approach to dialogue complexity evaluation, as the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively.
Curriculum Plausibility ::: Conversational Attributes ::: Specificity
A notorious problem for neural dialogue generation models is that they are prone to generating generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1): $\text{NIDF}(w)=\frac{\text{IDF}(w)-\text{idf}_{min}}{\text{idf}_{max}-\text{idf}_{min}}$,
where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. $N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$.
Curriculum Plausibility ::: Conversational Attributes ::: Repetitiveness
Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as:
where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise.
Curriculum Plausibility ::: Conversational Attributes ::: Query-relatedness
A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively.
Curriculum Plausibility ::: Conversational Attributes ::: Continuity
A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them.
Curriculum Plausibility ::: Conversational Attributes ::: Model Confidence
Besides the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model's learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability to the easy-learnt samples than to the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to generate. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model.
Curriculum Plausibility ::: Dialogue Analysis ::: Distributions among Attributes
The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat.
Curriculum Plausibility ::: Dialogue Analysis ::: Attributes Independence
So far, we have analyzed five dialogue attributes. A natural question is how well the proposed attributes correlate with each other. To validate the correlations of these conversation attributes, we summarize the statistics of the Kendall $\tau $ correlations for each dataset in Table TABREF12. We find that these attributes, in general, show little correlation with each other. This partially validates that dialogue complexity involves multiple perspectives.
Curriculum Dialogue Learning
We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model.
Curriculum Dialogue Learning ::: Single Curriculum Dialogue Learning
We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portions of the total sorted training samples, where the progressing function $f(t)$ determines the learning pace of the curriculum. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from samples drawn from the front part of the curriculum. As the curriculum advances, the difficulty gradually increases and more complex training examples appear. After training $T$ batches, each batch of training instances is drawn from the whole training set, which is the same as the conventional training procedure without a curriculum.
Curriculum Dialogue Learning ::: Adaptive Multi-curricula Learning
Dialogue complexity involves multiple perspectives of attributes. We extend naive single curriculum learning to the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, and each curriculum is prepared by ordering the training set in terms of the corresponding attribute metric. Scheduling multiple curricula at the same learning pace is obviously inappropriate. Inspired by the way children dynamically adjust their learning progress across multiple curricula in order to achieve good marks, we further introduce an adaptive multi-curricula learning framework to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
The adaptive multi-curricula learning framework is established upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements in the model's performance, and the current mini-batch of training samples is drawn according to the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting in a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. This learning process loops continuously until the performance of the neural dialogue generation model converges.
More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning framework BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portions of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments:
where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$.
The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood ratio policy gradient algorithm to optimize $J(\theta )$ based on the gradient:
where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $.
Experiments ::: Experiment Settings
We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6.
Experiments ::: Implementation and Reproducibility
Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same.
Experiments ::: Overall Performance and Human Evaluation
The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all five dialogue models on almost all the evaluation metrics, and 2) achieves competitive performance across the three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than on the other two experiment datasets. We conjecture that OpenSubtitles, with its extremely uneven dialogue complexity, benefits more from the multi-curricula learning paradigm.
We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ the DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to make the judgment. We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge about which system the response is from, are then required to evaluate among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad) independently, considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all the five dialogue models and the kappa scores indicate that the annotators came to a fair agreement in the judgment. We checked the cases on which the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects.
Experiments ::: Model Analysis ::: Single vs Multi-curricula
To further glean insights into the effects of the five conversational attributes on the proposed learning framework, we conduct an ablation test using the SEQ2SEQ model by exploiting only a single attribute during curriculum learning. Table TABREF26 reports the ablation test results on DailyDialog. We observe that curriculum learning leads to consistent performance improvements, even with a single conversational attribute. When applying the multi-curricula learning method to the model, we observe nearly the best performance.
Experiments ::: Model Analysis ::: Effects of Adaptive Multi-curricula Learning
Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner.
Experiments ::: Model Analysis ::: Learning Efficiency
Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases.
Experiments ::: Model Analysis ::: Multi-curricula Learning Route
To glean insights on how the proposed adaptive multi-curricula learning framework performs, we present the chosen curriculum distributions $\pi (a_t|s_t)$ during model learning in Figure FIGREF29. We notice that the model focuses more on the curriculum of “query-relatedness” at the initial learning stage. As learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such a dynamic learning route is quite similar to human learning behavior.
Experiments ::: Model Analysis ::: Examples with Different Learning Frequencies
As shown in Table TABREF30, the most frequently learnt examples are comprehensively far better than those seldom learnt examples, which exhibits the effectiveness of the adaptive multi-curricula learning framework.
Related Work
Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from real-world applications. Previous approaches to enhancing neural dialogue generation models mainly focus on the learning systems by incorporating extra information into the dialogue models, such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, out-sourced knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also benefit the model with more diverse response generation. In contrast with previous research, which pays most attention to the underlying dialogue models, in this work we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed the generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold needs to be carefully chosen to prevent the data size from decreasing too much. BIBREF8, BIBREF31 proposed to introduce instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, concentrating on different curricula as the learning process evolves according to the learning status of the underlying model, enables dialogue systems to gradually proceed from easy to more complex samples in training and thus efficiently improves the response quality.
Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 formulated curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning to neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA-pairs first and then gradually learning more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning to neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum only from a single aspect, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis on five conversation attributes regarding dialogue complexity.
Conclusion
In this paper, we propose an adaptive multi-curricula dialogue learning framework, to enable the dialogue models to gradually proceed from easy samples to more complex ones in training. We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems.
Acknowledgments
This work is supported by the National Natural Science Foundation of China-Joint Fund for Basic Research of General Technology under Grant U1836111 and U1736106. Hongshen Chen and Yonghao Song are the corresponding authors. | Model Confidence, Continuity, Query-relatedness, Repetitiveness, Specificity |
f6937199e4b06bfbaa22edacc7339410de9703db | f6937199e4b06bfbaa22edacc7339410de9703db_0 | Q: What three publicly available corpora are used?
Text: Introduction
Teaching machines to converse with humans naturally and engagingly is a fundamentally interesting and challenging problem in AI research. Many contemporary state-of-the-art approaches BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 for dialogue generation follow the data-driven paradigm: trained on a plethora of query-response pairs, the model attempts to mimic human conversations. As a data-driven approach, the quality of generated responses in neural dialogue generation heavily depends on the training data. As such, in order to train a robust and well-behaved model, most works obtain large-scale query-response pairs by crawling human-generated conversations from publicly available sources such as OpenSubtitles BIBREF7.
However, due to the subjectivity and open-ended nature of human conversations, the complexity of training dialogues varies greatly BIBREF8. Table TABREF1 shows samples drawn from OpenSubtitles BIBREF7, which contains millions of human-human conversations converted from movie transcripts. The response of the third sample “Yurakutei kikuhiko.” looks quite strange in terms of the given query, while the first sample is clearly easier to learn. The noise and uneven complexity of query-response pairs impede the learning efficiency and effects of the neural dialogue generation models.
Babies learn to speak by first imitating easy and exact utterances repeatedly taught by their patient parents. As children grow up, they learn grade by grade, from simple conversations to more complex ones. Inspired by such human behaviors of learning to converse, in this paper we introduce curriculum learning to provide the neural dialogue model with an easy-to-complex learning curriculum, where the model first learns from easy conversations and then gradually manages more complicated dialogues. Nevertheless, organizing a curriculum with increasing difficulty faces considerable obstacles: 1) automatic evaluation of dialogue complexity is a non-trivial task. BIBREF9 defined the difficulty of training examples with respect to sentence length and word rarity in neural machine translation. BIBREF10 expressed the difficulty in terms of the value of the objective function. So far, there is no unified approach to measuring dialogue complexity. 2) Unlike the single complexity metric used in other tasks, dialogue complexity embodies multiple aspects of attributes BIBREF11—the specificity and repetitiveness of the response, the relevance between the query and the response, etc. As such, in this paper, we study the dialogue distributions along five aspects of attributes to gather multiple perspectives on dialogue complexity, resulting in five curricula accordingly.
Conventional curriculum learning organizes the training samples into one curriculum, whereas we employ multiple curricula for dialogue learning. Inspired by the way children dynamically adjust their learning focus across multiple curricula in order to achieve good marks, we further propose an adaptive multi-curricula learning framework, established upon the reinforcement learning paradigm, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
Detailed analysis and experiments demonstrate that the proposed framework effectively increases the learning efficiency and achieves better performance with five state-of-the-art dialogue generation models on three publicly available conversational corpora. Code for this work is available at https://github.com/hengyicai/Adaptive_Multi-curricula_Learning_for_Dialog.
Curriculum Plausibility
Intuitively, a well-organized curriculum should present the model with easy dialogues first and then gradually increase the curriculum difficulty. However, there is currently no unified approach to dialogue complexity evaluation, as the complexity involves multiple aspects of attributes. In this paper, we prepare the syllabus for dialogue learning with respect to five dialogue attributes. To ensure the universality and general applicability of the curriculum, we perform an in-depth investigation on three publicly available conversation corpora, PersonaChat BIBREF12, DailyDialog BIBREF13 and OpenSubtitles BIBREF7, consisting of 140 248, 66 594 and 358 668 real-life conversation samples, respectively.
Curriculum Plausibility ::: Conversational Attributes ::: Specificity
A notorious problem for neural dialogue generation models is that they are prone to generating generic responses. The most unspecific responses are easy to learn, but are short and meaningless, while the most specific responses, consisting of too many rare words, are too difficult to learn, especially at the initial learning stage. Following BIBREF11, we measure the specificity of the response in terms of each word $w$ using Normalized Inverse Document Frequency (NIDF, ranging from 0 to 1): $\text{NIDF}(w)=\frac{\text{IDF}(w)-\text{idf}_{min}}{\text{idf}_{max}-\text{idf}_{min}}$,
where $\text{IDF}(w)=\log {\frac{N_r}{N_w}}$. $N_r$ is the number of responses in the training set and $N_w$ is the number of those responses that contain $w$. $\text{idf}_{min}$ and $\text{idf}_{max}$ are the minimum and maximum IDFs, taken over all words in the vocabulary. The specificity of a response $r$ is measured as the mean NIDF of the words in $r$.
Curriculum Plausibility ::: Conversational Attributes ::: Repetitiveness
Repetitive responses are easy to generate in current auto-regressive response decoding, where response generation loops frequently, whereas diverse and informative responses are much more complicated for neural dialogue generation. We measure the repetitiveness of a response $r$ as:
where $I(\cdot )$ is an indicator function that takes the value 1 when $w_i \in \lbrace w_0, \cdots , w_{i-1}\rbrace $ is true and 0 otherwise.
Curriculum Plausibility ::: Conversational Attributes ::: Query-relatedness
A conversation is considered to be coherent if the response correlates well with the given query. For example, given a query “I like to paint”, the response “What kind of things do you paint?” is more relevant and easier to learn than another loosely-coupled response “Do you have any pets?”. Following previous work BIBREF14, we measure the query-relatedness using the cosine similarities between the query and its corresponding response in the embedding space: $\textit {cos\_sim}(\textit {sent\_emb}(c), \textit {sent\_emb}(r))$, where $c$ is the query and $r$ is the response. The sentence embedding is computed by taking the average word embedding weighted by the smooth inverse frequency $\textit {sent\_emb}(e)=\frac{1}{|e|}\sum _{w\in {}e}\frac{0.001}{0.001 + p(w)}emb(w)$ of words BIBREF15, where $emb(w)$ and $p(w)$ are the embedding and the probability of word $w$ respectively.
Curriculum Plausibility ::: Conversational Attributes ::: Continuity
A coherent response not only responds to the given query, but also triggers the next utterance. An interactive conversation is carried out for multiple rounds and a response in the current turn also acts as the query in the next turn. As such, we introduce the continuity metric, which is similar to the query-relatedness metric, to assess the continuity of a response $r$ with respect to the subsequent utterance $u$, by measuring the cosine similarities between them.
Curriculum Plausibility ::: Conversational Attributes ::: Model Confidence
Besides the heuristic dialogue attributes, we further introduce the model confidence as an attribute, which distinguishes the easy-learnt samples from the under-learnt samples in terms of the model's learning ability. A pretrained neural dialogue generation model assigns a relatively higher confidence probability to the easy-learnt samples than to the under-learnt samples. Inspired by BIBREF16, BIBREF17, we employ the negative loss value of a dialogue sample under the pretrained model as the model confidence measure, indicating whether a sampled response is easy to generate. Here we choose the attention-based sequence-to-sequence architecture with a cross-entropy objective as the underlying dialogue model.
Curriculum Plausibility ::: Dialogue Analysis ::: Distributions among Attributes
The distributions of the data samples regarding the aforementioned five attributes are shown in Figure FIGREF11. Although the attribute score distributions on three corpora are similar, they also have disparities: 1) Outliers frequently appear among all the distributions, which exhibits the uneven dialogue complexity. 2) In terms of query-relatedness and continuity, to our surprise, the medians of the two distributions on PersonaChat are obviously smaller than the corresponding distributions on DailyDialog and OpenSubtitles. PersonaChat is manually created by crowd-sourcing, while DailyDialog and OpenSubtitles are collected from almost real-life conversations. 3) With respect to the model confidence (the negative loss value), the median of PersonaChat is relatively smaller, which illustrates that it is more difficult for the neural dialogue generation model to learn from PersonaChat.
Curriculum Plausibility ::: Dialogue Analysis ::: Attributes Independence
So far, we have analyzed five dialogue attributes. A natural question is how well the proposed attributes correlate with each other. To validate the correlations of these conversation attributes, we summarize the statistics of the Kendall $\tau $ correlations for each dataset in Table TABREF12. We find that these attributes, in general, show little correlation with each other. This partially validates that dialogue complexity involves multiple perspectives.
Curriculum Dialogue Learning
We propose an adaptive multi-curricula learning framework to accelerate dialogue learning and improve the performance of the neural dialogue generation model.
Curriculum Dialogue Learning ::: Single Curriculum Dialogue Learning
We first illustrate how a dialogue generation model exploits the curriculum by taking single curriculum dialogue learning as an example, where the curriculum is arranged by sorting each sample in the dialogue training set $\mathcal {D}_{train}$ according to one attribute. Then, at training time step $t$, a batch of training examples is sampled from the top $f(t)$ portions of the total sorted training samples, where the progressing function $f(t)$ determines the learning pace of the curriculum. Following BIBREF9, we define the progressing function $f(t)$ as $f(t)\triangleq min(1, \sqrt{t\frac{1-c_0^2}{T} + c_0^2})$, where $c_0 > 0$ is set to 0.01 and $T$ is the duration of curriculum learning. At the early stage of the training process, the neural dialogue generation model learns from samples drawn from the front part of the curriculum. As the curriculum advances, the difficulty gradually increases and more complex training examples appear. After training $T$ batches, each batch of training instances is drawn from the whole training set, which is the same as the conventional training procedure without a curriculum.
Curriculum Dialogue Learning ::: Adaptive Multi-curricula Learning
Dialogue complexity involves multiple perspectives of attributes. We extend the naive single curriculum learning into the multi-curricula setting, where we provide the neural dialogue generation model with five different learning curricula, and each curriculum is prepared by ordering the training set in terms of the corresponding attribute metric. Scheduling multiple curricula at the same learning pace is obviously inappropriate. Inspired by the way children usually adjust the learning progress of multiple curricula dynamically in order to achieve good marks, we further introduce an adaptive multi-curricula learning framework, to automatically choose different curricula at different learning stages according to the learning status of the neural dialogue generation model.
The adaptive multi-curricula learning framework is built upon the reinforcement learning (RL) paradigm. Figure FIGREF18 illustrates the overall learning process. The multi-curricula learning scheme is scheduled according to the model's performance on the validation set, where the scheduling mechanism acts as the policy $\pi $ interacting with the dialogue model to acquire the learning status $s$. The reward of the multi-curricula learning mechanism $m_t$ indicates how well the current dialogue model performs. A positive reward is expected if a multi-curricula scheduling action $a_t$ brings improvements in the model's performance, and the current mini-batch of training samples is drawn according to the scheduling action $a_t$. The neural dialogue generation model learns from those mini-batches, resulting in a new learning status $s_{t+1}$. The adaptive multi-curricula learning framework is optimized to maximize the reward. This learning process loops continuously until the performance of the neural dialogue generation model converges.
More specifically, the learning status of the dialogue model is represented as the state. Similar to other curriculum learning frameworks BIBREF18, BIBREF19, the learning status consists of several features, including the passed mini-batch number, the average historical training loss, the loss value on the training data, the margin value of predicted probabilities and the last validation metric values. To enable the proposed framework to be aware of the learning progress $\varrho _i$ regarding each attribute $i$, we also exploit $\varrho =\lbrace \varrho _0, \varrho _1, \cdots , \varrho _{k-1}\rbrace $ for state representations, where $k$ stands for the number of curricula, here $k=5$, and $\varrho _i$ can be simply measured as the learning steps on the attribute $i$. The multi-curricula learning framework samples a scheduling action $a_t$ per step by its policy $\Phi _\theta (a|s)$ with parameters $\theta $ to be learnt, and the scheduling action $a_t \in \lbrace 0, 1, \cdots , k-1\rbrace $ chooses one of the curricula. Then, a mini-batch of dialogue instances is sampled from the top $f(\varrho _i)$ portion of the chosen curriculum. The dialogue model is validated every $\Gamma $ training steps and the curriculum policy is updated at $\Gamma $-round intervals according to a reward $m_\Gamma $. To accelerate the neural dialogue learning, $m_\Gamma $ is defined as the ratio of two consecutive performance deviations on a held-out validation set: $m_\Gamma =\frac{\delta _{\Gamma }}{\delta _{\Gamma _{\text{prev}}}} - 1$. The performance deviation $\delta _{\Gamma }$ is calculated in terms of 13 automatic evaluation metrics $\lbrace \xi _1, \xi _2, \cdots , \xi _{13}\rbrace $ used in the experiments:
where $\xi _i^{\Gamma }$ is the evaluation score of metric $i$ computed at the current validation turn and $\xi _i^{\Gamma _{\text{prev}}}$ is computed at the previous validation turn. Each score is normalized into $[0,1]$.
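Since the displayed formula for $\delta _{\Gamma }$ is not reproduced here, the sketch below assumes the deviation is the mean difference of the 13 normalized metric scores between two consecutive validation turns; the reward then follows the ratio form given above.

```python
def deviation(curr_scores, prev_scores):
    """Performance deviation between two validation turns (assumed form:
    mean difference of the normalized metric scores)."""
    return sum(c - p for c, p in zip(curr_scores, prev_scores)) / len(curr_scores)

def reward(delta_curr, delta_prev, eps=1e-8):
    """m_Gamma = delta_Gamma / delta_Gamma_prev - 1 (eps guards against
    division by zero)."""
    return delta_curr / (delta_prev + eps) - 1.0
```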
The curriculum policy is trained by maximizing the expected reward: $J(\theta )=\mathbb {E}_{\Phi _\theta (a|s)}[M(s,a)]$, where $M(s,a)$ is the state-action value function. Since $M(s,a)$ is non-differentiable w.r.t. $\theta $, in this work, we use REINFORCE BIBREF20, a likelihood-ratio policy gradient algorithm, to optimize $J(\theta )$ based on the gradient:
where $v_t$ is the sampled estimation of reward $M(s_t, a_t)$ from one episode execution of the policy $\Phi _\theta (a|s)$. In our implementation, $v_t$ is computed as the terminal reward $m_\Gamma $.
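A hedged sketch of the corresponding REINFORCE update, where the log-probability of each scheduling action sampled over the last $\Gamma $-round interval is weighted by the terminal reward $m_\Gamma $:

```python
import torch

def reinforce_update(optimizer, log_probs, terminal_reward):
    """One REINFORCE step for the curriculum policy.

    `log_probs`: list of log pi_theta(a_t | s_t) tensors for the actions
    sampled since the last policy update; each is weighted by m_Gamma.
    """
    loss = -terminal_reward * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```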
Experiments ::: Experiment Settings
We perform experiments using the following state-of-the-art models: (1) SEQ2SEQ: a sequence-to-sequence model with attention mechanisms BIBREF21, (2) CVAE: a conditional variational auto-encoder model with KL-annealing and a BOW loss BIBREF2, (3) Transformer: an encoder-decoder architecture relying solely on attention mechanisms BIBREF22, (4) HRED: a generalized sequence-to-sequence model with the hierarchical RNN encoder BIBREF23, (5) DialogWAE: a conditional Wasserstein auto-encoder, which models the distribution of data by training a GAN within the latent variable space BIBREF6. We adopt several standard metrics widely used in existing works to measure the performance of dialogue generation models, including BLEU BIBREF24, embedding-based metrics (Average, Extrema, Greedy and Coherence) BIBREF25, BIBREF26, entropy-based metrics (Ent-{1,2}) BIBREF0 and distinct metrics (Dist-{1,2,3} and Intra-{1,2,3}) BIBREF1, BIBREF6.
Experiments ::: Implementation and Reproducibility
Our experiments are performed using ParlAI BIBREF27. Regarding model implementations, we employ a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for the SEQ2SEQ and CVAE. The hidden size is set to 512, and the latent size is set to 64 for CVAE. For the Transformer, the hidden size, attention heads and number of hidden layers are set to 512, 8 and 6, respectively. In terms of HRED and DialogWAE, the utterance encoder is a bidirectional GRU with 512 hidden units in each direction. The context encoder and decoder are both GRUs with 512 hidden units. Regarding the curriculum length $T$, we set its value in the following manner: we train the baseline model using the vanilla training procedure and compute the number of training steps it takes to reach approximately 110% of its final loss value. We then set $T$ to this value. Each model is trained using two protocols: the vanilla training procedure without using any curriculum and our proposed adaptive multi-curricula learning procedure, keeping other configurations the same.
Experiments ::: Overall Performance and Human Evaluation
The automatic evaluation results of our proposed multi-curricula learning framework and the comparison models are listed in Table TABREF21. Compared with the vanilla training procedure, our curriculum learning framework 1) brings solid improvements for all five dialogue models on almost all the evaluation metrics, and 2) achieves competitive performance across the three datasets, affirming the superiority and general applicability of our proposed framework. We also notice that the relative improvements of Distinct on OpenSubtitles are much larger (up to 122.46%) than those on the other two experimental datasets. We conjecture that OpenSubtitles, with its extremely uneven dialogue complexity, benefits more from the multi-curricula learning paradigm.
We conduct a human evaluation to validate the effectiveness of the proposed multi-curricula learning framework. We employ DailyDialog as the evaluation corpus since it is closer to our daily conversations and easier for humans to judge. We randomly sampled 100 cases from the test set and compared the generated responses of the models trained with the vanilla learning procedure and the multi-curricula learning framework. Three annotators, who have no knowledge of which system a response comes from, are then required to independently choose among win (response$_1$ is better), loss (response$_2$ is better) and tie (they are equally good or bad), considering four aspects: coherence, logical consistency, fluency and diversity. Cases with different rating results are counted as “tie”. Table TABREF25 reveals the results of the subjective evaluation. We observe that our multi-curricula learning framework outperforms the vanilla training method on all five dialogue models, and the kappa scores indicate that the annotators reached a fair agreement in their judgments. We examined the cases where the vanilla training method loses to our multi-curricula learning method and found that the vanilla training method usually leads to irrelevant, generic and repetitive responses, while our method effectively alleviates such defects.
Experiments ::: Model Analysis ::: Single vs Multi-curricula
To further glean insights into the effects of the five conversational attributes on the proposed learning framework, we conduct an ablation test using the SEQ2SEQ model, exploiting only a single attribute during curriculum learning. Table TABREF26 reports the ablation test results on DailyDialog. We observe that curriculum learning leads to consistent performance improvements, even with a single conversational attribute. When applying the multi-curricula learning method to the model, we observe nearly the best performance.
Experiments ::: Model Analysis ::: Effects of Adaptive Multi-curricula Learning
Adaptive multi-curricula learning enables the model to choose different curricula at different learning stages according to the learning status of the underlying model. As shown in Table TABREF27, we notice the performance drops when replacing the RL-based curriculum policy with the random policy, indicating that choosing different curricula according to the learning status of the model benefits the model training. When training the model with anti-curriculum learning, i.e., feeding examples to the model in the complex-to-easy manner, we also observe consistent performance decreases, affirming the effectiveness of the easy-to-complex learning manner.
Experiments ::: Model Analysis ::: Learning Efficiency
Figure FIGREF28 shows comparative results when training the SEQ2SEQ model on DailyDialog with different training protocols. As shown in Figure FIGREF28, our training method accelerates the learning effectively and consistently outperforms the baseline by a large margin in most cases.
Experiments ::: Model Analysis ::: Multi-curricula Learning Route
To glean insights into how the proposed adaptive multi-curricula learning framework performs, we present the curriculum selection distributions $\pi (a_t|s_t)$ during model learning in Figure FIGREF29. We notice that the model focuses more on the curriculum of “query-relatedness” at the initial learning stage. As the learning proceeds, the model gradually turns its attention to other curricula. At the final stage, the model pays more attention to the “model confidence” curriculum. Such a dynamic learning route is quite similar to human learning behavior.
Experiments ::: Model Analysis ::: Examples with Different Learning Frequencies
As shown in Table TABREF30, the most frequently learnt examples are far better overall than the seldom-learnt ones, which demonstrates the effectiveness of the adaptive multi-curricula learning framework.
Related Work
Neural dialogue generation. Neural generation models for dialogue, despite their ubiquity in current research, are still far from real-world applications. Previous approaches to enhancing neural dialogue generation models mainly focus on the learning systems, incorporating extra information into the dialogue models such as relevant dialogue history BIBREF5, topics BIBREF28, emotions BIBREF3, out-sourcing knowledge BIBREF4 or exemplars BIBREF29. Latent variables BIBREF0, BIBREF2 also benefit the model with more diverse response generation. In contrast with previous research, which pays most attention to the underlying dialogue models, in this work we concentrate on the dialogue learning process and investigate how the performance of existing dialogue models can be improved on conversation corpora with varying levels of complexity, by simply adapting the training protocols. BIBREF30 attributed generic/uninteresting responses to the high-entropy utterances in the training set and proposed to improve dataset quality through data filtering. Though straightforward, the filtering threshold needs to be carefully chosen to prevent the data size from decreasing too much. BIBREF8, BIBREF31 proposed to introduce instance weighting into dialogue systems. However, it is difficult to accurately define the “weight” of an example in conversation systems, since dialogue data is of high diversity and complexity. Our proposed adaptive multi-curricula learning framework, which concentrates on different curricula as the learning process evolves according to the learning status of the underlying model, enables dialogue systems to gradually proceed from easy to more complex samples in training and thus efficiently improves response quality.
Curriculum learning in NLP. BIBREF18 examined curriculum learning and demonstrated empirically that such curriculum approaches indeed help decrease training times and sometimes even improve generalization. BIBREF32 formulated curriculum learning as an optimization problem. Curriculum learning has also been applied to many NLP tasks. To name a few, BIBREF10 applied self-paced learning for neural question answering. BIBREF33 proposed a curriculum learning based natural answer generation framework, dealing with low-quality QA pairs first and then gradually learning more complete answers. BIBREF34 proposed curriculum pointer-generator networks for reading comprehension over long narratives. BIBREF9 applied curriculum learning to neural machine translation (NMT), aiming to reduce the need for specialized training heuristics and boost the performance of existing NMT systems. In our work, instead of organizing the curriculum from a single aspect only, we provide an adaptive multi-curricula dialogue learning framework, grounding our analysis on five conversation attributes regarding dialogue complexity.
Conclusion
In this paper, we propose an adaptive multi-curricula dialogue learning framework, to enable the dialogue models to gradually proceed from easy samples to more complex ones in training. We first define and analyze five conversational attributes regarding the complexity and easiness of dialogue samples, and then present an adaptive multi-curricula learning framework, which chooses different curricula at different training stages according to the learning status of the model. Extensive experiments conducted on three large-scale datasets and five state-of-the-art conversation models show that our proposed learning framework is able to boost the performance of existing dialogue systems.
Acknowledgments
This work is supported by the National Natural Science Foundation of China-Joint Fund for Basic Research of General Technology under Grant U1836111 and U1736106. Hongshen Chen and Yonghao Song are the corresponding authors. | PersonaChat BIBREF12, DailyDialog BIBREF13, OpenSubtitles BIBREF7 |
61c9f97ee1ac5a4b8654aa152f05f22e153e7e6e | 61c9f97ee1ac5a4b8654aa152f05f22e153e7e6e_0 | Q: Which datasets do they use?
Text: Introduction
One of the recent challenges in machine learning (ML) is interpreting the predictions made by models, especially deep neural networks. Understanding models is not only beneficial, but necessary for widespread adoption of more complex (and potentially more accurate) ML models. From healthcare to financial domains, regulatory agencies mandate entities to provide explanations for their decisions BIBREF0. Hence, much machine learning progress in those areas is hindered by a lack of model explainability, causing practitioners to resort to simpler, potentially lower-performance models. To meet this demand, there have been many attempts at model interpretation in recent years for tree-based algorithms BIBREF1 and deep learning algorithms BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. On the other hand, the amount of research focusing on explainable natural language processing (NLP) models BIBREF8, BIBREF9, BIBREF10 is modest compared to image explanation techniques.
Inherent problems in data emerge in a trained model in several ways. Model explanations can show that the model is not in line with human judgment or domain expertise. A canonical example is model unfairness, which stems from biases in the training data. Fairness in ML models has rightfully come under heavy scrutiny in recent years BIBREF11, BIBREF12, BIBREF13. Some examples include sentiment analysis models assigning negative weight to inputs containing identity terms such as “jew” and “black”, and hate speech classifiers tending to predict any sentence containing “islam” as toxic BIBREF14. If employed, explanation techniques help divulge these issues, but fail to offer a remedy. For instance, the sentence “I am gay” receives a high score from a toxicity model, as seen in Table TABREF1. The Integrated Gradients BIBREF4 explanation method attributes the majority of this decision to the word “gay.” However, none of the explanation methods suggest next steps to fix the issue. Instead, researchers try to reduce biases indirectly, mostly by adding more data BIBREF12, BIBREF15, using unbiased word vectors BIBREF16, or directly optimizing for a fairness proxy with adversarial training BIBREF17, BIBREF11. These methods either require collecting more data, which is costly in many cases, or make a tradeoff between original task performance and fairness.
In this paper, we attempt to enable injecting priors through model explanations to rectify issues in trained models. We demonstrate our approach on two problems in text classification settings: (1) model biases towards protected identity groups; (2) low classification performance due to lack of data. The core idea is to add, as a loss term in the objective function, the INLINEFORM0 distance between Path Integrated Gradients attributions for pre-selected tokens and a target attribution value. For model fairness, we impose the loss on keywords identifying protected groups with a target attribution of 0, so the trained model is penalized for attributing model decisions to those keywords. Our main intuition is that undesirable correlations between toxicity labels and instances of identity terms cause the model to learn unfair biases, which can be corrected by incorporating priors on these identity terms. Moreover, our approach allows practitioners to impose priors in the other direction to tackle the problem of training a classifier when there is only a small amount of data. As shown in our experiments, by setting a positive target attribution for known toxic words, one can improve the performance of a toxicity classifier in a scarce data regime.
We validate our approach on the Wikipedia toxic comments dataset BIBREF18 . Our fairness experiments show that the classifiers trained with our method achieve the same performance, if not better, on the original task, while improving AUC and fairness metrics on a synthetic, unbiased dataset. Models trained with our technique also show lower attributions to identity terms on average. Our technique produces much better word vectors as a by-product when compared to the baseline. Lastly, by setting an attribution target of 1 on toxic words, a classifier trained with our objective function achieves better performance when only a subset of the data is present.
Feature Attribution
In this section, we give formal definitions of feature attribution and a primer on [Path] Integrated Gradients (IG), which is the basis for our method.
Definition 2.1 Given a function INLINEFORM0 that represents a model and an input INLINEFORM1 , an attribution of the prediction at input INLINEFORM2 is a vector INLINEFORM3 , where INLINEFORM4 is defined as the attribution of INLINEFORM5 .
Feature attribution methods have been studied to understand the contribution of each input feature to the output prediction score. This contribution, then, can further be used to interpret model decisions. Linear models are considered to be more desirable because of their implicit interpretability, where feature attribution is the product of the feature value and the coefficient. To some, non-linear models such as gradient boosting trees and neural networks are less favorable due to the fact that they do not enjoy such transparent contribution of each feature and are harder to interpret BIBREF19 .
Despite the complexity of these models, prior work has been able to extract attributions with gradient-based methods BIBREF3, Shapley values from game theory (SHAP) BIBREF2, or other similar methods BIBREF5, BIBREF20. Some of these attribution methods, for example Path Integrated Gradients and SHAP, not only follow Definition SECREF3, but also satisfy axioms or properties that resemble linear models. One of these axioms is completeness, which postulates that the sum of attributions should be equal to the difference between the model output and the output at the uninformative baseline.
Integrated Gradients
Integrated Gradients BIBREF4 is a model attribution technique applicable to all models whose outputs are differentiable w.r.t. their inputs. IG produces feature attributions relative to an uninformative baseline. This baseline input is designed to produce a high-entropy prediction representing uncertainty. IG, then, interpolates the baseline towards the actual input, with the prediction moving from uncertainty to certainty in the process. Building on the notion that the gradient of a function, INLINEFORM0 , with respect to the input can characterize the sensitivity of INLINEFORM1 for each input dimension, IG aggregates the gradients of INLINEFORM2 with respect to the input along this path using a path integral. The crux of using a path integral rather than the gradient at the input alone is that INLINEFORM3 's gradients might be saturated around the input, and integrating over a path alleviates this phenomenon. Even though there can be infinitely many paths from a baseline to an input point, Integrated Gradients takes the straight-line path between the two. We give the formal definition from the original paper in SECREF4 .
Definition 2.2 Given an input INLINEFORM0 and baseline INLINEFORM1 , the integrated gradient along the INLINEFORM2 dimension is defined as follows. DISPLAYFORM0
where INLINEFORM0 represents the gradient of INLINEFORM1 along the INLINEFORM2 dimension at INLINEFORM3 .
In the NLP setting, INLINEFORM0 is the concatenated embedding of the input sequence. The attribution of each token is the sum of the attributions of its embedding.
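The sketch below approximates the path integral in Definition 2.2 with a Riemann sum over interpolated token embeddings and sums the embedding-dimension attributions per token; the `score_fn` interface (embeddings to a scalar class score) and the all-zero baseline are illustrative assumptions.

```python
import torch

def integrated_gradients(score_fn, embeddings, baseline=None, steps=50):
    """Riemann-sum approximation of Integrated Gradients for one input.

    `embeddings`: [seq_len, emb_dim] token embeddings of the input sentence.
    `score_fn`: maps an embedding tensor to a scalar class score (assumed API).
    Returns per-token attributions (summed over embedding dimensions).
    """
    if baseline is None:
        baseline = torch.zeros_like(embeddings)
    total_grads = torch.zeros_like(embeddings)
    for k in range(1, steps + 1):
        alpha = k / steps
        point = (baseline + alpha * (embeddings - baseline)).requires_grad_(True)
        grad = torch.autograd.grad(score_fn(point), point)[0]
        total_grads += grad
    attributions = (embeddings - baseline) * (total_grads / steps)
    return attributions.sum(dim=-1)  # one attribution per token
```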
There are other explainability methods that attribute a model's decision to its features, but we chose IG in this framework due to several of its characteristics. First, it is both theoretically justified BIBREF4 and proven to be effective in NLP-related tasks BIBREF21 . Second, the IG formula in SECREF4 is differentiable everywhere with respect to model parameters. Lastly, it is lightweight in terms of implementation and execution complexity.
Incorporating Priors
Problems in data manifest themselves in a trained model's performance on classification or fairness metrics. Traditionally, model deficiencies were addressed by providing priors through extensive feature engineering and collecting more data. Recently, attributions have helped uncover deficiencies that cause models to perform poorly, but they do not offer actionability.
To this end, we propose to add an extra term to the objective function to penalize the INLINEFORM0 distance between model attributions on certain features and target attribution values. This modification allows model practitioners to inject priors. For example, consider a model that tends to predict every sentence containing “gay” as toxic in a comment moderation system. Penalizing non-zero attributions on the tokens identifying protected groups would force the model to focus more on the context words rather than mere existence of certain tokens.
We give the formal definition of the new objective function that incorporates priors as follows:
Definition 3.1 Given a vector INLINEFORM0 of size INLINEFORM1 , where INLINEFORM2 is the length of the input sequence and INLINEFORM3 is the attribution target value for the INLINEFORM4 th token in the input sequence. The prior loss for a scalar output is defined as: DISPLAYFORM0
where INLINEFORM0 refers to attribution of the INLINEFORM1 th token as in Definition SECREF3 .
For a multi-class problem, we train our model with the following joint objective, DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are the attribution and attribution target for class INLINEFORM2 , INLINEFORM3 is the hyperparameter that controls the strength of the prior loss and INLINEFORM4 is the cross-entropy loss defined as follows: DISPLAYFORM0
where INLINEFORM0 is an indicator vector of the ground truth label and INLINEFORM1 is the posterior probability of class INLINEFORM2 .
The joint objective function is differentiable w.r.t. model parameters when attribution is calculated through Equation EQREF5 and can be trained with most off-the-shelf optimizers. The proposed objective is not dataset-dependent and is applicable to different problem settings such as sentiment classification, abuse detection, etc. It only requires users to specify the target attribution value for tokens of interest in the corpus. We illustrate the effectiveness of our method by applying it to a toxic comment classification problem. In the next section, we first show how we set the target attribution value for identity terms to remove unintended biases while retaining the same performance on the original task. Then, using the same technique, we show how to set target attribution for toxic words to improve classifier performance in a scarce data setting.
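A hedged sketch of the joint objective, simplified to a single class of interest with per-token attributions; the shapes and the boolean mask over pre-selected tokens (identity terms with target 0, or toxic words with a positive target) are assumptions rather than the exact formulation.

```python
import torch.nn.functional as F

def joint_loss(logits, labels, attributions, targets, token_mask, lam=1.0):
    """Cross-entropy plus the attribution prior loss (illustrative sketch).

    `attributions`, `targets`: [batch, seq_len] differentiable per-token
    attributions for the class of interest (e.g. Integrated Gradients computed
    with create_graph=True) and their target values; `token_mask` selects the
    pre-specified tokens that the prior is imposed on.
    """
    ce = F.cross_entropy(logits, labels)
    prior = (((attributions - targets) * token_mask.float()) ** 2).sum()
    return ce + lam * prior
```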
Experiments
Discussion and Related Work
Conclusion and Future Work
In this paper, we proposed making model explanations actionable, enabling ML practitioners to enforce priors on their models. We apply this technique to model fairness in toxic comment classification. Our method incorporates Path Integrated Gradients attributions into the objective function, with the aim of stopping the classifier from carrying along false-positive bias from the data by penalizing it when it focuses on identity words.
Our experiments indicate that the models trained jointly with cross-entropy and prior loss do not suffer a performance drop on the original task, while achieving better performance on fairness metrics on the template-based dataset. Applying model attribution as a fine-tuning step on a trained classifier makes it converge to a more debiased classifier in just a few epochs. Additionally, we show that the model can also be forced to focus on pre-determined tokens.
There are several avenues we can explore as future research. Our technique can be applied to implement a more robust model by penalizing the attributions falling outside of tokens annotated to be relevant to the predicted class. Another avenue is to incorporate different model attribution strategies such as DeepLRP BIBREF5 into the objective function. Finally, it would be worthwhile to invest in a technique to extract problematic terms from the model automatically rather than providing prescribed identity or toxic terms.
Acknowledgments
We thank Salem Haykal, Ankur Taly, Diego Garcia-Olano, Raz Mathias, and Mukund Sundararajan for their valuable feedback and insightful discussions. | Wikipedia toxic comments |
9ae084e76095194135cd602b2cdb5fb53f2935c1 | 9ae084e76095194135cd602b2cdb5fb53f2935c1_0 | Q: What metrics are used for evaluation?
Text: Introduction
End-to-end models such as Listen, Attend & Spell (LAS) BIBREF0 or the Recurrent Neural Network Transducer (RNN-T) BIBREF1 are sequence models that directly define $P(W | X)$, the posterior probability of the word or subword sequence $W$ given an audio frame sequence $X$, with no chaining of sub-module probabilities. State-of-the-art, or near state-of-the-art results have been reported for these models on challenging tasks BIBREF2, BIBREF3.
End-to-end ASR models in essence do not include independently trained symbols-only or acoustics-only sub-components. As such, they do not provide a clear role for language models $P(W)$ trained only on text/transcript data. There are, however, many situations where we would like to use a separate LM to complement or modify a given ASR system. In particular, no matter how plentiful the paired {audio, transcript} training data, there are typically orders of magnitude more text-only data available. There are also many practical applications of ASR where we wish to adapt the language model, e.g., biasing the recognition grammar towards a list of specific words or phrases for a specific context.
The research community has been keenly aware of the importance of this issue, and has responded with a number of approaches, under the rubric of “Fusion”. The most popular of these is “Shallow Fusion” BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, which is simple log-linear interpolation between the scores from the end-to-end model and the separately-trained LM. More structured approaches, “Deep Fusion” BIBREF9, “Cold Fusion” BIBREF10 and “Component Fusion” BIBREF11 jointly train an end-to-end model with a pre-trained LM, with the goal of learning the optimal combination of the two, aided by gating mechanisms applied to the set of joint scores. These methods have not replaced the simple Shallow Fusion method as the go-to method in most of the ASR community. Part of the appeal of Shallow Fusion is that it does not require model retraining – it can be applied purely at decoding time. The Density Ratio approach proposed here can be seen as an extension of Shallow Fusion, sharing some of its simplicity and practicality, but offering a theoretical grounding in Bayes' rule.
After describing the historical context, theory and practical implementation of the proposed Density Ratio method, this article describes experiments comparing the method to Shallow Fusion in a cross-domain scenario. An RNN-T model was trained on large-scale speech data with semi-supervised transcripts from YouTube videos, and then evaluated on data from a live Voice Search service, using an RNN-LM trained on Voice Search transcripts to try to boost performance. Then, exploring the transition between cross-domain and in-domain, limited amounts of Voice Search speech data were used to fine-tune the YouTube-trained RNN-T model, followed by LM fusion via both the Density Ratio method and Shallow Fusion. The ratio method was found to produce consistent gains over Shallow Fusion in all scenarios examined.
A Brief History of Language Model incorporation in ASR
Generative models and Bayes' rule. The Noisy Channel Model underlying the origins of statistical ASR BIBREF12 used Bayes' rule to combine generative models of both the acoustics $p(X|W)$ and the symbol sequence $P(W)$:
for an acoustic feature vector sequence $X = {\mbox{\bf x}}_1, ..., {\mbox{\bf x}}_T$ and a word or sub-word sequence $W = s_1, ..., s_U$ with possible time alignments $S_W = \lbrace ..., {\bf s}, ...\rbrace $. ASR decoding then uses the posterior probability $P(W|X)$. A prior $p({\bf s}| W)$ on alignments can be implemented e.g. via a simple 1st-order state transition model. Though lacking in discriminative power, the paradigm provides a clear theoretical framework for decoupling the acoustic model (AM) $p(X|W)$ and LM $P(W)$.
Hybrid model for DNNs/LSTMs within original ASR framework. The advent of highly discriminative Deep Neural Networks (DNNs) BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17 and Long Short Term Memory models (LSTMs) BIBREF18, BIBREF19 posed a challenge to the original Noisy Channel Model, as they produce phoneme- or state- level posteriors $P({\bf s}(t) | {\mbox{\bf x}}_t)$, not acoustic likelihoods $p({\mbox{\bf x}}_t | {\bf s}(t))$. The “hybrid” model BIBREF20 proposed the use of scaled likelihoods, i.e. posteriors divided by separately estimated state priors $P(w)$. For bidirectional LSTMs, the scaled-likelihood over a particular alignment ${\bf s}$ is taken to be
using $k(X)$ to represent a $p(X)$-dependent term shared by all hypotheses $W$, that does not affect decoding. This “pseudo-generative” score can then be plugged into the original model of Eq. (DISPLAY_FORM2) and used for ASR decoding with an arbitrary LM $P(W)$. For much of the ASR community, this approach still constitutes the state-of-the-art BIBREF2, BIBREF21, BIBREF22.
Shallow Fusion. The most popular approach to LM incorporation for end-to-end ASR is a linear interpolation,
with no claim to direct interpretability according to probability theory, and often a reward for sequence length $|W|$, scaled by a factor $\beta $ BIBREF5, BIBREF7, BIBREF8, BIBREF23.
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: A Sequence-level Hybrid Pseudo-Generative Model
The model makes the following assumptions:
The source domain $\psi $ has some true joint distribution $P_{\psi }(W, X)$ over text and audio;
The target domain $\tau $ has some other true joint distribution $P_{\tau }(W, X)$;
A source domain end-to-end model (e.g. RNN-T) captures $P_{\psi }(W | X)$ reasonably well;
Separately trained LMs (e.g. RNN-LMs) capture $P_{\psi }(W)$ and $P_{\tau }(W)$ reasonably well;
$p_{\psi }(X | W)$ is roughly equal to $p_{\tau }(X | W)$, i.e. the two domains are acoustically consistent; and
The target domain posterior, $P_{\tau }(W | X)$, is unknown.
The starting point for the proposed Density Ratio Method is then to express a “hybrid” scaled acoustic likelihood for the source domain, in a manner paralleling the original hybrid model BIBREF20:
Similarly, for the target domain:
Given the stated assumptions, one can then estimate the target domain posterior as:
with $k(X) = p_{\psi }(X) / p_{\tau }(X)$ shared by all hypotheses $W$, and the ratio $P_{\tau }(W) / {P_{\psi }(W)}$ (really a probability mass ratio) giving the proposed method its name.
In essence, this model is just an application of Bayes' rule to end-to-end models and separate LMs. The approach can be viewed as the sequence-level version of the classic hybrid model BIBREF20. Similar use of Bayes' rule to combine ASR scores with RNN-LMs has been described elsewhere, e.g. in work connecting grapheme-level outputs with word-level LMs BIBREF6, BIBREF24, BIBREF25. However, to our knowledge this approach has not been applied to end-to-end models in cross-domain settings, where one wishes to leverage a language model from the target domain. For a perspective on a “pure” (non-hybrid) deep generative approach to ASR, see BIBREF26.
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Top-down fundamentals of RNN-T
The RNN Transducer (RNN-T) BIBREF1 defines a sequence-level posterior $P(W|X)$ for a given acoustic feature vector sequence $X = {\mbox{\bf x}}_1, ..., {\mbox{\bf x}}_T$ and a given word or sub-word sequence $W = s_1, ..., s_U$ in terms of possible alignments $S_W = \lbrace ..., ({\bf s}, {\bf t}), ... \rbrace $ of $W$ to $X$. The tuple $({\bf s}, {\bf t})$ denotes a specific alignment sequence, a symbol sequence and corresponding sequence of time indices, consistent with the sequence $W$ and utterance $X$. The symbols in ${\bf s}$ are elements of an expanded symbol space that includes optional, repeatable blank symbols used to represent acoustics-only path extensions, where the time index is incremented, but no non-blank symbols are added. Conversely, non-blank symbols are only added to a partial path time-synchronously. (I.e., using $i$ to index elements of ${\bf s}$ and ${\bf t}$, $t_{i+1} = t_i + 1$ if $s_{i+1}$ is blank, and $t_{i + 1} = t_i$ if $s_{i+1}$ is non-blank). $P(W|X)$ is defined by summing over alignment posteriors:
Finally, $P(s_{i+1} | X, t_i, s_{1:i})$ is defined using an LSTM-based acoustic encoder with input $X$, an LSTM-based label encoder with non-blank inputs $s$, and a feed-forward joint network combining outputs from the two encoders to produce predictions for all symbols $s$, including the blank symbol.
The Forward-Backward algorithm can be used to calculate Eq. (DISPLAY_FORM16) efficiently during training, and Viterbi-based beam search (based on the argmax over possible alignments) can be used for decoding when $W$ is unknown BIBREF1, BIBREF27.
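For illustration, the sketch below sums over alignments with the forward recursion in the log domain, assuming blank and label log-probabilities have been precomputed from the joint network over the $T \times (U+1)$ lattice.

```python
import numpy as np

def rnnt_forward_logprob(log_blank, log_label):
    """log P(W | X) by summing over RNN-T alignments (forward algorithm).

    log_blank[t, u]: log-prob of emitting blank at lattice node (t, u); [T, U+1].
    log_label[t, u]: log-prob of emitting label u+1 at node (t, u); [T, U].
    """
    T, U1 = log_blank.shape
    U = U1 - 1
    alpha = np.full((T, U + 1), -np.inf)
    alpha[0, 0] = 0.0
    for t in range(T):
        for u in range(U + 1):
            if t == 0 and u == 0:
                continue
            terms = []
            if t > 0:
                terms.append(alpha[t - 1, u] + log_blank[t - 1, u])   # blank move
            if u > 0:
                terms.append(alpha[t, u - 1] + log_label[t, u - 1])   # label move
            alpha[t, u] = np.logaddexp.reduce(terms)
    return alpha[T - 1, U] + log_blank[T - 1, U]  # final blank terminates
```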
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Application of Shallow Fusion to RNN-T
Shallow Fusion (Eq. (DISPLAY_FORM4)) can be implemented in RNN-T for each time-synchronous non-blank symbol path extension. The LM score corresponding to the same symbol extension can be “fused” into the log-domain score used for decoding:
This is only done when the hypothesized path extension $s_{i+1}$ is a non-blank symbol; the decoding score for blank symbol path extensions is the unmodified $\log P(s_{i+1} | X, t_i, s_{1:i})$.
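A minimal sketch of this per-extension score; the blank/non-blank handling follows the description above, and the variable names are illustrative.

```python
def shallow_fusion_score(log_p_rnnt, log_p_lm, lam, is_blank):
    """Decoding score for one path extension under Shallow Fusion.

    Non-blank extensions add the scaled external-LM log-probability of the same
    symbol; blank extensions keep the unmodified RNN-T score.
    """
    return log_p_rnnt if is_blank else log_p_rnnt + lam * log_p_lm
```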
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Application of the Density Ratio Method to RNN-T
Eq. (DISPLAY_FORM14) can be implemented via an estimated RNN-T “pseudo-posterior”, when $s_{i+1}$ is a non-blank symbol:
This estimate is not normalized over symbol outputs, but it plugs into Eq. () and Eq. (DISPLAY_FORM16) to implement the RNN-T version of Eq. (DISPLAY_FORM14). In practice, scaling factors $\lambda _\psi $ and $\lambda _\tau $ on the LM scores, and a non-blank reward $\beta $, are used in the final decoding score:
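Assuming the (elided) final decoding score combines the terms log-linearly as described in the text, a sketch of the per-extension Density Ratio score with separate scaling factors and a non-blank reward:

```python
def density_ratio_score(log_p_rnnt, log_p_lm_source, log_p_lm_target,
                        lam_source, lam_target, beta, is_blank):
    """Decoding score for one path extension under the Density Ratio method.

    For a non-blank symbol, the source-domain LM score is subtracted and the
    target-domain LM score added, each with its own scaling factor, plus a
    non-blank reward; blank extensions keep the plain RNN-T score.
    """
    if is_blank:
        return log_p_rnnt
    return (log_p_rnnt
            - lam_source * log_p_lm_source
            + lam_target * log_p_lm_target
            + beta)
```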
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Implementation
The ratio method is very simple to implement. The procedure is essentially to:
Train an end-to-end model such as RNN-T on a given source domain training set $\psi $ (paired audio/transcript data);
Train a neural LM such as RNN-LM on text transcripts from the same training set $\psi $;
Train a second RNN-LM on the target domain $\tau $;
When decoding on the target domain, modify the RNN-T output by the ratio of target/training RNN-LMs, as defined in Eq. (DISPLAY_FORM21), and illustrated in Fig. FIGREF1.
The method is purely a decode-time method; no joint training is involved, but it does require tuning of the LM scaling factor(s) (as does Shallow Fusion). A held-out set can be used for that purpose.
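A hedged sketch of that held-out tuning step, grid-searching the LM scaling factor and non-blank reward; `decode_fn` is a hypothetical helper that runs fused decoding with the given factors and returns WER.

```python
import itertools

def tune_scaling_factors(decode_fn, dev_set, lam_grid, beta_grid):
    """Grid-search the LM scaling factor and sequence length reward on a
    held-out set; returns the (lambda, beta, wer) triple with the lowest WER."""
    best = (None, None, float("inf"))
    for lam, beta in itertools.product(lam_grid, beta_grid):
        wer = decode_fn(dev_set, lam, beta)
        if wer < best[2]:
            best = (lam, beta, wer)
    return best
```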
Training, development and evaluation data ::: Training data
The following data sources were used to train the RNN-T and associated RNN-LMs in this study.
Source-domain baseline RNN-T: approximately 120M segmented utterances (190,000 hours of audio) from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering BIBREF28.
Source-domain normalizing RNN-LM: transcripts from the same 120M utterance YouTube training set. This corresponds to about 3B tokens of the sub-word units used (see below, Section SECREF30).
Target-domain RNN-LM: 21M text-only utterance-level transcripts from anonymized, manually transcribed audio data, representative of data from a Voice Search service. This corresponds to about 275M sub-word tokens.
Target-domain RNN-T fine-tuning data: 10K, 100K, 1M and 21M utterance-level {audio, transcript} pairs taken from anonymized, transcribed Voice Search data. These fine-tuning sets roughly correspond to 10 hours, 100 hours, 1000 hours and 21,000 hours of audio, respectively.
Training, development and evaluation data ::: Dev and Eval Sets
The following data sources were used to choose scaling factors and/or evaluate the final model performance.
Source-domain Eval Set (YouTube). The in-domain performance of the YouTube-trained RNN-T baseline was measured on speech data taken from Preferred Channels on YouTube BIBREF29. The test set is taken from 296 videos from 13 categories, with each video averaging 5 minutes in length, corresponding to 25 hours of audio and 250,000 word tokens in total.
Target-domain Dev & Eval sets (Voice Search). The Voice Search dev and eval sets each consist of approximately 7,500 anonymized utterances (about 33,000 words and corresponding to about 8 hours of audio), distinct from the fine-tuning data described earlier, but representative of the same Voice Search service.
Cross-domain evaluation: YouTube-trained RNN-T @!START@$\rightarrow $@!END@ Voice Search
The first set of experiments uses an RNN-T model trained on {audio, transcript} pairs taken from segmented YouTube videos, and evaluates the cross-domain generalization of this model to test utterances taken from a Voice Search dataset, with and without fusion to an external LM.
Cross-domain evaluation: YouTube-trained RNN-T @!START@$\rightarrow $@!END@ Voice Search ::: RNN-T and RNN-LM model settings
The overall structure of the models used here is as follows:
RNN-T:
Acoustic features: 768-dimensional feature vectors obtained from 3 stacked 256-dimensional logmel feature vectors, extracted every 20 msec from 16 kHz waveforms, and sub-sampled with a stride of 3, for an effective final feature vector step size of 60 msec.
Acoustic encoder: 6 LSTM layers x (2048 units with 1024-dimensional projection); bidirectional.
Label encoder (aka “decoder” in end-to-end ASR jargon): 1 LSTM layer x (2048 units with 1024-dimensional projection).
RNN-T joint network hidden dimension size: 1024.
Output classes: 10,000 sub-word “morph” units BIBREF30 , input via a 512-dimensional embedding.
Total number of parameters: approximately 340M
RNN-LMs for both source and target domains were set to match the RNN-T decoder structure and size:
1 layer x (2048 units with 1024-dimensional projection).
Output classes: 10,000 morphs (same as the RNN-T).
Total number of parameters: approximately 30M.
The RNN-T and the RNN-LMs were independently trained on 128-core tensor processing units (TPUs) using full unrolling and an effective batch size of 4096. All models were trained using the Adam optimization method BIBREF31 for 100K-125K steps, corresponding to about 4 passes over the 120M utterance YouTube training set, and 20 passes over the 21M utterance Voice Search training set. The trained RNN-LM perplexities (shown in Table TABREF28) show the benefit to Voice Search test perplexity of training on Voice Search transcripts.
Cross-domain evaluation: YouTube-trained RNN-T @!START@$\rightarrow $@!END@ Voice Search ::: Experiments and results
In the first set of experiments, the constraint $\lambda _\psi = \lambda _\tau $ was used to simplify the search for the LM scaling factor in Eq. DISPLAY_FORM21.
The LM scaling factor affects the relative value of the symbols-only LM score vs. that of the acoustics-aware RNN-T score. This typically alters the balance of insertion vs. deletion errors. In turn, this effect can be offset (or amplified) by the sequence length scaling factor $\beta $ in Eq. (DISPLAY_FORM4), which in the case of RNN-T is implemented as a non-blank symbol emission reward. (The blank symbol only consumes acoustic frames, not LM symbols BIBREF1). Given that both factors have related effects on overall WER, the LM scaling factor(s) and the sequence length scaling factor need to be tuned jointly.
Fig. FIGREF40 and Fig. FIGREF41 illustrate the different relative sensitivities of WER to these factors for Shallow Fusion and the Density Ratio method, measured on the dev set.
In the second set of experiments, $\beta $ was fixed at -0.1, but the constraint $\lambda _\psi = \lambda _\tau $ was lifted, and a range of combinations was evaluated on the dev set. The results are shown in Fig. FIGREF43. The shading in Figs. FIGREF40, FIGREF41 and FIGREF43 uses the same midpoint value of 15.0 to highlight the results.
The best combinations of scaling factors from the dev set evaluations (see Fig. FIGREF40, Fig. FIGREF41 and Fig. FIGREF43) were used to generate the final eval set results, WERs and associated deletion, insertion and substitution rates, shown in Table TABREF44. These results are summarized in Table TABREF45, this time showing the exact values of LM scaling factor(s) used.
Fine-tuning a YouTube-trained RNN-T using limited Voice Search audio data
The experiments in Section SECREF5 showed that an LM trained on text from the target Voice Search domain can boost the cross-domain performance of an RNN-T. The next experiments examined fine-tuning the original YouTube-trained RNN-T on varied, limited amounts of Voice Search {audio, transcript} data. After fine-tuning, LM fusion was applied, again comparing Shallow Fusion and the Density Ratio method.
Fine-tuning simply uses the YouTube-trained RNN-T model to warm-start training on the limited Voice Search {audio, transcript} data. This is an effective way of leveraging the limited Voice Search audio data: within a few thousand steps, the fine-tuned model reaches a decent level of performance on the fine-tuning task – though beyond that, it over-trains. A held-out set can be used to gauge over-training and stop training for varying amounts of fine-tuning data.
The experiments here fine-tuned the YouTube-trained RNN-T baseline using 10 hours, 100 hours and 1000 hours of Voice Search data, as described in Section SECREF27. (The source domain RNN-LM was not fine-tuned). For each fine-tuned model, Shallow Fusion and the Density Ratio method were used to evaluate incorporation of the Voice Search RNN-LM, described in Section SECREF5, trained on text transcripts from the much larger set of 21M Voice Search utterances. As in Section SECREF5, the dev set was used to tune the LM scaling factor(s) and the sequence length scaling factor $\beta $. To ease parameter tuning, the constraint $\lambda _\psi = \lambda _\tau $ was used for the Density Ratio method. The best combinations of scaling factors from the dev set were then used to generate the final eval results, which are shown in Table TABREF45
Discussion
The experiments described here examined the generalization of a YouTube-trained end-to-end RNN-T model to Voice Search speech data, using varying quantities (from zero to 100%) of Voice Search audio data, and 100% of the available Voice Search text data. The results show that in spite of the vast range of acoustic and linguistic patterns covered by the YouTube-trained model, it is still possible to improve performance on Voice Search utterances significantly via Voice Search specific fine-tuning and LM fusion. In particular, LM fusion significantly boosts performance when only a limited quantity of Voice Search fine-tuning data is used.
The Density Ratio method consistently outperformed Shallow Fusion for the cross-domain scenarios examined, with and without fine-tuning to audio data from the target domain. Furthermore, the gains in WER over the baseline are significantly larger for the Density Ratio method than for Shallow Fusion, with up to 28% relative reduction in WER (17.5% $\rightarrow $ 12.5%) compared to up to 17% relative reduction (17.5% $\rightarrow $ 14.5%) for Shallow Fusion, in the no fine-tuning scenario.
Notably, the “sweet spot” of effective combinations of LM scaling factor and sequence length scaling factor is significantly larger for the Density Ratio method than for Shallow Fusion (see Fig. FIGREF40 and Fig. FIGREF41). Compared to Shallow Fusion, larger absolute values of the scaling factor can be used.
A full sweep of the LM scaling factors ($\lambda _\psi $ and $\lambda _\tau $) can improve over the constrained setting $\lambda _\psi = \lambda _\tau $, though not by much. Fig. FIGREF43 shows that the optimal setting of the two factors follows a roughly linear pattern along an off-diagonal band.
Fine-tuning using transcribed Voice Search audio data leads to a large boost in performance over the YouTube-trained baseline. Nonetheless, both fusion methods give gains on top of fine-tuning, especially for the limited quantities of fine-tuning data. With 10 hours of fine-tuning, the Density Ratio method gives a 20% relative gain in WER, compared to 12% relative for Shallow Fusion. For 1000 hours of fine-tuning data, the Density Ratio method gives a 10.5% relative gain over the fine-tuned baseline, compared to 7% relative for Shallow Fusion. Even for 21,000 hours of fine-tuning data, i.e. the entire Voice Search training set, the Density Ratio method gives an added boost, from 7.8% to 7.4% WER, a 5% relative improvement.
A clear weakness of the proposed method is the apparent need for scaling factors on the LM outputs. In addition to the assumptions made (outlined in Section SECREF5), it is possible that this is due to the implicit LM in the RNN-T being more limited than the RNN-LMs used.
Summary
This article proposed and evaluated experimentally an alternative to Shallow Fusion for incorporation of an external LM into an end-to-end RNN-T model applied to a target domain different from the source domain it was trained on. The Density Ratio method is simple conceptually, easy to implement, and grounded in Bayes' rule, extending the classic hybrid ASR model to end-to-end models. In contrast, the most commonly reported approach to LM incorporation, Shallow Fusion, has no clear interpretation from probability theory. Evaluated on a YouTube $\rightarrow $ Voice Search cross-domain scenario, the method was found to be effective, with up to 28% relative gains in word error over the non-fused baseline, and consistently outperforming Shallow Fusion by a significant margin. The method continues to produce gains when fine-tuning to paired target domain data, though the gains diminish as more fine-tuning data is used. Evaluation using a variety of cross-domain evaluation scenarios is needed to establish the general effectiveness of the method.
Summary ::: Acknowledgments
The authors thank Matt Shannon and Khe Chai Sim for valuable feedback regarding this work. | word error rate |
67ee7a53aa57ce0d0bc1a20d41b64cb20303f4b7 | 67ee7a53aa57ce0d0bc1a20d41b64cb20303f4b7_0 | Q: How much training data is used?
Text: Introduction
End-to-end models such as Listen, Attend & Spell (LAS) BIBREF0 or the Recurrent Neural Network Transducer (RNN-T) BIBREF1 are sequence models that directly define $P(W | X)$, the posterior probability of the word or subword sequence $W$ given an audio frame sequence $X$, with no chaining of sub-module probabilities. State-of-the-art, or near state-of-the-art results have been reported for these models on challenging tasks BIBREF2, BIBREF3.
End-to-end ASR models in essence do not include independently trained symbols-only or acoustics-only sub-components. As such, they do not provide a clear role for language models $P(W)$ trained only on text/transcript data. There are, however, many situations where we would like to use a separate LM to complement or modify a given ASR system. In particular, no matter how plentiful the paired {audio, transcript} training data, there are typically orders of magnitude more text-only data available. There are also many practical applications of ASR where we wish to adapt the language model, e.g., biasing the recognition grammar towards a list of specific words or phrases for a specific context.
The research community has been keenly aware of the importance of this issue, and has responded with a number of approaches, under the rubric of “Fusion”. The most popular of these is “Shallow Fusion” BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, which is simple log-linear interpolation between the scores from the end-to-end model and the separately-trained LM. More structured approaches, “Deep Fusion” BIBREF9, “Cold Fusion” BIBREF10 and “Component Fusion” BIBREF11 jointly train an end-to-end model with a pre-trained LM, with the goal of learning the optimal combination of the two, aided by gating mechanisms applied to the set of joint scores. These methods have not replaced the simple Shallow Fusion method as the go-to method in most of the ASR community. Part of the appeal of Shallow Fusion is that it does not require model retraining – it can be applied purely at decoding time. The Density Ratio approach proposed here can be seen as an extension of Shallow Fusion, sharing some of its simplicity and practicality, but offering a theoretical grounding in Bayes' rule.
After describing the historical context, theory and practical implementation of the proposed Density Ratio method, this article describes experiments comparing the method to Shallow Fusion in a cross-domain scenario. An RNN-T model was trained on large-scale speech data with semi-supervised transcripts from YouTube videos, and then evaluated on data from a live Voice Search service, using an RNN-LM trained on Voice Search transcripts to try to boost performance. Then, exploring the transition between cross-domain and in-domain, limited amounts of Voice Search speech data were used to fine-tune the YouTube-trained RNN-T model, followed by LM fusion via both the Density Ratio method and Shallow Fusion. The ratio method was found to produce consistent gains over Shallow Fusion in all scenarios examined.
A Brief History of Language Model incorporation in ASR
Generative models and Bayes' rule. The Noisy Channel Model underlying the origins of statistical ASR BIBREF12 used Bayes' rule to combine generative models of both the acoustics $p(X|W)$ and the symbol sequence $P(W)$:
for an acoustic feature vector sequence $X = {\mbox{\bf x}}_1, ..., {\mbox{\bf x}}_T$ and a word or sub-word sequence $W = s_1, ..., s_U$ with possible time alignments $S_W = \lbrace ..., {\bf s}, ...\rbrace $. ASR decoding then uses the posterior probability $P(W|X)$. A prior $p({\bf s}| W)$ on alignments can be implemented e.g. via a simple 1st-order state transition model. Though lacking in discriminative power, the paradigm provides a clear theoretical framework for decoupling the acoustic model (AM) $p(X|W)$ and LM $P(W)$.
Hybrid model for DNNs/LSTMs within original ASR framework. The advent of highly discriminative Deep Neural Networks (DNNs) BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17 and Long Short Term Memory models (LSTMs) BIBREF18, BIBREF19 posed a challenge to the original Noisy Channel Model, as they produce phoneme- or state- level posteriors $P({\bf s}(t) | {\mbox{\bf x}}_t)$, not acoustic likelihoods $p({\mbox{\bf x}}_t | {\bf s}(t))$. The “hybrid” model BIBREF20 proposed the use of scaled likelihoods, i.e. posteriors divided by separately estimated state priors $P(w)$. For bidirectional LSTMs, the scaled-likelihood over a particular alignment ${\bf s}$ is taken to be
using $k(X)$ to represent a $p(X)$-dependent term shared by all hypotheses $W$, that does not affect decoding. This “pseudo-generative” score can then be plugged into the original model of Eq. (DISPLAY_FORM2) and used for ASR decoding with an arbitrary LM $P(W)$. For much of the ASR community, this approach still constitutes the state-of-the-art BIBREF2, BIBREF21, BIBREF22.
Shallow Fusion. The most popular approach to LM incorporation for end-to-end ASR is a linear interpolation,
with no claim to direct interpretability according to probability theory, and often a reward for sequence length $|W|$, scaled by a factor $\beta $ BIBREF5, BIBREF7, BIBREF8, BIBREF23.
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: A Sequence-level Hybrid Pseudo-Generative Model
The model makes the following assumptions:
The source domain $\psi $ has some true joint distribution $P_{\psi }(W, X)$ over text and audio;
The target domain $\tau $ has some other true joint distribution $P_{\tau }(W, X)$;
A source domain end-to-end model (e.g. RNN-T) captures $P_{\psi }(W | X)$ reasonably well;
Separately trained LMs (e.g. RNN-LMs) capture $P_{\psi }(W)$ and $P_{\tau }(W)$ reasonably well;
$p_{\psi }(X | W)$ is roughly equal to $p_{\tau }(X | W)$, i.e. the two domains are acoustically consistent; and
The target domain posterior, $P_{\tau }(W | X)$, is unknown.
The starting point for the proposed Density Ratio Method is then to express a “hybrid” scaled acoustic likelihood for the source domain, in a manner paralleling the original hybrid model BIBREF20:
Similarly, for the target domain:
Given the stated assumptions, one can then estimate the target domain posterior as:
with $k(X) = p_{\psi }(X) / p_{\tau }(X)$ shared by all hypotheses $W$, and the ratio $P_{\tau }(W) / {P_{\psi }(W)}$ (really a probability mass ratio) giving the proposed method its name.
In essence, this model is just an application of Bayes' rule to end-to-end models and separate LMs. The approach can be viewed as the sequence-level version of the classic hybrid model BIBREF20. Similar use of Bayes' rule to combine ASR scores with RNN-LMs has been described elsewhere, e.g. in work connecting grapheme-level outputs with word-level LMs BIBREF6, BIBREF24, BIBREF25. However, to our knowledge this approach has not been applied to end-to-end models in cross-domain settings, where one wishes to leverage a language model from the target domain. For a perspective on a “pure” (non-hybrid) deep generative approach to ASR, see BIBREF26.
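As a sequence-level illustration of this estimate (independent of the time-synchronous decoding integration described later), the hedged sketch below reranks an n-best list with the density ratio; the per-hypothesis score fields, the scaling factors and the length reward are assumptions added for practical use rather than part of the derivation above.

```python
def rescore_nbest(nbest, lam_source=1.0, lam_target=1.0, beta=0.0):
    """Rerank n-best hypotheses with the density ratio (illustrative sketch).

    Each hypothesis dict is assumed to carry log P_psi(W|X) from the source-
    domain end-to-end model and log-probabilities from the source- and
    target-domain LMs, plus its word count.
    """
    def fused_score(h):
        return (h["log_p_e2e"]
                - lam_source * h["log_p_lm_source"]
                + lam_target * h["log_p_lm_target"]
                + beta * h["num_words"])
    return sorted(nbest, key=fused_score, reverse=True)
```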
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Top-down fundamentals of RNN-T
The RNN Transducer (RNN-T) BIBREF1 defines a sequence-level posterior $P(W|X)$ for a given acoustic feature vector sequence $X = {\mbox{\bf x}}_1, ..., {\mbox{\bf x}}_T$ and a given word or sub-word sequence $W = s_1, ..., s_U$ in terms of possible alignments $S_W = \lbrace ..., ({\bf s}, {\bf t}), ... \rbrace $ of $W$ to $X$. The tuple $({\bf s}, {\bf t})$ denotes a specific alignment sequence, a symbol sequence and corresponding sequence of time indices, consistent with the sequence $W$ and utterance $X$. The symbols in ${\bf s}$ are elements of an expanded symbol space that includes optional, repeatable blank symbols used to represent acoustics-only path extensions, where the time index is incremented, but no non-blank symbols are added. Conversely, non-blank symbols are only added to a partial path time-synchronously. (I.e., using $i$ to index elements of ${\bf s}$ and ${\bf t}$, $t_{i+1} = t_i + 1$ if $s_{i+1}$ is blank, and $t_{i + 1} = t_i$ if $s_{i+1}$ is non-blank). $P(W|X)$ is defined by summing over alignment posteriors:
Finally, $P(s_{i+1} | X, t_i, s_{1:i})$ is defined using an LSTM-based acoustic encoder with input $X$, an LSTM-based label encoder with non-blank inputs $s$, and a feed-forward joint network combining outputs from the two encoders to produce predictions for all symbols $s$, including the blank symbol.
The Forward-Backward algorithm can be used to calculate Eq. (DISPLAY_FORM16) efficiently during training, and Viterbi-based beam search (based on the argmax over possible alignments) can be used for decoding when $W$ is unknown BIBREF1, BIBREF27.
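For concreteness, a toy log-domain forward recursion over the RNN-T output lattice is sketched below. It assumes the per-node blank and label log-probabilities have already been computed as dense arrays; in a real RNN-T these come from the joint network, and training uses an efficient batched implementation rather than Python loops.

```python
import numpy as np

def rnnt_log_posterior(log_blank, log_label):
    """Sum over all alignments for a T-frame utterance and a U-label target.

    log_blank[t, u]: log prob of emitting blank at frame t after u labels.
    log_label[t, u]: log prob of emitting label u+1 at frame t after u labels.
    Shapes: log_blank is (T, U+1), log_label is (T, U). Returns log P(W|X).
    """
    T, U_plus_1 = log_blank.shape
    U = U_plus_1 - 1
    alpha = np.full((T, U + 1), -np.inf)
    alpha[0, 0] = 0.0
    for t in range(T):
        for u in range(U + 1):
            if t == 0 and u == 0:
                continue
            terms = []
            if t > 0:  # reached via a blank, which consumes one acoustic frame
                terms.append(alpha[t - 1, u] + log_blank[t - 1, u])
            if u > 0:  # reached via a non-blank label, no frame consumed
                terms.append(alpha[t, u - 1] + log_label[t, u - 1])
            alpha[t, u] = np.logaddexp.reduce(terms)
    # A final blank terminates the alignment after the last frame.
    return alpha[T - 1, U] + log_blank[T - 1, U]
```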
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Application of Shallow Fusion to RNN-T
Shallow Fusion (Eq. (DISPLAY_FORM4)) can be implemented in RNN-T for each time-synchronous non-blank symbol path extension. The LM score corresponding to the same symbol extension can be “fused” into the log-domain score used for decoding:
This is only done when the hypothesized path extension $s_{i+1}$ is a non-blank symbol; the decoding score for blank symbol path extensions is the unmodified $\log P(s_{i+1} | X, t_i, s_{1:i})$.
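Inside the beam search inner loop this can be sketched as follows (illustrative names; lam is the LM scaling factor):

```python
def shallow_fusion_extension_score(log_p_rnnt, log_p_lm_next, is_blank, lam):
    """Shallow Fusion for a single RNN-T path extension: only hypothesized
    non-blank symbols receive the external LM bonus."""
    if is_blank:
        return log_p_rnnt
    return log_p_rnnt + lam * log_p_lm_next
```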
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Application of the Density Ratio Method to RNN-T
Eq. (DISPLAY_FORM14) can be implemented via an estimated RNN-T “pseudo-posterior”, when $s_{i+1}$ is a non-blank symbol:
This estimate is not normalized over symbol outputs, but it plugs into Eq. () and Eq. (DISPLAY_FORM16) to implement the RNN-T version of Eq. (DISPLAY_FORM14). In practice, scaling factors $\lambda _\psi $ and $\lambda _\tau $ on the LM scores, and a non-blank reward $\beta $, are used in the final decoding score:
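The corresponding per-extension decoding score, with the two LM scaling factors and the non-blank reward made explicit, can be sketched as (again, purely illustrative names):

```python
def density_ratio_extension_score(log_p_rnnt, log_p_lm_source, log_p_lm_target,
                                  is_blank, lam_source, lam_target, beta):
    """Density Ratio scoring of one RNN-T path extension: for non-blank
    symbols, subtract the scaled source-domain LM score, add the scaled
    target-domain LM score, and add the non-blank reward beta; blank
    extensions keep the raw RNN-T score."""
    if is_blank:
        return log_p_rnnt
    return (log_p_rnnt
            - lam_source * log_p_lm_source
            + lam_target * log_p_lm_target
            + beta)
```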
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Implementation
The ratio method is very simple to implement. The procedure is essentially to:
Train an end-to-end model such as RNN-T on a given source domain training set $\psi $ (paired audio/transcript data);
Train a neural LM such as RNN-LM on text transcripts from the same training set $\psi $;
Train a second RNN-LM on the target domain $\tau $;
When decoding on the target domain, modify the RNN-T output by the ratio of target/training RNN-LMs, as defined in Eq. (DISPLAY_FORM21), and illustrated in Fig. FIGREF1.
The method is purely a decode-time method; no joint training is involved, but it does require tuning of the LM scaling factor(s) (as does Shallow Fusion). A held-out set can be used for that purpose.
Training, development and evaluation data ::: Training data
The following data sources were used to train the RNN-T and associated RNN-LMs in this study.
Source-domain baseline RNN-T: approximately 120M segmented utterances (190,000 hours of audio) from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering BIBREF28.
Source-domain normalizing RNN-LM: transcripts from the same 120M utterance YouTube training set. This corresponds to about 3B tokens of the sub-word units used (see below, Section SECREF30).
Target-domain RNN-LM: 21M text-only utterance-level transcripts from anonymized, manually transcribed audio data, representative of data from a Voice Search service. This corresponds to about 275M sub-word tokens.
Target-domain RNN-T fine-tuning data: 10K, 100K, 1M and 21M utterance-level {audio, transcript} pairs taken from anonymized, transcribed Voice Search data. These fine-tuning sets roughly correspond to 10 hours, 100 hours, 1000 hours and 21,000 hours of audio, respectively.
Training, development and evaluation data ::: Dev and Eval Sets
The following data sources were used to choose scaling factors and/or evaluate the final model performance.
Source-domain Eval Set (YouTube). The in-domain performance of the YouTube-trained RNN-T baseline was measured on speech data taken from Preferred Channels on YouTube BIBREF29. The test set is taken from 296 videos from 13 categories, with each video averaging 5 minutes in length, corresponding to 25 hours of audio and 250,000 word tokens in total.
Target-domain Dev & Eval sets (Voice Search). The Voice Search dev and eval sets each consist of approximately 7,500 anonymized utterances (about 33,000 words and corresponding to about 8 hours of audio), distinct from the fine-tuning data described earlier, but representative of the same Voice Search service.
Cross-domain evaluation: YouTube-trained RNN-T $\rightarrow $ Voice Search
The first set of experiments uses an RNN-T model trained on {audio, transcript} pairs taken from segmented YouTube videos, and evaluates the cross-domain generalization of this model to test utterances taken from a Voice Search dataset, with and without fusion to an external LM.
Cross-domain evaluation: YouTube-trained RNN-T $\rightarrow $ Voice Search ::: RNN-T and RNN-LM model settings
The overall structure of the models used here is as follows:
RNN-T:
Acoustic features: 768-dimensional feature vectors obtained from 3 stacked 256-dimensional logmel feature vectors, extracted every 20 msec from 16 kHz waveforms, and sub-sampled with a stride of 3, for an effective final feature vector step size of 60 msec.
Acoustic encoder: 6 LSTM layers x (2048 units with 1024-dimensional projection); bidirectional.
Label encoder (aka “decoder” in end-to-end ASR jargon): 1 LSTM layer x (2048 units with 1024-dimensional projection).
RNN-T joint network hidden dimension size: 1024.
Output classes: 10,000 sub-word “morph” units BIBREF30, input via a 512-dimensional embedding.
Total number of parameters: approximately 340M.
RNN-LMs for both source and target domains were set to match the RNN-T decoder structure and size:
1 layer x (2048 units with 1024-dimensional projection).
Output classes: 10,000 morphs (same as the RNN-T).
Total number of parameters: approximately 30M.
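The settings above can be collected into a small configuration sketch (the dictionary layout and key names are hypothetical; only the numbers come from the listing):

```python
RNNT_CONFIG = {
    "features": {"dim": 768, "frame_step_ms": 60},
    "acoustic_encoder": {"layers": 6, "units": 2048, "projection": 1024,
                         "bidirectional": True},
    "label_encoder": {"layers": 1, "units": 2048, "projection": 1024},
    "joint_hidden": 1024,
    "output_classes": 10_000,        # sub-word "morph" units
    "embedding_dim": 512,
    "approx_params": 340_000_000,
}

RNNLM_CONFIG = {                     # same shape for source and target LMs
    "layers": 1, "units": 2048, "projection": 1024,
    "output_classes": 10_000,
    "approx_params": 30_000_000,
}
```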
The RNN-T and the RNN-LMs were independently trained on 128-core tensor processing units (TPUs) using full unrolling and an effective batch size of 4096. All models were trained using the Adam optimization method BIBREF31 for 100K-125K steps, corresponding to about 4 passes over the 120M utterance YouTube training set, and 20 passes over the 21M utterance Voice Search training set. The trained RNN-LM perplexities (shown in Table TABREF28) show the benefit to Voice Search test perplexity of training on Voice Search transcripts.
Cross-domain evaluation: YouTube-trained RNN-T $\rightarrow $ Voice Search ::: Experiments and results
In the first set of experiments, the constraint $\lambda _\psi = \lambda _\tau $ was used to simplify the search for the LM scaling factor in Eq. DISPLAY_FORM21. Fig. FIGREF40 and Fig. FIGREF41 illustrate the different relative sensitivities of WER to the LM scaling factor(s) for Shallow Fusion and the Density Ratio method, as well as the effect of the RNN-T sequence length scaling factor, measured on the dev set.
The LM scaling factor affects the relative weight of the symbols-only LM score vs. that of the acoustics-aware RNN-T score. This typically alters the balance of insertion vs. deletion errors. In turn, this effect can be offset (or amplified) by the sequence length scaling factor $\beta $ in Eq. (DISPLAY_FORM4), which in the case of RNN-T is implemented as a non-blank symbol emission reward. (The blank symbol only consumes acoustic frames, not LM symbols BIBREF1.) Given that both factors have related effects on overall WER, the LM scaling factor(s) and the sequence length scaling factor need to be tuned jointly.
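This joint tuning can be sketched as a plain grid search on the dev set (decode_and_score is a hypothetical helper that decodes one utterance with the given factors and returns its word errors and reference word count):

```python
import itertools

def tune_factors(dev_set, decode_and_score, lm_scales, non_blank_rewards):
    """Jointly sweep the LM scaling factor and the non-blank reward beta,
    returning the combination with the lowest dev-set WER."""
    best_combo, best_wer = None, float("inf")
    for lam, beta in itertools.product(lm_scales, non_blank_rewards):
        errors = words = 0
        for utt in dev_set:
            e, n = decode_and_score(utt, lam, beta)
            errors += e
            words += n
        wer = errors / words
        if wer < best_wer:
            best_combo, best_wer = (lam, beta), wer
    return best_combo, best_wer
```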
In the second set of experiments, $\beta $ was fixed at -0.1, but the constraint $\lambda _\psi = \lambda _\tau $ was lifted, and a range of combinations was evaluated on the dev set. The results are shown in Fig. FIGREF43. The shading in Figs. FIGREF40, FIGREF41 and FIGREF43 uses the same midpoint value of 15.0 to highlight the results.
The best combinations of scaling factors from the dev set evaluations (see Fig. FIGREF40, Fig. FIGREF41 and Fig. FIGREF43) were used to generate the final eval set results, WERs and associated deletion, insertion and substitution rates, shown in Table TABREF44. These results are summarized in Table TABREF45, this time showing the exact values of LM scaling factor(s) used.
Fine-tuning a YouTube-trained RNN-T using limited Voice Search audio data
The experiments in Section SECREF5 showed that an LM trained on text from the target Voice Search domain can boost the cross-domain performance of an RNN-T. The next experiments examined fine-tuning the original YouTube-trained RNN-T on varied, limited amounts of Voice Search {audio, transcript} data. After fine-tuning, LM fusion was applied, again comparing Shallow Fusion and the Density Ratio method.
Fine-tuning simply uses the YouTube-trained RNN-T model to warm-start training on the limited Voice Search {audio, transcript} data. This is an effective way of leveraging the limited Voice Search audio data: within a few thousand steps, the fine-tuned model reaches a decent level of performance on the fine-tuning task – though beyond that, it over-trains. A held-out set can be used to gauge over-training and stop training for varying amounts of fine-tuning data.
The experiments here fine-tuned the YouTube-trained RNN-T baseline using 10 hours, 100 hours and 1000 hours of Voice Search data, as described in Section SECREF27. (The source domain RNN-LM was not fine-tuned.) For each fine-tuned model, Shallow Fusion and the Density Ratio method were used to evaluate incorporation of the Voice Search RNN-LM, described in Section SECREF5, trained on text transcripts from the much larger set of 21M Voice Search utterances. As in Section SECREF5, the dev set was used to tune the LM scaling factor(s) and the sequence length scaling factor $\beta $. To ease parameter tuning, the constraint $\lambda _\psi = \lambda _\tau $ was used for the Density Ratio method. The best combinations of scaling factors from the dev set were then used to generate the final eval results, which are shown in Table TABREF45.
Discussion
The experiments described here examined the generalization of a YouTube-trained end-to-end RNN-T model to Voice Search speech data, using varying quantities (from zero to 100%) of Voice Search audio data, and 100% of the available Voice Search text data. The results show that in spite of the vast range of acoustic and linguistic patterns covered by the YouTube-trained model, it is still possible to improve performance on Voice Search utterances significantly via Voice Search specific fine-tuning and LM fusion. In particular, LM fusion significantly boosts performance when only a limited quantity of Voice Search fine-tuning data is used.
The Density Ratio method consistently outperformed Shallow Fusion for the cross-domain scenarios examined, with and without fine-tuning to audio data from the target domain. Furthermore, the gains in WER over the baseline are significantly larger for the Density Ratio method than for Shallow Fusion, with up to 28% relative reduction in WER (17.5% $\rightarrow $ 12.5%) compared to up to 17% relative reduction (17.5% $\rightarrow $ 14.5%) for Shallow Fusion, in the no fine-tuning scenario.
Notably, the “sweet spot” of effective combinations of LM scaling factor and sequence length scaling factor is significantly larger for the Density Ratio method than for Shallow Fusion (see Fig. FIGREF40 and Fig. FIGREF41). Compared to Shallow Fusion, larger absolute values of the scaling factor can be used.
A full sweep of the LM scaling factors ($\lambda _\psi $ and $\lambda _\tau $) can improve over the constrained setting $\lambda _\psi = \lambda _\tau $, though not by much. Fig. FIGREF43 shows that the optimal setting of the two factors follows a roughly linear pattern along an off-diagonal band.
Fine-tuning using transcribed Voice Search audio data leads to a large boost in performance over the YouTube-trained baseline. Nonetheless, both fusion methods give gains on top of fine-tuning, especially for limited quantities of fine-tuning data. With 10 hours of fine-tuning, the Density Ratio method gives a 20% relative gain in WER, compared to 12% relative for Shallow Fusion. For 1000 hours of fine-tuning data, the Density Ratio method gives a 10.5% relative gain over the fine-tuned baseline, compared to 7% relative for Shallow Fusion. Even for 21,000 hours of fine-tuning data, i.e. the entire Voice Search training set, the Density Ratio method gives an added boost, from 7.8% to 7.4% WER, a 5% relative improvement.
A clear weakness of the proposed method is the apparent need for scaling factors on the LM outputs. In addition to the assumptions made (outlined in Section SECREF5), it is possible that this is due to the implicit LM in the RNN-T being more limited than the RNN-LMs used.
Summary
This article proposed and experimentally evaluated an alternative to Shallow Fusion for incorporating an external LM into an end-to-end RNN-T model applied to a target domain different from the source domain it was trained on. The Density Ratio method is conceptually simple, easy to implement, and grounded in Bayes' rule, extending the classic hybrid ASR model to end-to-end models. In contrast, the most commonly reported approach to LM incorporation, Shallow Fusion, has no clear interpretation from probability theory. Evaluated on a YouTube $\rightarrow $ Voice Search cross-domain scenario, the method was found to be effective, with up to 28% relative reduction in word error rate over the non-fused baseline, and it consistently outperformed Shallow Fusion by a significant margin. The method continues to produce gains when fine-tuning to paired target domain data, though the gains diminish as more fine-tuning data is used. Evaluation on a wider variety of cross-domain scenarios is needed to establish the general effectiveness of the method.
Summary ::: Acknowledgments
The authors thank Matt Shannon and Khe Chai Sim for valuable feedback regarding this work. | 163,110,000 utterances |
7eb3852677e9d1fb25327ba014d2ed292184210c | 7eb3852677e9d1fb25327ba014d2ed292184210c_0 | Q: How is the training data collected?
| from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering, from a Voice Search service |
4f9a8b50903deb1850aee09c95d1b6204a7410b4 | 4f9a8b50903deb1850aee09c95d1b6204a7410b4_0 | Q: What language(s) is the model trained/tested on?
Text: Introduction
End-to-end models such as Listen, Attend & Spell (LAS) BIBREF0 or the Recurrent Neural Network Transducer (RNN-T) BIBREF1 are sequence models that directly define $P(W | X)$, the posterior probability of the word or subword sequence $W$ given an audio frame sequence $X$, with no chaining of sub-module probabilities. State-of-the-art, or near state-of-the-art results have been reported for these models on challenging tasks BIBREF2, BIBREF3.
End-to-end ASR models in essence do not include independently trained symbols-only or acoustics-only sub-components. As such, they do not provide a clear role for language models $P(W)$ trained only on text/transcript data. There are, however, many situations where we would like to use a separate LM to complement or modify a given ASR system. In particular, no matter how plentiful the paired {audio, transcript} training data, there are typically orders of magnitude more text-only data available. There are also many practical applications of ASR where we wish to adapt the language model, e.g., biasing the recognition grammar towards a list of specific words or phrases for a specific context.
The research community has been keenly aware of the importance of this issue, and has responded with a number of approaches, under the rubric of “Fusion”. The most popular of these is “Shallow Fusion” BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, which is simple log-linear interpolation between the scores from the end-to-end model and the separately-trained LM. More structured approaches, “Deep Fusion” BIBREF9, “Cold Fusion” BIBREF10 and “Component Fusion” BIBREF11 jointly train an end-to-end model with a pre-trained LM, with the goal of learning the optimal combination of the two, aided by gating mechanisms applied to the set of joint scores. These methods have not replaced the simple Shallow Fusion method as the go-to method in most of the ASR community. Part of the appeal of Shallow Fusion is that it does not require model retraining – it can be applied purely at decoding time. The Density Ratio approach proposed here can be seen as an extension of Shallow Fusion, sharing some of its simplicity and practicality, but offering a theoretical grounding in Bayes' rule.
After describing the historical context, theory and practical implementation of the proposed Density Ratio method, this article describes experiments comparing the method to Shallow Fusion in a cross-domain scenario. An RNN-T model was trained on large-scale speech data with semi-supervised transcripts from YouTube videos, and then evaluated on data from a live Voice Search service, using an RNN-LM trained on Voice Search transcripts to try to boost performance. Then, exploring the transition between cross-domain and in-domain, limited amounts of Voice Search speech data were used to fine-tune the YouTube-trained RNN-T model, followed by LM fusion via both the Density Ratio method and Shallow Fusion. The ratio method was found to produce consistent gains over Shallow Fusion in all scenarios examined.
A Brief History of Language Model incorporation in ASR
Generative models and Bayes' rule. The Noisy Channel Model underlying the origins of statistical ASR BIBREF12 used Bayes' rule to combine generative models of both the acoustics $p(X|W)$ and the symbol sequence $P(W)$:
for an acoustic feature vector sequence $X = {\mbox{\bf x}}_1, ..., {\mbox{\bf x}}_T$ and a word or sub-word sequence $W = s_1, ..., s_U$ with possible time alignments $S_W = \lbrace ..., {\bf s}, ...\rbrace $. ASR decoding then uses the posterior probability $P(W|X)$. A prior $p({\bf s}| W)$ on alignments can be implemented e.g. via a simple 1st-order state transition model. Though lacking in discriminative power, the paradigm provides a clear theoretical framework for decoupling the acoustic model (AM) $p(X|W)$ and LM $P(W)$.
Hybrid model for DNNs/LSTMs within original ASR framework. The advent of highly discriminative Deep Neural Networks (DNNs) BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17 and Long Short Term Memory models (LSTMs) BIBREF18, BIBREF19 posed a challenge to the original Noisy Channel Model, as they produce phoneme- or state- level posteriors $P({\bf s}(t) | {\mbox{\bf x}}_t)$, not acoustic likelihoods $p({\mbox{\bf x}}_t | {\bf s}(t))$. The “hybrid” model BIBREF20 proposed the use of scaled likelihoods, i.e. posteriors divided by separately estimated state priors $P(w)$. For bidirectional LSTMs, the scaled-likelihood over a particular alignment ${\bf s}$ is taken to be
using $k(X)$ to represent a $p(X)$-dependent term shared by all hypotheses $W$, that does not affect decoding. This “pseudo-generative” score can then be plugged into the original model of Eq. (DISPLAY_FORM2) and used for ASR decoding with an arbitrary LM $P(W)$. For much of the ASR community, this approach still constitutes the state-of-the-art BIBREF2, BIBREF21, BIBREF22.
Shallow Fusion. The most popular approach to LM incorporation for end-to-end ASR is a linear interpolation,
with no claim to direct interpretability according to probability theory, and often a reward for sequence length $|W|$, scaled by a factor $\beta $ BIBREF5, BIBREF7, BIBREF8, BIBREF23.
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: A Sequence-level Hybrid Pseudo-Generative Model
The model makes the following assumptions:
The source domain $\psi $ has some true joint distribution $P_{\psi }(W, X)$ over text and audio;
The target domain $\tau $ has some other true joint distribution $P_{\tau }(W, X)$;
A source domain end-to-end model (e.g. RNN-T) captures $P_{\psi }(W | X)$ reasonably well;
Separately trained LMs (e.g. RNN-LMs) capture $P_{\psi }(W)$ and $P_{\tau }(W)$ reasonably well;
$p_{\psi }(X | W)$ is roughly equal to $p_{\tau }(X | W)$, i.e. the two domains are acoustically consistent; and
The target domain posterior, $P_{\tau }(W | X)$, is unknown.
The starting point for the proposed Density Ratio Method is then to express a “hybrid” scaled acoustic likelihood for the source domain, in a manner paralleling the original hybrid model BIBREF20:
Similarly, for the target domain:
Given the stated assumptions, one can then estimate the target domain posterior as:
with $k(X) = p_{\psi }(X) / p_{\tau }(X)$ shared by all hypotheses $W$, and the ratio $P_{\tau }(W) / {P_{\psi }(W)}$ (really a probablity mass ratio) giving the proposed method its name.
In essence, this model is just an application of Bayes' rule to end-to-end models and separate LMs. The approach can be viewed as the sequence-level version of the classic hybrid model BIBREF20. Similar use of Bayes' rule to combine ASR scores with RNN-LMs has been described elsewhere, e.g. in work connecting grapheme-level outputs with word-level LMs BIBREF6, BIBREF24, BIBREF25. However, to our knowledge this approach has not been applied to end-to-end models in cross-domain settings, where one wishes to leverage a language model from the target domain. For a perspective on a “pure” (non-hybrid) deep generative approach to ASR, see BIBREF26.
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Top-down fundamentals of RNN-T
The RNN Transducer (RNN-T) BIBREF1 defines a sequence-level posterior $P(W|X)$ for a given acoustic feature vector sequence $X = {\mbox{\bf x}}_1, ..., {\mbox{\bf x}}_T$ and a given word or sub-word sequence $W = s_1, ..., s_U$ in terms of possible alignments $S_W = \lbrace ..., ({\bf s}, {\bf t}), ... \rbrace $ of $W$ to $X$. The tuple $({\bf s}, {\bf t})$ denotes a specific alignment sequence, a symbol sequence and corresponding sequence of time indices, consistent with the sequence $W$ and utterance $X$. The symbols in ${\bf s}$ are elements of an expanded symbol space that includes optional, repeatable blank symbols used to represent acoustics-only path extensions, where the time index is incremented, but no non-blank symbols are added. Conversely, non-blank symbols are only added to a partial path time-synchronously. (I.e., using $i$ to index elements of ${\bf s}$ and ${\bf t}$, $t_{i+1} = t_i + 1$ if $s_{i+1}$ is blank, and $t_{i + 1} = t_i$ if $s_{i+1}$ is non-blank). $P(W|X)$ is defined by summing over alignment posteriors:
Finally, $P(s_{i+1} | X, t_i, s_{1:i})$ is defined using an LSTM-based acoustic encoder with input $X$, an LSTM-based label encoder with non-blank inputs $s$, and a feed-forward joint network combining outputs from the two encoders to produce predictions for all symbols $s$, including the blank symbol.
The Forward-Backward algorithm can be used to calculate Eq. (DISPLAY_FORM16) efficiently during training, and Viterbi-based beam search (based on the argmax over possible alignments) can be used for decoding when $W$ is unknown BIBREF1, BIBREF27.
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Application of Shallow Fusion to RNN-T
Shallow Fusion (Eq. (DISPLAY_FORM4)) can be implemented in RNN-T for each time-synchronous non-blank symbol path extension. The LM score corresponding to the same symbol extension can be “fused” into the log-domain score used for decoding:
This is only done when the hypothesized path extension $s_{i+1}$ is a non-blank symbol; the decoding score for blank symbol path extensions is the unmodified $\log P(s_{i+1} | X, t_i, s_{1:i})$.
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Application of the Density Ratio Method to RNN-T
Eq. (DISPLAY_FORM14) can be implemented via an estimated RNN-T “pseudo-posterior”, when $s_{i+1}$ is a non-blank symbol:
This estimate is not normalized over symbol outputs, but it plugs into Eq. () and Eq. (DISPLAY_FORM16) to implement the RNN-T version of Eq. (DISPLAY_FORM14). In practice, scaling factors $\lambda _\psi $ and $\lambda _\tau $ on the LM scores, and a non-blank reward $\beta $, are used in the final decoding score:
Language Model incorporation into End-to-end ASR, using Bayes' rule ::: Implementation
The ratio method is very simple to implement. The procedure is essentially to:
Train an end-to-end model such as RNN-T on a given source domain training set $\psi $ (paired audio/transcript data);
Train a neural LM such as RNN-LM on text transcripts from the same training set $\psi $;
Train a second RNN-LM on the target domain $\tau $;
When decoding on the target domain, modify the RNN-T output by the ratio of target/training RNN-LMs, as defined in Eq. (DISPLAY_FORM21), and illustrated in Fig. FIGREF1.
The method is purely a decode-time method; no joint training is involved, but it does require tuning of the LM scaling factor(s) (as does Shallow Fusion). A held-out set can be used for that purpose.
Training, development and evaluation data ::: Training data
The following data sources were used to train the RNN-T and associated RNN-LMs in this study.
Source-domain baseline RNN-T: approximately 120M segmented utterances (190,000 hours of audio) from YouTube videos, with associated transcripts obtained from semi-supervised caption filtering BIBREF28.
Source-domain normalizing RNN-LM: transcripts from the same 120M utterance YouTube training set. This corresponds to about 3B tokens of the sub-word units used (see below, Section SECREF30).
Target-domain RNN-LM: 21M text-only utterance-level transcripts from anonymized, manually transcribed audio data, representative of data from a Voice Search service. This corresponds to about 275M sub-word tokens.
Target-domain RNN-T fine-tuning data: 10K, 100K, 1M and 21M utterance-level {audio, transcript} pairs taken from anonymized, transcribed Voice Search data. These fine-tuning sets roughly correspond to 10 hours, 100 hours, 1000 hours and 21,000 hours of audio, respectively.
Training, development and evaluation data ::: Dev and Eval Sets
The following data sources were used to choose scaling factors and/or evaluate the final model performance.
Source-domain Eval Set (YouTube). The in-domain performance of the YouTube-trained RNN-T baseline was measured on speech data taken from Preferred Channels on YouTube BIBREF29. The test set is taken from 296 videos from 13 categories, with each video averaging 5 minutes in length, corresponding to 25 hours of audio and 250,000 word tokens in total.
Target-domain Dev & Eval sets (Voice Search). The Voice Search dev and eval sets each consist of approximately 7,500 anonymized utterances (about 33,000 words and corresponding to about 8 hours of audio), distinct from the fine-tuning data described earlier, but representative of the same Voice Search service.
Cross-domain evaluation: YouTube-trained RNN-T @!START@$\rightarrow $@!END@ Voice Search
The first set of experiments uses an RNN-T model trained on {audio, transcript} pairs taken from segmented YouTube videos, and evaluates the cross-domain generalization of this model to test utterances taken from a Voice Search dataset, with and without fusion to an external LM.
Cross-domain evaluation: YouTube-trained RNN-T @!START@$\rightarrow $@!END@ Voice Search ::: RNN-T and RNN-LM model settings
The overall structure of the models used here is as follows:
RNN-T:
Acoustic features: 768-dimensional feature vectors obtained from 3 stacked 256-dimensional logmel feature vectors, extracted every 20 msec from 16 kHz waveforms, and sub-sampled with a stride of 3, for an effective final feature vector step size of 60 msec.
Acoustic encoder: 6 LSTM layers x (2048 units with 1024-dimensional projection); bidirectional.
Label encoder (aka “decoder” in end-to-end ASR jargon): 1 LSTM layer x (2048 units with 1024-dimensional projection).
RNN-T joint network hidden dimension size: 1024.
Output classes: 10,000 sub-word “morph” units BIBREF30 , input via a 512-dimensional embedding.
Total number of parameters: approximately 340M
RNN-LMs for both source and target domains were set to match the RNN-T decoder structure and size:
1 layer x (2048 units with 1024-dimensional projection).
Output classes: 10,000 morphs (same as the RNN-T).
Total number of parameters: approximately 30M.
The RNN-T and the RNN-LMs were independently trained on 128-core tensor processing units (TPUs) using full unrolling and an effective batch size of 4096. All models were trained using the Adam optimization method BIBREF31 for 100K-125K steps, corresponding to about 4 passes over the 120M utterance YouTube training set, and 20 passes over the 21M utterance Voice Search training set. The trained RNN-LM perplexities (shown in Table TABREF28) show the benefit to Voice Search test perplexity of training on Voice Search transcripts.
Cross-domain evaluation: YouTube-trained RNN-T @!START@$\rightarrow $@!END@ Voice Search ::: Experiments and results
In the first set of experiments, the constraint $\lambda _\psi = \lambda _\tau $ was used to simplify the search for the LM scaling factor in Eq. DISPLAY_FORM21. Fig. FIGREF40 and Fig. FIGREF41 illustrate the different relative sensitivities of WER to the LM scaling factor(s) for Shallow Fusion and the Density Ratio method, as well as the effect of the RNN-T sequence length scaling factor, measured on the dev set.
The LM scaling factor affects the relative value of the symbols-only LM score vs. that of the acoustics-aware RNN-T score. This typically alters the balance of insertion vs. deletion errors. In turn, this effect can be offset (or amplified) by the sequence length scaling factor $\beta $ in Eq. (DISPLAY_FORM4), in the case of RNN-T, implemented as a non-blank symbol emission reward. (The blank symbol only consumes acoustic frames, not LM symbols BIBREF1). Given that both factors have related effects on overall WER, the LM scaling factor(s) and the sequence length scaling factor need to be tuned jointly.
Fig. FIGREF40 and Fig. FIGREF41 illustrate the different relative sensitivities of WER to these factors for Shallow Fusion and the Density Ratio method, measured on the dev set.
In the second set of experiments, $\beta $ was fixed at -0.1, but the constraint $\lambda _\psi = \lambda _\tau $ was lifted, and a range of combinations was evaluated on the dev set. The results are shown in Fig. FIGREF43. The shading in Figs. FIGREF40, FIGREF41 and FIGREF43 uses the same midpoint value of 15.0 to highlight the results.
The best combinations of scaling factors from the dev set evaluations (see Fig. FIGREF40, Fig. FIGREF41 and Fig. FIGREF43) were used to generate the final eval set results, WERs and associated deletion, insertion and substitution rates, shown in Table TABREF44. These results are summarized in Table TABREF45, this time showing the exact values of LM scaling factor(s) used.
Fine-tuning a YouTube-trained RNN-T using limited Voice Search audio data
The experiments in Section SECREF5 showed that an LM trained on text from the target Voice Search domain can boost the cross-domain performance of an RNN-T. The next experiments examined fine-tuning the original YouTube-trained RNN-T on varied, limited amounts of Voice Search {audio, transcript} data. After fine-tuning, LM fusion was applied, again comparing Shallow Fusion and the Density Ratio method.
Fine-tuning simply uses the YouTube-trained RNN-T model to warm-start training on the limited Voice Search {audio, transcript} data. This is an effective way of leveraging the limited Voice Search audio data: within a few thousand steps, the fine-tuned model reaches a decent level of performance on the fine-tuning task – though beyond that, it over-trains. A held-out set can be used to gauge over-training and stop training for varying amounts of fine-tuning data.
The experiments here fine-tuned the YouTube-trained RNN-T baseline using 10 hours, 100 hours and 1000 hours of Voice Search data, as described in Section SECREF27. (The source domain RNN-LM was not fine-tuned). For each fine-tuned model, Shallow Fusion and the Density Ratio method were used to evaluate incorporation of the Voice Search RNN-LM, described in Section SECREF5, trained on text transcripts from the much larger set of 21M Voice Search utterances. As in Section SECREF5, the dev set was used to tune the LM scaling factor(s) and the sequence length scaling factor $\beta $. To ease parameter tuning, the constraint $\lambda _\psi = \lambda _\tau $ was used for the Density Ratio method. The best combinations of scaling factors from the dev set were then used to generate the final eval results, which are shown in Table TABREF45
Discussion
The experiments described here examined the generalization of a YouTube-trained end-to-end RNN-T model to Voice Search speech data, using varying quantities (from zero to 100%) of Voice Search audio data, and 100% of the available Voice Search text data. The results show that in spite of the vast range of acoustic and linguistic patterns covered by the YouTube-trained model, it is still possible to improve performance on Voice Search utterances significantly via Voice Search specific fine-tuning and LM fusion. In particular, LM fusion significantly boosts performance when only a limited quantity of Voice Search fine-tuning data is used.
The Density Ratio method consistently outperformed Shallow Fusion for the cross-domain scenarios examined, with and without fine-tuning to audio data from the target domain. Furthermore, the gains in WER over the baseline are significantly larger for the Density Ratio method than for Shallow Fusion, with up to 28% relative reduction in WER (17.5% $\rightarrow $ 12.5%) compared to up to 17% relative reduction (17.5% $\rightarrow $ 14.5%) for Shallow Fusion, in the no fine-tuning scenario.
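As a quick sanity check on those relative figures: $(17.5 - 12.5)/17.5 \approx 0.286$, i.e. roughly 28% relative, and $(17.5 - 14.5)/17.5 \approx 0.171$, i.e. roughly 17% relative.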
Notably, the “sweet spot” of effective combinations of LM scaling factor and sequence length scaling factor is significantly larger for the Density Ratio method than for Shallow Fusion (see Fig. FIGREF40 and Fig. FIGREF41). Compared to Shallow Fusion, larger absolute values of the scaling factor can be used.
A full sweep of the LM scaling factors ($\lambda _\psi $ and $\lambda _\tau $) can improve over the constrained setting $\lambda _\psi = \lambda _\tau $, though not by much. Fig. FIGREF43 shows that the optimal setting of the two factors follows a roughly linear pattern along an off-diagonal band.
Fine-tuning using transcribed Voice Search audio data leads to a large boost in performance over the YouTube-trained baseline. Nonetheless, both fusion methods give gains on top of fine-tuning, especially for the limited quantities of fine-tuning data. With 10 hours of fine-tuning, the Density Ratio method gives a 20% relative gain in WER, compared to 12% relative for Shallow Fusion. For 1000 hours of fine-tuning data, the Density Ratio method gives a 10.5% relative gain over the fine-tuned baseline, compared to 7% relative for Shallow Fusion. Even for 21,000 hours of fine-tuning data, i.e. the entire Voice Search training set, the Density Ratio method gives an added boost, from 7.8% to 7.4% WER, a 5% relative improvement.
A clear weakness of the proposed method is the apparent need for scaling factors on the LM outputs. In addition to the assumptions made (outlined in Section SECREF5), it is possible that this is due to the implicit LM in the RNN-T being more limited than the RNN-LMs used.
Summary
This article proposed and evaluated experimentally an alternative to Shallow Fusion for incorporation of an external LM into an end-to-end RNN-T model applied to a target domain different from the source domain it was trained on. The Density Ratio method is simple conceptually, easy to implement, and grounded in Bayes' rule, extending the classic hybrid ASR model to end-to-end models. In contrast, the most commonly reported approach to LM incorporation, Shallow Fusion, has no clear interpretation from probability theory. Evaluated on a YouTube $\rightarrow $ Voice Search cross-domain scenario, the method was found to be effective, with up to 28% relative gains in word error rate over the non-fused baseline, and consistently outperforming Shallow Fusion by a significant margin. The method continues to produce gains when fine-tuning to paired target domain data, though the gains diminish as more fine-tuning data is used. Evaluation using a variety of cross-domain scenarios is needed to establish the general effectiveness of the method.
Summary ::: Acknowledgments
The authors thank Matt Shannon and Khe Chai Sim for valuable feedback regarding this work. | Unanswerable |
c96a6b30d71c6669592504e4ee8001e9d1eb1fba | c96a6b30d71c6669592504e4ee8001e9d1eb1fba_0 | Q: was bert used?
Text: Introduction
Conversational interactions between humans and Artificial Intelligence (AI) agents could amount to as much as thousands of interactions a day given recent developments BIBREF0. This surge in human-AI interactions has led to an interest in developing more fluid interactions between agent and human. The term `fluidity', when we refer to dialogue systems, tries to measure the concept of how humanlike communication is between a human and an AI entity. Conversational fluidity has historically been measured using metrics such as perplexity, recall, and F1-scores. However, one finds various drawbacks using these metrics. During the automatic evaluation stage of the second Conversational Intelligence Challenge (ConvAI2) BIBREF1 competition, it was noted that consistently replying with “I am you to do and your is like” would outperform the F1-score of all the models in the competition. This nonsensical phrase was constructed simply by picking several frequent words from the training set. Also, Precision at K, or the more specific Hits@1 metric has been used historically in assessing retrieval based aspects of the agent. This is defined as the accuracy of the next dialogue utterance when choosing between the gold response and N–1 distractor responses. Since these metrics are somewhat flawed, human evaluations were used in conjunction. Multiple attempts have been made historically to try to develop automatic metrics to assess dialogue fluidity. One of the earliest Eckert et al. (1997), used a stochastic system which regulated user-generated dialogues to debug and evaluate chatbots BIBREF2. In the same year, Marilyn et al. (1997) proposed the PARADISE BIBREF3 model. This framework was developed to evaluate dialogue agents in spoken conversations. A few years later the BLEU BIBREF4 metric was proposed. Subsequently, for almost two decades, this metric has been one of the few to be widely adopted by the research community. The method, which compares the matches in n-grams from the translated outputted text and the input text proved to be quick, inexpensive and has therefore been widely used. Therefore, we use the BLEU metric as a baseline to compare the quality of our proposed model.
Introduction ::: Datasets
For this study, we use two types of data namely single-turn and multi-turn. The first type, single-turn, is defined such that each instance is made up of one statement and one response. This pair is usually a fragment of a larger dialogue. When given to humans for evaluation of fluidity, we ask to give a score on characteristics such as “How related is the response to the statement?” or “Does the response contain repeated text from the user's statement?”. These are all things that should not be affected by the fact that no history or context is provided and therefore, can still be classified reasonably. Contrary to the single turn datasets, the second type is the multi-turn dataset. This contains multiple instances of statements and responses, building on each other to create a fuller conversation. With these kinds of datasets, one can also evaluate and classify the data on various other attributes. An example of such evaluations would be something like “Does this response continue on the flow of the conversation?” or “Is the chatbot using repetitive text from previous responses?”. The details of how we collected each dataset are detailed below.
Single-Turn: This dataset consists of single-turn instances of statements and responses from the MiM chatbot developed at Constellation AI BIBREF5. The responses provided were then evaluated using Amazon Mechanical Turk (AMT) workers. A total of five AMT workers evaluated each of these pairs. The mean of the five evaluations is then used as the target variable. A sample can be seen in Table TABREF3. This dataset was used during experiments with results published in the Results section.
Multi-Turn: This dataset is taken from the ConvAI2 challenge and consists of various types of dialogue that have been generated by human-computer conversations. At the end of each dialogue, an evaluation score has been given, for each dialogue, between 1–4.
Method
This section discusses the methods used to develop our attributes and the technical details of how they are combined to create a final classification layer.
Method ::: BERT Next Sentence Prediction
BERT BIBREF6 is a state-of-the-art model, which has been pre-trained on a large corpus and is suitable to be fine-tuned for various downstream NLP tasks. The main innovation between this model and existing language models is in how the model is trained. For BERT, the text conditioning happens on both the left and right context of every word and is therefore bidirectional. In previous models BIBREF7, a unidirectional language model was usually used in the pre-training. With BERT, two fully unsupervised tasks are performed. The Masked Language Model and the Next Sentence Prediction (NSP).
For this study, the NSP is used as a proxy for the relevance of the response. Furthermore, in order to improve performance, we fine-tune on a customized dataset, achieving an accuracy of 82.4%. For the main analysis, we used the single-turn dataset, which gave us a correlation of 0.28 between the mean of the AMT evaluations and the BERT NSP. Next, we put each score into a category. For example, if the average score is 2.3, this would be placed in category 2. We then displayed the percentage of positive and negative predictions in a histogram for each of the categories. As seen in Figure FIGREF5, a clear pattern emerges between the higher scores and positive predictions, and the lower scores and negative predictions.
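For reference, the off-the-shelf NSP head can be queried roughly as follows using the Hugging Face transformers package. This is an illustrative sketch, not the authors' implementation, and it omits the fine-tuning on the customized dataset mentioned above; index 0 of the NSP classifier corresponds to "is the next sentence":

```python
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

def nsp_probability(statement, response):
    """Probability that `response` is a plausible continuation of `statement`."""
    encoding = tokenizer(statement, response, return_tensors="pt")
    with torch.no_grad():
        logits = model(**encoding).logits          # shape (1, 2)
    return torch.softmax(logits, dim=-1)[0, 0].item()

print(nsp_probability("How was your weekend?", "Great, I went hiking with my dog."))
```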
Method ::: Repetition Control
This attribute is calculated by checking each statement and response for various types of repetition by using n-gram overlap. The motivation for including this as an attribute in dialogue fluidity is that repetitive words or n-grams can be bothersome to the end-user.
Repetitions are measured according to whether they are internal, external or partner. We calculate a percentage based on the single-turn utterance or the entire multi-turn conversation. We use unigrams, bigrams, and trigrams for each repetition type, based on BIBREF8.
We calculate a correlation of each repetition module with respect to human evaluations in order to understand the impact. For the single-turn dataset, the correlation is -0.09 and 0.07 for the internal and partner repetition attribute respectively. For the multi-turn dataset the correlation was -0.05 and -0.02 for the internal and partner repetition attribute respectively. This low correlation is reasonable and was expected. Measuring repetition in this way is not expected to provide huge classification power. However, we will attempt to exploit differences in correlation between these attributes and ones described below, which will provide some classification power.
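The n-gram overlap computation described above can be sketched as follows; the tokenization and exact normalization are assumptions, since the paper does not specify them:

```python
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def repetition_percentage(response, reference, n=1):
    """Percentage of n-grams in `response` that also appear in `reference`.
    For partner repetition, `reference` is the user's statement; for internal
    repetition it can be the agent's own previous text."""
    resp = ngrams(response.lower().split(), n)
    ref = set(ngrams(reference.lower().split(), n))
    if not resp:
        return 0.0
    return 100.0 * sum(g in ref for g in resp) / len(resp)

# Example: partner repetition at the bigram level.
print(repetition_percentage("i like pizza too", "i really like pizza", n=2))
```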
Method ::: Balance of Dialogue
For this attribute, we calculated the number of questions asked. For this particular case, we are not able to measure a correlation with respect to human evaluations.
Method ::: Short-Safe Answers
Here, we checked for the length of the utterance and the presence of a Named Entity. We checked the correlation of this attribute with the human evaluation scores. The correlation score attained on the single-turn dataset was -0.09, while for the multi-turn dataset the correlation was 0. The full pipeline can be seen diagrammatically in Figure FIGREF9.
Results
To create a final metric, we combine the individual components from Section 2 as features into a Support Vector Machine. The final results for our F1-score from this classification technique are 0.52 and 0.31 for the single and multi-turn data respectively.
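A minimal sketch of this feature-combination step is given below; the feature ordering, label encoding and SVM hyper-parameters are invented for illustration, as the paper does not report them:

```python
import numpy as np
from sklearn.svm import SVC

# Columns: [NSP probability, internal repetition, partner repetition,
#           number of questions, short-safe-answer flag] -- toy values only.
X_train = np.array([[0.91, 0.00, 0.10, 1, 0],
                    [0.12, 0.40, 0.30, 0, 1],
                    [0.75, 0.10, 0.00, 1, 0],
                    [0.30, 0.50, 0.20, 0, 1]])
y_train = np.array([1, 0, 1, 0])   # 1 = fluid, 0 = not fluid

clf = SVC(kernel="rbf").fit(X_train, y_train)
print(clf.predict([[0.80, 0.05, 0.00, 1, 0]]))
```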
We compare our results for both the single-turn and multi-turn experiments to the accuracies on the test data based on the BLEU score. We see an increase of 6% for our method with respect to the BLEU score on the single-turn data, and no change when using the multi-turn test set.
Conclusion
This study aimed to implement an automatic metric to assess the fluidity of dialogue systems. We wanted to test if any set of attributes could show a high correlation to manual evaluations, thus replacing it entirely. As highlighted in the ConvAI2 challenge, automatic metrics are not reliable for a standalone evaluation of low-level dialogue outputs. For this study, three attributes were investigated. Tests were carried out based on these proposed attributes by making use of single and multi-turn datasets. These attributes, combined with the BERT model, showed that our classifier performed better than the BLEU model for the single-turn dataset. However, no improvement was seen on the multi-turn dataset.
Concerning feature importance, we observed that internal repetition and NSP are the most important attributes when used to classify fluidity. We believe that further work can be carried out in finding a more discriminating set of attributes. | Yes |
42d66726b5bf8de5b0265e09d76f5ab00c0e851a | 42d66726b5bf8de5b0265e09d76f5ab00c0e851a_0 | Q: what datasets did they use?
Text: Introduction
Conversational interactions between humans and Artificial Intelligence (AI) agents could amount to as much as thousands of interactions a day given recent developments BIBREF0. This surge in human-AI interactions has led to an interest in developing more fluid interactions between agent and human. The term `fluidity', when we refer to dialogue systems, tries to measure the concept of how humanlike communication is between a human and an AI entity. Conversational fluidity has historically been measured using metrics such as perplexity, recall, and F1-scores. However, one finds various drawbacks using these metrics. During the automatic evaluation stage of the second Conversational Intelligence Challenge (ConvAI2) BIBREF1 competition, it was noted that consistently replying with “I am you to do and your is like” would outperform the F1-score of all the models in the competition. This nonsensical phrase was constructed simply by picking several frequent words from the training set. Also, Precision at K, or the more specific Hits@1 metric has been used historically in assessing retrieval based aspects of the agent. This is defined as the accuracy of the next dialogue utterance when choosing between the gold response and N–1 distractor responses. Since these metrics are somewhat flawed, human evaluations were used in conjunction. Multiple attempts have been made historically to try to develop automatic metrics to assess dialogue fluidity. One of the earliest Eckert et al. (1997), used a stochastic system which regulated user-generated dialogues to debug and evaluate chatbots BIBREF2. In the same year, Marilyn et al. (1997) proposed the PARADISE BIBREF3 model. This framework was developed to evaluate dialogue agents in spoken conversations. A few years later the BLEU BIBREF4 metric was proposed. Subsequently, for almost two decades, this metric has been one of the few to be widely adopted by the research community. The method, which compares the matches in n-grams from the translated outputted text and the input text proved to be quick, inexpensive and has therefore been widely used. Therefore, we use the BLEU metric as a baseline to compare the quality of our proposed model.
Introduction ::: Datasets
For this study, we use two types of data namely single-turn and multi-turn. The first type, single-turn, is defined such that each instance is made up of one statement and one response. This pair is usually a fragment of a larger dialogue. When given to humans for evaluation of fluidity, we ask to give a score on characteristics such as “How related is the response to the statement?” or “Does the response contain repeated text from the user's statement?”. These are all things that should not be affected by the fact that no history or context is provided and therefore, can still be classified reasonably. Contrary to the single turn datasets, the second type is the multi-turn dataset. This contains multiple instances of statements and responses, building on each other to create a fuller conversation. With these kinds of datasets, one can also evaluate and classify the data on various other attributes. An example of such evaluations would be something like “Does this response continue on the flow of the conversation?” or “Is the chatbot using repetitive text from previous responses?”. The details of how we collected each dataset are detailed below.
Single-Turn: This dataset consists of single-turn instances of statements and responses from the MiM chatbot developed at Constellation AI BIBREF5. The responses provided were then evaluated using Amazon Mechanical Turk (AMT) workers. A total of five AMT workers evaluated each of these pairs. The mean of the five evaluations is then used as the target variable. A sample can be seen in Table TABREF3. This dataset was used during experiments with results published in the Results section.
Multi-Turn: This dataset is taken from the ConvAI2 challenge and consists of various types of dialogue that have been generated by human-computer conversations. At the end of each dialogue, an evaluation score has been given, for each dialogue, between 1–4.
Method
This section discusses the methods used to develop our attributes and the technical details of how they are combined to create a final classification layer.
Method ::: BERT Next Sentence Prediction
BERT BIBREF6 is a state-of-the-art model, which has been pre-trained on a large corpus and is suitable to be fine-tuned for various downstream NLP tasks. The main innovation between this model and existing language models is in how the model is trained. For BERT, the text conditioning happens on both the left and right context of every word and is therefore bidirectional. In previous models BIBREF7, a unidirectional language model was usually used in the pre-training. With BERT, two fully unsupervised tasks are performed. The Masked Language Model and the Next Sentence Prediction (NSP).
For this study, the NSP is used as a proxy for the relevance of the response. Furthermore, in order to improve performance, we fine-tune on a customized dataset, achieving an accuracy of 82.4%. For the main analysis, we used the single-turn dataset, which gave us a correlation of 0.28 between the mean of the AMT evaluations and the BERT NSP. Next, we put each score into a category. For example, if the average score is 2.3, this would be placed in category 2. We then displayed the percentage of positive and negative predictions in a histogram for each of the categories. As seen in Figure FIGREF5, a clear pattern emerges between the higher scores and positive predictions, and the lower scores and negative predictions.
Method ::: Repetition Control
This attribute is calculated by checking each statement and response for various types of repetition by using n-gram overlap. The motivation for including this as an attribute in dialogue fluidity is that repetitive words or n-grams can be bothersome to the end-user.
Repetitions are measured according to whether they are internal, external or partner. We calculate a percentage based on the single-turn utterance or the entire multi-turn conversation. We use unigrams, bigrams, and trigrams for each repetition type, based on BIBREF8.
We calculate a correlation of each repetition module with respect to human evaluations in order to understand the impact. For the single-turn dataset, the correlation is -0.09 and 0.07 for the internal and partner repetition attribute respectively. For the multi-turn dataset the correlation was -0.05 and -0.02 for the internal and partner repetition attribute respectively. This low correlation is reasonable and was expected. Measuring repetition in this way is not expected to provide huge classification power. However, we will attempt to exploit differences in correlation between these attributes and ones described below, which will provide some classification power.
Method ::: Balance of Dialogue
For this attribute, we calculated the number of questions asked. For this particular case, we are not able to measure a correlation with respect to human evaluations.
Method ::: Short-Safe Answers
Here, we checked for the length of the utterance and the presence of a Named Entity. We checked the correlation of this attribute with the human evaluation scores. The correlation score attained on the single-turn dataset was -0.09, while for the multi-turn dataset the correlation was 0. The full pipeline can be seen diagrammatically in Figure FIGREF9.
Results
To create a final metric, we combine the individual components from Section 2 as features into a Support Vector Machine. The final results for our F1-score from this classification technique are 0.52 and 0.31 for the single and multi-turn data respectively.
We compare our results for both the single-turn and multi-turn experiments to the accuracies on the test data based on the BLEU score. We see an increase of 6% for our method with respect to the BLEU score on the single-turn data, and no change when using the multi-turn test set.
Conclusion
This study aimed to implement an automatic metric to assess the fluidity of dialogue systems. We wanted to test if any set of attributes could show a high correlation to manual evaluations, thus replacing it entirely. As highlighted in the ConvAI2 challenge, automatic metrics are not reliable for a standalone evaluation of low-level dialogue outputs. For this study, three attributes were investigated. Tests were carried out based on these proposed attributes by making use of single and multi-turn datasets. These attributes, combined with the BERT model, showed that our classifier performed better than the BLEU model for the single-turn dataset. However, no improvement was seen on the multi-turn dataset.
Concerning feature importance, we observed that internal repetition and NSP are the most important attributes when used to classify fluidity. We believe that further work can be carried out in finding a more discriminating set of attributes. | Single-Turn, Multi-Turn |
c418deef9e44bc8448d9296c6517824cb95bd554 | c418deef9e44bc8448d9296c6517824cb95bd554_0 | Q: which existing metrics do they compare with?
Text: Introduction
Conversational interactions between humans and Artificial Intelligence (AI) agents could amount to as much as thousands of interactions a day given recent developments BIBREF0. This surge in human-AI interactions has led to an interest in developing more fluid interactions between agent and human. The term `fluidity', when we refer to dialogue systems, tries to measure the concept of how humanlike communication is between a human and an AI entity. Conversational fluidity has historically been measured using metrics such as perplexity, recall, and F1-scores. However, one finds various drawbacks using these metrics. During the automatic evaluation stage of the second Conversational Intelligence Challenge (ConvAI2) BIBREF1 competition, it was noted that consistently replying with “I am you to do and your is like” would outperform the F1-score of all the models in the competition. This nonsensical phrase was constructed simply by picking several frequent words from the training set. Also, Precision at K, or the more specific Hits@1 metric has been used historically in assessing retrieval based aspects of the agent. This is defined as the accuracy of the next dialogue utterance when choosing between the gold response and N–1 distractor responses. Since these metrics are somewhat flawed, human evaluations were used in conjunction. Multiple attempts have been made historically to try to develop automatic metrics to assess dialogue fluidity. One of the earliest Eckert et al. (1997), used a stochastic system which regulated user-generated dialogues to debug and evaluate chatbots BIBREF2. In the same year, Marilyn et al. (1997) proposed the PARADISE BIBREF3 model. This framework was developed to evaluate dialogue agents in spoken conversations. A few years later the BLEU BIBREF4 metric was proposed. Subsequently, for almost two decades, this metric has been one of the few to be widely adopted by the research community. The method, which compares the matches in n-grams from the translated outputted text and the input text proved to be quick, inexpensive and has therefore been widely used. Therefore, we use the BLEU metric as a baseline to compare the quality of our proposed model.
Introduction ::: Datasets
For this study, we use two types of data namely single-turn and multi-turn. The first type, single-turn, is defined such that each instance is made up of one statement and one response. This pair is usually a fragment of a larger dialogue. When given to humans for evaluation of fluidity, we ask to give a score on characteristics such as “How related is the response to the statement?” or “Does the response contain repeated text from the user's statement?”. These are all things that should not be affected by the fact that no history or context is provided and therefore, can still be classified reasonably. Contrary to the single turn datasets, the second type is the multi-turn dataset. This contains multiple instances of statements and responses, building on each other to create a fuller conversation. With these kinds of datasets, one can also evaluate and classify the data on various other attributes. An example of such evaluations would be something like “Does this response continue on the flow of the conversation?” or “Is the chatbot using repetitive text from previous responses?”. The details of how we collected each dataset are detailed below.
Single-Turn: This dataset consists of single-turn instances of statements and responses from the MiM chatbot developed at Constellation AI BIBREF5. The responses provided were then evaluated using Amazon Mechanical Turk (AMT) workers. A total of five AMT workers evaluated each of these pairs. The mean of the five evaluations is then used as the target variable. A sample can be seen in Table TABREF3. This dataset was used during experiments with results published in the Results section.
Multi-Turn: This dataset is taken from the ConvAI2 challenge and consists of various types of dialogue that have been generated by human-computer conversations. At the end of each dialogue, an evaluation score has been given, for each dialogue, between 1–4.
Method
This section discusses the methods used to develop our attributes and the technical details of how they are combined to create a final classification layer.
Method ::: BERT Next Sentence Prediction
BERT BIBREF6 is a state-of-the-art model, which has been pre-trained on a large corpus and is suitable to be fine-tuned for various downstream NLP tasks. The main innovation between this model and existing language models is in how the model is trained. For BERT, the text conditioning happens on both the left and right context of every word and is therefore bidirectional. In previous models BIBREF7, a unidirectional language model was usually used in the pre-training. With BERT, two fully unsupervised tasks are performed. The Masked Language Model and the Next Sentence Prediction (NSP).
For this study, the NSP is used as a proxy for the relevance of the response. Furthermore, in order to improve performance, we fine-tune on a customized dataset, achieving an accuracy of 82.4%. For the main analysis, we used the single-turn dataset, which gave us a correlation of 0.28 between the mean of the AMT evaluations and the BERT NSP. Next, we put each score into a category. For example, if the average score is 2.3, this would be placed in category 2. We then displayed the percentage of positive and negative predictions in a histogram for each of the categories. As seen in Figure FIGREF5, a clear pattern emerges between the higher scores and positive predictions, and the lower scores and negative predictions.
Method ::: Repetition Control
This attribute is calculated by checking each statement and response for various types of repetition by using n-gram overlap. The motivation for including this as an attribute in dialogue fluidity is that repetitive words or n-grams can be bothersome to the end-user.
Repetitions are measured according to whether they are internal, external or partner. We calculate a percentage based on the single-turn utterance or the entire multi-turn conversation. We use unigrams, bigrams, and trigrams for each repetition type, based on BIBREF8.
We calculate a correlation of each repetition module with respect to human evaluations in order to understand the impact. For the single-turn dataset, the correlation is -0.09 and 0.07 for the internal and partner repetition attribute respectively. For the multi-turn dataset the correlation was -0.05 and -0.02 for the internal and partner repetition attribute respectively. This low correlation is reasonable and was expected. Measuring repetition in this way is not expected to provide huge classification power. However, we will attempt to exploit differences in correlation between these attributes and ones described below, which will provide some classification power.
Method ::: Balance of Dialogue
For this attribute, we calculated the number of questions asked. For this particular case, we are not able to measure a correlation with respect to human evaluations.
Method ::: Short-Safe Answers
Here, we checked for the length of the utterance and the presence of a Named Entity. We checked the correlation of this attribute with the human evaluation scores. The correlation score attained on the single-turn dataset was -0.09, while for the multi-turn dataset the correlation was 0. The full pipeline can be seen diagrammatically in Figure FIGREF9.
Results
To create a final metric, we combine the individual components from Section 2 as features into a Support Vector Machine. The final results for our F1-score from this classification technique are 0.52 and 0.31 for the single and multi-turn data respectively.
We compare our results for both the single-turn and multi-turn experiments to the accuracies on the test data based on the BLEU score. We see an increase of 6% for our method with respect to the BLEU score on the single-turn data, and no change when using the multi-turn test set.
Conclusion
This study aimed to implement an automatic metric to assess the fluidity of dialogue systems. We wanted to test if any set of attributes could show a high correlation to manual evaluations, thus replacing it entirely. As highlighted in the ConvAI2 challenge, automatic metrics are not reliable for a standalone evaluation of low-level dialogue outputs. For this study, three attributes were investigated. Tests were carried out based on these proposed attributes by making use of single and multi-turn datasets. These attributes, combined with the BERT model, showed that our classifier performed better than the BLEU model for the single-turn dataset. However, no improvement was seen on the multi-turn dataset.
Concerning feature importance, we observed that internal repetition and NSP are the most important attributes when used to classify fluidity. We believe that further work can be carried out in finding a more discriminating set of attributes. | F1-score, BLEU score |
d6d29040e7fafceb188e62afba566016b119b23c | d6d29040e7fafceb188e62afba566016b119b23c_0 | Q: Which datasets do they evaluate on?
Text: Introduction
Recently, neural models pre-trained on a language modeling task, such as ELMo BIBREF0 , OpenAI GPT BIBREF1 , and BERT BIBREF2 , have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. The success of BERT can largely be associated to the notion of context-aware word embeddings, which differentiate it from common approaches such as word2vec BIBREF3 that establish a static semantic embedding. Since the introduction of BERT, the NLP community continues to be impressed by the amount of ideas produced on top of this powerful language representation model. However, despite its success, it remains unclear whether the representations produced by BERT can be utilized for tasks such as commonsense reasoning. Particularly, it is not clear whether BERT shed light on solving tasks such as the Pronoun Disambiguation Problem (PDP) and Winograd Schema Challenge (WSC). These tasks have been proposed as potential alternatives to the Turing Test, because they are formulated to be robust to statistics of word co-occurrence BIBREF4 .
Below is a popular example from the binary-choice pronoun coreference problem BIBREF5 of WSC:
Sentence: The trophy doesn't fit in the suitcase because it is too small.
Answers: A) the trophy B) the suitcase
Humans resolve the pronoun “it” to “the suitcase” with no difficulty, whereas a system without commonsense reasoning would be unable to distinguish “the suitcase” from the otherwise viable candidate, “the trophy”.
Previous attempts at solving WSC usually involve heavy utilization of annotated knowledge bases (KB), rule-based reasoning, or hand-crafted features BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . There are also some empirical works towards solving WSC making use of learning BIBREF11 , BIBREF12 , BIBREF1 . Recently, BIBREF13 proposed to use a language model (LM) to score the two sentences obtained when replacing the pronoun by the two candidates. The sentence that is assigned higher probability under the model designates the chosen candidate. Probability is calculated via the chain rule, as the product of the probabilities assigned to each word in the sentence. Very recently, BIBREF14 proposed the knowledge hunting method, which is a rule-based system that uses search engines to gather evidence for the candidate resolutions without relying on the entities themselves. Although these methods are interesting, they need fine-tuning, or explicit substitution or heuristic-based rules. See also BIBREF15 for a discussion.
The BERT model is based on the “Transformer” architecture BIBREF16 , which relies purely on attention mechanisms, and does not have an explicit notion of word order beyond marking each word with its absolute-position embedding. This reliance on attention may lead one to expect decreased performance on commonsense reasoning tasks BIBREF17 , BIBREF18 compared to RNN (LSTM) models BIBREF19 that do model word order directly, and explicitly track states across the sentence. However, the work of BIBREF20 suggests that bidirectional language models such as BERT implicitly capture some notion of coreference resolution.
In this paper, we show that the attention maps created by an out-of-the-box BERT can be directly exploited to resolve coreferences in long sentences. As such, they can be simply repurposed for commonsense reasoning tasks while achieving state-of-the-art results on multiple tasks. On both PDP and WSC, our method outperforms previous state-of-the-art methods, without using expensive annotated knowledge bases or hand-engineered features. On a Pronoun Disambiguation dataset, PDP-60, our method achieves 68.3% accuracy, which is better than the state-of-the-art accuracy of 66.7%. On a WSC dataset, WSC-273, our method achieves 60.3%. As of today, state-of-the-art accuracy on WSC-273 for single model performance is around 57%, BIBREF14 and BIBREF13 . These results suggest that BERT implicitly learns to establish complex relationships between entities such as coreference resolution. Although this helps in commonsense reasoning, solving this task requires more than employing a language model learned from large text corpora.
Attention Guided Reasoning
In this section we first review the main aspects of the BERT approach, which are important to understand our proposal and we introduce notations used in the rest of the paper. Then, we introduce Maximum Attention Score (MAS), and explain how it can be utilized for commonsense reasoning.
BERT and Notation
The concept of BERT is built upon two key ingredients: (a) the transformer architecture and (b) unsupervised pre-training.
The transformer architecture consists of two main building blocks, stacked encoders and decoders, which are connected in a cascaded fashion. The encoder is further divided into two components, namely a self-attention layer and a feed-forward neural network. The self-attention allows for attending to specific words during encoding and therefore establishing a focus context w.r.t. to each word. In contrast to that, the decoder has an additional encoder-decoder layer that switches between self-attention and a feed-forward network. It allows the decoder to attend to specific parts of the input sequence. As attention allows for establishing a relationship between words, it is very important for tasks such as coreference resolution and finding associations. In the specific context of pronouns, attention gives rise to links to $m$ candidate nouns, which we denote in the following as $\mathcal {C}=\left\lbrace c_1,..,c_m\right\rbrace $ . The concept of self-attention is further expanded within BERT by the idea of so called multi-head outputs that are incorporated in each layer. In the following, we will denote heads and layers with $h\in H$ and $l\in L$ , respectively. Multi-heads serve several purposes. On the one hand, they allow for dispersing the focus on multiple positions. On the other hand, they constitute an enriched representation by expanding the embedding space. Leveraging the nearly unlimited amount of data available, BERT learns two novel unsupervised prediction tasks during training. One of the tasks is to predict tokens that were randomly masked given the context, notably with the context being established in a bi-directional manner. The second task constitutes next sentence prediction, whereby BERT learns the relationship between two sentences, and classifies whether they are consecutive.
Maximum Attention Score (MAS)
In order to exploit the associative leverage of self-attention, the computation of MAS follows the notion of max-pooling on attention level between a reference word $s$ (e.g. pronoun) and candidate words $c$ (e.g. multiple choice pronouns). The proposed approach takes as input the BERT attention tensor and produces for each candidate word a score, which indicates the strength of association. To this end, the BERT attention tensor $A\in \mathbb {R}^{H\times L \times \mid \mathcal {C}\mid }$ is sliced into several matrices $A_c\in \mathbb {R}^{H\times L}$ , each of them corresponding to the attention between the reference word and a candidate $c$ . Each $A_c$ is associated with a binary mask matrix $M_c$ . The mask values of $M_c$ are obtained at each location tuple $\left(l,h\right)$ , according to:
$$M_{c}(l,h)= \begin{dcases} 1 & \operatornamewithlimits{argmax}A(l,h)=c \\ 0 & \text{otherwise} \\ \end{dcases}$$ (Eq. 7)
Mask entries are non-zero only at locations where the candidate word $c$ is associated with maximum attention. Limiting the impact of attention by masking allows to accommodate for the most salient parts. Given the $A_c$ and $M_c$ matrix pair for each candidate $c$ , the MAS can be computed. For this purpose, the sum of the Hadamard product for each pair is calculated first. Next, the actual score is obtained by computing the ratio of each Hadamard sum w.r.t. all others according to,
$$MAS(c)=\frac{\sum _{l,h}A_c \circ M_c }{\sum _{c \in \mathcal {C}} \sum _{l,h}A_c \circ M_c} \in \left[0,1\right].$$ (Eq. 8)
Thus MAS retains the attention of each candidate only where it is most dominant, coupling it with the notion of frequency of occurrence to weight the importance. See Fig. 1 for a schematic illustration of the computation of MAS, and the matrices involved.
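A compact NumPy sketch of Eqs. (7)–(8) is given below, assuming the attention values between the reference word and the candidates have already been extracted into an array following the $H \times L \times \mid \mathcal {C}\mid $ layout above:

```python
import numpy as np

def maximum_attention_score(A):
    """A: array of shape (H, L, C) with the attention from the reference word
    to each of C candidates for every head/layer pair. Returns MAS per candidate."""
    winners = A.argmax(axis=-1)                 # (H, L): candidate with max attention
    masked_sums = np.zeros(A.shape[-1])
    for c in range(A.shape[-1]):
        mask = (winners == c)                   # M_c from Eq. (7)
        masked_sums[c] = A[..., c][mask].sum()  # sum of the Hadamard term in Eq. (8)
    return masked_sums / masked_sums.sum()

# Toy example: 12 heads, 12 layers, 2 candidates.
A = np.random.rand(12, 12, 2)
print(maximum_attention_score(A))               # two scores summing to 1
```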
Experimental Results
We evaluate our method on two commonsense reasoning tasks, PDP and WSC.
On the former task, we use the original set of 60 questions (PDP-60) as the main benchmark. The second task (WSC-273) is qualitatively much more difficult. The best recently reported results are not much above random guess. This task consists of 273 questions and is designed to work against traditional linguistic techniques, common heuristics or simple statistical tests over text corpora BIBREF4 .
BERT Model Details
In all our experiments, we used the out-of-the-box BERT models without any task-specific fine-tuning. Specifically, we use the PyTorch implementation of pre-trained $bert-base-uncased$ models supplied by Google. This model has 12 layers (i.e., Transformer blocks), a hidden size of 768, and 12 self-attention heads. In all cases we set the feed-forward/filter size to be 3072 for the hidden size of 768. The total number of parameters of the model is 110M.
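The attention tensors needed for MAS can be obtained along these lines with the current transformers package (the authors used an earlier PyTorch BERT release, so treat the exact API as an assumption); the per-layer maps would then be sliced at the pronoun row and the candidates' token columns to build the $H \times L \times \mid \mathcal {C}\mid $ tensor:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

sentence = "The trophy doesn't fit in the suitcase because it is too small."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Tuple of 12 layers, each of shape (batch=1, heads=12, seq_len, seq_len).
attn = torch.stack(outputs.attentions).squeeze(1)    # (L=12, H=12, seq_len, seq_len)
print(attn.shape)
```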
Pronoun Disambiguation Problem
We first examine our method on PDP-60 for the Pronoun Disambiguation task. In Tab. 1 (top), our method outperforms all previous unsupervised results sharply. Next, we allow other systems to take in necessary components to maximize their test performance. This includes making use of supervised training data that maps commonsense reasoning questions to their correct answer. As reported in Tab. 1 (bottom), our method outperforms the best system in the 2016 competition (58.3%) by a large margin. Specifically, we achieve 68.3% accuracy, better than the more recently reported results from BIBREF24 (66.7%), who makes use of three KBs and a supervised deep network.
Winograd Schema Challenge
On the harder task WSC-273, our method also outperforms the current state-of-the-art, as shown in Tab. 2. Namely, our method achieves an accuracy of 60.3%, nearly 3% accuracy above the previous best result. This is a drastic improvement considering that the best system based on language models outperforms random guess by only 4% in accuracy. This task is more difficult than PDP-60. First, the overall performance of all competing systems is much lower than that of PDP-60. Second, incorporating supervised learning and expensive annotated KBs into USSM provides an insignificant gain this time (+3%), compared to the large gain on PDP-60 (+19%). Finally, for the sake of completeness, BIBREF13 report that their single language model trained on a customized dataset built from CommonCrawl, based on questions used in commonsense reasoning, achieves a higher accuracy (62.6%) than the proposed approach.
We visualize the MAS to have more insights into the decisions of our resolvers. Fig. 2 displays some samples of correct and incorrect decisions made by our proposed method. MAS score of different words are indicated with colors, where the gradient from blue to red represents the score transition from low to high.
Discussion
Pursuing commonsense reasoning in a purely unsupervised way seems very attractive for several reasons. On the one hand, this implies tapping the nearly unlimited resources of unannotated text and leveraging the wealth of information therein. On the other hand, tackling the commonsense reasoning objective in a (more) supervised fashion typically seems to boost performance for a very specific task, as concurrent work shows BIBREF25 . However, the latter approach is unlikely to generalize well beyond this task. That is because covering the complete set of commonsense entities is at best extremely hard to achieve, if possible at all. The data-driven paradigm entails that the derived model can only make generalizations based on the data it has observed. Consequently, a supervised machine learning approach will have to be exposed to all combinations, i.e. replacing lexical items with semantically similar items in order to derive various concept notions. Generally, this is prohibitively expensive and therefore not viable. In contrast, in the proposed (unsupervised self-attention guided) approach this problem is alleviated. This can be largely attributed to the nearly unlimited text corpora on which the model originally learns, which makes it likely to cover a multitude of concept relations, and the fact that attention implicitly reduces the search space. However, all these approaches require the answer to explicitly exist in the text. That is, they are unable to resolve pronouns in light of abstract/implicit referrals that require background knowledge - see BIBREF26 for more detail. However, this is beyond the task of WSC. Last, the presented results suggest that BERT models complex relationships between entities, facilitating commonsense reasoning to a certain degree.
Conclusion
Attracted by the success of recently proposed language representation model BERT, in this paper, we introduce a simple yet effective re-implementation of BERT for commonsense reasoning. Specifically, we propose a method which exploits the attentions produced by BERT for the challenging tasks of PDP and WSC. The experimental analysis demonstrates that our proposed system outperforms the previous state of the art on multiple datasets. However, although BERT seems to implicitly establish complex relationships between entities facilitating tasks such as coreference resolution, the results also suggest that solving commonsense reasoning tasks might require more than leveraging a language model trained on huge text corpora. Future work will entail adaption of the attentions, to further improve the performance. | PDP-60, WSC-273 |
21663d2744a28e0d3087fbff913c036686abbb9a | 21663d2744a28e0d3087fbff913c036686abbb9a_0 | Q: How does their model differ from BERT?
Text: Introduction
Recently, neural models pre-trained on a language modeling task, such as ELMo BIBREF0 , OpenAI GPT BIBREF1 , and BERT BIBREF2 , have achieved impressive results on various natural language processing tasks such as question-answering and natural language inference. The success of BERT can largely be associated to the notion of context-aware word embeddings, which differentiate it from common approaches such as word2vec BIBREF3 that establish a static semantic embedding. Since the introduction of BERT, the NLP community continues to be impressed by the amount of ideas produced on top of this powerful language representation model. However, despite its success, it remains unclear whether the representations produced by BERT can be utilized for tasks such as commonsense reasoning. Particularly, it is not clear whether BERT shed light on solving tasks such as the Pronoun Disambiguation Problem (PDP) and Winograd Schema Challenge (WSC). These tasks have been proposed as potential alternatives to the Turing Test, because they are formulated to be robust to statistics of word co-occurrence BIBREF4 .
Below is a popular example from the binary-choice pronoun coreference problem BIBREF5 of WSC:
Sentence: The trophy doesn't fit in the suitcase because it is too small.
Answers: A) the trophy B) the suitcase
Humans resolve the pronoun “it” to “the suitcase” with no difficulty, whereas a system without commonsense reasoning would be unable to distinguish “the suitcase” from the otherwise viable candidate, “the trophy”.
Previous attempts at solving WSC usually involve heavy utilization of annotated knowledge bases (KB), rule-based reasoning, or hand-crafted features BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . There are also some empirical works towards solving WSC making use of learning BIBREF11 , BIBREF12 , BIBREF1 . Recently, BIBREF13 proposed to use a language model (LM) to score the two sentences obtained when replacing the pronoun by the two candidates. The sentence that is assigned higher probability under the model designates the chosen candidate. Probability is calculated via the chain rule, as the product of the probabilities assigned to each word in the sentence. Very recently, BIBREF14 proposed the knowledge hunting method, which is a rule-based system that uses search engines to gather evidence for the candidate resolutions without relying on the entities themselves. Although these methods are interesting, they need fine-tuning, or explicit substitution or heuristic-based rules. See also BIBREF15 for a discussion.
The BERT model is based on the “Transformer” architecture BIBREF16 , which relies purely on attention mechanisms, and does not have an explicit notion of word order beyond marking each word with its absolute-position embedding. This reliance on attention may lead one to expect decreased performance on commonsense reasoning tasks BIBREF17 , BIBREF18 compared to RNN (LSTM) models BIBREF19 that do model word order directly, and explicitly track states across the sentence. However, the work of BIBREF20 suggests that bidirectional language models such as BERT implicitly capture some notion of coreference resolution.
In this paper, we show that the attention maps created by an out-of-the-box BERT can be directly exploited to resolve coreferences in long sentences. As such, they can be simply repurposed for commonsense reasoning tasks while achieving state-of-the-art results on multiple tasks. On both PDP and WSC, our method outperforms previous state-of-the-art methods, without using expensive annotated knowledge bases or hand-engineered features. On a Pronoun Disambiguation dataset, PDP-60, our method achieves 68.3% accuracy, which is better than the state-of-the-art accuracy of 66.7%. On a WSC dataset, WSC-273, our method achieves 60.3%. As of today, state-of-the-art accuracy on WSC-273 for single model performance is around 57%, BIBREF14 and BIBREF13 . These results suggest that BERT implicitly learns to establish complex relationships between entities such as coreference resolution. Although this helps in commonsense reasoning, solving this task requires more than employing a language model learned from large text corpora.
Attention Guided Reasoning
In this section we first review the main aspects of the BERT approach, which are important to understand our proposal and we introduce notations used in the rest of the paper. Then, we introduce Maximum Attention Score (MAS), and explain how it can be utilized for commonsense reasoning.
BERT and Notation
The concept of BERT is built upon two key ingredients: (a) the transformer architecture and (b) unsupervised pre-training.
The transformer architecture consists of two main building blocks, stacked encoders and decoders, which are connected in a cascaded fashion. The encoder is further divided into two components, namely a self-attention layer and a feed-forward neural network. The self-attention allows for attending to specific words during encoding and therefore establishing a focus context w.r.t. to each word. In contrast to that, the decoder has an additional encoder-decoder layer that switches between self-attention and a feed-forward network. It allows the decoder to attend to specific parts of the input sequence. As attention allows for establishing a relationship between words, it is very important for tasks such as coreference resolution and finding associations. In the specific context of pronouns, attention gives rise to links to $m$ candidate nouns, which we denote in the following as $\mathcal {C}=\left\lbrace c_1,..,c_m\right\rbrace $ . The concept of self-attention is further expanded within BERT by the idea of so called multi-head outputs that are incorporated in each layer. In the following, we will denote heads and layers with $h\in H$ and $l\in L$ , respectively. Multi-heads serve several purposes. On the one hand, they allow for dispersing the focus on multiple positions. On the other hand, they constitute an enriched representation by expanding the embedding space. Leveraging the nearly unlimited amount of data available, BERT learns two novel unsupervised prediction tasks during training. One of the tasks is to predict tokens that were randomly masked given the context, notably with the context being established in a bi-directional manner. The second task constitutes next sentence prediction, whereby BERT learns the relationship between two sentences, and classifies whether they are consecutive.
Maximum Attention Score (MAS)
In order to exploit the associative leverage of self-attention, the computation of MAS follows the notion of max-pooling on attention level between a reference word $s$ (e.g. pronoun) and candidate words $c$ (e.g. multiple choice pronouns). The proposed approach takes as input the BERT attention tensor and produces for each candidate word a score, which indicates the strength of association. To this end, the BERT attention tensor $A\in \mathbb {R}^{H\times L \times \mid \mathcal {C}\mid }$ is sliced into several matrices $A_c\in \mathbb {R}^{H\times L}$ , each of them corresponding to the attention between the reference word and a candidate $c$ . Each $A_c$ is associated with a binary mask matrix $M_c$ . The mask values of $M_c$ are obtained at each location tuple $\left(l,h\right)$ , according to:
$$M_{c}(l,h)= \begin{dcases} 1 & \operatornamewithlimits{argmax}A(l,h)=c \\ 0 & \text{otherwise} \\ \end{dcases}$$ (Eq. 7)
Mask entries are non-zero only at locations where the candidate word $c$ is associated with maximum attention. Limiting the impact of attention by masking allows to accommodate for the most salient parts. Given the $A_c$ and $M_c$ matrix pair for each candidate $c$ , the MAS can be computed. For this purpose, the sum of the Hadamard product for each pair is calculated first. Next, the actual score is obtained by computing the ratio of each Hadamard sum w.r.t. all others according to,
$$MAS(c)=\frac{\sum _{l,h}A_c \circ M_c }{\sum _{c \in \mathcal {C}} \sum _{l,h}A_c \circ M_c} \in \left[0,1\right].$$ (Eq. 8)
Thus MAS retains the attention of each candidate only where it is most dominant, coupling it with the notion of frequency of occurrence to weight the importance. See Fig. 1 for a schematic illustration of the computation of MAS, and the matrices involved.
Experimental Results
We evaluate our method on two commonsense reasoning tasks, PDP and WSC.
On the former task, we use the original set of 60 questions (PDP-60) as the main benchmark. The second task (WSC-273) is qualitatively much more difficult. The best recently reported results are not much above random guess. This task consists of 273 questions and is designed to work against traditional linguistic techniques, common heuristics or simple statistical tests over text corpora BIBREF4 .
BERT Model Details
In all our experiments, we used the out-of-the-box BERT models without any task-specific fine-tuning. Specifically, we use the PyTorch implementation of pre-trained $bert-base-uncased$ models supplied by Google. This model has 12 layers (i.e., Transformer blocks), a hidden size of 768, and 12 self-attention heads. In all cases we set the feed-forward/filter size to be 3072 for the hidden size of 768. The total number of parameters of the model is 110M.
Pronoun Disambiguation Problem
We first examine our method on PDP-60 for the Pronoun Disambiguation task. In Tab. 1 (top), our method outperforms all previous unsupervised results sharply. Next, we allow other systems to take in necessary components to maximize their test performance. This includes making use of supervised training data that maps commonsense reasoning questions to their correct answer. As reported in Tab. 1 (bottom), our method outperforms the best system in the 2016 competition (58.3%) by a large margin. Specifically, we achieve 68.3% accuracy, better than the more recently reported results from BIBREF24 (66.7%), who makes use of three KBs and a supervised deep network.
Winograd Schema Challenge
On the harder task WSC-273, our method also outperforms the current state-of-the-art, as shown in Tab. 2. Namely, our method achieves an accuracy of 60.3%, nearly 3% accuracy above the previous best result. This is a drastic improvement considering that the best system based on language models outperforms random guess by only 4% in accuracy. This task is more difficult than PDP-60. First, the overall performance of all competing systems is much lower than that of PDP-60. Second, incorporating supervised learning and expensive annotated KBs into USSM provides an insignificant gain this time (+3%), compared to the large gain on PDP-60 (+19%). Finally, for the sake of completeness, BIBREF13 report that their single language model trained on a customized dataset built from CommonCrawl, based on questions used in commonsense reasoning, achieves a higher accuracy (62.6%) than the proposed approach.
We visualize the MAS to have more insights into the decisions of our resolvers. Fig. 2 displays some samples of correct and incorrect decisions made by our proposed method. MAS score of different words are indicated with colors, where the gradient from blue to red represents the score transition from low to high.
Discussion
Pursuing commonsense reasoning in a purely unsupervised way seems very attractive for several reasons. On the one hand, this implies tapping the nearly unlimited resources of unannotated text and leveraging the wealth of information therein. On the other hand, tackling the commonsense reasoning objective in a (more) supervised fashion typically seems to boost performance for a very specific task, as concurrent work shows BIBREF25 . However, the latter approach is unlikely to generalize well beyond this task. That is because covering the complete set of commonsense entities is at best extremely hard to achieve, if possible at all. The data-driven paradigm entails that the derived model can only make generalizations based on the data it has observed. Consequently, a supervised machine learning approach will have to be exposed to all combinations, i.e. replacing lexical items with semantically similar items in order to derive various concept notions. Generally, this is prohibitively expensive and therefore not viable. In contrast, in the proposed (unsupervised self-attention guided) approach this problem is alleviated. This can be largely attributed to the nearly unlimited text corpora on which the model originally learns, which makes it likely to cover a multitude of concept relations, and the fact that attention implicitly reduces the search space. However, all these approaches require the answer to explicitly exist in the text. That is, they are unable to resolve pronouns in light of abstract/implicit referrals that require background knowledge - see BIBREF26 for more detail. However, this is beyond the task of WSC. Last, the presented results suggest that BERT models complex relationships between entities, facilitating commonsense reasoning to a certain degree.
Conclusion
Attracted by the success of recently proposed language representation model BERT, in this paper, we introduce a simple yet effective re-implementation of BERT for commonsense reasoning. Specifically, we propose a method which exploits the attentions produced by BERT for the challenging tasks of PDP and WSC. The experimental analysis demonstrates that our proposed system outperforms the previous state of the art on multiple datasets. However, although BERT seems to implicitly establish complex relationships between entities facilitating tasks such as coreference resolution, the results also suggest that solving commonsense reasoning tasks might require more than leveraging a language model trained on huge text corpora. Future work will entail adaption of the attentions, to further improve the performance. | Their model does not differ from BERT. |
d8cecea477dfc5163dca6e2078a2fe6bc94ce09f | d8cecea477dfc5163dca6e2078a2fe6bc94ce09f_0 | Q: Which metrics are they evaluating with?
Text: Introduction
Narrative is a fundamental form of representation in human language and culture. Stories connect individuals and deliver experience, emotions and knowledge. Narrative comprehension has attracted long-standing interest in natural language processing (NLP) BIBREF1 , and is widely applicable to areas such as content creation. Enabling machines to understand narrative is also an important first step towards real intelligence. Previous studies on narrative comprehension include character role identification BIBREF2 , narrative schema construction BIBREF3 , and plot pattern identification BIBREF4 . However, their main focus is on analyzing the stories themselves. In contrast, we concentrate on training machines to predict the end of the stories. Story completion tasks rely not only on the logic of the story itself, but also require implicit commonsense knowledge outside the story. To understand stories, humans can use information from both the story itself and other implicit sources such as commonsense knowledge and normative social behaviors BIBREF5 . In this paper, we propose to imitate such behavior by incorporating structured commonsense knowledge to aid story ending prediction.
Recently, BIBREF0 introduced the ROCStories dataset as a benchmark for evaluating models' ability to understand the narrative structure of a story, where the model is asked to select the correct ending from two candidates for a given story. To solve this task, both traditional machine learning approaches BIBREF6 and neural network models BIBREF7 have been used. Some works also exploit information such as sentiment and topic words BIBREF8 and event frames BIBREF9 . More recently, there has been work BIBREF10 that leverages a large unlabeled corpus, such as the BooksCorpus BIBREF11 dataset, to improve the performance. However, none of them explicitly uses structured commonsense knowledge, which humans would naturally incorporate to improve model performance.
Figure 1 (a) shows a typical example in ROCStories dataset: a story about Dan and his parents. The blue words are key-words in the body of the story, and the red word is the key-word in the correct story ending. Figure 1 (b) shows the (implicit) relations among these key-words, which are obtained as a subgraph from ConceptNet BIBREF12 , a commonsense knowledge base. By incorporating such structured external commonsense knowledge, we are able to discover strong associations between these keywords and correctly predict the story ending. Note that these associations are not available from the story itself.
To solve the story completion task, we propose a neural network model that integrates three types of information: (i) narrative sequence, (ii) sentiment evolution, and (iii) commonsense knowledge. The clues in narrative chain are captured by a transformer decoder, constructed from a pretrained language model. The sentiment prediction is obtained by using a LSTM model. Additionally, the commonsense knowledge is extracted from an existing structured knowledge base, ConceptNet. In particular, we use a combination gate to integrate all the information and train the model in an end-to-end manner. Experiments demonstrate the improved performance of our model on the task.
Related Work
Our work on story completion is closely related to several research areas such as reading comprehension, sentiment analysis and commonsense knowledge integration, which will be briefly reviewed as below.
Reading Comprehension is the ability to process text, understand its meaning, and integrate it with what the reader already knows. It has been an important field in NLP for a long time. The SQuAD dataset BIBREF13 presents a task to locate the correct answer to a question in a context document and to recognize unanswerable questions. The RACE dataset BIBREF14 , which is constructed from Chinese students' English examinations, introduces another task that requires not only retrieval but also reasoning. Usually these tasks are solved by match-based models like QANET BIBREF15 , hierarchical attention models like HAF BIBREF16 , and dynamic fusion based models like DFN BIBREF17 . There is also related research on story comprehension, such as event understanding of narrative plots BIBREF3 , character personas BIBREF2 and inter-character relationships BIBREF18 .
Sentiment Analysis aims to determine the attitude of a speaker (or a writer) with respect to some topic, the overall contextual polarity, or the emotional reaction to a document, interaction or event. There have been rich studies in this field, such as learning word vectors for sentiment analysis BIBREF19 and recognizing contextual polarity at the phrase level BIBREF20 . Recently, researchers studied large-scale sentiment analysis across news and blogs BIBREF21 , as well as opinion mining on Twitter BIBREF22 . Additionally, there have been studies focused on joint learning for better performance, such as detecting sentiment and topic simultaneously from text BIBREF23 .
Commonsense Knowledge Integration If machines receive information from a commonsense knowledge base, they become more powerful for many tasks like reasoning BIBREF24 , dialogue generation BIBREF25 and cloze-style reading comprehension BIBREF26 . Related works include BIBREF24 , which builds a knowledge graph and uses it to deduce the size of objects BIBREF24 , in addition to BIBREF27 , in which a music knowledge graph is built for a single-round dialogue system. There are several ways to incorporate an external knowledge base (e.g., ConceptNet). For example, BIBREF28 uses a knowledge-based word embedding, BIBREF29 employs tri-LSTMs to encode the knowledge triple, and BIBREF30 and BIBREF26 apply graph attention embedding to encode sub-graphs from a knowledge base. However, their work does not involve narrative completion.
Story Completion Traditional machine learning methods have been used to solve the ROCStory Cloze Task, such as BIBREF6 . To improve the performance, features like topic words and sentiment scores have also been extracted and incorporated BIBREF8 . Neural network models have also been applied to this task (e.g., BIBREF31 and BIBREF7 ), which use LSTMs to encode different parts of the story and calculate their similarities. In addition, BIBREF9 introduces event frames into their model and leverages five different embeddings. Finally, BIBREF10 develops a transformer model and achieves state-of-the-art performance on ROCStories, where the transformer was pretrained on BooksCorpus (a large unlabeled corpus) and finetuned on ROCStories.
Proposed Model
For a given story $S = \lbrace s_1, s_2, ..., s_L\rbrace $ consisting of a sequence of $L$ sentences, our task is to select the correct ending out of two candidates, $e_1$ and $e_2$ , so that the completed story is reasonable and consistent. On the face of it, the problem can be understood as a standard binary classification problem. However, learning a binary classifier with standard NLP techniques on the explicit information in the story is not sufficient. This is because correctly predicting the story ending usually requires reasoning with implicit commonsense knowledge. Therefore, we develop a neural network model to predict the story ending by integrating three sources of information: narrative sequence, sentiment evolution and structured commonsense knowledge (see Figure 2 ). Note that the first two types of information are explicit in the story, while the third type is implicit and has to be imported from an external source such as a knowledge base. In this section, we will explain how we exploit these three information sources and integrate them to make the final prediction.
Narrative Sequence
To describe a consistent story, plots should be planned in a logically reasonable sequence; that is, there should be a narrative chain between different characters in the story. This is illustrated in the example in Figure 3 , where words in red are events and words in blue are characters. The story chain, “Agatha wanted pet birds $\rightarrow $ Agatha purchased pet finches $\rightarrow $ Agatha couldn't stand noise $\rightarrow $ mess was worse $\rightarrow $ Agatha return pet birds", describes a more coherent and reasonable story than “Agatha wanted pet birds $\rightarrow $ Agatha purchased pet finches $\rightarrow $ Agatha couldn't stand noise $\rightarrow $ mess was worse $\rightarrow $ Agatha buy two more". When Agatha could not stand the noise, it is more likely for her to give these birds away than to buy more. Therefore, developing a better semantic representation for narrative chains is important for us to predict the right endings.
Inspired by the recent research from OpenAI BIBREF10 on forming semantic representations of narrative sequences, we first pre-train a high-capacity language model on a large unlabeled corpus of text to learn the general information hidden in the context, and then fine-tune the model on this story completion task.
Given a large corpus of tokens $C = \lbrace c_1, c_2, ... , c_n\rbrace $ , we can pre-train a language model to maximize the likelihood :
$$L_{lm}(C) = \sum _{i} \log P_l(c_i|c_{i-k}, ..., c_{i-1}; \theta )$$ (Eq. 6)
where $k$ is the window size, and the conditional probability $P_l$ is modeled using a neural network with parameters $\theta $ .
Similar to BIBREF10 , we use a multi-layer transformer decoder with multi-headed self-attention for the language model:
$$h_0 &= C W_e+W_p \\ h_l &= transformer(h_{l-1}), l \in [1,M] \\ P(c) &= softmax(h_M W_e^T)$$ (Eq. 7)
where $C = \lbrace c_1, c_2, ... , c_n\rbrace $ are tokens in corpus, $W_e$ is the token embedding matrix, $W_p$ is the position embedding matrix and $M$ is the number of transformer blocks.
We use the pre-trained parameters released by OpenAI as the initialization for the transformer decoder. We adapt these parameters to our classification task. For each candidate story $(s_1, s_2, s_3, s_4, e_i)$ (i.e., the story body followed by one candidate ending), we serialize it into a sequence of tokens $X = \lbrace x_1, ... , x_k\rbrace $ , where $k$ is the number of tokens. Then the fine-tuned transformer takes $X$ as its input and outputs the probability of $e_i$ being the correct ending:
$$P_N(y|s_1, ..., s_4, e_i) = softmax(W_M h_M^k + b_M)$$ (Eq. 9)
where $y \in \lbrace 0,1\rbrace $ is the label indicating whether $e_i$ is the correct ending, $h_M^k$ denotes the hidden representation at the $M$ -th layer of the transformer associated with the $k$ -th token, and $W_M$ and $b_M$ are parameters in the linear output layer.
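For concreteness, a candidate ending can be scored from the final hidden state of a decoder-only transformer roughly as sketched below. This is an illustrative re-implementation, not the authors' code: PyTorch's TransformerEncoderLayer with a causal mask stands in for the pre-trained transformer blocks, and all sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class EndingScorer(nn.Module):
    """Minimal GPT-style scorer: embed tokens, run causal self-attention,
    and classify from the hidden state of the last token (cf. Eq. 7 and Eq. 9)."""
    def __init__(self, vocab_size, d_model=256, n_layers=4, n_heads=8, max_len=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)       # W_e
        self.pos_emb = nn.Embedding(max_len, d_model)           # W_p
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)    # stand-in for the M transformer blocks
        self.cls = nn.Linear(d_model, 2)                         # W_M, b_M

    def forward(self, token_ids):                                # token_ids: (batch, seq_len), no padding
        b, t = token_ids.shape
        pos = torch.arange(t, device=token_ids.device).unsqueeze(0)
        h = self.tok_emb(token_ids) + self.pos_emb(pos)           # h_0 = C W_e + W_p
        causal = torch.ones(t, t, device=token_ids.device).triu(1).bool()
        h = self.blocks(h, mask=causal)                            # h_1 ... h_M with causal masking
        return torch.softmax(self.cls(h[:, -1]), dim=-1)          # softmax(W_M h_M^k + b_M)
```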
Sentiment Evolution
Besides narrative sequence, getting a good sentiment prediction model is also important for choosing the correct endings. Note that stories are different from other objective texts (e.g., news), as they have emotions within the context. Usually there is a sentiment evolution when a storyline is being revealed BIBREF32 .
First, we pre-train a sentiment prediction model using the training set of the ROCStories, which does not have alternative endings (i.e., no negative samples). Given a five-sentence story $S = \lbrace s_1, s_2, s_3, s_4, s_5\rbrace $ , we take the first four sentences as the body $B$ and the last sentence as the ending $e$ . We extract the sentiment polarity of each sentence by utilizing a lexicon and rule-based sentiment analysis tool (VADER) BIBREF33 :
$$E_i = \text{VADER}(s_i), i \in [1,5]$$ (Eq. 12)
where $E_i$ is a vector of three elements including probabilities of the $i$ -th sentence being positive, negative and neutral.
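The per-sentence sentiment vector $E_i$ of Eq. 12 can be obtained with NLTK's off-the-shelf VADER implementation; the snippet below is only a sketch (the example sentences are invented, and VADER's extra compound score is simply dropped to form the three-element vector).

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")   # one-off download of the VADER lexicon
sia = SentimentIntensityAnalyzer()

def sentence_sentiment(sentence):
    """Return E_i = [pos, neg, neu] for one sentence, as in Eq. 12."""
    scores = sia.polarity_scores(sentence)   # dict with 'pos', 'neg', 'neu' and 'compound'
    return [scores["pos"], scores["neg"], scores["neu"]]

body = ["Dan's parents were overweight.", "Dan was overweight as well.",
        "The doctors told his parents it was unhealthy.", "His parents understood the problem."]
E = [sentence_sentiment(s) for s in body]   # one 3-dimensional sentiment vector per sentence
```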
Then, we use a Long Short-Term Memory (LSTM) neural network to encode the sentence sentiments $E_i$ with its context into the hidden state $h_i$ , which summarizes the contextual sentiment information around the sentence $s_i$ . And we use the last hidden state $h_4$ to predict the sentiment vector $E_p$ in the ending $e$ :
$$h_i &= \text{LSTM}(E_i,h_{i-1}) , i\in [1,4] \\ E_p &= softmax(W_e h_4 + b_e)$$ (Eq. 13)
We train the sentiment model by maximizing the cosine similarity between the predicted sentiment vector $E_p$ and the sentiment vector $E_5$ of the correct ending:
$$sim(S) = \dfrac{E_p \cdot E_5}{\Vert E_p \Vert _2 \cdot \Vert E_5 \Vert _2}$$ (Eq. 14)
Afterwards, we adapt the parameters to the story ending selection task and calculate the following conditional probability $P_S$ :
$$P_S(y|s_1, ..., s_4, e_i) = softmax(E_pW_sE_e)$$ (Eq. 15)
where $S = \lbrace s_1, s_2, s_3, s_4\rbrace $ is the body, $e_i$ is the candidate ending, $E_p$ is the predicted sentiment vector, $E_e$ is the sentiment vector extracted from ending $e_i$ , and $W_s$ is the similarity matrix to be learned.
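A minimal PyTorch sketch of the sentiment-evolution pre-training step (Eqs. 13-14) might look as follows; the 64-dimensional LSTM matches the experimental settings reported later, while all other names and the toy tensors are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentimentEvolution(nn.Module):
    """Encode the sentiment vectors of the 4 body sentences with an LSTM
    and predict the sentiment vector of the ending (Eq. 13)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 3)                 # W_e, b_e

    def forward(self, body_sentiments):                 # (batch, 4, 3)
        _, (h_n, _) = self.lstm(body_sentiments)        # h_n[-1] is h_4 for each story
        return torch.softmax(self.out(h_n[-1]), dim=-1)  # E_p

model = SentimentEvolution()
E_body = torch.rand(8, 4, 3)                             # toy batch of body sentiment vectors
E_5 = torch.rand(8, 3)                                   # sentiment vectors of the gold endings
E_p = model(E_body)
loss = -F.cosine_similarity(E_p, E_5, dim=-1).mean()     # maximize sim(S) of Eq. 14
loss.backward()
```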
Commonsense Knowledge
Narrative sequence and sentiment evolution, though useful, are not sufficient to make correct predictions. In a typical story, newly introduced key-words may not be explained in the story because story-writers are not given enough narrative space and time to develop and describe them BIBREF34 . In fact, there are many hidden relationships among key-words in natural stories. In Figure 1 (a), although the key-word “diet" in the ending is not mentioned in the body, there are hidden relationships among “diet", “overweight" and “unhealthy" as shown in Figure 1 (b). When this kind of implicit information is uncovered in the model, it is easier to predict the correct story ending.
We leverage the implicit knowledge by using a numberbatch word embedding BIBREF12 , which is trained on data from ConceptNet, word2vec, GloVe, and OpenSubtitles. The numberbatch achieves good performance on tasks related to commonsense knowledge BIBREF28 . For instance, the cosine similarity between “diet" and “overweight" in numberbatch is 0.453, but it is 0.326 in GloVe. This is because numberbatch makes use of the relationship between them as shown in Figure 1 (b) while GloVe does not.
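Such similarities can be computed by loading the English Numberbatch vectors, which are distributed in word2vec text format, e.g. with gensim; the file name below is an assumption about a local copy, and the exact values depend on the embedding release.

```python
from gensim.models import KeyedVectors

# Assumed local copy of the English ConceptNet Numberbatch vectors in word2vec text format.
numberbatch = KeyedVectors.load_word2vec_format("numberbatch-en.txt", binary=False)

for a, b in [("diet", "overweight"), ("diet", "unhealthy")]:
    if a in numberbatch and b in numberbatch:
        print(a, b, float(numberbatch.similarity(a, b)))   # cosine similarity of the two key-words
```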
Algorithm 1: Knowledge distance computation
for each sentence $s_j \in S$:
  $distance_j = 0$; $num = 0$
  for each word $w \in e_i$:
    $max_d = 0$; $num \mathrel {+}= 1$
    for each word $u \in s_j$:
      $d$ = cosine similarity$(w, u)$; if $d > max_d$ then $max_d = d$
    $distance_j \mathrel {+}= max_d$
  $distance_j \mathrel {/}= num$
return $(distance_1, ..., distance_4)$
Given the body $S = \lbrace s_1, s_2, s_3, s_4\rbrace $ , a candidate ending $e_i$ and the label $y$ , we tokenize each sentence using NLTK and Stanford's CoreNLP tools BIBREF35 . After deleting the stop words, we calculate the knowledge distance vector $D$ between the candidate ending and the body by Algorithm 1. We compute the similarity between two key-words using the cosine similarity of their vector space representations in numberbatch. For each sentence $s_i$ in the body, we then quantify the distance to the ending using the averaged alignment score of every key-word in the ending. Then we use a linear layer to model the conditional probability $P_C$ :
$$P_C(y|s_1, ..., s_4, e_i) = softmax(W_dD + b_d)$$ (Eq. 17)
where $W_d$ and $b_d$ are parameters in the linear output layer, and $D$ is the four-dimensional distance vector.
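Algorithm 1 amounts to an average-of-maximum alignment between ending key-words and body words; a plain-Python sketch is given below, assuming some embedding lookup emb (for instance, the Numberbatch KeyedVectors loaded above) and tokenized, stop-word-filtered sentences (names are ours).

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def knowledge_distance(body_sentences, ending_words, emb):
    """body_sentences: list of 4 token lists; ending_words: token list of the ending.
    Returns the 4-dimensional distance vector D used in Eq. 17."""
    D = []
    for sentence in body_sentences:
        total, num = 0.0, 0
        for w in ending_words:
            if w not in emb:          # words without an embedding are skipped in this sketch
                continue
            num += 1
            max_d = 0.0
            for u in sentence:
                if u in emb:
                    max_d = max(max_d, cosine(emb[w], emb[u]))
            total += max_d            # best alignment of this ending key-word to the sentence
        D.append(total / max(num, 1))  # average over the ending key-words
    return D
```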
Combination Gate
Finally, we predict the story ending by combining the above three sources of information. We utilize the feature vectors $h_M^k$ in the narrative sequence, $E_e$ in the sentiment evolution, and $D$ in the commonsense knowledge and calculate their cosine similarities. Then we concatenate them into a vector $g$ . We use a linear layer to model the combination gate and use that gate to combine three conditional probabilities.
$$G &= softmax(W_gg + b_g) \\ \tilde{P}(y|s_1, ..., s_4, e_i) &= softmax(sum(G \odot [P_N; P_S; P_C]))$$ (Eq. 19)
where $W_g$ and $b_g$ are parameters in the linear layer, $(P_N, P_S, P_C)$ are the three probabilities modeled in ( 9 ), ( 15 ) and ( 17 ), $G$ is the hidden variable that weighs three different conditional probabilities and $\odot $ is element-wise multiplication.
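The gating step of Eq. 19 can be sketched in PyTorch as follows, where the feature vector $g$ is assumed to be given and the shapes are illustrative.

```python
import torch
import torch.nn as nn

class CombinationGate(nn.Module):
    """Weigh the three component probabilities P_N, P_S, P_C (Eq. 19)."""
    def __init__(self, feat_dim):
        super().__init__()
        self.gate = nn.Linear(feat_dim, 3)            # W_g, b_g

    def forward(self, g, P_N, P_S, P_C):              # g: (batch, feat_dim); each P_*: (batch, 2)
        G = torch.softmax(self.gate(g), dim=-1)       # (batch, 3) gate weights
        P = torch.stack([P_N, P_S, P_C], dim=1)       # (batch, 3, 2)
        mixed = (G.unsqueeze(-1) * P).sum(dim=1)      # sum of G-weighted component distributions
        return torch.softmax(mixed, dim=-1)           # final prediction
```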
Finally, since each of the three components ( $P_N$ , $P_S$ and $P_C$ ) is either pre-trained on a separate corpus or individually tuned on the task, we fine-tune the entire model in an end-to-end manner by minimizing the following cost:
$$\tilde{L} = L_{cm}(S) - \lambda * L_{lm}(C)$$ (Eq. 20)
where $L_{cm}(S) = \sum -y\log (\tilde{P})$ is the cross-entropy between the final predicted probability and the true label, $L_{lm}$ is the language model cost used as a regularization term, and $\lambda $ is the regularization parameter.
Dataset
We evaluated our model on ROCStories BIBREF0 , a publicly available collection of commonsense short stories. This corpus consists of 100,000 five-sentence stories. Each story logically follows everyday topics created by Amazon Mechanical Turk (MTurk) workers. These stories contain a variety of commonsense causal and temporal relations between everyday events. Writers also developed an additional 3,742 stories, each containing a four-sentence body and two candidate endings. The endings were collected by asking MTurk workers to write both a right ending and a wrong ending after eliminating the original endings of the given short stories. Both endings were required to include at least one character from the main story line and to make logical sense, and were tested on AMT to ensure their quality. The published ROCStories dataset consists of a training set of 98,162 stories without candidate wrong endings, together with an evaluation set and a test set, each of which has the same structure (1 body + 2 candidate endings) and a size of 1,871.
We find that the dataset contains 43,095 unique words, of which 28,012 are key-words in ConceptNet. The average numbers of words and of ConceptNet key-words per sentence are shown in Table 1 . $s_1$ , $s_2$ , $s_3$ and $s_4$ are the four sentences in the body of a story, and $e_1$ and $e_2$ are the two candidate endings. A large portion (65%) of the words mentioned in the stories are key-words in ConceptNet. Thus we believe ConceptNet can provide additional information to the model.
In our experiments, we use the training set, which does not have candidate endings, to pre-train the sentiment prediction model. For learning to select the right ending, we randomly split the ROCStories evaluation set, using 80% of the stories with two candidate endings as our training set (1,479 cases) and the remaining 20% as our validation set (374 cases), and we utilize the ROCStories test set as our testing set (1,871 cases).
Baselines
We use the following models as our baselines:
Msap BIBREF6 : Msap uses a linear classifier based on language modeling probabilities of the entire story, and utilizes linguistic features of the ending sentences. These ending “style” features include sentence length, word and character n-gram in each candidate ending (independent of story).
HCM BIBREF8 : HCM uses FC-SemLM BIBREF36 to represent events in the story, learns sentiment trajectories in the form of an N-gram language model, and uses the GloVe embeddings of topic words to extract a topical consistency feature. It uses Expectation-Maximization for training.
DSSM BIBREF31 : DSSM first uses two deep neural networks to project the context and the candidate endings into the same vector space, and then chooses the ending based on its cosine similarity with the context.
Cai BIBREF7 : Cai uses a BiLSTM RNN with attention mechanisms to encode the body and the ending of the story separately, and uses the cosine similarity between their representations to score each ending during the selection process.
SeqMANN BIBREF9 : SeqMANN uses a multi-attention neural network and introduces semantic sequence information extracted from FC-SemLM as external knowledge. The embedding layer concatenates five representations including word embedding, character feature, part-of-speech (POS) tagging, sentiment polarity and negation. The model uses DenseNet to match body with an ending.
FTLM BIBREF10 : FTLM solves the stories cloze test by pre-training a language model using a multi-layer transformer on a diverse corpus of unlabeled text, followed by discriminative fine-tuning.
Experimental Settings
We tune the hyperparameters of the models on the validation set. Specifically, we set the dimension of the LSTM for sentiment prediction to 64. We use a mini-batch size of 8 and Adam to train all parameters. The learning rate is set to 0.001 initially with a decay rate of 0.5 per epoch.
Results
We evaluated the baselines and our model using accuracy as the metric on the ROCStories dataset, and summarize these results in Table 2 . The linear classifier with a language model, Msap, achieved an accuracy of 75.2%. When additional features, such as sentiment trajectories and topic words, were added to traditional machine learning methods, HCM achieved an accuracy of 77.6%. More recently, neural network-based models have been used. DSSM, which simply used a deep structured semantic model to learn representations for both bodies and endings, only achieved an accuracy of 58.5%. Cai improved neural model performance to 74.7% by applying attention mechanisms on a BiLSTM RNN structure. SeqMANN further improved the performance to 84.7% by combining more information in the embedding layer, such as character features, part-of-speech (POS) tagging features, sentiment polarity, negation information and external knowledge of semantic sequences. Researchers also improved model performance by pre-training word embeddings on large external corpora. FTLM pre-trained a language model on a large unlabeled corpus, fine-tuned it on the ROCStories dataset, and achieved an accuracy of 86.5%.
We tried two different ways to construct narrative sequence features: Plot&End and FullStory. Plot&End encodes the body and the ending of a story separately and then computes their cosine similarity. We use a hierarchical structure to encode the four body sentences. However, using such an encoding method, our model only achieved an accuracy of 78.4%. One possible reason is that the relations between sentences learned through the pre-trained language model are not fully exploited if we encode each sentence separately. FullStory encodes all five sentences together. Our model achieved the best performance when using the FullStory mode to encode narrative sequence information. We achieved an accuracy of 87.6%, outperforming all baseline models. This improvement may come from the full use of the pre-trained transformer blocks, as well as the incorporation of structured commonsense knowledge and sentiment information into the model.
Ablation Study
We conducted another two groups of experiments to investigate the contribution of the three different types of information: narrative sequence, sentiment evolution and commonsense knowledge. First, we measure the accuracy of using only one type of information at a time and report the results in Table 3 . When we use just one type of information, the performance is worse than when using all of the information, suggesting that a single type of information is insufficient for story ending selection. We also measure the performance of our model by stripping one type of information at a time and display the results in Table 4 . We observe that removing the narrative sequence information decreases model performance most significantly. We suspect this is because the narrative chain is the key element that differentiates a story from other types of writing. Therefore, removing narrative sequence information makes it difficult to predict the story ending. If we only use the narrative sequence information, the performance is 85.3%. When commonsense knowledge is added to the model on top of the narrative sequence information, the performance improves to 87.2%, which is a statistically significant gain. When sentiment evolution information is added, the model only improves to 87.6%. We speculate this is because the pre-trained language model used for the narrative sequence information may already capture some sentiment information, as it is trained on an ensemble of several large corpora. This suggests that commonsense knowledge has a large impact on the narrative prediction task.
Case Study
We present several examples to describe the decision made at the combination gate. All the examples are shown in Table 5 .
The first story shows how the narrative sequence can be the key to detecting the coherent story ending. This one tells a story of Agatha and birds. As we analyzed in the narrative sequence section, the narrative chain is apparently the most effective clue in deciding the right ending. In the combination gate, the narrative component's weight is 0.5135, which is larger than the sentiment component's weight of 0.2214 as well as the commonsense component's weight of 0.2633. The conditional probability of the correct ending given the narrative information is 0.8634, which is much larger than that of the wrong ending. As both candidate sentences' sentiments are neutral, the sentiment information is not useful. And as the word “buy” is more closely related to “want" and “purchase" mentioned in the story body than the word “return", the commonsense knowledge component actually makes the wrong decision, giving a slightly higher probability to the wrong ending (0.5642).
The second story shows why and how sentiment evolution influences the final performance. It is a story about Jackson's beard: Jackson wanted to grow a beard regardless of what his friends said, and he was satisfied with his bushy, thick beard. Clearly the emotions of the two candidate endings are different. Based on the rule of consistent sentiment evolution, an appropriate ending should have a positive emotion rather than a negative emotion. The output of our model shows that in the combination gate, the sentiment evolution component received the largest weight, 0.4880, while the narrative sequence and the commonsense knowledge components have weights of 0.2287 and 0.2833. In the sentiment component, the probability of the correct ending is 0.5360, larger than that of the wrong ending (0.4640). In the narrative sequence component, by contrast, the probability of the correct option is 0.4640, smaller than that of the wrong ending (0.5360). Other models like FTLM BIBREF10 that only rely on the narrative sequence would make the wrong decision in this case. The probabilities of the commonsense knowledge component are 0.5257 versus 0.4725. Through the combination gate, our model mainly relies on the sentiment to make a selection. As a result, it identifies the right ending despite the other components' influence toward a wrong decision.
The third example presents the role commonsense knowledge plays in our model. It tells a story about a person finding a dog. The sentiments of the two candidates are both neutral again. But based on the knowledge graph in ConceptNet, shown in Figure 4 , there exist many relations between the correct ending and the story body. The key-words in the ending are in red, and the key-words in the story body are in blue. Key-words such as “stray" and “collar" are highly associated with “dog" and “find" in the correct ending. The result shows that the gate gives the commonsense knowledge component a weight of 0.5156, which is the largest among the three components. The conditional probability of the correct ending considering commonsense information (0.5540) is larger than that of the wrong ending, as we expected. In this case, the narrative sequence component makes the wrong decision, giving a higher probability to the wrong ending (0.5283). Thus models like FTLM BIBREF10 which only consider the narrative chain would identify the wrong ending. However, as the combination gate learns to trust the commonsense knowledge component more in this example, our model still predicts the correct ending.
We can see that our model is able to learn to rely on different information types based on the content of different stories. We obtain such model effectiveness by using a combination gate to fuse all three types of information, and in doing so, understand how all three are imperative in covering all possible variations in the dataset.
However, it is still challenging for our model to handle stories that involve negation. Figure 5 shows an example. It tells a story about Johnny and Anita, where the only difference between the two candidate endings is the negation word. Even when fusing the three types of information, our model still cannot get the answer right: both event chains are about “asking Anita out", both endings are neutral in sentiment, and the key-words in the two endings are the same as well. In the future, we plan to incorporate natural language inference information into the model to handle such cases.
Conclusion
Narrative completion is a complex task that requires both explicit and implicit knowledge. We proposed a neural network model that utilized a combination gate to fuse three types of information including: narrative sequence, sentiment evolution and structured commonsense knowledge to predict story endings. The model outperformed state-of-the-art methods. We found that introducing external knowledge such as structured commonsense knowledge helps narrative completion. | accuracy |
dd2f21d60cfca3917a9eb8b192c194f4de85e8b2 | dd2f21d60cfca3917a9eb8b192c194f4de85e8b2_0 | Q: What different properties of the posterior distribution are explored in the paper?
Text: Introduction
Despite the recent success of deep generative models such as Variational Autoencoders (VAEs) BIBREF0 and Generative Adversarial Networks (GANs) BIBREF1 in different areas of Machine Learning, they have failed to produce similar generative quality in NLP. In this paper we focus on VAEs and their mathematical underpinning to explain their behaviors in the context of text generation.
The vanilla VAE applied to text BIBREF2 consists of an encoder (inference) and decoder (generative) networks: Given an input $x$, the encoder network parameterizes $q_\phi (z|x)$ and infers latent continuous representations of $x$, while the decoder network parameterizes $p_\theta (x|z)$ and generates $x$ from the continuous code $z$. The two models are jointly trained by maximizing the Evidence Lower Bound (ELBO), $\mathcal {L}(\theta , \phi ; x,z)$:
$$\mathcal {L}(\theta , \phi ; x,z) = \mathbb {E}_{q_\phi ({z}|{x})}\big [\log p_\theta ({x}|{z})\big ] - D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )$$
where the first term is the reconstruction term, and the second term is the Kullback-Leibler (KL) divergence between the posterior distribution of latent variable $z$ and its prior $p({z})$ (i.e., $\mathcal {N}(0,I)$). The KL term can be interpreted as a regularizer which prevents the inference network from copying ${x}$ into ${z}$, and for the case of a Gaussian prior and posterior has a closed-form solution.
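For a diagonal Gaussian posterior $\mathcal {N}(\mu , \sigma ^2)$ and the standard normal prior, this closed form is the familiar $-\frac{1}{2}\sum (1 + \log \sigma ^2 - \mu ^2 - \sigma ^2)$; a minimal PyTorch sketch (variable names are ours, not from the paper):

```python
import torch

def gaussian_kl(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), summed over the latent dimensions."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
```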
With powerful autoregressive decoders, such as LSTMs, the internal decoder's cells are likely to suffice for representing the sentence, leading to a sub-optimal solution where the decoder ignores the inferred latent code ${z}$. This allows the encoder to become independent of $x$, an issue known as posterior collapse ($q_\phi ({z}|{x})\approx p({z})$) where the inference network produces uninformative latent variables. Several solutions have been proposed to address the posterior collapse issue: (i) Modifying the architecture of the model by weakening decoders BIBREF2, BIBREF3, BIBREF4, BIBREF5, or introducing additional connections between the encoder and decoder to enforce the dependence between $x$ and $z$ BIBREF6, BIBREF7, BIBREF8; (ii) Using more flexible or multimodal priors BIBREF9, BIBREF10; (iii) Alternating the training by focusing on the inference network in the earlier stages BIBREF11, or augmenting amortized optimization of VAEs with instance-based optimization of stochastic variational inference BIBREF12, BIBREF13.
All of the aforementioned approaches impose one or more of the following limitations: restraining the choice of decoder, modifying the training algorithm, or requiring a substantial alteration of the objective function. As exceptions to these, $\delta $-VAE BIBREF14 and $\beta $-VAE BIBREF15 aim to avoid the posterior collapse by explicitly controlling the regularizer term in eqn. DISPLAY_FORM2. While $\delta $-VAE aims to impose a lower bound on the divergence term, $\beta $-VAE BIBREF15 controls the impact of regularization via an additional hyperparameter (i.e., $\beta D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )$). A special case of $\beta $-VAE is annealing BIBREF2, where $\beta $ increases from 0 to 1 during training.
In this study, we propose to use an extension of $\beta $-VAE BIBREF16 which permits us to explicitly control the magnitude of the KL term while avoiding the posterior collapse issue even in the existence of a powerful decoder. We use this framework to examine different properties of the estimated posterior and the generative behaviour of VAEs and discuss them in the context of text generation via various qualitative and quantitative experiments.
Kullback-Leibler Divergence in VAE
We take the encoder-decoder of VAEs as the sender-receiver in a communication network. Given an input message $x$, a sender generates a compressed encoding of $x$ denoted by $z$, while the receiver aims to fully decode $z$ back into $x$. The quality of this communication can be explained in terms of rate (R), which measures the compression level of $z$ as compared to the original message $x$, and distortion (D), which quantifies the overall performance of the communication in encoding a message at the sender and successfully decoding it at the receiver. Additionally, the capacity of the encoder channel can be measured in terms of the amount of mutual information between $x$ and $z$, denoted by $\text{I}({x};{z})$ BIBREF17.
Kullback-Leibler Divergence in VAE ::: Reconstruction vs. KL
The reconstruction loss can naturally measure distortion ($D := - \big \langle \log p_\theta ({x}|{z}) \big \rangle $), while the KL term quantifies the amount of compression (rate; $R := D_{KL}[q_\phi ({z}|{x})|| p({z})]$) by measuring the divergence between a channel that transmits zero bit of information about $x$, denoted by $p(z)$, and the encoder channel of VAEs, $q_\phi (z|x)$.
BIBREF18 introduced the $H-D \le \text{I}({x};{z}) \le R$ bounds, where $H$ is the empirical data entropy (a constant). These bounds on mutual information allow us to analyze the trade-off between the reconstruction and KL terms in eqn. (DISPLAY_FORM2). For instance, since $\text{I}({x};{z})$ is non-negative (using Jensen's inequality), the posterior collapse can be explained as the situation where $\text{I}({x};{z})=0$, where the encoder transmits no information about $x$, causing $R=0, D=H$. Increasing $\text{I}({x};{z})$ can be encouraged by increasing both bounds: increasing the upper bound (KL term) can be seen as a means to control the maximum capacity of the encoder channel, while reducing the distortion (reconstruction loss) will tighten the bound by pushing the lower bound to its limits ($H-D\rightarrow H$). A similar effect on the lower bound can be encouraged by using stronger decoders, which could potentially decrease the reconstruction loss. Hence, having a framework that permits the use of strong decoders while avoiding the posterior collapse is desirable. Similarly, the channel capacity can be decreased.
Kullback-Leibler Divergence in VAE ::: Explicit KL Control via @!START@$\beta $@!END@-VAE
Given the above interpretation, we now turn to a slightly different formulation of the ELBO based on $\beta $-VAE BIBREF15. This allows control of the trade-off between the reconstruction and KL terms, as well as setting an explicit KL value. While $\beta $-VAE offers to regularize the ELBO via an additional coefficient $\beta \in {\rm I\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term,
$$\mathcal {L}(\theta , \phi ; x,z,C) = \mathbb {E}_{q_\phi ({z}|{x})}\big [\log p_\theta ({x}|{z})\big ] - \beta \big |D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big ) - C\big |$$
where $C\!\! \in \!\! {\rm I\!R}^+$ and $| . |$ denotes the absolute value. While we could apply constrained optimization to impose the explicit constraint $\text{KL}\!\!=\!\!C$, we found that the above objective function satisfies the constraint in practice (see the experiments). Alternatively, it has been shown BIBREF21 that a similar effect can be reached by replacing the second term in eqn. DISPLAY_FORM6 with $\max \big (C,D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )\big )$ at the risk of breaking the ELBO when $\text{KL}\!\!<\!\!C$ BIBREF22.
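In implementation terms, the objective above only changes the KL part of the standard VAE loss; the following PyTorch-style sketch assumes a per-example reconstruction NLL and the closed-form Gaussian KL sketched earlier, with all names being ours rather than the authors'.

```python
def beta_c_vae_loss(recon_nll, kl, C, beta=1.0):
    """Negative of the objective in eqn. DISPLAY_FORM6: reconstruction NLL plus beta * |KL - C|.
    Minimising this drives the KL term towards C rather than towards 0."""
    return (recon_nll + beta * (kl - C).abs()).mean()
```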
Experiments
We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. For all experiments, we use the objective function of eqn. DISPLAY_FORM6 with $\beta =1$. We do not use larger $\beta $s because the constraint $\text{KL}=C$ is always satisfied.
Experiments ::: Corpora
We use 5 different corpora covering different domains and size through this section: Yelp and Yahoo BIBREF4 both have ($100k$,$10k$,$10k$) sentences in (train, dev, test) sets and $20k$ words in vocabulary, Children's Book Test (CBT; BIBREF23) has ($192k$,$10k$,$12k$) sentences and $12k$ vocab, Wikipedia (WIKI; BIBREF24) has ($2m$,$270k$,$270k$) sentences and $20k$ vocab, and WebText BIBREF25 has ($1m$,$23k$,$24k$) sentences and $22k$ vocab.
Experiments ::: Models
We examine three VAE architectures, covering a range of decoding strengths to examine if the objective function in eqn. DISPLAY_FORM6 is immune to posterior collapse regardless of the choice of encoder-decoder architectures: $\beta _C$-VAELSTM with (LSTM encoder, LSTM decoder), $\beta _C$-VAEGRU with (GRU encoder, GRU decoder) BIBREF26, and $\beta _C$-VAECNN with (LSTM encoder, CNN decoder) BIBREF27. The dimension of word embeddings is 256 and the dimension of the latent variable is 64. The encoder and the decoder, for both VAELSTM and VAEGRU, have hidden size of 512 dimensions. VAECNN has exactly the same encoder as VAELSTM, while the decoder follows similar architecture to GLU with a bottleneck structure (with two blocks) BIBREF27 and has 512 channels externally and 128 internally for the convolutions with the filter size of 20. All models were trained for 10 epochs and optimised the objective function (eqn. DISPLAY_FORM6) with Adam BIBREF28 with following learning rates: $10^{-5}\times 85$ for VAEGRU and VAELSTM, and $10^{-4}$ for VAECNN. To couple the encoder with the decoder we concatenate the latent variable to word embeddings at each time step without initialisation of hidden state.
Experiments ::: Rate and Distortion
To analyse the dependence between the values of explicit rate ($C$) and distortion, we trained our models with different values of $C$, ranging from 10 to 100. Figure FIGREF8 reports the results for $\beta _C$-VAEGRU, $\beta _C$-VAELSTM, and $\beta _C$-VAECNN models on Yahoo and Yelp corpora. In all our experiments we found that $C\!-\!1\!\le KL\!\le \! C\!+\!1$, demonstrating that the objective function effectively imposed the desired constraint on KL term. Hence, setting any $C>0$ can in practice avoid the collapse issue.
The general trend is that by increasing the value of $C$ one can get a better reconstruction (lower distortion), while the amount of gain varies depending on the VAE's architecture and the corpus. Additionally, we measured rate and distortion on the CBT, WIKI, and WebText corpora using $\beta _C$-VAELSTM and observed the same trend with the increase of $C$, see Table TABREF12. This observation is consistent with the bound on $\text{I}({x};{z})$ discussed earlier: with an increase of KL we increase the upper bound on $\text{I}({x};{z})$, which in turn allows smaller values of the reconstruction loss. Additionally, as reported in Table TABREF12, encouraging higher rates (via larger $C$) encourages more active units (AU; BIBREF29) in the latent code $z$.
As an additional verification, we also group the test sentences into buckets based on their length and report BLEU-2/4 and ROUGE-2/4 metrics to measure the quality of reconstruction step in Table TABREF12. As expected, we observe that increasing rate has a consistently positive impact on improving BLEU and ROUGE scores.
Experiments ::: Aggregated Posterior
To understand how the approximated posteriors are being affected by the magnitude of the KL, we adopted an approach from BIBREF6 and looked at the divergence between the aggregated posterior, $q_\phi (z)=\sum _{x\sim q(x)} q_\phi (z|x)$, and prior $p(z$). Since during generation we generate samples from the prior, ideally we would like the aggregated posterior to be as close as possible to the prior.
We obtained unbiased samples of ${z}$ by first sampling an ${x}$ from the data and then ${z} \sim q_\phi ({z}|{x})$, and measured the log determinant of the covariance of the samples ($\log \det (\mathrm {Cov}[q_\phi ({z})])$). As reported in Figure FIGREF8, we observed that $\log \det (\mathrm {Cov}[q_\phi ({z})])$ degrades as $C$ grows, indicating sharper approximate posteriors. We then consider the difference between $p(z)$ and $q(z)$ in their means and variances, by computing the KL divergence from the moment-matching Gaussian fit of $q(z)$ to $p(z)$. This returns smaller values for $\beta _{C=5}$-VAEGRU (Yelp: 0, Yahoo: 0), and larger values for $\beta _{C=100}$-VAEGRU (Yelp: 8, Yahoo: 5), which illustrates that the overlap between $q_\phi ({z})$ and $p(z)$ shrinks further as $C$ grows.
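The $\log \det (\mathrm {Cov}[q_\phi ({z})])$ statistic can be computed directly from a matrix of sampled codes; a small NumPy sketch follows (the random matrix below is only a placeholder for real samples of $z$).

```python
import numpy as np

z = np.random.randn(10000, 64)   # placeholder for samples z ~ q_phi(z|x) with x drawn from the data
sign, logdet = np.linalg.slogdet(np.cov(z, rowvar=False))
print("log det Cov[q(z)] =", logdet)   # becomes more negative as the approximate posteriors sharpen
```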
The above observation is better pronounced in Table TABREF12, where we also report the mean ($||\mu ||^2_2$) of unbiased samples of $z$, highlighting the divergence from the mean of the prior distribution as rate increases. Therefore, for the case of lower $C$, the latent variables observed during training are closer to the generated sample from the prior which makes the decoder more suitable for generation purpose. We will examine this hypothesis in the following section.
Experiments ::: Text Generation
To empirically examine how channel capacity translates into generative capacity of the model, we experimented with the $\beta _C$-VAELSTM models from Table TABREF12. To generate a novel sentence, after a model was trained, a latent variable $z$ is sampled from the prior distribution and then transformed into a sequence of words by the decoder $p(x|z)$.
During decoding for generation we try three decoding schemes: (i) Greedy: which selects the most probable word at each step, (ii) Top-k BIBREF30: which at each step samples from the K most probable words, and (iii) Nucleus Sampling (NS) BIBREF31: which at each step samples from a flexible subset of most probable words chosen based on their cumulative mass (set by a threshold $p$, where $p = 1$ means sampling from the full distribution). While similar to Top-k, the benefit of NS scheme is that the vocabulary size at each time step of decoding varies, a property that encourages diversity and avoids degenerate text patterns of greedy or beam search decoding BIBREF31. We experiment with NS $(p=\lbrace 0.5, 0.9\rbrace )$ and Top-k $(k=\lbrace 5, 15\rbrace )$.
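The three schemes differ only in how the next-token distribution is filtered before sampling (greedy corresponds to taking the argmax); an illustrative PyTorch implementation of the Top-k and nucleus filters, not taken from the paper, is sketched below.

```python
import torch

def filter_logits(logits, top_k=0, top_p=1.0):
    """logits: 1-D tensor over the vocabulary for a single decoding step.
    Keeps the top-k tokens and/or the smallest set whose cumulative
    probability exceeds top_p; everything else is set to -inf."""
    logits = logits.clone()
    if top_k > 0:
        kth = torch.topk(logits, top_k).values[-1]
        logits[logits < kth] = float("-inf")
    if top_p < 1.0:
        sorted_logits, idx = torch.sort(logits, descending=True)
        cum = torch.cumsum(torch.softmax(sorted_logits, dim=-1), dim=-1)
        remove = cum > top_p
        remove[1:] = remove[:-1].clone()   # shift so the first token crossing the threshold is kept
        remove[0] = False
        logits[idx[remove]] = float("-inf")
    return logits

# e.g. next_id = torch.multinomial(torch.softmax(filter_logits(step_logits, top_k=15), -1), 1)
```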
Experiments ::: Text Generation ::: Qualitative Analysis
We follow the settings of the homotopy experiment BIBREF2, where first a set of latent variables was obtained by performing a linear interpolation between $z_1 \sim p(z)$ and $z_2 \sim p(z)$. Then each $z$ in the set was converted into a sequence of words by the decoder $p(x|z)$. Besides the initial motivation of BIBREF2 to examine what neighbouring latent codes look like, our additional incentive is to analyse how sensitive the decoder is to small variations in the latent variable when trained with different channel capacities, $C=\lbrace 3,15,100\rbrace $.
Table TABREF17 shows the generated sentences via different decoding schemes for each channel capacity. For space reason, we only report the generated sentences for greedy, Top-$k=15$, and NS $p=0.9$. To make the generated sequences comparable across different decoding schemes or C values, we use the same samples of $z$ for decoding.
Experiments ::: Text Generation ::: Qualitative Analysis ::: Sensitivity of Decoder
To examine the sensitivity of the decoder to variations of the latent variable, we consider the sentences generated with the greedy decoding scheme (the first column in Table TABREF17). The other two schemes are not suitable for this analysis as they include a sampling procedure. This means that if we decode the same latent variable twice we will get two different sentences. We observed that with lower channel capacity ($C=3$) the decoder tends to generate identical sentences for the interpolated latent variables (we highlight these sentences in gray), exhibiting the decoder's lower sensitivity to variations in $z$. However, with the increase of channel capacity ($C=15,100$) the decoder becomes more sensitive. This observation is further supported by the increasing number of active units in Table TABREF12: given that AU increases with the increase of $C$, one would expect the activation pattern of a latent variable to become more complex as it comprises more information. Therefore a small change in the pattern would have a greater effect on the decoder.
Experiments ::: Text Generation ::: Qualitative Analysis ::: Coherence of Sequences
We observe that the models trained with large values of $C$ compromise the coherence of sequences during sampling. This is especially evident when we compare $C=3$ with $C=100$. Analysis of the Top-15 and NS (p=0.9) generated samples reveals that the lack of coherence is not due to the greedy decoding scheme per se, and can be attributed to the model in general. To understand this behavior further, we need two additional results from Table TABREF12: LogDetCov and $||\mu ||^2_2$. One can notice that as $C$ increases, LogDetCov decreases and $||\mu ||^2_2$ increases. This indicates that the aggregated posterior moves further apart from the prior, hence the latent codes seen during training diverge more from the codes sampled from the prior during generation. We speculate this contributes to the lack of coherence in the generated samples, as the decoder is not equipped to decode prior samples properly at higher $C$s.
Experiments ::: Text Generation ::: Quantitative Analysis
Quantitative analysis of generated text without gold reference sequences (e.g. in Machine Translation or Summarization) has been a long-standing challenge. Recently, there have been efforts in this direction, with proposals such as self-BLEU BIBREF32, forward cross entropy (FCE) BIBREF33 and Fréchet InferSent Distance BIBREF33. We opted for FCE as a complementary metric to our qualitative analysis. To calculate FCE, first a collection of synthetic sentences is generated by sampling $z\sim p(z)$ and decoding the samples into sentences. The synthetic sequences are then used to train a language model (an LSTM with the parametrisation of our decoder). The FCE score is estimated by reporting the negative log likelihood (NLL) of the trained LM on the set of human-generated sentences.
We generated synthetic corpora using the trained models from Table TABREF12 with different $C$ values and decoding schemes, using exactly the same $z$ samples for all corpora. Since the corpora generated with different $C$ values would have different coverage of the words in the test set (i.e., Out-of-Vocabulary ratios), we used a fixed vocabulary to minimize the effect of different vocabularies in our analysis. Our dictionary contains words that are common to all three corpora, while the remaining words that do not exist in this dictionary are replaced with the 〈unk〉 symbol. Similarly, we used this fixed dictionary to preprocess the test sets. Also, to reduce bias towards a particular set of sampled $z$'s, we measure the FCE score three times, each time sampling a new training corpus from a $\beta _C$-VAELSTM decoder and training an LM from scratch. In Table TABREF20 we report the average FCE (NLL) for the generated corpora.
In the qualitative analysis we observed that the text generated by the $\beta _C$-VAELSTM trained with large values of $C=100$ exhibits lower quality (i.e., in terms of coherence). This observation is supported by the FCE score of NS(p=0.9) decoding scheme (TABREF20), since the performance drops when the LM is trained on the corpus generated with $C=100$. The generated corpora with $C=3$ and $C=15$ achieve similar FCE score. However, these patterns are reversed for Greedy decoding scheme, where the general tendency of FCE scores suggests that for larger values of $C$ the $\beta _C$-VAELSTM seems to generate text which better approximates the natural sentences in the test set. To understand this further, we report additional statistics in Table TABREF20: percentage of 〈unk〉 symbols, self-BLEU and average sentence length in the corpus.
The average sentence length in the generated corpora is very similar for both decoding schemes, removing the possibility that the pathological pattern in FCE scores was caused by differences in sentence length. However, we observe that for Greedy decoding more than $30\%$ of the test set consists of 〈unk〉. Intuitively, seeing more evidence of this symbol during training would improve our estimate for the 〈unk〉. As reported in the table, the $\%$unk increases on almost all corpora as $C$ grows, which then translates into a better FCE score at test time. Therefore, we believe that FCE at high $\%$unk is not a reliable quantitative metric to assess the quality of the generated synthetic corpora. Furthermore, for Greedy decoding, self-BLEU decreases when $C$ increases. This suggests that the sentences generated for higher values of $C$ are more diverse. Hence, the LM trained on more diverse corpora can generalise better, which in turn affects the FCE.
In contrast, the effect the 〈unk〉 symbol has on the corpora generated with the NS(p=0.9) decoding scheme is minimal for two reasons: First, the vocabulary size of the generated corpora, for all values of $C$, is close to that of the original corpus (the corpus we used to train the $\beta _C$-VAELSTM). Second, the vocabularies of the corpora generated with the three values of $C$ are very close to each other. As a result, minimal replacement of words with the 〈unk〉 symbol is required, making the experiment more reflective of the quality of the generated text. Similarly, self-BLEU for NS(p=0.9) is the same for all values of $C$. This suggests that the diversity of sentences has minimal, if any, effect on the FCE.
Experiments ::: Syntactic Test
In this section, we explore whether any form of syntactic information is captured by the encoder and represented in the latent codes despite the lack of any explicit syntactic signal during the training of the $\beta _C$-VAELSTM. To train the models we used the same WIKI data set as in BIBREF24, but we filtered out all the sentences that are longer than 50 space-separated tokens. We use the data set of BIBREF24, which consists of pairs of grammatical and ungrammatical sentences used to test various syntactic phenomena. For example, a pair in the subject-verb agreement category would be: (The author laughs, The author laugh). We encode both the grammatical and ungrammatical sentences into the latent codes $z^+$ and $z^-$, respectively. Then we condition the decoder on $z^+$ and try to determine whether the decoder assigns higher probability to the grammatical sentence (denoted by $x^+$): $p(x^-|z^+) < p(x^+|z^+)$ (denoted by p1 in Table TABREF28). We repeat the same experiment but this time try to determine whether the decoder, when conditioned on the ungrammatical code ($z^-$), still prefers to assign higher probability to the grammatical sentence: $p(x^-|z^-) < p(x^+|z^-)$ (denoted by p2 in Table TABREF28). Table TABREF28 shows the p1 and p2 for the $\beta _C$-VAELSTM model trained with $C=\lbrace 3,100\rbrace $. Both p1 and p2 are accuracy-like scores and correspond to how often a grammatical sentence was assigned the higher probability.
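Computationally, p1 and p2 reduce to comparing decoder log-likelihoods of the two sentences in a pair under a fixed latent code; the sketch below assumes hypothetical helpers encode(x) (returning a latent code for sentence x) and log_p(x, z) (the decoder's log-probability of x given z), which are not part of the paper.

```python
def syntactic_scores(pairs, encode, log_p):
    """pairs: list of (grammatical, ungrammatical) sentence pairs.
    Returns (p1, p2): how often the grammatical sentence wins under z+ and under z-."""
    p1_hits = p2_hits = 0
    for x_good, x_bad in pairs:
        z_good, z_bad = encode(x_good), encode(x_bad)
        p1_hits += log_p(x_good, z_good) > log_p(x_bad, z_good)   # decoder conditioned on z+
        p2_hits += log_p(x_good, z_bad) > log_p(x_bad, z_bad)     # decoder conditioned on z-
    n = len(pairs)
    return p1_hits / n, p2_hits / n
```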
As reported for C=3, p1 and p2 match in almost all cases. This is to some degree expected since lower channel capacity encourages a more dominating decoder which in our case was trained on grammatical sentences from the WIKI. On the other hand, this illustrates that despite avoiding the KL-collapse issue, the dependence of the decoder on the latent code is so negligible that the decoder hardly distinguishes the grammatical and ungrammatical inputs. This changes for $C=100$, as in almost all the cases the decoder becomes strongly dependent on the latent code and can differentiate between what it has seen as input and the closely similar sentence it hasn't received as the input: The decoder assigns larger probability to the ungrammatical sentence when conditioned on the $z^-$ and, similarly, larger probability to the grammatical sentence when conditioned on the $z^+$.
However, the above observations neither confirm nor reject existence of grammar signal in the latent codes. We run a second set of experiments where we aim to discard sentence specific information from the latent codes by averaging the codes inside each syntactic category. The averaged codes are denoted by $\bar{z}^+$ and $\bar{z}^-$, and the corresponding accuracies are reported by p̄1 and p̄2 in Table TABREF28. Our hypothesis is that the only invariant factor during averaging the codes inside a category is the grammatical property of its corresponding sentences.
As expected, due to the weak dependence of decoder on latent code, the performance of the model under $C=3$ is almost identical (not included for space limits) when comparing p1 vs. p̄1, and p2 vs. p̄2. However, for $C=100$ the performance of the model deteriorates. While we leave further exploration of this behavior to our future work, we speculate this could be an indication of two things: the increase of complexity in the latent code which encourages a higher variance around the mean, or the absence of syntactic signal in the latent codes.
Discussion and Conclusion
In this paper we analysed the interdependence of the KL term in Evidence Lower Bound (ELBO) and the properties of the approximated posterior for text generation. To perform the analysis we used an information theoretic framework based on a variant of $\beta $-VAE objective, which permits explicit control of the KL term, and treats KL as a mechanism to control the amount of information transmitted between the encoder and decoder.
The immediate impact of the explicit constraint is avoiding the collapse issue ($D_{KL}=0$) by setting a non-zero positive constraint ($C\ge 0$) on the KL term ($|D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )-C|$). We experimented with a range of constraints ($C$) on the KL term and various powerful and weak decoder architectures (LSTM, GRU, and CNN), and empirically confirmed that in all cases the constraint was satisfied.
We showed that the higher value of KL encourages not only divergence from the prior distribution, but also a sharper and more concentrated approximated posteriors. It encourages the decoder to be more sensitive to the variations on the latent code, and makes the model with higher KL less suitable for generation as the latent variables observed during training are farther away from the prior samples used during generation. To analyse its impact on generation we conducted a set of qualitative and quantitative experiments.
In the qualitative analysis we showed that small and large values of KL term impose different properties on the generated text: the decoder trained under smaller KL term tends to generate repetitive but mainly plausible sentences, while for larger KL the generated sentences were diverse but incoherent. This behaviour was observed across three different decoding schemes and complemented by a quantitative analysis where we measured the performance of an LSTM LM trained on different VAE-generated synthetic corpora via different KL magnitudes, and tested on human generated sentences.
Finally, in an attempt to understand the ability of the latent code in VAEs to represent some form of syntactic information, we tested the ability of the model to distinguish between grammatical and ungrammatical sentences. We verified that at lower (and still non-zero) KL the decoder tends to pay less attention to the latent code, but our findings regarding the presence of a syntactic signal in the latent code were inconclusive. We leave this as a possible avenue to explore in future work. We also plan to develop practical algorithms for the automatic selection of the value of $C$, and to verify our findings under multi-modal priors and complex posteriors.
Acknowledgments
The authors would like to thank the anonymous reviewers for their helpful suggestions. This research was supported by an EPSRC Experienced Researcher Fellowship (N. Collier: EP/M005089/1), an MRC grant (M.T. Pilehvar: MR/M025160/1) and E. Shareghi is supported by the ERC Consolidator Grant LEXICAL (648909). We gratefully acknowledge the donation of a GPU from the NVIDIA. | interdependence between rate and distortion, impact of KL on the sharpness of the approximated posteriors, demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities, some experiments to find if any form of syntactic information is encoded in the latent space |
ccf7415b515fe5c59fa92d4a8af5d2437c591615 | ccf7415b515fe5c59fa92d4a8af5d2437c591615_0 | Q: Why does proposed term help to avoid posterior collapse?
Text: Introduction
Despite the recent success of deep generative models such as Variational Autoencoders (VAEs) BIBREF0 and Generative Adversarial Networks (GANs) BIBREF1 in different areas of Machine Learning, they have failed to produce similar generative quality in NLP. In this paper we focus on VAEs and their mathematical underpinning to explain their behaviors in the context of text generation.
The vanilla VAE applied to text BIBREF2 consists of an encoder (inference) and decoder (generative) networks: Given an input $x$, the encoder network parameterizes $q_\phi (z|x)$ and infers about latent continuous representations of $x$, while the decoder network parameterizes $p_\theta (x|z)$ and generates $x$ from the continuous code $z$. The two models are jointly trained by maximizing the Evidence Lower Bound (ELBO), $\mathcal {L}(\theta , \phi ; x,z)$:
where the first term is the reconstruction term, and the second term is the Kullback-Leibler (KL) divergence between the posterior distribution of latent variable $z$ and its prior $p({z})$ (i.e., $\mathcal {N}(0,I)$). The KL term can be interpreted as a regularizer which prevents the inference network from copying ${x}$ into ${z}$, and for the case of a Gaussian prior and posterior has a closed-form solution.
With powerful autoregressive decoders, such as LSTMs, the internal decoder's cells are likely to suffice for representing the sentence, leading to a sub-optimal solution where the decoder ignores the inferred latent code ${z}$. This allows the encoder to become independent of $x$, an issue known as posterior collapse ($q_\phi ({z}|{x})\approx p({z})$) where the inference network produces uninformative latent variables. Several solutions have been proposed to address the posterior collapse issue: (i) Modifying the architecture of the model by weakening decoders BIBREF2, BIBREF3, BIBREF4, BIBREF5, or introducing additional connections between the encoder and decoder to enforce the dependence between $x$ and $z$ BIBREF6, BIBREF7, BIBREF8; (ii) Using more flexible or multimodal priors BIBREF9, BIBREF10; (iii) Alternating the training by focusing on the inference network in the earlier stages BIBREF11, or augmenting amortized optimization of VAEs with instance-based optimization of stochastic variational inference BIBREF12, BIBREF13.
All of the aforementioned approaches impose one or more of the following limitations: restraining the choice of decoder, modifying the training algorithm, or requiring a substantial alternation of the objective function. As exceptions to these, $\delta $-VAE BIBREF14 and $\beta $-VAE BIBREF15 aim to avoid the posterior collapse by explicitly controlling the regularizer term in eqn. DISPLAY_FORM2. While $\delta $-VAE aims to impose a lower bound on the divergence term, $\beta $-VAE (betavae) controls the impact of regularization via an additional hyperparameter (i.e., $\beta D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )$). A special case of $\beta $-VAE is annealing BIBREF2, where $\beta $ increases from 0 to 1 during training.
In this study, we propose to use an extension of $\beta $-VAE BIBREF16 which permits us to explicitly control the magnitude of the KL term while avoiding the posterior collapse issue even in the existence of a powerful decoder. We use this framework to examine different properties of the estimated posterior and the generative behaviour of VAEs and discuss them in the context of text generation via various qualitative and quantitative experiments.
Kullback-Leibler Divergence in VAE
We take the encoder-decoder of VAEs as the sender-receiver in a communication network. Given an input message $x$, a sender generates a compressed encoding of $x$ denoted by $z$, while the receiver aims to fully decode $z$ back into $x$. The quality of this communication can be explained in terms of rate (R) which measures the compression level of $z$ as compared to the original message $x$, and distortion (D) which quantities the overall performance of the communication in encoding a message at sender and successfully decoding it at the receiver. Additionally, the capacity of the encoder channel can be measured in terms of the amount of mutual information between $x$ and $z$, denoted by $\text{I}({x};{z})$ BIBREF17.
Kullback-Leibler Divergence in VAE ::: Reconstruction vs. KL
The reconstruction loss can naturally measure distortion ($D := - \big \langle \log p_\theta ({x}|{z}) \big \rangle $), while the KL term quantifies the amount of compression (rate; $R := D_{KL}[q_\phi ({z}|{x})|| p({z})]$) by measuring the divergence between a channel that transmits zero bit of information about $x$, denoted by $p(z)$, and the encoder channel of VAEs, $q_\phi (z|x)$.
BIBREF18 introduced the $H-D \le \text{I}({x};{z}) \le R$ bounds, where $H$ is the empirical data entropy (a constant). These bounds on mutual information allow us to analyze the trade-off between the reconstruction and KL terms in eqn. (DISPLAY_FORM2). For instance, since $\text{I}({x};{z})$ is non-negative (using Jensen's inequality), the posterior collapse can be explained as the situation where $\text{I}({x};{z})=0$, where encoder transmits no information about $x$, causing $R=0, D=H$. Increasing $\text{I}({x};{z})$ can be encouraged by increasing both bounds: increasing the upper-bound (KL term) can be seen as the mean to control the maximum capacity of the encoder channel, while reducing the distortion (reconstruction loss) will tighten the bound by pushing the lower bound to its limits ($H-D\rightarrow H$). A similar effect on the lower-bound can be encouraged by using stronger decoders which could potentially decrease the reconstruction loss. Hence, having a framework that permits the use of strong decoders while avoiding the posterior collapse is desirable. Similarly, channel capacity can be decreased.
Kullback-Leibler Divergence in VAE ::: Explicit KL Control via @!START@$\beta $@!END@-VAE
Given the above interpretation, we now turn to a slightly different formulation of ELBO based on $\beta $-VAE BIBREF15. This allows control of the trade-off between the reconstruction and KL terms, as well as to set explicit KL value. While $\beta $-VAE offers regularizing the ELBO via an additional coefficient $\beta \in {\rm I\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term,
where $C\!\! \in \!\! {\rm I\!R}^+$ and $| . |$ denotes the absolute value. While we could apply constraint optimization to impose the explicit constraint of $\text{KL}\!\!=\!\!C$, we found that the above objective function satisfies the constraint (experiment). Alternatively, it has been shown BIBREF21 the similar effect could be reached by replacing the second term in eqn. DISPLAY_FORM6 with $\max \big (C,D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )\big )$ at the risk of breaking the ELBO when $\text{KL}\!\!<\!\!C$ BIBREF22.
Experiments
We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. For all experiments, we use the objective function of eqn. DISPLAY_FORM6 with $\beta =1$. We do not use larger $\beta $s because the constraint $\text{KL}=C$ is always satisfied.
Experiments ::: Corpora
We use 5 different corpora covering different domains and size through this section: Yelp and Yahoo BIBREF4 both have ($100k$,$10k$,$10k$) sentences in (train, dev, test) sets and $20k$ words in vocabulary, Children's Book Test (CBT; BIBREF23) has ($192k$,$10k$,$12k$) sentences and $12k$ vocab, Wikipedia (WIKI; BIBREF24) has ($2m$,$270k$,$270k$) sentences and $20k$ vocab, and WebText BIBREF25 has ($1m$,$23k$,$24k$) sentences and $22k$ vocab.
Experiments ::: Models
We examine three VAE architectures, covering a range of decoding strengths to examine if the objective function in eqn. DISPLAY_FORM6 is immune to posterior collapse regardless of the choice of encoder-decoder architectures: $\beta _C$-VAELSTM with (LSTM encoder, LSTM decoder), $\beta _C$-VAEGRU with (GRU encoder, GRU decoder) BIBREF26, and $\beta _C$-VAECNN with (LSTM encoder, CNN decoder) BIBREF27. The dimension of word embeddings is 256 and the dimension of the latent variable is 64. The encoder and the decoder, for both VAELSTM and VAEGRU, have hidden size of 512 dimensions. VAECNN has exactly the same encoder as VAELSTM, while the decoder follows similar architecture to GLU with a bottleneck structure (with two blocks) BIBREF27 and has 512 channels externally and 128 internally for the convolutions with the filter size of 20. All models were trained for 10 epochs and optimised the objective function (eqn. DISPLAY_FORM6) with Adam BIBREF28 with following learning rates: $10^{-5}\times 85$ for VAEGRU and VAELSTM, and $10^{-4}$ for VAECNN. To couple the encoder with the decoder we concatenate the latent variable to word embeddings at each time step without initialisation of hidden state.
Experiments ::: Rate and Distortion
To analyse the dependence between the values of explicit rate ($C$) and distortion, we trained our models with different values of $C$, ranging from 10 to 100. Figure FIGREF8 reports the results for $\beta _C$-VAEGRU, $\beta _C$-VAELSTM, and $\beta _C$-VAECNN models on Yahoo and Yelp corpora. In all our experiments we found that $C\!-\!1\!\le KL\!\le \! C\!+\!1$, demonstrating that the objective function effectively imposed the desired constraint on KL term. Hence, setting any $C>0$ can in practice avoid the collapse issue.
The general trend is that by increasing the value of $C$ one can get a better reconstruction (lower distortion) while the amount of gain varies depending on the VAE's architecture and corpus. Additionally, we measured rate and distortion on CBT, WIKI, and WebText corpora using $\beta _C$-VAELSTM and observed the same trend with the increase of $C$, see Table TABREF12. This observation is consistent with the bound on $\text{I}({x};{z})$ we discussed earlier (expl) such that with an increase of KL we increase an upper bound on $\text{I}({x};{z})$ which in turn allows to have smaller values of reconstruction loss. Additionally, as reported in Table TABREF12, encouraging higher rates (via larger $C$) encourages more active units (AU; BIBREF29) in the latent code $z$.
As an additional verification, we also group the test sentences into buckets based on their length and report BLEU-2/4 and ROUGE-2/4 metrics to measure the quality of reconstruction step in Table TABREF12. As expected, we observe that increasing rate has a consistently positive impact on improving BLEU and ROUGE scores.
Experiments ::: Aggregated Posterior
To understand how the approximated posteriors are being affected by the magnitude of the KL, we adopted an approach from BIBREF6 and looked at the divergence between the aggregated posterior, $q_\phi (z)=\sum _{x\sim q(x)} q_\phi (z|x)$, and prior $p(z$). Since during generation we generate samples from the prior, ideally we would like the aggregated posterior to be as close as possible to the prior.
We obtained unbiased samples of ${z}$ first by sampling an ${x}$ from data and then ${z} \sim q_\phi ({z}|{x})$, and measured the log determinant of covariance of the samples ($\log \det (\mathrm {Cov}[q_\phi ({z})])$). As reported in Figure FIGREF8, we observed that $\log \det (\mathrm {Cov}[q_\phi ({z})])$ degrades as $C$ grows, indicating sharper approximate posteriors. We then consider the difference of $p(z)$ and $q(z)$ in their means and variances, by computing the KL divergence from the moment-matching Gaussian fit of $q(z)$ to $p(z)$: This returns smaller values for $\beta _{C=5}$-VAEGRU (Yelp: 0, Yahoo: 0), and larger values for $\beta _{C=100}$-VAEGRU (Yelp: 8, Yahoo: 5), which illustrates that the overlap between $q_\phi ({z})$ and $p(z)$ shrinks further as $C$ grows.
The above observation is better pronounced in Table TABREF12, where we also report the mean ($||\mu ||^2_2$) of unbiased samples of $z$, highlighting the divergence from the mean of the prior distribution as rate increases. Therefore, for the case of lower $C$, the latent variables observed during training are closer to the generated sample from the prior which makes the decoder more suitable for generation purpose. We will examine this hypothesis in the following section.
Experiments ::: Text Generation
To empirically examine how channel capacity translates into generative capacity of the model, we experimented with the $\beta _C$-VAELSTM models from Table TABREF12. To generate a novel sentence, after a model was trained, a latent variable $z$ is sampled from the prior distribution and then transformed into a sequence of words by the decoder $p(x|z)$.
During decoding for generation we try three decoding schemes: (i) Greedy: which selects the most probable word at each step, (ii) Top-k BIBREF30: which at each step samples from the K most probable words, and (iii) Nucleus Sampling (NS) BIBREF31: which at each step samples from a flexible subset of most probable words chosen based on their cumulative mass (set by a threshold $p$, where $p = 1$ means sampling from the full distribution). While similar to Top-k, the benefit of NS scheme is that the vocabulary size at each time step of decoding varies, a property that encourages diversity and avoids degenerate text patterns of greedy or beam search decoding BIBREF31. We experiment with NS $(p=\lbrace 0.5, 0.9\rbrace )$ and Top-k $(k=\lbrace 5, 15\rbrace )$.
Experiments ::: Text Generation ::: Qualitative Analysis
We follow the settings of homotopy experiment BIBREF2 where first a set of latent variables was obtained by performing a linear interpolation between $z_1 \sim p(z)$ and $z_2 \sim p(z)$. Then each $z$ in the set was converted into a sequence of words by the decoder $p(x|z)$. Besides the initial motivation of BIBREF2 to examine how neighbouring latent codes look like, our additional incentive is to analyse how sensitive the decoder is to small variations in the latent variable when trained with different channel capacities, $C=\lbrace 3,15,100\rbrace $.
Table TABREF17 shows the generated sentences via different decoding schemes for each channel capacity. For space reason, we only report the generated sentences for greedy, Top-$k=15$, and NS $p=0.9$. To make the generated sequences comparable across different decoding schemes or C values, we use the same samples of $z$ for decoding.
Experiments ::: Text Generation ::: Qualitative Analysis ::: Sensitivity of Decoder
To examine the sensitivity of the decoder to variations of the latent variable, we consider the sentences generate with the greedy decoding scheme (the first column in Table TABREF17). The other two schemes are not suitable for this analysis as they include sampling procedure. This means that if we decode the same latent variable twice we will get two different sentences. We observed that with lower channel capacity ($C=3$) the decoder tends to generate identical sentences for the interpolated latent variables (we highlight these sentences in gray), exhibiting decoder's lower sensitivity to $z$'s variations. However, with the increase of channel capacity ($C=15,100$) the decoder becomes more sensitive. This observation is further supported by the increasing pattern of active units in Table TABREF12: Given that AU increases with increase of $C$ one would expect that activation pattern of a latent variable becomes more complex as it comprises more information. Therefore small change in the pattern would have a greater effect on the decoder.
Experiments ::: Text Generation ::: Qualitative Analysis ::: Coherence of Sequences
We observe that the model trained with large values of $C$ compromises sequences' coherence during the sampling. This is especially evident when we compare $C=3$ with $C=100$. Analysis of Top-15 and NS (p=0.9) generated samples reveals that the lack of coherence is not due to the greedy decoding scheme per se, and can be attributed to the model in general. To understand this behavior further, we need two additional results from Table TABREF12: LogDetCov and $||\mu ||^2_2$. One can notice that as $C$ increases LogDetCov decreases and $||\mu ||^2_2$ increases. This indicates that the aggregated posterior becomes further apart from the prior, hence the latent codes seen during the training diverge more from the codes sampled from the prior during generation. We speculate this contributes to the coherence of the generated samples, as the decoder is not equipped to decode prior samples properly at higher $C$s.
Experiments ::: Text Generation ::: Quantitative Analysis
Quantitative analysis of generated text without gold reference sequences (e.g. in Machine Translation or Summarization) has been a long-standing challenge. Recently, there have been efforts towards this direction, with proposal such as self-BLEU BIBREF32, forward cross entropy BIBREF33 and Fréchet InferSent Distance BIBREF33. We opted for FCE as a complementary metric to our qualitative analysis. To calculate FCE, first a collection of synthetic sentences are generated by sampling $z\sim p(z)$ and decoding the samples into sentences. The synthetic sequences are then used to train a language model (an LSTM with the parametrisation of our decoder). The FCE score is estimated by reporting the negative log likelihood (NLL) of the trained LM on the set of human generated sentences.
We generated synthetic corpora using trained models from Table TABREF12 with different C and decoding schemes and using the same exact $z$ samples for all corpora. Since the generated corpora using different C values would have different coverage of words in the test set (i.e., Out-of-Vocabulary ratios), we used a fixed vocabulary to minimize the effect of different vocabularies in our analysis. Our dictionary contains words that are common in all of the three corpora, while the rest of the words that don't exist in this dictionary are replaced with 〈unk〉 symbol. Similarly, we used this fixed dictionary to preprocess the test sets. Also, to reduce bias to a particular set of sampled $z$'s we measure the FCE score three times, each time we sampled a new training corpus from a $\beta _C$-VAELSTM decoder and trained an LM from scratch. In Table TABREF20 we report the average FCE (NLL) for the generated corpora.
In the qualitative analysis we observed that the text generated by the $\beta _C$-VAELSTM trained with large values of $C=100$ exhibits lower quality (i.e., in terms of coherence). This observation is supported by the FCE score of NS(p=0.9) decoding scheme (TABREF20), since the performance drops when the LM is trained on the corpus generated with $C=100$. The generated corpora with $C=3$ and $C=15$ achieve similar FCE score. However, these patterns are reversed for Greedy decoding scheme, where the general tendency of FCE scores suggests that for larger values of $C$ the $\beta _C$-VAELSTM seems to generate text which better approximates the natural sentences in the test set. To understand this further, we report additional statistics in Table TABREF20: percentage of 〈unk〉 symbols, self-BLEU and average sentence length in the corpus.
The average sentence length, in the generated corpora is very similar for both decoding schemes, removing the possibility that the pathological pattern on FCE scores was caused by difference in sentence length. However, we observe that for Greedy decoding more than $30\%$ of the test set consists of 〈unk〉. Intuitively, seeing more evidence of this symbol during training would improve our estimate for the 〈unk〉. As reported in the table, the $\%$unk increases on almost all corpora as $C$ grows, which is then translated into getting a better FCE score at test. Therefore, we believe that FCE at high $\%$unk is not a reliable quantitative metric to assess the quality of the generated syntactic corpora. Furthermore, for Greedy decoding, self-BLEU decreases when $C$ increases. This suggests that generated sentences for higher value of $C$ are more diverse. Hence, the LM trained on more diverse corpora can generalise better, which in turn affects the FCE.
In contrast, the effect the 〈unk〉 symbol has on the corpora generated with the NS(p=0.9) decoding scheme is minimal for two reasons: First, the vocabulary size for the generated corpora, for all values of $C$ is close to the original corpus (the corpus we used to train the $\beta _C$-VAELSTM). Second, the vocabularies of the corpora generated with three values of $C$ is very close to each other. As a result, minimum replacement of the words with the 〈unk〉 symbol is required, making the experiment to be more reflective of the quality of the generated text. Similarly, self-BLEU for the NS(p=0.9) is the same for all values of $C$. This suggests that the diversity of sentences has minimal, if any, effect on the FCE.
Experiments ::: Syntactic Test
In this section, we explore if any form of syntactic information is captured by the encoder and represented in the latent codes despite the lack of any explicit syntactic signal during the training of the $\beta _C$-VAELSTM. To train the models we used the same WIKI data set as in BIBREF24, but we filtered out all the sentences that are longer than 50 space-separated tokens. We use the data set of BIBREF24 which consists of pairs of grammatical and ungrammatical sentences to test various syntactic phenomenon. For example, a pair in subject-verb agreement category would be: (The author laughs, The author laugh). We encode both the grammatical and ungrammatical sentences into the latent codes $z^+$ and $z^-$, respectively. Then we condition the decoder on the $z^+$ and try to determine whether the decoder assigns higher probability to the grammatical sentence (denoted by $x^+$): $p(x^-|z^+) < p(x^+|z^+)$ (denoted by p1 in Table TABREF28). We repeat the same experiment but this time try to determine whether the decoder, when conditioned on the ungrammatical code ($z^-$), still prefers to assign higher probability to the grammatical sentence: $p(x^-|z^-) < p(x^+|z^-)$ (denoted by p2 in Table TABREF28). Table TABREF28 shows the p1 and p2 for the $\beta _C$-VAELSTM model trained with $C=\lbrace 3,100\rbrace $. Both the p1 and p2 are similar to the accuracy and correspond to how many times a grammatical sentence was assigned a higher probability.
As reported for C=3, p1 and p2 match in almost all cases. This is to some degree expected since lower channel capacity encourages a more dominating decoder which in our case was trained on grammatical sentences from the WIKI. On the other hand, this illustrates that despite avoiding the KL-collapse issue, the dependence of the decoder on the latent code is so negligible that the decoder hardly distinguishes the grammatical and ungrammatical inputs. This changes for $C=100$, as in almost all the cases the decoder becomes strongly dependent on the latent code and can differentiate between what it has seen as input and the closely similar sentence it hasn't received as the input: The decoder assigns larger probability to the ungrammatical sentence when conditioned on the $z^-$ and, similarly, larger probability to the grammatical sentence when conditioned on the $z^+$.
However, the above observations neither confirm nor reject existence of grammar signal in the latent codes. We run a second set of experiments where we aim to discard sentence specific information from the latent codes by averaging the codes inside each syntactic category. The averaged codes are denoted by $\bar{z}^+$ and $\bar{z}^-$, and the corresponding accuracies are reported by p̄1 and p̄2 in Table TABREF28. Our hypothesis is that the only invariant factor during averaging the codes inside a category is the grammatical property of its corresponding sentences.
As expected, due to the weak dependence of decoder on latent code, the performance of the model under $C=3$ is almost identical (not included for space limits) when comparing p1 vs. p̄1, and p2 vs. p̄2. However, for $C=100$ the performance of the model deteriorates. While we leave further exploration of this behavior to our future work, we speculate this could be an indication of two things: the increase of complexity in the latent code which encourages a higher variance around the mean, or the absence of syntactic signal in the latent codes.
Discussion and Conclusion
In this paper we analysed the interdependence of the KL term in Evidence Lower Bound (ELBO) and the properties of the approximated posterior for text generation. To perform the analysis we used an information theoretic framework based on a variant of $\beta $-VAE objective, which permits explicit control of the KL term, and treats KL as a mechanism to control the amount of information transmitted between the encoder and decoder.
The immediate impact of the explicit constraint is avoiding the collapse issue ($D_{KL}=0$) by setting a non-zero positive constraint ($C\ge 0$) on the KL term ($|D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )-C|$). We experimented with a range of constraints ($C$) on the KL term and various powerful and weak decoder architectures (LSTM, GRU, and CNN), and empirically confirmed that in all cases the constraint was satisfied.
We showed that the higher value of KL encourages not only divergence from the prior distribution, but also a sharper and more concentrated approximated posteriors. It encourages the decoder to be more sensitive to the variations on the latent code, and makes the model with higher KL less suitable for generation as the latent variables observed during training are farther away from the prior samples used during generation. To analyse its impact on generation we conducted a set of qualitative and quantitative experiments.
In the qualitative analysis we showed that small and large values of KL term impose different properties on the generated text: the decoder trained under smaller KL term tends to generate repetitive but mainly plausible sentences, while for larger KL the generated sentences were diverse but incoherent. This behaviour was observed across three different decoding schemes and complemented by a quantitative analysis where we measured the performance of an LSTM LM trained on different VAE-generated synthetic corpora via different KL magnitudes, and tested on human generated sentences.
Finally, in an attempt to understand the ability of the latent code in VAEs to represent some form of syntactic information, we tested the ability of the model to distinguish between grammatical and ungrammatical sentences. We verified that at lower (and still non-zero) KL the decoder tends to pay less attention to the latent code, but our findings regarding the presence of a syntactic signal in the latent code were inconclusive. We leave it as a possible avenue to explore in our future work. Also, we plan to develop practical algorithms for the automatic selection of the $C$'s value, and verify our findings under multi-modal priors and complex posteriors.
Acknowledgments
The authors would like to thank the anonymous reviewers for their helpful suggestions. This research was supported by an EPSRC Experienced Researcher Fellowship (N. Collier: EP/M005089/1), an MRC grant (M.T. Pilehvar: MR/M025160/1) and E. Shareghi is supported by the ERC Consolidator Grant LEXICAL (648909). We gratefully acknowledge the donation of a GPU from the NVIDIA. | by setting a non-zero positive constraint ($C\ge 0$) on the KL term ($|D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )-C|$) |
fee5aef7ae521ccd1562764a91edefecec34624d | fee5aef7ae521ccd1562764a91edefecec34624d_0 | Q: How does explicit constraint on the KL divergence term that authors propose looks like?
Text: Introduction
Despite the recent success of deep generative models such as Variational Autoencoders (VAEs) BIBREF0 and Generative Adversarial Networks (GANs) BIBREF1 in different areas of Machine Learning, they have failed to produce similar generative quality in NLP. In this paper we focus on VAEs and their mathematical underpinning to explain their behaviors in the context of text generation.
The vanilla VAE applied to text BIBREF2 consists of an encoder (inference) and decoder (generative) networks: Given an input $x$, the encoder network parameterizes $q_\phi (z|x)$ and infers about latent continuous representations of $x$, while the decoder network parameterizes $p_\theta (x|z)$ and generates $x$ from the continuous code $z$. The two models are jointly trained by maximizing the Evidence Lower Bound (ELBO), $\mathcal {L}(\theta , \phi ; x,z)$:
where the first term is the reconstruction term, and the second term is the Kullback-Leibler (KL) divergence between the posterior distribution of latent variable $z$ and its prior $p({z})$ (i.e., $\mathcal {N}(0,I)$). The KL term can be interpreted as a regularizer which prevents the inference network from copying ${x}$ into ${z}$, and for the case of a Gaussian prior and posterior has a closed-form solution.
With powerful autoregressive decoders, such as LSTMs, the internal decoder's cells are likely to suffice for representing the sentence, leading to a sub-optimal solution where the decoder ignores the inferred latent code ${z}$. This allows the encoder to become independent of $x$, an issue known as posterior collapse ($q_\phi ({z}|{x})\approx p({z})$) where the inference network produces uninformative latent variables. Several solutions have been proposed to address the posterior collapse issue: (i) Modifying the architecture of the model by weakening decoders BIBREF2, BIBREF3, BIBREF4, BIBREF5, or introducing additional connections between the encoder and decoder to enforce the dependence between $x$ and $z$ BIBREF6, BIBREF7, BIBREF8; (ii) Using more flexible or multimodal priors BIBREF9, BIBREF10; (iii) Alternating the training by focusing on the inference network in the earlier stages BIBREF11, or augmenting amortized optimization of VAEs with instance-based optimization of stochastic variational inference BIBREF12, BIBREF13.
All of the aforementioned approaches impose one or more of the following limitations: restraining the choice of decoder, modifying the training algorithm, or requiring a substantial alternation of the objective function. As exceptions to these, $\delta $-VAE BIBREF14 and $\beta $-VAE BIBREF15 aim to avoid the posterior collapse by explicitly controlling the regularizer term in eqn. DISPLAY_FORM2. While $\delta $-VAE aims to impose a lower bound on the divergence term, $\beta $-VAE (betavae) controls the impact of regularization via an additional hyperparameter (i.e., $\beta D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )$). A special case of $\beta $-VAE is annealing BIBREF2, where $\beta $ increases from 0 to 1 during training.
In this study, we propose to use an extension of $\beta $-VAE BIBREF16 which permits us to explicitly control the magnitude of the KL term while avoiding the posterior collapse issue even in the existence of a powerful decoder. We use this framework to examine different properties of the estimated posterior and the generative behaviour of VAEs and discuss them in the context of text generation via various qualitative and quantitative experiments.
Kullback-Leibler Divergence in VAE
We take the encoder-decoder of VAEs as the sender-receiver in a communication network. Given an input message $x$, a sender generates a compressed encoding of $x$ denoted by $z$, while the receiver aims to fully decode $z$ back into $x$. The quality of this communication can be explained in terms of rate (R) which measures the compression level of $z$ as compared to the original message $x$, and distortion (D) which quantities the overall performance of the communication in encoding a message at sender and successfully decoding it at the receiver. Additionally, the capacity of the encoder channel can be measured in terms of the amount of mutual information between $x$ and $z$, denoted by $\text{I}({x};{z})$ BIBREF17.
Kullback-Leibler Divergence in VAE ::: Reconstruction vs. KL
The reconstruction loss can naturally measure distortion ($D := - \big \langle \log p_\theta ({x}|{z}) \big \rangle $), while the KL term quantifies the amount of compression (rate; $R := D_{KL}[q_\phi ({z}|{x})|| p({z})]$) by measuring the divergence between a channel that transmits zero bit of information about $x$, denoted by $p(z)$, and the encoder channel of VAEs, $q_\phi (z|x)$.
BIBREF18 introduced the $H-D \le \text{I}({x};{z}) \le R$ bounds, where $H$ is the empirical data entropy (a constant). These bounds on mutual information allow us to analyze the trade-off between the reconstruction and KL terms in eqn. (DISPLAY_FORM2). For instance, since $\text{I}({x};{z})$ is non-negative (using Jensen's inequality), the posterior collapse can be explained as the situation where $\text{I}({x};{z})=0$, where encoder transmits no information about $x$, causing $R=0, D=H$. Increasing $\text{I}({x};{z})$ can be encouraged by increasing both bounds: increasing the upper-bound (KL term) can be seen as the mean to control the maximum capacity of the encoder channel, while reducing the distortion (reconstruction loss) will tighten the bound by pushing the lower bound to its limits ($H-D\rightarrow H$). A similar effect on the lower-bound can be encouraged by using stronger decoders which could potentially decrease the reconstruction loss. Hence, having a framework that permits the use of strong decoders while avoiding the posterior collapse is desirable. Similarly, channel capacity can be decreased.
Kullback-Leibler Divergence in VAE ::: Explicit KL Control via @!START@$\beta $@!END@-VAE
Given the above interpretation, we now turn to a slightly different formulation of ELBO based on $\beta $-VAE BIBREF15. This allows control of the trade-off between the reconstruction and KL terms, as well as to set explicit KL value. While $\beta $-VAE offers regularizing the ELBO via an additional coefficient $\beta \in {\rm I\!R}^+$, a simple extension BIBREF16 of its objective function incorporates an additional hyperparameter $C$ to explicitly control the magnitude of the KL term,
where $C\!\! \in \!\! {\rm I\!R}^+$ and $| . |$ denotes the absolute value. While we could apply constraint optimization to impose the explicit constraint of $\text{KL}\!\!=\!\!C$, we found that the above objective function satisfies the constraint (experiment). Alternatively, it has been shown BIBREF21 the similar effect could be reached by replacing the second term in eqn. DISPLAY_FORM6 with $\max \big (C,D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )\big )$ at the risk of breaking the ELBO when $\text{KL}\!\!<\!\!C$ BIBREF22.
Experiments
We conduct various experiments to illustrate the properties that are encouraged via different KL magnitudes. In particular, we revisit the interdependence between rate and distortion, and shed light on the impact of KL on the sharpness of the approximated posteriors. Then, through a set of qualitative and quantitative experiments for text generation, we demonstrate how certain generative behaviours could be imposed on VAEs via a range of maximum channel capacities. Finally, we run some experiments to find if any form of syntactic information is encoded in the latent space. For all experiments, we use the objective function of eqn. DISPLAY_FORM6 with $\beta =1$. We do not use larger $\beta $s because the constraint $\text{KL}=C$ is always satisfied.
Experiments ::: Corpora
We use 5 different corpora covering different domains and size through this section: Yelp and Yahoo BIBREF4 both have ($100k$,$10k$,$10k$) sentences in (train, dev, test) sets and $20k$ words in vocabulary, Children's Book Test (CBT; BIBREF23) has ($192k$,$10k$,$12k$) sentences and $12k$ vocab, Wikipedia (WIKI; BIBREF24) has ($2m$,$270k$,$270k$) sentences and $20k$ vocab, and WebText BIBREF25 has ($1m$,$23k$,$24k$) sentences and $22k$ vocab.
Experiments ::: Models
We examine three VAE architectures, covering a range of decoding strengths to examine if the objective function in eqn. DISPLAY_FORM6 is immune to posterior collapse regardless of the choice of encoder-decoder architectures: $\beta _C$-VAELSTM with (LSTM encoder, LSTM decoder), $\beta _C$-VAEGRU with (GRU encoder, GRU decoder) BIBREF26, and $\beta _C$-VAECNN with (LSTM encoder, CNN decoder) BIBREF27. The dimension of word embeddings is 256 and the dimension of the latent variable is 64. The encoder and the decoder, for both VAELSTM and VAEGRU, have hidden size of 512 dimensions. VAECNN has exactly the same encoder as VAELSTM, while the decoder follows similar architecture to GLU with a bottleneck structure (with two blocks) BIBREF27 and has 512 channels externally and 128 internally for the convolutions with the filter size of 20. All models were trained for 10 epochs and optimised the objective function (eqn. DISPLAY_FORM6) with Adam BIBREF28 with following learning rates: $10^{-5}\times 85$ for VAEGRU and VAELSTM, and $10^{-4}$ for VAECNN. To couple the encoder with the decoder we concatenate the latent variable to word embeddings at each time step without initialisation of hidden state.
Experiments ::: Rate and Distortion
To analyse the dependence between the values of explicit rate ($C$) and distortion, we trained our models with different values of $C$, ranging from 10 to 100. Figure FIGREF8 reports the results for $\beta _C$-VAEGRU, $\beta _C$-VAELSTM, and $\beta _C$-VAECNN models on Yahoo and Yelp corpora. In all our experiments we found that $C\!-\!1\!\le KL\!\le \! C\!+\!1$, demonstrating that the objective function effectively imposed the desired constraint on KL term. Hence, setting any $C>0$ can in practice avoid the collapse issue.
The general trend is that by increasing the value of $C$ one can get a better reconstruction (lower distortion) while the amount of gain varies depending on the VAE's architecture and corpus. Additionally, we measured rate and distortion on CBT, WIKI, and WebText corpora using $\beta _C$-VAELSTM and observed the same trend with the increase of $C$, see Table TABREF12. This observation is consistent with the bound on $\text{I}({x};{z})$ we discussed earlier (expl) such that with an increase of KL we increase an upper bound on $\text{I}({x};{z})$ which in turn allows to have smaller values of reconstruction loss. Additionally, as reported in Table TABREF12, encouraging higher rates (via larger $C$) encourages more active units (AU; BIBREF29) in the latent code $z$.
As an additional verification, we also group the test sentences into buckets based on their length and report BLEU-2/4 and ROUGE-2/4 metrics to measure the quality of reconstruction step in Table TABREF12. As expected, we observe that increasing rate has a consistently positive impact on improving BLEU and ROUGE scores.
Experiments ::: Aggregated Posterior
To understand how the approximated posteriors are being affected by the magnitude of the KL, we adopted an approach from BIBREF6 and looked at the divergence between the aggregated posterior, $q_\phi (z)=\sum _{x\sim q(x)} q_\phi (z|x)$, and prior $p(z$). Since during generation we generate samples from the prior, ideally we would like the aggregated posterior to be as close as possible to the prior.
We obtained unbiased samples of ${z}$ first by sampling an ${x}$ from data and then ${z} \sim q_\phi ({z}|{x})$, and measured the log determinant of covariance of the samples ($\log \det (\mathrm {Cov}[q_\phi ({z})])$). As reported in Figure FIGREF8, we observed that $\log \det (\mathrm {Cov}[q_\phi ({z})])$ degrades as $C$ grows, indicating sharper approximate posteriors. We then consider the difference of $p(z)$ and $q(z)$ in their means and variances, by computing the KL divergence from the moment-matching Gaussian fit of $q(z)$ to $p(z)$: This returns smaller values for $\beta _{C=5}$-VAEGRU (Yelp: 0, Yahoo: 0), and larger values for $\beta _{C=100}$-VAEGRU (Yelp: 8, Yahoo: 5), which illustrates that the overlap between $q_\phi ({z})$ and $p(z)$ shrinks further as $C$ grows.
The above observation is better pronounced in Table TABREF12, where we also report the mean ($||\mu ||^2_2$) of unbiased samples of $z$, highlighting the divergence from the mean of the prior distribution as rate increases. Therefore, for the case of lower $C$, the latent variables observed during training are closer to the generated sample from the prior which makes the decoder more suitable for generation purpose. We will examine this hypothesis in the following section.
Experiments ::: Text Generation
To empirically examine how channel capacity translates into generative capacity of the model, we experimented with the $\beta _C$-VAELSTM models from Table TABREF12. To generate a novel sentence, after a model was trained, a latent variable $z$ is sampled from the prior distribution and then transformed into a sequence of words by the decoder $p(x|z)$.
During decoding for generation we try three decoding schemes: (i) Greedy: which selects the most probable word at each step, (ii) Top-k BIBREF30: which at each step samples from the K most probable words, and (iii) Nucleus Sampling (NS) BIBREF31: which at each step samples from a flexible subset of most probable words chosen based on their cumulative mass (set by a threshold $p$, where $p = 1$ means sampling from the full distribution). While similar to Top-k, the benefit of NS scheme is that the vocabulary size at each time step of decoding varies, a property that encourages diversity and avoids degenerate text patterns of greedy or beam search decoding BIBREF31. We experiment with NS $(p=\lbrace 0.5, 0.9\rbrace )$ and Top-k $(k=\lbrace 5, 15\rbrace )$.
Experiments ::: Text Generation ::: Qualitative Analysis
We follow the settings of homotopy experiment BIBREF2 where first a set of latent variables was obtained by performing a linear interpolation between $z_1 \sim p(z)$ and $z_2 \sim p(z)$. Then each $z$ in the set was converted into a sequence of words by the decoder $p(x|z)$. Besides the initial motivation of BIBREF2 to examine how neighbouring latent codes look like, our additional incentive is to analyse how sensitive the decoder is to small variations in the latent variable when trained with different channel capacities, $C=\lbrace 3,15,100\rbrace $.
Table TABREF17 shows the generated sentences via different decoding schemes for each channel capacity. For space reason, we only report the generated sentences for greedy, Top-$k=15$, and NS $p=0.9$. To make the generated sequences comparable across different decoding schemes or C values, we use the same samples of $z$ for decoding.
Experiments ::: Text Generation ::: Qualitative Analysis ::: Sensitivity of Decoder
To examine the sensitivity of the decoder to variations of the latent variable, we consider the sentences generate with the greedy decoding scheme (the first column in Table TABREF17). The other two schemes are not suitable for this analysis as they include sampling procedure. This means that if we decode the same latent variable twice we will get two different sentences. We observed that with lower channel capacity ($C=3$) the decoder tends to generate identical sentences for the interpolated latent variables (we highlight these sentences in gray), exhibiting decoder's lower sensitivity to $z$'s variations. However, with the increase of channel capacity ($C=15,100$) the decoder becomes more sensitive. This observation is further supported by the increasing pattern of active units in Table TABREF12: Given that AU increases with increase of $C$ one would expect that activation pattern of a latent variable becomes more complex as it comprises more information. Therefore small change in the pattern would have a greater effect on the decoder.
Experiments ::: Text Generation ::: Qualitative Analysis ::: Coherence of Sequences
We observe that the model trained with large values of $C$ compromises sequences' coherence during the sampling. This is especially evident when we compare $C=3$ with $C=100$. Analysis of Top-15 and NS (p=0.9) generated samples reveals that the lack of coherence is not due to the greedy decoding scheme per se, and can be attributed to the model in general. To understand this behavior further, we need two additional results from Table TABREF12: LogDetCov and $||\mu ||^2_2$. One can notice that as $C$ increases LogDetCov decreases and $||\mu ||^2_2$ increases. This indicates that the aggregated posterior becomes further apart from the prior, hence the latent codes seen during the training diverge more from the codes sampled from the prior during generation. We speculate this contributes to the coherence of the generated samples, as the decoder is not equipped to decode prior samples properly at higher $C$s.
Experiments ::: Text Generation ::: Quantitative Analysis
Quantitative analysis of generated text without gold reference sequences (e.g. in Machine Translation or Summarization) has been a long-standing challenge. Recently, there have been efforts towards this direction, with proposal such as self-BLEU BIBREF32, forward cross entropy BIBREF33 and Fréchet InferSent Distance BIBREF33. We opted for FCE as a complementary metric to our qualitative analysis. To calculate FCE, first a collection of synthetic sentences are generated by sampling $z\sim p(z)$ and decoding the samples into sentences. The synthetic sequences are then used to train a language model (an LSTM with the parametrisation of our decoder). The FCE score is estimated by reporting the negative log likelihood (NLL) of the trained LM on the set of human generated sentences.
We generated synthetic corpora using trained models from Table TABREF12 with different C and decoding schemes and using the same exact $z$ samples for all corpora. Since the generated corpora using different C values would have different coverage of words in the test set (i.e., Out-of-Vocabulary ratios), we used a fixed vocabulary to minimize the effect of different vocabularies in our analysis. Our dictionary contains words that are common in all of the three corpora, while the rest of the words that don't exist in this dictionary are replaced with 〈unk〉 symbol. Similarly, we used this fixed dictionary to preprocess the test sets. Also, to reduce bias to a particular set of sampled $z$'s we measure the FCE score three times, each time we sampled a new training corpus from a $\beta _C$-VAELSTM decoder and trained an LM from scratch. In Table TABREF20 we report the average FCE (NLL) for the generated corpora.
In the qualitative analysis we observed that the text generated by the $\beta _C$-VAELSTM trained with large values of $C=100$ exhibits lower quality (i.e., in terms of coherence). This observation is supported by the FCE score of NS(p=0.9) decoding scheme (TABREF20), since the performance drops when the LM is trained on the corpus generated with $C=100$. The generated corpora with $C=3$ and $C=15$ achieve similar FCE score. However, these patterns are reversed for Greedy decoding scheme, where the general tendency of FCE scores suggests that for larger values of $C$ the $\beta _C$-VAELSTM seems to generate text which better approximates the natural sentences in the test set. To understand this further, we report additional statistics in Table TABREF20: percentage of 〈unk〉 symbols, self-BLEU and average sentence length in the corpus.
The average sentence length, in the generated corpora is very similar for both decoding schemes, removing the possibility that the pathological pattern on FCE scores was caused by difference in sentence length. However, we observe that for Greedy decoding more than $30\%$ of the test set consists of 〈unk〉. Intuitively, seeing more evidence of this symbol during training would improve our estimate for the 〈unk〉. As reported in the table, the $\%$unk increases on almost all corpora as $C$ grows, which is then translated into getting a better FCE score at test. Therefore, we believe that FCE at high $\%$unk is not a reliable quantitative metric to assess the quality of the generated syntactic corpora. Furthermore, for Greedy decoding, self-BLEU decreases when $C$ increases. This suggests that generated sentences for higher value of $C$ are more diverse. Hence, the LM trained on more diverse corpora can generalise better, which in turn affects the FCE.
In contrast, the effect the 〈unk〉 symbol has on the corpora generated with the NS(p=0.9) decoding scheme is minimal for two reasons: First, the vocabulary size for the generated corpora, for all values of $C$ is close to the original corpus (the corpus we used to train the $\beta _C$-VAELSTM). Second, the vocabularies of the corpora generated with three values of $C$ is very close to each other. As a result, minimum replacement of the words with the 〈unk〉 symbol is required, making the experiment to be more reflective of the quality of the generated text. Similarly, self-BLEU for the NS(p=0.9) is the same for all values of $C$. This suggests that the diversity of sentences has minimal, if any, effect on the FCE.
Experiments ::: Syntactic Test
In this section, we explore if any form of syntactic information is captured by the encoder and represented in the latent codes despite the lack of any explicit syntactic signal during the training of the $\beta _C$-VAELSTM. To train the models we used the same WIKI data set as in BIBREF24, but we filtered out all the sentences that are longer than 50 space-separated tokens. We use the data set of BIBREF24 which consists of pairs of grammatical and ungrammatical sentences to test various syntactic phenomenon. For example, a pair in subject-verb agreement category would be: (The author laughs, The author laugh). We encode both the grammatical and ungrammatical sentences into the latent codes $z^+$ and $z^-$, respectively. Then we condition the decoder on the $z^+$ and try to determine whether the decoder assigns higher probability to the grammatical sentence (denoted by $x^+$): $p(x^-|z^+) < p(x^+|z^+)$ (denoted by p1 in Table TABREF28). We repeat the same experiment but this time try to determine whether the decoder, when conditioned on the ungrammatical code ($z^-$), still prefers to assign higher probability to the grammatical sentence: $p(x^-|z^-) < p(x^+|z^-)$ (denoted by p2 in Table TABREF28). Table TABREF28 shows the p1 and p2 for the $\beta _C$-VAELSTM model trained with $C=\lbrace 3,100\rbrace $. Both the p1 and p2 are similar to the accuracy and correspond to how many times a grammatical sentence was assigned a higher probability.
As reported for C=3, p1 and p2 match in almost all cases. This is to some degree expected since lower channel capacity encourages a more dominating decoder which in our case was trained on grammatical sentences from the WIKI. On the other hand, this illustrates that despite avoiding the KL-collapse issue, the dependence of the decoder on the latent code is so negligible that the decoder hardly distinguishes the grammatical and ungrammatical inputs. This changes for $C=100$, as in almost all the cases the decoder becomes strongly dependent on the latent code and can differentiate between what it has seen as input and the closely similar sentence it hasn't received as the input: The decoder assigns larger probability to the ungrammatical sentence when conditioned on the $z^-$ and, similarly, larger probability to the grammatical sentence when conditioned on the $z^+$.
However, the above observations neither confirm nor reject existence of grammar signal in the latent codes. We run a second set of experiments where we aim to discard sentence specific information from the latent codes by averaging the codes inside each syntactic category. The averaged codes are denoted by $\bar{z}^+$ and $\bar{z}^-$, and the corresponding accuracies are reported by p̄1 and p̄2 in Table TABREF28. Our hypothesis is that the only invariant factor during averaging the codes inside a category is the grammatical property of its corresponding sentences.
As expected, due to the weak dependence of decoder on latent code, the performance of the model under $C=3$ is almost identical (not included for space limits) when comparing p1 vs. p̄1, and p2 vs. p̄2. However, for $C=100$ the performance of the model deteriorates. While we leave further exploration of this behavior to our future work, we speculate this could be an indication of two things: the increase of complexity in the latent code which encourages a higher variance around the mean, or the absence of syntactic signal in the latent codes.
Discussion and Conclusion
In this paper we analysed the interdependence of the KL term in Evidence Lower Bound (ELBO) and the properties of the approximated posterior for text generation. To perform the analysis we used an information theoretic framework based on a variant of $\beta $-VAE objective, which permits explicit control of the KL term, and treats KL as a mechanism to control the amount of information transmitted between the encoder and decoder.
The immediate impact of the explicit constraint is avoiding the collapse issue ($D_{KL}=0$) by setting a non-zero positive constraint ($C\ge 0$) on the KL term ($|D_{KL}\big (q_\phi ({z}|{x}) || p({z})\big )-C|$). We experimented with a range of constraints ($C$) on the KL term and various powerful and weak decoder architectures (LSTM, GRU, and CNN), and empirically confirmed that in all cases the constraint was satisfied.
We showed that the higher value of KL encourages not only divergence from the prior distribution, but also a sharper and more concentrated approximated posteriors. It encourages the decoder to be more sensitive to the variations on the latent code, and makes the model with higher KL less suitable for generation as the latent variables observed during training are farther away from the prior samples used during generation. To analyse its impact on generation we conducted a set of qualitative and quantitative experiments.
In the qualitative analysis we showed that small and large values of KL term impose different properties on the generated text: the decoder trained under smaller KL term tends to generate repetitive but mainly plausible sentences, while for larger KL the generated sentences were diverse but incoherent. This behaviour was observed across three different decoding schemes and complemented by a quantitative analysis where we measured the performance of an LSTM LM trained on different VAE-generated synthetic corpora via different KL magnitudes, and tested on human generated sentences.
Finally, in an attempt to understand the ability of the latent code in VAEs to represent some form of syntactic information, we tested the ability of the model to distinguish between grammatical and ungrammatical sentences. We verified that at lower (and still non-zero) KL the decoder tends to pay less attention to the latent code, but our findings regarding the presence of a syntactic signal in the latent code were inconclusive. We leave it as a possible avenue to explore in our future work. Also, we plan to develop practical algorithms for the automatic selection of the $C$'s value, and verify our findings under multi-modal priors and complex posteriors.
Acknowledgments
The authors would like to thank the anonymous reviewers for their helpful suggestions. This research was supported by an EPSRC Experienced Researcher Fellowship (N. Collier: EP/M005089/1), an MRC grant (M.T. Pilehvar: MR/M025160/1) and E. Shareghi is supported by the ERC Consolidator Grant LEXICAL (648909). We gratefully acknowledge the donation of a GPU from the NVIDIA. | Answer with content missing: (Formula 2) Formula 2 is an answer:
\big \langle\! \log p_\theta({x}|{z}) \big \rangle_{q_\phi({z}|{x})} - \beta |D_{KL}\big(q_\phi({z}|{x}) || p({z})\big)-C| |
90f80a94fabaab72833256572db1d449c2779beb | 90f80a94fabaab72833256572db1d449c2779beb_0 | Q: Did they experiment with the tool?
Text: Introduction
Large corpora of speech, obtained in the laboratory and in naturalistic conditions, become easier to collect. This new trend broadens the scope of scientific questions on speech and language that can be answered. However, this poses an important challenge for the construction of reliable and usable annotations. Managing annotators and ensuring the quality of their annotations are highly demanding tasks for research endeavours and industrial projects BIBREF0. When organised manually, the manager of annotation campaigns usually faces three major problems: the mishandling of files (e.g., character-encoding problems, incorrect naming of files), the non-conformity of the annotations BIBREF1, and the inconsistency of the annotations BIBREF2.
In this paper, we introduce Seshat, a system for the automated management of annotation campaigns for audio/speech data which addresses these challenges. It is built on two components that communicate via a Restful API: a back-end (server) written in Flask and a front-end (client) in Angular Typescript. Seshat is easy to install for non-developers and easy to use for researchers and annotators while having some extension capabilities for developers.
In Section SECREF2, we describe the related work on annotation tools, which do not provide solutions to all the aforementioned challenges during corpus creation. In Section SECREF3, we give an overview of the different functionalities of the software. Then, in Section SECREF4, we explain the architecture of the software, as well as the UX/UI design and engineering choices that have been made to facilitate the usage of the platform. We describe how to use Seshat in Section SECREF5, and Section SECREF6 presents two specific use cases. Finally, we conclude and describe future plans for Seshat in Section SECREF7.
Related Work
Self-hosted annotation systems. There are many standalone solutions for the transcription of speech data that are already used by researchers: Transcriber BIBREF3, Wavesurfer BIBREF4, Praat BIBREF5, ELAN BIBREF6, XTrans BIBREF7. These systems allow the playback of sound data and the construction of different layers of annotations with various specifications, with some advanced capabilities (such as annotations with hierarchical or no relationship between layers, number of audio channels, video support). Yet, these solutions lack a management system: each researcher must track the files assigned to annotators and build a pipeline to parse (and eventually check) the output annotation files. Moreover, checking can only be done once the annotations have been submitted to the researchers. This task becomes quickly untraceable as the number of files and annotators grow. In addition, most of these transcription systems do not provide a way to evaluate consistency (intra- and inter-annotator agreement) that would be appropriate for speech data BIBREF8.
Web-based annotation systems. There are several web-based annotation systems for the annotation of audio data. Among them we find light-weight systems, like the VIA software BIBREF9 or Praat on the web BIBREF10, that allow users to build simple layers of annotations. However, they do not provide a proper management system for a pool of annotators, nor do they integrate annotation checking.
On the other side of the spectrum, there are more sophisticated systems with various capabilities. Camomille BIBREF11 and the EMU-SDMS system (which can also be used offline) BIBREF12 allow users to work with speech data and to distribute tasks to several annotators. But these systems require expertise in web hosting and web technologies to deploy and modify them.
Finally, WebAnno BIBREF13 and GATE Teamware BIBREF14 are the tools that most closely match our main contributions regarding quality control (conformity and consistency checking), annotators' management and flexibility. WebAnno includes consistency checking with the integration of different metrics BIBREF15. However, these tools have only been built for text data. The format and all the custom layers have been designed for Natural Language Processing tasks. Porting WebAnno to support speech data seemed a major engineering challenge. That is why it appeared necessary to develop a new and user-friendly tool addressed to the speech community.
Overview of Seshat
Seshat is a user-friendly web-based interface whose objective is to smoothly manage large campaigns of audio data annotation, see Figure FIGREF8. Below, we describe the several terms used in Seshat's workflow:
[font=, leftmargin=1cm, style=nextline]
A set of audio/speech files that a Campaign Manager wants to annotate. It is indicated either by a folder containing sound files, or by a CSV summarizing a set of files. We support the same formats as Praat so far: WAV, Flac and MP3.
An object that enables the Campaign Manager to assign Annotation Tasks to the Annotators. It references a Corpus, and allows the Manager to track the annotation's tasks progress and completion in real time. At its creation, a Textgrid Checking Scheme can also be defined for that campaign.
It is contained in an Annotation Campaign, references an audio file from the campaign's designated Audio Corpus, and is assigned to Annotators. It can either be a Single Annotator Task (assigned to one Annotator) or a Double Annotator Task (assigned to two annotators, who will annotate the assigned task in parallel).
A set of rules defining the TextGrid files' structure and content of the annotations. It is set at the beginning of the Annotation Campaign's creation, and is used to enforce that all TextGrids from the campaign contain the same amount of Tiers, with the same names. It can also enforce, for certain chosen tiers, a set of valid annotations.
Users with the rights to create Annotation Campaigns and Annotators user accounts, and assign Annotation Tasks to Annotators.
Users who are assigned a set of Annotation Tasks. Their job is to complete the annotation of the audio files with the Praat software.
If the TextGrid file they submit does not comply with their Annotation Task's TextGrid Checking Scheme, Seshat pinpoints their annotation errors with detailed messages. The annotator can then re-submit the concerned file to the platform based on this feedback.
Once they are connected to their instance of Seshat, campaign managers can access ongoing annotation campaigns or create new ones. Campaign managers are able to add annotators, assign annotation tasks and track progress. Annotators see a list of assigned tasks. The first step for them is to download the sound file with its corresponding auto-generated template TextGrid. In the current implementation, the annotation work has to be done locally with Praat. An upcoming version will make use of web tools like Praat on the web BIBREF10. Once the task is completed, the TextGrid file is uploaded to Seshat via the web interface. We used the TextGrid format because of the wide acceptance of the Praat software in the speech science community (e.g., language acquisition research, clinical linguistics, phonetics and phonology).
The Textgrid Checking Scheme that encompasses rules on the tier's naming, file structure, and the content of the annotations, is associated with a specific campaign and defined at the creation of the campaign. Seshat back-end will automatically check that the submitted TextGrid file conforms to the Annotation Campaign's Textgrid Checking Scheme.
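As an illustration of what such a conformity check involves, the sketch below validates a submitted TextGrid against a small checking scheme using the textgrid Python package. The scheme layout, the tier names (borrowed from the clinical use case described later) and the check_textgrid helper are illustrative assumptions and do not reflect Seshat's internal API.

```python
import textgrid  # pip install textgrid

# Hypothetical scheme: required tier names, plus optional sets of valid labels.
CHECKING_SCHEME = {
    "Patient": None,              # free annotations, no content check
    "Non-Patient": None,
    "Noise": {"noise", "music"},  # only these labels are accepted
}

def check_textgrid(path, scheme=CHECKING_SCHEME):
    """Return a list of human-readable error messages (empty if conformant)."""
    errors = []
    tg = textgrid.TextGrid.fromFile(path)
    tier_names = [tier.name for tier in tg.tiers]
    for required in scheme:
        if required not in tier_names:
            errors.append(f"Missing tier: {required}")
    for tier in tg.tiers:
        valid = scheme.get(tier.name)
        if valid is None:
            continue
        for interval in tier:
            if interval.mark and interval.mark not in valid:
                errors.append(f"Tier '{tier.name}': invalid label "
                              f"'{interval.mark}' at {interval.minTime:.2f}s")
    return errors
```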
Seshat allows the campaign manager to create two types of tasks: single annotator and double annotator. Regarding the first type, one audio file is attributed to one annotator. Once the annotation is completed, Seshat automatically checks the conformity of the annotation, and only declares a task completed if the conformity check passes. Regarding the second type, one audio file is attributed to two annotators. The two annotators annotate the same file independently, then the two versions are merged and the annotators are guided through a compare-and-review process to agree on one final version. We summarise in Figure FIGREF7 the different steps of the double-annotator task. At each step during merging, the two annotators are provided feedback that focuses on where the disagreements are. This process also results in the computation of an inter-annotator agreement for each file. The double annotator task can be used to train new annotators alongside experts.
Annotating speech data is a joint task of segmentation and categorisation of audio events. That is why we adopted the $\gamma $ measure BIBREF8 to evaluate the inter- or intra-annotator agreement in each individual tier. Campaign managers can customise the distance used by $\gamma $ by inserting a custom distance alongside their own parser (see the short code snippet for a parser of French phonetics with the SAMPA alphabet in Algorithm ).
Development ::: Engineering choices
Our utmost priority when building Seshat was to make it as easy as possible for others to deploy, use, administer and eventually contribute to. To do so, we chose the most common frameworks that are free and open-source, all of which are detailed in the following sections. Additionally, to match the current trend in web development, we decided to use the so-called "web-app" architecture for Seshat, i.e., we separated the application into two distinct entities: a front-end, running on the browser, and a back-end, serving data to the front-end and interacting with the database.
Development ::: Engineering choices ::: Back-end Choices
The back-end system runs on a server. It holds and updates the campaign databases and runs the annotation checking and inter-rater agreement evaluation services. We chose Python, given its widespread use in the scientific community, with a wide array of speech and linguistic packages. Moreover, its usage on the back-end side will allow the future integration of powerful speech processing tools like Pyannote BIBREF16 to semi-automatize annotations. We thus went for Python3.6 for Seshat's server back-end. We used the Flask-Smorest extension (which is based on Flask) to clearly and thoroughly document our API, which can be exported to the popular OpenAPI 3.0.2 RESTful API description format.
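For readers unfamiliar with this stack, a documented endpoint written with Flask-Smorest typically looks like the hedged sketch below; the route, schema fields and configuration values are invented for illustration and are not Seshat's actual API.

```python
from flask import Flask
from flask_smorest import Api, Blueprint
import marshmallow as ma

class CampaignSchema(ma.Schema):
    name = ma.fields.String(required=True)
    corpus = ma.fields.String(required=True)
    tasks_done = ma.fields.Integer(dump_only=True)

blp = Blueprint("campaigns", __name__, url_prefix="/campaigns",
                description="Hypothetical campaign listing endpoint")

@blp.route("/")
@blp.response(200, CampaignSchema(many=True))
def list_campaigns():
    """List annotation campaigns (illustrative stub, not Seshat's real code)."""
    return [{"name": "demo", "corpus": "corpora/demo", "tasks_done": 0}]

app = Flask(__name__)
app.config["API_TITLE"] = "Seshat-like API"
app.config["API_VERSION"] = "v1"
app.config["OPENAPI_VERSION"] = "3.0.2"
api = Api(app)
api.register_blueprint(blp)
```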
The files and server data are stored in a MongoDB database, chosen for its flexible document model and general ease of use. We used the Object-Relational Mapping (ORM) MongoEngine to define our database schemas and interact with that database. MongoDB's GridFS system also allowed us to store annotation files (which are usually very light-weight) directly in the database, instead of going through the file system.
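The sketch below illustrates what such MongoEngine schemas can look like; the database name, collections and fields are simplified guesses for illustration rather than Seshat's real data model.

```python
from mongoengine import (Document, StringField, ListField,
                         ReferenceField, FileField, connect)

connect("seshat_demo")  # hypothetical database name

class Corpus(Document):
    name = StringField(required=True, unique=True)
    files = ListField(StringField())   # audio file names or paths

class AnnotationTask(Document):
    corpus = ReferenceField(Corpus, required=True)
    audio_file = StringField(required=True)
    annotator = StringField(required=True)
    textgrid = FileField()             # stored via GridFS

# Example: record a submitted TextGrid for a task.
demo_corpus = Corpus(name="demo").save()
task = AnnotationTask(corpus=demo_corpus,
                      audio_file="rec_001.wav",
                      annotator="alice").save()
```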
Development ::: Engineering choices ::: Front-end Choices
The front-end handles all of the interactions between the users (campaign manager or annotator) and the database. It is implemented as an App within their browser. We decided to base Seshat's front-end on the Angular Typescript framework. Despite its steep learning curve, it enforces strict design patterns that guarantee that others can make additions to our code without jeopardising the stability of the App. Angular Typescript has wide community support in the web development industry and is backed by Google and Microsoft. Moreover, the fact that it is based on TypeScript alleviates the numerous shortcomings of JavaScript, ensuring our implementation's readability and stability.
Development ::: UX/UI Choices
The interface and the features we selected for our implementation are the product of a year-long iterative process involving a team of annotators, two campaign managers and software engineers. We followed some guiding principles from the recent Material design language. Our goal while designing our interface (with the help of a professional designer) was to make it fully usable by non-technical people. We also put some extra care into the annotators' interface to give them a clear sense of what is to be done, how they should follow the annotation protocol, and how to correct potential errors in their annotations (see Figure FIGREF21). The goal was to reduce the number of actions annotators have to perform and enable them to focus only on the annotation content.
Using Seshat ::: Installation and Setup
Setting up a modern fully-fledged web service is an arduous task, usually requiring a seasoned system administrator, and sometimes comes with very precise system requirements. Luckily, the Docker virtualisation platform ensures that anyone with a recent-enough install of that software can set up Seshat in about one command (while still allowing some flexibility via a configuration file). For those willing to have a more tightly-controlled installation of Seshat on their system, we also fully specify the manual installation steps in our online documentation.
Importing an audio corpus that you are willing to annotate is as easy as dropping files into a default `corpora/` folder. It is possible to drop either a folder containing audio files (with no constraints on the folder's structure), or a CSV file listing audio filenames along with their durations (in case the files are sensitive and you are not willing to risk them being hosted on the server). It is then possible to review the automatically imported files via the web interface.
Using Seshat ::: Launching and monitoring an annotation campaign
The campaign manager can easily define and monitor annotation campaigns. As shown in Figure FIGREF33, the online form enables them to choose corpora, and to pre-define and pre-configure the annotation scheme (tiers and parsers). There are 2 types of tiers already implemented by default: one with no check at all, and one with pre-defined categories. For the latter, these categories are pre-defined when the campaign is created.
Only campaign managers can access and build new campaigns. If campaign managers have several campaigns, they can easily switch between them via the menu bar or get a full overview with the dashboard (see Figure FIGREF26). The campaign managers can visualise the progress of the assigned tasks at the campaign level or, more precisely, at the task level. They can retrieve all the intermediate files that have been created for each task. For instance, the campaign manager can examine qualitatively and quantitatively what the annotation differences are before the merge phase of the double annotator task.
Using Seshat ::: Scripting API
For those willing to interact with Seshat using code, it is possible to interact with Seshat using either its RESTful API or its command-line interface (CLI). The API endpoints that can be called are all listed in a simple interface, and can be made from any programming language able to make HTTP requests. The CLI interface can be used via your terminal, and therefore can be interacted with using Bash scripts.
A typical usage of these features would be to assign annotation tasks from a large speech corpus (spoken by several speakers) to a large pool of annotators, all the while making sure each annotator has a similar number of tasks, with each speaker being evenly distributed among annotators as well. This would be tedious to do manually via the user interface, but easy to program in any scripting language.
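A hedged sketch of such a script is given below, using the requests library against a hypothetical endpoint layout; the base URL, authentication header, payload fields and the round-robin assignment policy are assumptions made for illustration, not Seshat's documented API.

```python
import itertools
import requests

BASE = "http://localhost:8080/api"            # assumed server address
AUTH = {"Authorization": "Bearer <token>"}    # assumed auth scheme

def assign_tasks(campaign_id, audio_files, annotators):
    """Spread audio files over annotators in a round-robin fashion."""
    pool = itertools.cycle(annotators)
    for filename in audio_files:
        resp = requests.post(
            f"{BASE}/campaigns/{campaign_id}/tasks",   # hypothetical endpoint
            json={"audio_file": filename,
                  "annotator": next(pool),
                  "type": "single"},
            headers=AUTH,
        )
        resp.raise_for_status()

assign_tasks("hd_interviews",
             audio_files=[f"rec_{i:03d}.wav" for i in range(60)],
             annotators=["alice", "bob", "carol"])
```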
Using Seshat ::: Annotation Parser Customisation
We aimed at a reasonable trade-off between simplicity and flexibility for the TextGrid annotations checking component. However, we understand (from our own experience in particular) that sometimes annotations can follow a very specific and complex standard (for instance, parsing SAMPA phonemes strings). To allow users to define their own annotation standards, we added the possibility for users to define an annotation parser, via a simple package-based extension system (taking inspiration from pyannote's extension system). Anyone willing to create a new annotation parser has to be able to program in Python and have a minimal understanding of its packaging system.
As presented in our example French SAMPA Parser (Algorithm ), implementing a custom annotation parser only requires overloading two methods from Seshat's BaseCustomParser class (a sketch in the same spirit follows the list below):
check-annotation: takes an annotation string as input and raises an error if and only if the annotation is deemed to be invalid. It doesn't return anything.
distance: takes two annotations as input and should return a float corresponding to the distance between these two annotations.
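Since the referenced Algorithm is best read alongside a concrete example, the sketch below gives a plausible shape for such a parser. The class and method names follow the description above, but the French SAMPA inventory and the edit-distance metric are illustrative simplifications rather than the authors' exact implementation.

```python
# Hypothetical re-implementation of a SAMPA-style parser plugin.
FRENCH_SAMPA = set("p b t d k g f v s z S Z m n J N l R w H j "
                   "i e E a A O o u y 2 9 @ e~ a~ o~ 9~".split())

class BaseCustomParser:                      # stand-in for Seshat's base class
    def check_annotation(self, annotation): ...
    def distance(self, a, b): ...

class FrenchSampaParser(BaseCustomParser):
    def check_annotation(self, annotation):
        """Raise ValueError iff the annotation contains an invalid phoneme."""
        for phoneme in annotation.split():
            if phoneme not in FRENCH_SAMPA:
                raise ValueError(f"'{phoneme}' is not a valid French SAMPA symbol")

    def distance(self, a, b):
        """Edit distance between two phoneme strings, used by the gamma measure."""
        pa, pb = a.split(), b.split()
        prev = list(range(len(pb) + 1))
        for i, x in enumerate(pa, 1):
            cur = [i]
            for j, y in enumerate(pb, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (x != y)))
            prev = cur
        return float(prev[-1])
```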
Using Seshat ::: Inter-rater agreement: the @!START@$\gamma $@!END@ measure
It is necessary to have a measure of confidence to obtain high-quality datasets and therefore to draw valid conclusions from annotations. Annotation tasks on audio and speech data usually have some specificities. The items to annotate have to be both segmented in time and categorised. The segments can be hierarchically defined or overlapping. In addition, the audio stream may require only sparse annotations (especially in-the-wild recordings, which contain a lot of non-speech segments). To evaluate speech annotations, the measure needs to take these characteristics into account. That is why we decided to re-implement and compute the $\gamma $ measure (see mathet2015unified for its design and the advantages of this measure over previous agreement measures).
First, the $\gamma $ software aligns (tier-wise) the annotations of the different annotators. To align the two sets of annotations, the $\gamma $ software measures the distance between all the individual units. The difference in position of two annotated units $u$ and $v$ is measured with the positional distance:
If the tiers are categorical, the distance for the content of the annotated units $u$ and $v$ is defined as:
This distance can be overridden by the custom parser, as mentioned above. These two distances are summed with equal weights to obtain the distance between every pair of annotated units from the 2 annotators. Then, it is possible to obtain the disorder $\delta (a)$ of a specific alignment $a$ by summing the distances of all the aligned units in $a$. All possible alignments $a$ are considered and the one that minimises the disorder $\delta (a)$ is kept.
To get the value of $\gamma $, the disorder is chance-corrected to obtain an expected disorder. It is obtained by re-sampling randomly the annotations of the annotators. This means that real annotations are drawn from the annotators, and one position in the audio is randomly chosen. The annotation is split at this random position and the two parts are permuted. It is then possible to obtain an approximation of the expected disorder $\delta _e$. The final agreement measure is defined as:
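Assuming the standard chance-corrected definition of mathet2015unified, this amounts to $\gamma = 1 - \frac{\delta (a_{best})}{\delta _e}$, where $\delta (a_{best})$ is the disorder of the best alignment found on the real annotations and $\delta _e$ is the expected disorder estimated by resampling; a value of 1 then corresponds to perfect agreement and 0 to chance-level agreement.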
This $\gamma $ measure is automatically computed by the back-end server for the double-annotator tasks. The Campaign manager can retrieve these measures in Seshat by downloading a simple CSV file.
Use cases
We present two use cases on which Seshat was developed: clinical interviews, and daylong child-centered recordings.
Use cases ::: Clinical interviews
Seshat was initially developed to study the impact of Huntington's Disease BIBREF17 on speech and language production. One hundred and fifty-two interviews between a neuropsychologist and a patient with Huntington's Disease (HD) were recorded between June 2018 and November 2019. The campaign manager created a campaign with multiple tiers to annotate the turn-takings and the speech/non-speech boundaries of the utterances of the patient. For both tasks, the annotations did not need to cover the audio completely (the sparsity property mentioned above). For the Turn-taking annotations, there are 3 pre-defined tiers, each one with a single class ('Patient', 'Non-Patient', and 'Noise'), which results in possible overlap between these classes. For the Utterance annotations, there is only one pre-defined class ('Utterance').
To this date, a total of 67 files have been fully annotated with the help of Seshat by a cohort of 18 speech pathologist students (see Figure FIGREF33). Among these, 16 have been done by 2 different annotators independently with the Double-annotator task. The results are summarised in Table TABREF34.
Even though there are more categories for Turn-Takings than for Utterance (gut2004measuring reported that the more categories there are, the more difficult the speech annotation task becomes), the mean $\gamma $ for the Turn-Takings, $\gamma = 0.64$, is slightly higher than the one for Utterance, $\gamma = 0.61$. The range of values for the Turn-Takings is also smaller than for the Utterance. Indeed, the speech pathologists reported the difficulty of annotating the boundaries of utterances in spontaneous speech, with several ambiguous cases due to pauses. These results will help us to redefine the protocol and be more precise in the given instructions.
Use cases ::: In-the-wild child-centered recordings
The Seshat software is also currently used to annotate audio files in a study of day-long audio-recordings captured by two devices (LENA BIBREF18, and a BabyCloud baby-logger device) worn by young children growing up in remote Papua New Guinea. The project aims at establishing language input and outcomes in this seldom-studied population. To establish reliability levels, 20 1-min files were double-annotated by 2 speech pathology students. Among the tasks given to the annotators there was: (1) locating the portions of Speech (Speech activity), (2) locating the speech produced by an adult that is directed to a child or not (Adult-Directed Speech versus Child-Directed Speech). As in the previous example, the annotations do not need to cover the full audio file. The Speech Activity task has only 1 class ('Speech') and the Addressee task has 2 classes ('ADS', 'CDS').
These recordings have been done in naturalistic and noisy conditions; moreover, the annotators do not understand the language. Probably as a result of these challenges, agreement between annotators is lower than in the Clinical interviews use case. This information is nonetheless valuable to the researchers, as it can help them appropriately lower their confidence in the ensuing speech quantity estimates.
Conclusion and Future work
Seshat is a new tool for the management of audio annotation efforts. Seshat enables users to define their own campaigns of annotations. Based on this configuration, Seshat automatically enforces the format of the annotations returned by the annotators. Besides, we also add the capability to finely tailor the parsing of the annotations. Finally, Seshat provides automatic routines to compute inter-rater agreement measures that are specifically designed for audio annotations. Seshat lays some foundations for more advanced features, both for the interface and the annotation capabilities. In future work, we plan to implement automatic task assignment and the integration of a diarization processing step to reduce human effort. Another planned feature is to add the possibility for the campaign manager to design more complex annotation workflows such as, for instance, dependencies between tiers or more intermediate annotation steps.
Acknowledgements
This research was conducted thanks to Agence Nationale de la Recherche (ANR-17-CE28-0007 LangAge, ANR-16-DATA-0004 ACLEW, ANR-14-CE30-0003 MechELex, ANR-17-EURE-0017, ANR-10-IDEX-0001-02 PSL*) and grants from Facebook AI Research (Research Grant), Google (Faculty Research Award), and Microsoft Research (Azure Credits and Grant), and a J. S. McDonnell Foundation Understanding Human Cognition Scholar Award. | Yes |
5872279c5165cc8a0c58cf1f89838b7c43217b0e | 5872279c5165cc8a0c58cf1f89838b7c43217b0e_0 | Q: Can it be used for any language?
Text: Introduction
Large corpora of speech, obtained in the laboratory and in naturalistic conditions, become easier to collect. This new trend broadens the scope of scientific questions on speech and language that can be answered. However, this poses an important challenge for the construction of reliable and usable annotations. Managing annotators and ensuring the quality of their annotations are highly demanding tasks for research endeavours and industrial projects BIBREF0. When organised manually, the manager of annotation campaigns usually faces three major problems: the mishandling of files (e.g., character-encoding problems, incorrect naming of files), the non-conformity of the annotations BIBREF1, and the inconsistency of the annotations BIBREF2.
In this paper, we introduce Seshat, a system for the automated management of annotation campaigns for audio/speech data which addresses these challenges. It is built on two components that communicate via a Restful API: a back-end (server) written in Flask and a front-end (client) in Angular Typescript. Seshat is easy to install for non-developers and easy to use for researchers and annotators while having some extension capabilities for developers.
In Section SECREF2, we describe the related work on annotation tools, which do not provide solutions to all the aforementioned challenges during corpus creation. In Section SECREF3, we give an overview of the different functionalities of the software. Then, in Section SECREF4, we explain the architecture of the software, as well as the UX/UI design and engineering choices that have been made to facilitate the usage of the platform. We describe how to use Seshat in Section SECREF5, and Section SECREF6 presents two specific use cases. Finally, we conclude and describe future plans for Seshat in Section SECREF7.
Related Work
Self-hosted annotation systems. There are many standalone solutions for the transcription of speech data that are already used by researchers: Transcriber BIBREF3, Wavesurfer BIBREF4, Praat BIBREF5, ELAN BIBREF6, XTrans BIBREF7. These systems allow the playback of sound data and the construction of different layers of annotations with various specifications, with some advanced capabilities (such as annotations with hierarchical or no relationship between layers, number of audio channels, video support). Yet, these solutions lack a management system: each researcher must track the files assigned to annotators and build a pipeline to parse (and eventually check) the output annotation files. Moreover, checking can only be done once the annotations have been submitted to the researchers. This task becomes quickly untraceable as the number of files and annotators grow. In addition, most of these transcription systems do not provide a way to evaluate consistency (intra- and inter-annotator agreement) that would be appropriate for speech data BIBREF8.
Web-based annotation systems. There are several web-based annotation systems for the annotation of audio data. Among them we find light-weight systems, like the VIA software BIBREF9 or Praat on the web BIBREF10, that allow users to build simple layers of annotations. However, they do not provide a proper management system for a pool of annotators, nor do they integrate annotation checking.
On the other side of the spectrum, there are more sophisticated systems with various capabilities. Camomille BIBREF11 and the EMU-SDMS system (which can also be used offline) BIBREF12 allow users to work with speech data and to distribute tasks to several annotators. But these systems require expertise in web hosting and web technologies to deploy and modify them.
Finally, WebAnno BIBREF13 and GATE Teamware BIBREF14 are the tools that most closely match our main contributions regarding quality control (conformity and consistency checking), annotators' management and flexibility. WebAnno includes consistency checking with the integration of different metrics BIBREF15. However, these tools have only been built for text data. The format and all the custom layers have been designed for Natural Language Processing tasks. Porting WebAnno to support speech data seemed a major engineering challenge. That is why it appeared necessary to develop a new and user-friendly tool addressed to the speech community.
Overview of Seshat
Seshat is a user-friendly web-based interface whose objective is to smoothly manage large campaigns of audio data annotation, see Figure FIGREF8. Below, we describe the several terms used in Seshat's workflow:
[font=, leftmargin=1cm, style=nextline]
A set of audio/speech files that a Campaign Manager wants to annotate. It is indicated either by a folder containing sound files, or by a CSV summarizing a set of files. We support the same formats as Praat so far: WAV, Flac and MP3.
An object that enables the Campaign Manager to assign Annotation Tasks to the Annotators. It references a Corpus, and allows the Manager to track the annotation's tasks progress and completion in real time. At its creation, a Textgrid Checking Scheme can also be defined for that campaign.
It is contained in an Annotation Campaign, references an audio file from the campaign's designated Audio Corpus, and is assigned to Annotators. It can either be a Single Annotator Task (assigned to one Annotator) or a Double Annotator Task (assigned to two annotators, who will annotate the assigned task in parallel).
A set of rules defining the TextGrid files' structure and content of the annotations. It is set at the beginning of the Annotation Campaign's creation, and is used to enforce that all TextGrids from the campaign contain the same amount of Tiers, with the same names. It can also enforce, for certain chosen tiers, a set of valid annotations.
Users with the rights to create Annotation Campaigns and Annotators user accounts, and assign Annotation Tasks to Annotators.
Users who are assigned a set of Annotation Tasks. Their job is to complete the annotation of the audio files with the Praat software.
If the TextGrid file they submit does not comply with their Annotation Task's TextGrid Checking Scheme, Seshat pinpoints their annotation errors with detailed messages. The annotator can then re-submit the concerned file to the platform based on this feedback.
Once they are connected to their instance of Seshat, campaign managers can access ongoing annotation campaigns or create new ones. Campaign managers are able to add annotators, assign annotation tasks and track progress. Annotators see a list of assigned tasks. The first step for them is to download the sound file with its corresponding auto-generated template TextGrid. In the current implementation, the annotation work has to be done locally with Praat. An upcoming version will make use of web tools like Praat on the web BIBREF10. Once the task is completed, the TextGrid file is uploaded to Seshat via the web interface. We used the TextGrid format because of the wide acceptance of the Praat software in the speech science community (e.g., language acquisition research, clinical linguistics, phonetics and phonology).
The Textgrid Checking Scheme that encompasses rules on the tier's naming, file structure, and the content of the annotations, is associated with a specific campaign and defined at the creation of the campaign. Seshat back-end will automatically check that the submitted TextGrid file conforms to the Annotation Campaign's Textgrid Checking Scheme.
Seshat allows the campaign manager to create two types of tasks: single annotator and double annotator. Regarding the first type, one audio file is attributed to one annotator. Once the annotation is completed, Seshat automatically checks the conformity of the annotation, and only declares a task completed if the conformity check passes. Regarding the second type, one audio file is attributed to two annotators. The two annotators annotate the same file independently, then the two versions are merged and the annotators are guided through a compare-and-review process to agree on one final version. We summarise in Figure FIGREF7 the different steps of the double-annotator task. At each step during merging, the two annotators are provided feedback that focuses on where the disagreements are. This process also results in the computation of an inter-annotator agreement for each file. The double annotator task can be used to train new annotators alongside experts.
Annotating speech data is a joint task of segmentation and categorisation of audio events. That is why we adopted the $\gamma $ measure BIBREF8 to evaluate the inter- or intra-annotator agreement in each individual tier. Campaign managers can customise the distance used by $\gamma $ by inserting a custom distance alongside their own parser (see the short code snippet for a parser of French phonetics with the SAMPA alphabet in Algorithm ).
Development ::: Engineering choices
Our utmost priority when building Seshat was to make it as easy as possible for others to deploy, use, administer and eventually contribute to. To do so, we chose the most common frameworks that are free and open-source, all of which are detailed in the following sections. Additionally, to match the current trend in web development, we decided to use the so-called "web-app" architecture for Seshat, i.e., we separated the application into two distinct entities: a front-end, running on the browser, and a back-end, serving data to the front-end and interacting with the database.
Development ::: Engineering choices ::: Back-end Choices
The back-end system runs on a server. It holds and updates the campaign databases and runs the annotation checking and inter-rater agreement evaluation services. We chose Python, given its widespread use in the scientific community, with a wide array of speech and linguistic packages. Moreover, its usage on the back-end side will allow the future integration of powerful speech processing tools like Pyannote BIBREF16 to semi-automatize annotations. We thus went for Python3.6 for Seshat's server back-end. We used the Flask-Smorest extension (which is based on Flask) to clearly and thoroughly document our API, which can be exported to the popular OpenAPI 3.0.2 RESTful API description format.
The files and server data are stored in a MongoDB database, chosen for its flexible document model and general ease of use. We used the Object-Relational Mapping (ORM) MongoEngine to define our database schemas and interact with that database. MongoDB's GridFS system also allowed us to store annotation files (which are usually very light-weight) directly in the database, instead of going through the file system.
Development ::: Engineering choices ::: Front-end Choices
The front-end handles all of the interactions between the users (campaign manager or annotator) and the database. It is implemented as an App within their browser. We decided to base Seshat's front-end on the Angular Typescript framework. Despite its steep learning curve, it enforces strict design patterns that guarantee that others can make additions to our code without jeopardising the stability of the App. Angular Typescript has wide community support in the web development industry and is backed by Google and Microsoft. Moreover, the fact that it is based on TypeScript alleviates the numerous shortcomings of JavaScript, ensuring our implementation's readability and stability.
Development ::: UX/UI Choices
The interface and the features we selected for our implementation are the product of a year-long iterative process involving a team of annotators, two campaign managers and software engineers. We followed some guiding principles from the recent Material design language. Our goal while designing our interface (with the help of a professional designer) was to make it fully usable by non-technical people. We also put some extra care into the annotators' interface to give them a clear sense of what is to be done, how they should follow the annotation protocol, and how to correct potential errors in their annotations (see Figure FIGREF21). The goal was to reduce the number of actions annotators have to perform and enable them to focus only on the annotation content.
Using Seshat ::: Installation and Setup
Setting up a modern fully-fledged web service is an arduous task, usually requiring a seasoned system administrator, and sometimes comes with very precise system requirements. Luckily, the Docker virtualisation platform ensures that anyone with a recent-enough install of that software can set up Seshat in about one command (while still allowing some flexibility via a configuration file). For those willing to have a more tightly-controlled installation of Seshat on their system, we also fully specify the manual installation steps in our online documentation.
Importing an audio corpus that you are willing to annotate is as easy as dropping files into a default `corpora/` folder. It is possible to drop either a folder containing audio files (with no constraints on the folder's structure), or a CSV file listing audio filenames along with their durations (in case the files are sensitive and you are not willing to risk them being hosted on the server). It is then possible to review the automatically imported files via the web interface.
Using Seshat ::: Launching and monitoring an annotation campaign
The campaign manager can easily define and monitor annotation campaigns. As shown in Figure FIGREF33, the online form enables them to choose corpora, and to pre-define and pre-configure the annotation scheme (tiers and parsers). There are 2 types of tiers already implemented by default: one with no check at all, and one with pre-defined categories. For the latter, these categories are pre-defined when the campaign is created.
Only campaign managers can access and build new campaigns. If campaign managers have several campaigns, they can easily switch between them via the menu bar or get a full overview with the dashboard (see Figure FIGREF26). The campaign managers can visualise the progress of the assigned tasks at the campaign level or, more precisely, at the task level. They can retrieve all the intermediate files that have been created for each task. For instance, the campaign manager can examine qualitatively and quantitatively what the annotation differences are before the merge phase of the double annotator task.
Using Seshat ::: Scripting API
For those willing to interact with Seshat using code, it is possible to interact with Seshat using either its RESTful API or its command-line interface (CLI). The API endpoints that can be called are all listed in a simple interface, and can be made from any programming language able to make HTTP requests. The CLI interface can be used via your terminal, and therefore can be interacted with using Bash scripts.
A typical usage of these features would be to assign annotation tasks from a large speech corpus (spoken by several speakers) to a large pool of annotators, all the while making sure each annotator has a similar number of tasks, with each speaker being evenly distributed among annotators as well. This would be tedious to do manually via the user interface, but easy to program in any scripting language.
Using Seshat ::: Annotation Parser Customisation
We aimed at a reasonable trade-off between simplicity and flexibility for the TextGrid annotations checking component. However, we understand (from our own experience in particular) that sometimes annotations can follow a very specific and complex standard (for instance, parsing SAMPA phonemes strings). To allow users to define their own annotation standards, we added the possibility for users to define an annotation parser, via a simple package-based extension system (taking inspiration from pyannote's extension system). Anyone willing to create a new annotation parser has to be able to program in Python and have a minimal understanding of its packaging system.
As presented in our example French SAMPA Parser (Algorithm ), implementing a custom annotation parser only requires overloading two methods from Seshat's BaseCustomParser class:
check-annotation: takes an annotation string as input and raises an error if and only if the annotation is deemed to be invalid. It doesn't return anything.
distance: takes two annotations as input and should return a float corresponding to the distance between these two annotations.
Using Seshat ::: Inter-rater agreement: the @!START@$\gamma $@!END@ measure
It is necessary to have a measure of confidence to obtain high-quality datasets and therefore to draw valid conclusions from annotations. Annotation tasks on audio and speech data usually have some specificities. The items to annotate have to be both segmented in time and categorised. The segments can be hierarchically defined or overlapping. In addition, the audio stream may require only sparse annotations (especially in-the-wild recordings, which contain a lot of non-speech segments). To evaluate speech annotations, the measure needs to take these characteristics into account. That is why we decided to re-implement and compute the $\gamma $ measure (see mathet2015unified for its design and the advantages of this measure over previous agreement measures).
First, the $\gamma $ software aligns (tier-wise) the annotations of the different annotators. To align the two sets of annotations, the $\gamma $ software measures the distance between all the individual units. The difference in position of two annotated units $u$ and $v$ is measured with the positional distance:
If the tiers are categorical, the distance for the content of the annotated units $u$ and $v$ is defined as:
This distance can be overridden by the custom parser, as mentioned above. These two distances are summed with equal weights to obtain the distance between every pair of annotated units from the 2 annotators. Then, it is possible to obtain the disorder $\delta (a)$ of a specific alignment $a$ by summing the distances of all the aligned units in $a$. All possible alignments $a$ are considered and the one that minimises the disorder $\delta (a)$ is kept.
To get the value of $\gamma $, the disorder is chance-corrected to obtain an expected disorder. It is obtained by re-sampling randomly the annotations of the annotators. This means that real annotations are drawn from the annotators, and one position in the audio is randomly chosen. The annotation is split at this random position and the two parts are permuted. It is then possible to obtain an approximation of the expected disorder $\delta _e$. The final agreement measure is defined as:
This $\gamma $ measure is automatically computed by the back-end server for the double-annotator tasks. The Campaign manager can retrieve these measures in Seshat by downloading a simple CSV file.
Use cases
We present two use cases on which Seshat was developed: clinical interviews, and daylong child-centered recordings.
Use cases ::: Clinical interviews
Seshat was initially developed to study the impact of Huntington's Disease BIBREF17 on speech and language production. One hundred and fifty-two interviews between a neuropsychologist and a patient with Huntington's Disease (HD) were recorded between June 2018 and November 2019. The campaign manager created a campaign with multiple tiers to annotate the turn-takings and the speech/non-speech boundaries of the utterances of the patient. For both tasks, the annotations did not need to cover the audio completely (the sparsity property mentioned above). For the Turn-taking annotations, there are 3 pre-defined tiers, each one with a single class ('Patient', 'Non-Patient', and 'Noise'), which results in possible overlap between these classes. For the Utterance annotations, there is only one pre-defined class ('Utterance').
To this date, a total of 67 files have been fully annotated with the help of Seshat by a cohort of 18 speech pathologist students (see Figure FIGREF33). Among these, 16 have been done by 2 different annotators independently with the Double-annotator task. The results are summarised in Table TABREF34.
Even though there are more categories for Turn-Takings than for Utterance (gut2004measuring reported that the more categories there are, the more difficult the speech annotation task becomes), the mean $\gamma $ for the Turn-Takings, $\gamma = 0.64$, is slightly higher than the one for Utterance, $\gamma = 0.61$. The range of values for the Turn-Takings is also smaller than for the Utterance. Indeed, the speech pathologists reported the difficulty of annotating the boundaries of utterances in spontaneous speech, with several ambiguous cases due to pauses. These results will help us to redefine the protocol and be more precise in the given instructions.
Use cases ::: In-the-wild child-centered recordings
The Seshat software is also currently used to annotate audio files in a study of day-long audio-recordings captured by two devices (LENA BIBREF18, and a BabyCloud baby-logger device) worn by young children growing up in remote Papua New Guinea. The project aims at establishing language input and outcomes in this seldom-studied population. To establish reliability levels, 20 1-min files were double-annotated by 2 speech pathology students. Among the tasks given to the annotators there was: (1) locating the portions of Speech (Speech activity), (2) locating the speech produced by an adult that is directed to a child or not (Adult-Directed Speech versus Child-Directed Speech). As in the previous example, the annotations do not need to cover the full audio file. The Speech Activity task has only 1 class ('Speech') and the Addressee task has 2 classes ('ADS', 'CDS').
These recordings have been done in naturalistic and noisy conditions; moreover, the annotators do not understand the language. Probably as a result of these challenges, agreement between annotators is lower than in the Clinical interviews use case. This information is nonetheless valuable to the researchers, as it can help them appropriately lower their confidence in the ensuing speech quantity estimates.
Conclusion and Future work
Seshat is a new tool for the management of audio annotation efforts. Seshat enables users to define their own campaigns of annotations. Based on this configuration, Seshat automatically enforces the format of the annotations returned by the annotators. Besides, we also add the capability to finely tailor the parsing of the annotations. Finally, Seshat provides automatic routines to compute inter-rater agreement measures that are specifically designed for audio annotations. Seshat lays some foundations for more advanced features, both for the interface and the annotation capabilities. In future work, we plan to implement automatic task assignment and the integration of a diarization processing step to reduce human effort. Another planned feature is to add the possibility for the campaign manager to design more complex annotation workflows such as, for instance, dependencies between tiers or more intermediate annotation steps.
Acknowledgements
This research was conducted thanks to Agence Nationale de la Recherche (ANR-17-CE28-0007 LangAge, ANR-16-DATA-0004 ACLEW, ANR-14-CE30-0003 MechELex, ANR-17-EURE-0017, ANR-10-IDEX-0001-02 PSL*) and grants from Facebook AI Research (Research Grant), Google (Faculty Research Award), and Microsoft Research (Azure Credits and Grant), and a J. S. McDonnell Foundation Understanding Human Cognition Scholar Award. | Unanswerable |
da55878d048e4dca3ca3cec192015317b0d630b1 | da55878d048e4dca3ca3cec192015317b0d630b1_0 | Q: Is this software available to the public?
Text: Introduction
Large corpora of speech, obtained in the laboratory and in naturalistic conditions, become easier to collect. This new trend broadens the scope of scientific questions on speech and language that can be answered. However, this poses an important challenge for the construction of reliable and usable annotations. Managing annotators and ensuring the quality of their annotations are highly demanding tasks for research endeavours and industrial projects BIBREF0. When organised manually, the manager of annotation campaigns usually faces three major problems: the mishandling of files (e.g., character-encoding problems, incorrect naming of files), the non-conformity of the annotations BIBREF1, and the inconsistency of the annotations BIBREF2.
In this paper, we introduce Seshat, a system for the automated management of annotation campaigns for audio/speech data which addresses these challenges. It is built on two components that communicate via a Restful API: a back-end (server) written in Flask and a front-end (client) in Angular Typescript. Seshat is easy to install for non-developers and easy to use for researchers and annotators while having some extension capabilities for developers.
In Section SECREF2, we describe the related work on annotation tools, which do not provide solutions to all the aforementioned challenges during corpus creation. In Section SECREF3, we give an overview of the different functionalities of the software. Then, in Section SECREF4, we explain the architecture of the software, as well as the UX/UI design and engineering choices that have been made to facilitate the usage of the platform. We describe how to use Seshat in Section SECREF5, and Section SECREF6 presents two specific use cases. Finally, we conclude and describe future plans for Seshat in Section SECREF7.
Related Work
Self-hosted annotation systems. There are many standalone solutions for the transcription of speech data that are already used by researchers: Transcriber BIBREF3, Wavesurfer BIBREF4, Praat BIBREF5, ELAN BIBREF6, XTrans BIBREF7. These systems allow the playback of sound data and the construction of different layers of annotations with various specifications, with some advanced capabilities (such as annotations with hierarchical or no relationship between layers, number of audio channels, video support). Yet, these solutions lack a management system: each researcher must track the files assigned to annotators and build a pipeline to parse (and eventually check) the output annotation files. Moreover, checking can only be done once the annotations have been submitted to the researchers. This task becomes quickly untraceable as the number of files and annotators grow. In addition, most of these transcription systems do not provide a way to evaluate consistency (intra- and inter-annotator agreement) that would be appropriate for speech data BIBREF8.
Web-based annotation systems. There are several web-based annotation systems for the annotation of audio data. Among them we find light-weight systems, like the VIA software BIBREF9 or Praat on the web BIBREF10, that allow users to build simple layers of annotations. However, they do not provide a proper management system for a pool of annotators, nor do they integrate annotation checking.
On the other side of the spectrum, there are more sophisticated systems with various capabilities. Camomille BIBREF11 and the EMU-SDMS system (which can also be used offline) BIBREF12 allow users to work with speech data and to distribute tasks to several annotators. But these systems require expertise in web hosting and web technologies to deploy and modify them.
Finally, WebAnno BIBREF13 and GATE Teamware BIBREF14 are the tools that most closely match our main contributions regarding quality control (conformity and consistency checking), annotators' management and flexibility. WebAnno includes consistency checking with the integration of different metrics BIBREF15. However, these tools have only been built for text data. The format and all the custom layers have been designed for Natural Language Processing tasks. Porting WebAnno to support speech data seemed a major engineering challenge. That is why it appeared necessary to develop a new and user-friendly tool addressed to the speech community.
Overview of Seshat
Seshat is a user-friendly web-based interface whose objective is to smoothly manage large campaigns of audio data annotation, see Figure FIGREF8. Below, we describe the several terms used in Seshat's workflow:
[font=, leftmargin=1cm, style=nextline]
A set of audio/speech files that a Campaign Manager wants to annotate. It is indicated either by a folder containing sound files, or by a CSV summarizing a set of files. We support the same formats as Praat so far: WAV, Flac and MP3.
An object that enables the Campaign Manager to assign Annotation Tasks to the Annotators. It references a Corpus, and allows the Manager to track the annotation's tasks progress and completion in real time. At its creation, a Textgrid Checking Scheme can also be defined for that campaign.
It is contained in an Annotation Campaign, references an audio file from the campaign's designated Audio Corpus, and is assigned to Annotators. It can either be a Single Annotator Task (assigned to one Annotator) or a Double Annotator Task (assigned to two annotators, who will annotate the assigned task in parallel).
A set of rules defining the TextGrid files' structure and content of the annotations. It is set at the beginning of the Annotation Campaign's creation, and is used to enforce that all TextGrids from the campaign contain the same amount of Tiers, with the same names. It can also enforce, for certain chosen tiers, a set of valid annotations.
Users with the rights to create Annotation Campaigns and Annotators user accounts, and assign Annotation Tasks to Annotators.
Users who are assigned a set of Annotation Tasks. Their job is to complete the annotation of the audio files with the Praat software.
If the TextGrid file they submit does not comply with their Annotation Task's TextGrid Checking Scheme, Seshat pinpoints their annotation errors with detailed messages. The annotator can then re-submit the concerned file to the platform based on this feedback.
Once they are connected to their instance of Seshat, campaign managers can access ongoing annotation campaigns or create new ones. Campaign managers are able to add annotators, assign annotation tasks and track progress. Annotators see a list of assigned tasks. The first step for them is to download the sound file with its corresponding auto-generated template TextGrid. In the current implementation, the annotation work has to be done locally with Praat. An upcoming version will make use of web tools like Praat on the web BIBREF10. Once the task is completed, the TextGrid file is uploaded to Seshat via the web interface. We used the TextGrid format because of the wide acceptance of the Praat software in the speech science community (e.g., language acquisition research, clinical linguistics, phonetics and phonology).
The Textgrid Checking Scheme that encompasses rules on the tier's naming, file structure, and the content of the annotations, is associated with a specific campaign and defined at the creation of the campaign. Seshat back-end will automatically check that the submitted TextGrid file conforms to the Annotation Campaign's Textgrid Checking Scheme.
Seshat allows the campaign manager to create two types of tasks: single annotator and double annotator. Regarding the first type, one audio file is attributed to one annotator. Once the annotation is completed, Seshat automatically checks the conformity of the annotation, and only declares a task completed if the conformity check passes. Regarding the second type, one audio file is attributed to two annotators. The two annotators annotate the same file independently, then the two versions are merged and the annotators are guided through a compare-and-review process to agree on one final version. We summarise in Figure FIGREF7 the different steps of the double-annotator task. At each step during merging, the two annotators are provided feedback that focuses on where the disagreements are. This process also results in the computation of an inter-annotator agreement for each file. The double annotator task can be used to train new annotators alongside experts.
Annotating speech data is a joint task of segmentation and categorisation of audio events. That is why we adopted the $\gamma $ measure BIBREF8 to evaluate the inter- or intra-annotator agreement in each individual tier. Campaign managers can customise the distance used by $\gamma $ by inserting a custom distance alongside their own parser (see the short code snippet for a parser of French phonetics with the SAMPA alphabet in Algorithm ).
Development ::: Engineering choices
Our utmost priority when building Seshat was to make it as easy as possible for others to deploy, use, administer and eventually contribute to. To do so, we chose the most common frameworks that are free and open-source, all of which are detailed in the following sections. Additionally, to match the current trend in web development, we decided to use the so-called "web-app" architecture for Seshat, i.e., we separated the application into two distinct entities: a front-end, running on the browser, and a back-end, serving data to the front-end and interacting with the database.
Development ::: Engineering choices ::: Back-end Choices
The back-end system runs on a server. It holds and updates the campaign databases and runs the annotation checking and inter-rater agreement evaluation services. We chose Python, given its widespread use in the scientific community and its wide array of speech and linguistic packages. Moreover, its usage on the back-end side will allow the future integration of powerful speech processing tools like Pyannote BIBREF16 to semi-automate annotations. We thus went for Python 3.6 for Seshat's server back-end. We used the Flask-Smorest extension (which is based on Flask) to clearly and thoroughly document our API, which can be exported to the popular OpenAPI 3.0.2 RESTful API description format.
The files and server data are stored in a MongoDB database, chosen for its flexible document model and general ease of use. We used the MongoEngine Object-Relational Mapping (ORM) to define our database schemas and interact with that database. MongoDB's GridFS system also allowed us to store annotation files (which are usually very light-weight) directly in the database, instead of going through the file system.
Development ::: Engineering choices ::: Front-end Choices
The front-end handles all of the interactions between the users (campaign managers or annotators) and the database. It is implemented as an app running in their browser. We decided to base Seshat's front-end on the Angular TypeScript framework. Despite its steep learning curve, it enforces strict design patterns that guarantee that others can make additions to our code without jeopardising the stability of the app. Angular TypeScript has wide community support in the web development industry and is backed by Google and Microsoft. Moreover, the fact that it is based on TypeScript alleviates the numerous shortcomings of JavaScript, ensuring our implementation's readability and stability.
Development ::: UX/UI Choices
The interface and the features we selected for our implementation are the product of a year-long iterative process involving a team of annotators, two campaign managers and software engineers. We followed some guiding principles from the recent Material design language. Our goal while designing our interface (with the help of a professional designer) was to make it fully usable by non-technical people. We also put some extra care into the annotators' interface to give them a clear sense of what is to be done, how they should follow the annotation protocol, and how to correct potential errors in their annotations (see Figure FIGREF21). The goal was to reduce the number of actions annotators have to perform and to let them focus only on the content of the annotations.
Using Seshat ::: Installation and Setup
Setting up a modern fully-fledged web service is an arduous task, usually requiring a seasoned system administrator as well as sometimes having very precise system requirements. Luckily, the Docker virtualisation platform ensures that anyone with a recent-enough install of that software can set up Seshat in about one command (while still allowing some flexibility via a configuration file). For those willing to have a more tightly-controlled installation of Seshat on their system, we also fully specify the manual installation steps in our online documentation.
Importing an audio corpus that you are willing to annotate is as easy as dropping files into a default `corpora/` folder. It is possible to either drop a folder containing audio files (with no constraints on the folder's structure), or a CSV file listing audio filenames along with their durations (in case the files are sensitive and you're not willing to risk them being hosted on the server). It is then possible to review the automatically imported files via the web interface.
Using Seshat ::: Launching and monitoring an annotation campaign
The campaign manager can easily define and monitor annotation campaigns. As shown in Figure FIGREF33, the online form enables choosing corpora and pre-defining and pre-configuring the annotation scheme (tiers and parsers). There are 2 types of tiers already implemented by default: one with no check at all, and one with pre-defined categories. For the latter, these categories are pre-defined when the campaign is created.
Only campaign managers can access and build new campaigns. If campaign managers have several campaigns, they can easily switch between them via the menu bar or get a full overview with the dashboard (see Figure FIGREF26). The campaign managers can visualise the progress of the assigned tasks at the campaign level or, more precisely, at the task level. They can retrieve all the intermediate files that have been created for each task. For instance, the campaign manager can examine qualitatively and quantitatively the annotation differences before the merge phases of the double-annotator task.
Using Seshat ::: Scripting API
For those willing to interact with Seshat using code, it is possible to do so using either its RESTful API or its command-line interface (CLI). The API endpoints that can be called are all listed in a simple interface, and calls can be made from any programming language able to make HTTP requests. The CLI can be used from a terminal, and can therefore be scripted in Bash.
A typical usage of these features would be to assign annotation tasks from a large speech corpus (spoken by several speakers) to a large pool of annotators, all the while making sure each annotator has a similar number of tasks, with each speaker being evenly distributed among annotators as well. This would be tedious to do manually via the user interface, but easy to program in any scripting language.
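For instance, a minimal assignment script along those lines could look like the following sketch. The base URL, authentication header, endpoint paths and payload fields are illustrative assumptions rather than Seshat's documented API; they should be checked against the endpoint listing of your own instance.

```python
# Hypothetical sketch of a round-robin task-assignment script using Seshat's
# RESTful API. Endpoint paths, field names and the auth header are assumed
# placeholders, not the documented interface.
import requests

API = "http://localhost:8080/api"                # assumed base URL of the instance
HEADERS = {"Authorization": "Bearer <token>"}    # assumed authentication scheme

annotators = requests.get(f"{API}/annotators", headers=HEADERS).json()
files = requests.get(f"{API}/campaigns/my-campaign/files", headers=HEADERS).json()

# Round-robin assignment so that every annotator gets a similar number of tasks.
for i, audio_file in enumerate(files):
    annotator = annotators[i % len(annotators)]
    requests.post(
        f"{API}/tasks",
        headers=HEADERS,
        json={"campaign": "my-campaign",
              "annotator": annotator["username"],
              "audio_file": audio_file["name"],
              "type": "single"},
    )
```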
Using Seshat ::: Annotation Parser Customisation
We aimed at a reasonable trade-off between simplicity and flexibility for the TextGrid annotations checking component. However, we understand (from our own experience in particular) that sometimes annotations can follow a very specific and complex standard (for instance, parsing SAMPA phonemes strings). To allow users to define their own annotation standards, we added the possibility for users to define an annotation parser, via a simple package-based extension system (taking inspiration from pyannote's extension system). Anyone willing to create a new annotation parser has to be able to program in Python and have a minimal understanding of its packaging system.
As presented in our example French SAMPA parser (Algorithm ), implementing a custom annotation parser only requires overloading two methods from Seshat's BaseCustomParser class (a minimal sketch follows the method descriptions below):
check-annotation: takes an annotation string as input and raises an error if and only if the annotation is deemed to be invalid. It doesn't return anything.
distance: takes two annotations as input and should return a float corresponding to the distance between these two annotations.
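A minimal sketch of such a parser is given below. The import path and the exact Python method names (written here as check_annotation and distance) are assumptions based on the description above, not the package's documented interface.

```python
# Minimal sketch of a categorical custom parser for a two-label phonetic scheme.
# The import path and method names are assumed from the description above.
from seshat.parsers import BaseCustomParser  # assumed import path


class VowelConsonantParser(BaseCustomParser):
    """Accepts only 'V' (vowel) or 'C' (consonant) annotations."""

    VALID = {"V", "C"}

    def check_annotation(self, annotation: str) -> None:
        # Raise if and only if the annotation is invalid; return nothing.
        if annotation.strip() not in self.VALID:
            raise ValueError(f"Invalid label: {annotation!r} (expected 'V' or 'C')")

    def distance(self, annot_a: str, annot_b: str) -> float:
        # Distance fed to the gamma measure: 0 for identical labels, 1 otherwise.
        return 0.0 if annot_a.strip() == annot_b.strip() else 1.0
```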
Using Seshat ::: Inter-rater agreement: the @!START@$\gamma $@!END@ measure
It is necessary to have a measure of confidence to obtain high-quality datasets and therefore to draw valid conclusions from annotations. Annotation tasks on audio and speech data usually have some specificities. The items to annotate have to be both segmented in time and categorised. The segments can be hierarchically defined or overlapping. In addition, the audio stream may require only sparse annotations (especially in-the-wild recordings, which contain a lot of non-speech segments). To evaluate speech annotations, the measure needs to take these characteristics into account. That is why we decided to re-implement and compute the $\gamma $ measure (see mathet2015unified for its design and the advantages of this measure over previous agreement measures).
First, the $\gamma $ software aligns (tier-wise) the annotations of the different annotators. To align the two sets of annotations, the $\gamma $ measure computes the distance between all the individual units. The difference in position of two annotated units $u$ and $v$ is measured with the positional distance:
If the tiers are categorical, the distance for the content of the annotated units $u$ and $v$ is defined as:
This distance can be overridden by the custom parser as mentioned above. These two distances are summed with equal weights to obtain the distance between every pair of annotated units from the 2 annotators. Then, it is possible to obtain the disorder $\delta (a)$ of a specific alignment $a$ by summing the distance of all the aligned units in $a$. All possible alignments $a$ are considered and the one that minimises the disorder $\delta (a)$ is kept.
To get the value of $\gamma $, the disorder is chance-corrected to obtain an expected disorder. It is obtained by re-sampling randomly the annotations of the annotators. This means that real annotations are drawn from the annotators, and one position in the audio is randomly chosen. The annotation is split at this random position and the two parts are permuted. It is then possible to obtain an approximation of the expected disorder $\delta _e$. The final agreement measure is defined as:
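A formulation consistent with this chance correction, following mathet2015unified, is $\gamma = 1 - \delta (a^{*}) / \delta _e$, where $\delta (a^{*})$ is the observed disorder of the best alignment and $\delta _e$ the expected disorder; values close to 1 indicate strong agreement while values near 0 indicate chance-level agreement.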
This $\gamma $ measure is automatically computed by the back-end server for the double-annotator tasks. The Campaign manager can retrieve these measures in Seshat by downloading a simple CSV file.
Use cases
We present two use cases for which Seshat was developed: clinical interviews, and daylong child-centered recordings.
Use cases ::: Clinical interviews
Seshat was initially developed to study the impact of Huntington's Disease BIBREF17 on speech and language production. One hundred and fifty-two interviews between a neuropsychologist and a patient with Huntington's Disease (HD) were recorded between June 2018 and November 2019. The campaign manager created a campaign with multiple tiers to annotate the turn takings and the speech/non-speech boundaries of the utterances of the patient. For both tasks, the annotations did not need to cover the audio completely (the sparsity property mentioned above). For the turn-taking annotations, there are 3 pre-defined tiers, each one with a single class ('Patient', 'Non-Patient', and 'Noise'), which results in possible overlap between these classes. For the utterance annotations, there is only one pre-defined class ('Utterance').
To this date, a total of 67 files have been fully annotated with the help of Seshat by a cohort of 18 speech pathologist students (see Figure FIGREF33). Among these, 16 have been done by 2 different annotators independently with the Double-annotator task. The results are summarised in Table TABREF34.
Even though there are more categories for Turn-Takings than for Utterance (gut2004measuring reported that the more categories there are, the more difficult the annotation task becomes in speech annotation), the mean $\gamma $ for Turn-Takings ($\gamma = 0.64$) is slightly higher than the one for Utterance ($\gamma = 0.61$), and the range of values for Turn-Takings is smaller than for Utterance. Indeed, the speech pathologists reported difficulty in annotating the boundaries of utterances in spontaneous speech, with several ambiguous cases due to pauses. These results will help us to redefine the protocol and be more precise in the given instructions.
Use cases ::: In-the-wild child-centered recordings
The Seshat software is also currently used to annotate audio files in a study of day-long audio recordings captured by two devices (LENA BIBREF18, and a BabyCloud baby-logger device) worn by young children growing up in remote Papua New Guinea. The project aims at establishing language input and outcomes in this seldom-studied population. To establish reliability levels, 20 1-min files were double-annotated by 2 speech pathology students. The tasks given to the annotators were: (1) locating the portions of speech (speech activity), (2) locating the speech produced by an adult that is directed to a child or not (Adult-Directed Speech versus Child-Directed Speech). As in the previous example, the annotations do not need to cover the full audio file. The Speech Activity task has only 1 class ('Speech') and the Addressee task has 2 classes ('ADS', 'CDS').
These recordings have been done in naturalistic and noisy conditions; moreover, the annotators do not understand the language. Probably as a result of these challenges, agreement between annotators is lower than in the Clinical interviews use case. This information is nonetheless valuable to the researchers, as it can help them appropriately lower their confidence in the ensuing speech quantity estimates.
Conclusion and Future work
Seshat is a new tool for the management of audio annotation efforts. Seshat enables users to define their own campaigns of annotations. Based on this configuration, Seshat automatically enforces the format of the annotations returned by the annotators. Besides, we also add the capability to finely tailor the parsing of the annotations. Finally, Seshat provides automatic routines to compute inter-rater agreements that are specifically designed for audio annotations. Seshat lays some foundations for more advanced features, either for the interface or the annotation capabilities. In future work, we plan to implement automatic task assignment and the integration of a diarization processing step to reduce human effort. Another planned feature is to add the possibility for the campaign manager to design more complex annotation workflows such as, for instance, dependencies between tiers or more intermediate steps of annotation.
Acknowledgements
This research was conducted thanks to Agence Nationale de la Recherche (ANR-17-CE28-0007 LangAge, ANR-16-DATA-0004 ACLEW, ANR-14-CE30-0003 MechELex, ANR-17-EURE-0017, ANR-10-IDEX-0001-02 PSL*) and grants from Facebook AI Research (Research Grant), Google (Faculty Research Award), and Microsoft Research (Azure Credits and Grant), and a J. S. McDonnell Foundation Understanding Human Cognition Scholar Award. | Yes |
3a1bd3ec1a7ce9514da0cb2dfcaa454ba8c0ed14 | 3a1bd3ec1a7ce9514da0cb2dfcaa454ba8c0ed14_0 | Q: What were the five English subtasks?
Text: Introduction
Determining the sentiment polarity of tweets has become a landmark homework exercise in natural language processing (NLP) and data science classes. This is perhaps because the task is easy to understand and it is also easy to get good results with very simple methods (e.g. positive - negative words counting). The practical applications of this task are wide, from monitoring popular events (e.g. Presidential debates, Oscars, etc.) to extracting trading signals by monitoring tweets about public companies. These applications often benefit greatly from the best possible accuracy, which is why the SemEval-2017 Twitter competition promotes research in this area. The competition is divided into five subtasks which involve standard classification, ordinal classification and distributional estimation. For a more detailed description see BIBREF0 .
In the last few years, deep learning techniques have significantly out-performed traditional methods in several NLP tasks BIBREF1 , BIBREF2 , and sentiment analysis is no exception to this trend BIBREF3 . In fact, previous iterations of the SemEval Twitter sentiment analysis competition have already established their power over other approaches BIBREF4 , BIBREF5 , BIBREF6 . Two of the most popular deep learning techniques for sentiment analysis are CNNs and LSTMs. Consequently, in an effort to build a state-of-the-art Twitter sentiment classifier, we explore both models and build a system which combines both.
This paper is organized as follows. In sec. SECREF2 we describe the architecture of the CNN and the LSTM used in our system. In sec. SECREF3 we expand on the three training phases used in our system. In sec. SECREF4 we discuss the various tricks that were used to fine-tune the system for each individual subtask. Finally, in sec. SECREF5 we present the performance of the system and in sec. SECREF6 we outline our main conclusions.
CNN
Let us now describe the architecture of the CNN we worked with. Its architecture is almost identical to the CNN of BIBREF8 . A smaller version of our model is illustrated on Fig. FIGREF2 . The inputs of the network are the tweets, which are tokenized into words. Each word is mapped to a word vector representation, i.e. a word embedding, such that an entire tweet can be mapped to a matrix of size INLINEFORM0 , where INLINEFORM1 is the number of words in the tweet and INLINEFORM2 is the dimension of the embedding space (we chose INLINEFORM3 ). We follow the zero-padding strategy of BIBREF8 such that all tweets have the same matrix dimension INLINEFORM4 , where we chose INLINEFORM5 . We then apply several convolution operations of various sizes to this matrix. A single convolution involves a filtering matrix INLINEFORM6 where INLINEFORM7 is the size of the convolution, meaning the number of words it spans. The convolution operation is defined as DISPLAYFORM0
where INLINEFORM0 is a bias term and f(x) is a non-linear function, which we chose to be the relu function. The output INLINEFORM1 is therefore a concatenation of the convolution operator over all possible windows of words in the tweet. Note that because of the zero-padding strategy we use, we are effectively applying wide convolutions BIBREF9 . We can use multiple filtering matrices to learn different features, and additionally we can use multiple convolution sizes to focus on smaller or larger regions of the tweets. In practice, we used three filter sizes (either INLINEFORM2 , INLINEFORM3 or INLINEFORM4 depending on the model) and we used a total of 200 filtering matrices for each filter size.
We then apply a max-pooling operation to each convolution INLINEFORM0 . The max-pooling operation extracts the most important feature for each convolution, independently of where in the tweet this feature is located. In other words, the CNN's structure effectively extracts the most important n-grams in the embedding space, which is why we believe these systems are good at sentence classification. The max-pooling operation also allows us to combine all the INLINEFORM1 of each filter into one vector INLINEFORM2 where m is the total number of filters (in our case INLINEFORM3 ). This vector then goes through a small fully connected hidden layer of size 30, which is then in turn passed through a softmax layer to give the final classification probabilities. To reduce over-fitting, we add a dropout layer BIBREF10 after the max-pooling layer and after the fully connected hidden layer, with a dropout probability of 50 INLINEFORM4 during training.
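An illustrative sketch of this architecture, written with tf.keras rather than the original TensorFlow code, is shown below. The vocabulary size, sequence length, embedding dimension, filter-size set and number of classes are placeholder assumptions, and the embedding layer would in practice be initialised with the pre-trained vectors described in sec. SECREF3 .

```python
# Sketch of the CNN classifier: parallel convolutions with 200 filters each,
# max-over-time pooling, a 30-unit hidden layer and 50% dropout.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, SEQ_LEN, EMB_DIM = 80_000, 80, 200   # assumed placeholder values

inputs = layers.Input(shape=(SEQ_LEN,), dtype="int32")
emb = layers.Embedding(VOCAB_SIZE, EMB_DIM)(inputs)   # load pre-trained vectors in practice

pooled = []
for width in (1, 2, 3):                                # one of the filter-size sets mentioned above
    conv = layers.Conv1D(200, width, padding="same", activation="relu")(emb)
    pooled.append(layers.GlobalMaxPooling1D()(conv))

x = layers.Concatenate()(pooled)
x = layers.Dropout(0.5)(x)                             # dropout after max-pooling
x = layers.Dense(30, activation="relu")(x)             # fully connected hidden layer of size 30
x = layers.Dropout(0.5)(x)                             # dropout after the hidden layer
outputs = layers.Dense(3, activation="softmax")(x)     # e.g. 3 classes for subtask A

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
```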
LSTM
Let us now describe the architecture of the LSTM system we worked with. A smaller version of our model is illustrated on Fig. FIGREF3 . Its main building blocks are two LSTM units. LSTMs are part of the recurrent neural networks (RNN) family, which are neural networks that are constructed to deal with sequential data by sharing their internal weights across the sequence. For each element in the sequence, that is for each word in the tweet, the RNN uses the current word embedding and its previous hidden state to compute the next hidden state. In its simplest version, the hidden state INLINEFORM0 (where m is the dimension of the RNN, which we pick to be INLINEFORM1 ) at time INLINEFORM2 is computed by DISPLAYFORM0
where INLINEFORM0 is the current word embedding, INLINEFORM1 and INLINEFORM2 are weight matrices, INLINEFORM3 is a bias term and INLINEFORM4 is a non-linear function, usually chosen to be INLINEFORM5 . The initial hidden state is chosen to be a vector of zeros. Unfortunately this simple RNN suffers from the exploding and vanishing gradient problem during the backpropagation training stage BIBREF11 . LSTMs solve this problem by having a more complex internal structure which allows LSTMs to remember information for either long or short terms BIBREF12 . The hidden state of an LSTM unit is computed by BIBREF13 DISPLAYFORM0
where INLINEFORM0 is called the input gate, INLINEFORM1 is the forget gate, INLINEFORM2 is the cell state, INLINEFORM3 is the regular hidden state, INLINEFORM4 is the sigmoid function, and INLINEFORM5 is the Hadamard product.
One drawback of the LSTM is that it does not sufficiently take into account the information that comes after each word, because the sentence is read in only one direction: forward. To solve this problem, we use what is known as a bidirectional LSTM, which is two LSTMs whose outputs are stacked together. One LSTM reads the sentence forward, and the other LSTM reads it backward. We concatenate the hidden states of each LSTM after they have processed their respective final word. This gives a vector of dimension INLINEFORM0 , which is fed to a fully connected hidden layer of size 30, and then passed through a softmax layer to give the final classification probabilities. Here again we use dropout to reduce over-fitting; we add a dropout layer before and after the LSTMs, and after the fully connected hidden layer, with a dropout probability of 50 INLINEFORM1 during training.
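A comparable sketch of the bidirectional LSTM classifier, again in tf.keras with placeholder dimensions, could look as follows; it is not the authors' original implementation.

```python
# Sketch of the bidirectional LSTM classifier with dropout before and after
# the recurrent layer and after the 30-unit hidden layer.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, SEQ_LEN, EMB_DIM, RNN_DIM = 80_000, 80, 200, 150   # assumed placeholder values

model = tf.keras.Sequential([
    layers.Input(shape=(SEQ_LEN,), dtype="int32"),
    layers.Embedding(VOCAB_SIZE, EMB_DIM),
    layers.Dropout(0.5),                            # dropout before the LSTMs
    layers.Bidirectional(layers.LSTM(RNN_DIM)),     # forward and backward states concatenated
    layers.Dropout(0.5),                            # dropout after the LSTMs
    layers.Dense(30, activation="relu"),
    layers.Dropout(0.5),                            # dropout after the hidden layer
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```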
Training
To train those models we had access to 49,693 human labeled tweets for subtask A, 30,849 tweets for subtasks (C, E) and 18,948 tweets for subtasks (B, D). In addition to this human labeled data, we collected 100 million unique unlabeled English tweets using the Twitter streaming API. From this unlabeled dataset, we extracted a distant dataset of 5 million positive tweets and 5 million negative tweets. To extract this distant dataset we used the strategy of BIBREF14 , that is we simply associate positive tweets with the presence of positive emoticons (e.g. “:)”) and vice versa for negative tweets. Those three datasets (unlabeled, distant and labeled) were used separately in the three training stages which we now present. Note that our training strategy is very similar to the one used in BIBREF5 , BIBREF6 .
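The emoticon-based distant labelling can be sketched as follows; the emoticon lists are illustrative approximations, not the exact ones used to build the 10 million distant tweets.

```python
# Sketch of emoticon-based distant supervision: tweets with only positive
# emoticons get a noisy "positive" label, tweets with only negative emoticons
# get "negative", everything else is discarded.
POS_EMOTICONS = {":)", ":-)", ":D", "=)"}
NEG_EMOTICONS = {":(", ":-(", ":'("}

def distant_label(tweet: str):
    has_pos = any(e in tweet for e in POS_EMOTICONS)
    has_neg = any(e in tweet for e in NEG_EMOTICONS)
    if has_pos and not has_neg:
        return "positive"
    if has_neg and not has_pos:
        return "negative"
    return None  # ambiguous or no emoticon: not used for distant training
```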
Pre-processing
Before feeding the tweets to any training stage, they are pre-processed using the following procedure:
URLs are replaced by the <url> token.
Several emoticons are replaced by the tokens <smile>, <sadface>, <lolface> or <neutralface>.
Any letter repeated more than 2 times in a row is replaced by 2 repetitions of that letter (for example, “sooooo” is replaced by “soo”).
All tweets are lowercased.
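A possible implementation of this procedure with regular expressions is sketched below; the emoticon patterns are simplified approximations of the ones actually used.

```python
# Sketch of the tweet pre-processing steps listed above.
import re

def preprocess(tweet: str) -> str:
    tweet = re.sub(r"https?://\S+|www\.\S+", "<url>", tweet)   # URLs
    tweet = re.sub(r"[:=]-?[)D]+", "<smile>", tweet)           # happy emoticons
    tweet = re.sub(r"[:=]-?\(+", "<sadface>", tweet)           # sad emoticons
    tweet = re.sub(r"[:=]-?[pP]+", "<lolface>", tweet)         # playful emoticons
    tweet = re.sub(r"[:=]-?[|/\\]", "<neutralface>", tweet)    # neutral emoticons
    tweet = re.sub(r"(.)\1{2,}", r"\1\1", tweet)               # "sooooo" -> "soo"
    return tweet.lower()                                       # lowercase everything
```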
Unsupervised training
We start by using the 100 million unlabeled tweets to pre-train the word embeddings which will later be used in the CNN and LSTM. To do so, we experimented with 3 unsupervised learning algorithms, Google's Word2vec BIBREF15 , BIBREF16 , Facebook's FastText BIBREF17 and Stanford's GloVe BIBREF18 . Word2vec learns word vector representations by attempting to predict context words around an input word. FastText is very similar to Word2vec but it also uses subword information in the prediction model. GloVe on the other hand is a model based on global word-word co-occurrence statistics. For all three algorithms we used the code provided by the authors with their default settings.
Distant training
The embeddings learned in the unsupervised phase contain very little information about the sentiment polarity of the words since the context for a positive word (ex. “good”) tends to be very similar to the context of a negative word (ex. “bad”). To add polarity information to the embeddings, we follow the unsupervised training by a fine tuning of the embeddings via a distant training phase. To do so, we use the CNN described in sec. SECREF2 and initialize the embeddings with the ones learned in the unsupervised phase. We then use the distant dataset to train the CNN to classify noisy positive tweets vs. noisy negative tweets. The first epoch of the training is done with the embeddings frozen in order to minimize large changes in the embeddings. We then unfreeze the embeddings and train for 6 more epochs. After this training stage, words with very different sentiment polarity (ex. “good” vs. “bad”) are far apart in the embedding space.
Supervised training
The final training stage uses the human labeled data provided by SemEval-2017. We initialize the embeddings in the CNN and LSTM models with the fine tuned embeddings of the distant training phase, and freeze them for the first INLINEFORM0 epochs. We then train for another INLINEFORM1 epochs with unfrozen embeddings and a learning rate reduced by a factor of 10. We pick the cross-entropy as the loss function, and we weight it by the inverse frequency of the true classes to counteract the imbalanced dataset. The loss is minimized using the Adam optimizer BIBREF19 with initial learning rate of 0.001. The models were implemented in TensorFlow and experiments were run on a GeForce GTX Titan X GPU.
To reduce variance and boost accuracy, we ensemble 10 CNNs and 10 LSTMs together through soft voting. The models ensembled have different random weight initializations, different number of epochs (from 4 to 20 in total), different set of filter sizes (either INLINEFORM0 , INLINEFORM1 or INLINEFORM2 ) and different embedding pre-training algorithms (either Word2vec or FastText).
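Soft voting here simply averages the predicted class distributions of the individual models, as in the following sketch.

```python
# Soft-voting sketch: average the class probability distributions produced by
# the individual CNN and LSTM models, then take the most probable class.
import numpy as np

def soft_vote(probability_outputs):
    """probability_outputs: list of arrays of shape (n_tweets, n_classes)."""
    avg = np.mean(np.stack(probability_outputs, axis=0), axis=0)
    return avg.argmax(axis=1)   # predicted class per tweet

# Typical usage (ensemble_models holds the 10 CNNs and 10 LSTMs):
# predictions = soft_vote([model.predict(x_test) for model in ensemble_models])
```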
Subtask specific tricks
The models described in sec. SECREF2 and the training method described in sec. SECREF3 are used in the same way for all five subtasks, with a few special exceptions which we now address. Clearly, the output dimension differs depending on the subtask, for subtask A the output dimension is 3, while for B and D it is 2 and for subtask C and E it is 5. Furthermore, for quantification subtasks (D and E), we use the probability average approach of BIBREF20 to convert the output probabilities into sentiment distributions.
Finally for subtasks that have a topic associated with the tweet (B, C, D and E), we add two special steps which we noticed improves the accuracy during the cross-validation phase. First, if any of the words in the topic is not explicitly mentioned in the tweet, we add those missing words at the end of the tweet in the pre-processing phase. Second, we concatenate to the regular word embeddings another embedding space of dimension 5 which has only 2 possible vectors. One of these 2 vectors indicates that the current word is part of the topic, while the other vector indicates that the current word is not part of the topic.
Results
Let us now discuss the results obtained from this system. In order to assess the performance of each model and their variations, we first show their scores on the historical Twitter test set of 2013, 2014, 2015 and 2016 without using any of those sets in the training dataset, just like it was required for the 2016 edition of this competition. For brevity, we only focus on task A since it tends to be the most popular one. Moreover, in order to be consistent with historical editions of this competition, we use the average INLINEFORM0 score of the positive and negative class as the metric of interest. This is different from the macro-average recall which is used in the 2017 edition, but this should not affect the conclusions of this analysis significantly since we found that the two metrics were highly correlated. The results are summarized in Table TABREF16 . This table is not meant to be an exhaustive list of all the experiments performed, but it does illustrate the relative performances of the most important variations on the models explored here.
We can see from Table TABREF16 that the GloVe unsupervised algorithm gives a lower score than both FastText and Word2vec. It is for this reason that we did not include the GloVe variation in the ensemble model. We also note that the absence of class weights or the absence of a distant training stage lowers the scores significantly, which demonstrates that these are sound additions. Except for these three variations, the other models have similar scores. However, the ensemble model effectively outperforms all the other individual models. Indeed, while these individual models give similar scores, their outputs are sufficiently uncorrelated such that ensembling them gives the score a small boost. To get a sense of how correlated with each other these models are, we can compute the Pearson correlation coefficient between the output probabilities of any pairs of models, see Table TABREF17 . From this table we can see that the most uncorrelated models come from different supervised learning models (CNN vs. LSTM) and from different unsupervised learning algorithms (Word2vec vs. FastText).
For the predictions on the 2017 test set, the system is retrained on all available training data, which includes previous years testing data. The results of our system on the 2017 test set are shown on Table TABREF18 . Our system achieved the best scores on all of the five English subtasks. For subtask A, there is actually a tie between our submission and another team (DataStories), but note that with respect to the other metrics (accuracy and INLINEFORM0 score) our submission ranks higher.
Conclusion
In this paper we presented the system we used to compete in the SemEval-2017 Twitter sentiment analysis competition. Our goal was to experiment with deep learning models along with modern training strategies in an effort to build the best possible sentiment classifier for tweets. The final model we used was an ensemble of 10 CNNs and 10 LSTMs with different hyper-parameters and different pre-training strategies. We participated in all of the English subtasks, and obtained first rank in all of them.
For future work, it would be interesting to explore systems that combine a CNN and an LSTM more organically than through an ensemble model, perhaps a model similar to the one of BIBREF21 . It would also be interesting to analyze the dependence of the amount of unlabeled and distant data on the performance of the models.
Acknowledgments
We thank Karl Stratos, Anju Kambadur, Liang Zhou, Alexander M. Rush, David Rosenberg and Biye Li for their help on this project. | five subtasks which involve standard classification, ordinal classification and distributional estimation. For a more detailed description see BIBREF0 |
39492338e27cb90bf1763e4337c2f697cf5082ba | 39492338e27cb90bf1763e4337c2f697cf5082ba_0 | Q: How many CNNs and LSTMs were ensembled? | 10 CNNs and 10 LSTMs
a7adb63db5066d39fdf2882d8a7ffefbb6b622f0 | a7adb63db5066d39fdf2882d8a7ffefbb6b622f0_0 | Q: what was the baseline?
Text: Introduction
By only reading a single text review of a movie it can be difficult to say what the genre of that movie is, but by using text mining techniques on thousands of movie reviews is it possible to predict the genre?
This paper explores the possibility of classifying genres of a movie based only on a text review of that movie. This is an interesting problem because to the naked eye it may seem difficult to predict the genre by only looking at a text review. One example of a review can be seen in the following example:
I liked the film. Some of the action scenes were very interesting, tense and well done. I especially liked the opening scene which had a semi truck in it. A very tense action scene that seemed well done. Some of the transitional scenes were filmed in interesting ways such as time lapse photography, unusual colors, or interesting angles. Also the film is funny is several parts. I also liked how the evil guy was portrayed too. I'd give the film an 8 out of 10.
http://www.imdb.com/title/tt0211938/reviews
From the quoted review, one could probably predict the movie falls in the action genre; however, it would be difficult to predict all three of the genres (action, comedy, crime) that International Movie Database (IMDB) lists. With the use of text mining techniques it is feasible to predict multiple genres based on a review.
There are numerous previous works on classifying the sentiment of reviews, e.g., maas-EtAl:2011:ACL-HLT2011 by BIBREF0 . There are fewer scientific papers available on specifically classifying movie genres based on reviews; therefore, inspiration for this paper comes from papers describing classification of text for other or general contexts. One of those papers is DBLP:journals/corr/cmp-lg-9707002 where BIBREF1 describe how to use a multilayer perceptron (MLP) for genre classification.
All data, in the form of reviews and genres, used in this paper originates from IMDb.
Theory
In this section all relevant theory and methodology is described. Table TABREF1 lists basic terminology and a short description of their meaning.
Preprocessing
Data preprocessing is important when working with text data because it can reduce the number of features and it formats the data into the desired form BIBREF2 .
Removing stop words is a common type of filtering in text mining. Stop words are words that usually contain little or no information by itself and therefore it is better to remove them. Generally words that occur often can be considered stop words such as the, a and it. BIBREF2
Lemmatization is the process of converting verbs into their infinitive tense form and nouns into their singular form. The reason for doing this is to reduce words into their basic forms and thus simplify the data. For example am, are and is are converted to be. BIBREF2
A way of representing a large corpus is to calculate the Term Frequency Inverse Document Frequency (tf-idf) of the corpus and then feed the models the tf-idf. As described in ramos2003using by BIBREF3 tf-idf is both efficient and simple for matching a query of words with a document in a corpus. Tf-idf is calculated by multiplying the Term Frequency (tf) with the Inverse Document Frequency (idf) , which is formulated as DISPLAYFORM0
where INLINEFORM0 is a document in corpus INLINEFORM1 and INLINEFORM2 is a term. INLINEFORM3 is defined as DISPLAYFORM0
and INLINEFORM0 is defined as DISPLAYFORM0
where INLINEFORM0 is the number of times INLINEFORM1 occurs in INLINEFORM2 and INLINEFORM3 is the total number of documents in the corpus.
Models
MLP is a class of feedforward neural network built up by a layered acyclic graph. An MLP consists of at least three layers and non-linear activations. The first layer is called input layer, the second layer is called hidden layer and the third layer is called output layer. The three layers are fully connected which means that every node in the hidden layer is connected to every node in the other layers. MLP is trained using backpropagation, where the weights are updated by calculating the gradient descent with respect to an error function. BIBREF4
K-nearest Neighbors (KNN) works by evaluating similarities between entities, where INLINEFORM0 stands for how many neighbors are taken into account during the classification. KNN is different from MLP in the sense that it does not require a computationally heavy training step; instead, all of the computation is done at the classification step. There are multiple ways of calculating the similarity, one way is to calculate the Minkowski distance. The Minkowski distance between two points DISPLAYFORM0
and DISPLAYFORM0
is defined by DISPLAYFORM0
where INLINEFORM0 which is equal to the Euclidean distance. BIBREF2
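A standard form of the Minkowski distance that matches this description is $D(X, Y) = \big ( \sum _{i=1}^{n} |x_i - y_i|^{p} \big )^{1/p}$, where setting $p = 2$ recovers the Euclidean distance used here.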
Evaluation
When evaluating classifiers it is common to use accuracy, precision and recall as well as Hamming loss. Accuracy, precision and recall are defined by the the four terms true positive ( INLINEFORM0 ), true negative ( INLINEFORM1 ), false positive ( INLINEFORM2 ) and false negative ( INLINEFORM3 ) which can be seen in table TABREF16 .
Accuracy is a measurement of how correct a model's predictions are and is defined as DISPLAYFORM0 .
Precision is a ratio of how often positive predictions actually are positive and is defined as DISPLAYFORM0 .
Recall is a measurement of how good the model is at finding all true positives and is defined as DISPLAYFORM0 . BIBREF5
It has been shown that when calculating precision and recall on multi-label classifiers, it can be advantageous to use micro averaged precision and recall BIBREF6 . The formulas for micro averaged precision and recall are expressed as DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the label index and INLINEFORM1 is the number of labels.
Hamming loss is different in the sense that it is a loss and it is defined as the fraction of wrong labels to the total number of labels. Hamming loss can be a good measurement when it comes to evaluating multi-label classifiers. the hamming loss is expressed as DISPLAYFORM0
where INLINEFORM0 is the number of documents, INLINEFORM1 the number of labels, INLINEFORM2 the target value and INLINEFORM3 the predicted value. BIBREF7
For evaluation, the INLINEFORM0 and INLINEFORM1 were calculated as defined in section SECREF15 for both the MLP model and the KNN model. For precision and recall, formulas EQREF20 and EQREF21 were used because of their advantage in multi-label classification. The distribution of predicted genres was also shown in a histogram and compared to the target distribution of genres.
Furthermore the ratio of reviews that got zero genres predicted was also calculated and can be expressed as DISPLAYFORM0
where INLINEFORM0 is the number of reviews without any predicted genre and INLINEFORM1 is the total amount of predicted reviews.
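A sketch of these evaluation metrics with scikit-learn is given below, assuming the targets and predictions are binary indicator matrices of shape (number of reviews, number of genres); the function and variable names are illustrative.

```python
# Sketch of the multi-label evaluation: exact-match accuracy, micro-averaged
# precision and recall, Hamming loss, and the ratio of reviews with no
# predicted genre at all.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, hamming_loss

def evaluate(y_true, y_pred):
    return {
        "accuracy": accuracy_score(y_true, y_pred),   # exact match over all genres
        "precision_micro": precision_score(y_true, y_pred, average="micro"),
        "recall_micro": recall_score(y_true, y_pred, average="micro"),
        "hamming_loss": hamming_loss(y_true, y_pred),
        # fraction of reviews for which no genre was predicted
        "no_genre_ratio": float(np.mean(np.asarray(y_pred).sum(axis=1) == 0)),
    }
```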
Data
Data used in this paper comes from two separate sources. The first source was the Large Movie Review Dataset v1.0 BIBREF0 , which is a dataset for binary sentiment analysis of movie reviews. The dataset contains a total of 50000 reviews in raw text together with information on whether the review is positive or negative and a URL to the movie on IMDb. The sentiment information was not used in this paper. Out of the 50000 reviews, only 7000 were used because of limitations on computational power, resulting in a corpus of 7000 documents.
The second source of data was the genres for all reviews, which were scraped from the IMDb site. A total of 27 different genres were scraped. A list of all genres can be found in Appendix SECREF8 . A review can have one genre or multiple genres. For example, a review can be for a movie that is both Action, Drama and Thriller at the same time while another movie only falls into Drama.
Method
This section presents all steps needed to reproduce the results presented in this paper.
Data collection
In this paper the data comes from two sources, where the first is a collection of text reviews. Those reviews were downloaded from the Large Movie Review Dataset website . Because only 7000 reviews were used in this paper, all of them were taken from the `train` folder and split evenly between positive reviews and negative reviews.
The genres for the reviews were obtained by iterating through all reviews and doing the following steps:
Save the text of the review.
Retrieve IMDb URL to the movie from the Large Movie Review Datasets data.
Scrape that movie website for all genres and download the genres.
The distribution of genres was plotted in a histogram to check that the scraped data looked reasonable and can be seen in figure FIGREF27 . All genres with less than 50 reviews corresponding to that genre were removed.
The number of genres per review can be seen in figure FIGREF28 and it shows that it is most common for a review to have three different genres; furthermore, it shows that no review has more than three genres.
http://ai.stanford.edu/~amaas/data/sentiment
Data preprocessing
All reviews were preprocessed according to the following steps:
Remove all non-alphanumeric characters.
Lower case all tokens.
Remove all stopwords.
Lemmatize all tokens.
Both the removal of stopwords and lemmatization were done with Python's Natural Language Toolkit (NLTK). Next, the reviews and corresponding genres were split into a training set and a test set, with INLINEFORM0 divided into the train set and INLINEFORM1 into the test set.
The preprocessed corpus was then used to calculate a tf-idf representing all reviews. The calculation of the tf-idf was done using scikit-learn's TfidfVectorizer module. Both fit and transform were run on the training set and only the transform was run on the test set. The decision to use tf-idf as a data representation is supported by BIBREF3 in ramos2003using, which concludes that tf-idf is both simple and effective at categorizing relevant words.
https://www.python.org http://www.nltk.org http://scikit-learn.org
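A sketch of this preprocessing and tf-idf pipeline is shown below. The variables reviews and genre_matrix are assumed to already hold the raw review strings and the binary genre indicator matrix, the 80/20 split ratio is a placeholder, and the NLTK stopword and WordNet corpora must have been downloaded beforehand (nltk.download).

```python
# Sketch of the preprocessing (non-alphanumeric removal, lowercasing, stopword
# removal, lemmatization), the train/test split, and the tf-idf computation.
import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

STOP = set(stopwords.words("english"))
LEMMA = WordNetLemmatizer()

def clean(review: str) -> str:
    tokens = re.sub(r"[^a-zA-Z0-9 ]", " ", review).lower().split()
    return " ".join(LEMMA.lemmatize(t) for t in tokens if t not in STOP)

# reviews: list of raw review strings; genre_matrix: binary (n_reviews, n_genres)
cleaned = [clean(r) for r in reviews]
X_train, X_test, y_train, y_test = train_test_split(cleaned, genre_matrix, test_size=0.2)

vectorizer = TfidfVectorizer()
X_train_tfidf = vectorizer.fit_transform(X_train)   # fit and transform on the train set only
X_test_tfidf = vectorizer.transform(X_test)         # transform only on the test set
```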
Model
This paper experimented with two different models and compared them against each other. The inspiration for the first model comes from BIBREF1 in their paper DBLP:journals/corr/cmp-lg-9707002 where they used an MLP for text genre detection. The model used in this paper comes from scikit-learn's neural_network module and is called MLPClassifier. Table TABREF35 shows all parameters that were changed from the default values.
The second model was a KNN, which was chosen because it is simple and does not require the pre-training that the MLP needs. The implementation of this model comes from scikit-learn's neighbors module and is called KNeighborsClassifier. The only parameter that was changed after some trial and error was the k-parameter, which was set to 3.
Both models were fitted using the train set and then predictions were done for the test set.
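The sketch below shows how the two scikit-learn models can be fitted on the tf-idf features for this multi-label problem. Except for k=3, the hyper-parameter values are placeholders rather than the ones from Table TABREF35, and the variable names are assumptions.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Genres are multi-label, so they are binarized into an indicator matrix.
mlb = MultiLabelBinarizer()
Y_train = mlb.fit_transform(train_genres)   # e.g. [("Action", "Drama"), ("Drama",), ...]
Y_test = mlb.transform(test_genres)

mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=200, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3)   # k = 3, as stated above

for model in (mlp, knn):
    model.fit(X_train, Y_train)
    Y_pred = model.predict(X_test)          # indicator matrix of predicted genres
```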
Result
Table TABREF38 shows the INLINEFORM0 , INLINEFORM1 and INLINEFORM2 for the models. The KNN model had a higher accuracy of INLINEFORM3 compared to the MLP's accuracy of INLINEFORM4 , and the KNN model had a higher recall but slightly lower precision than the MLP model.
Table TABREF39 shows the INLINEFORM0 and INLINEFORM1 for the models; it shows that the KNN model had lower values for both the INLINEFORM2 and the INLINEFORM3 compared to the MLP model.
Figure FIGREF40 shows the distribution of the genres for the predicted values when using MLP and the test set. The same comparison between KNN and the test set can be seen in figure FIGREF41 .
Discussion
When looking at the results it is apparent that KNN is better than MLP in these experiments. In particular, the INLINEFORM0 stands out between KNN and MLP, where KNN got INLINEFORM1 and MLP got INLINEFORM2 , which is considered a significant difference. Given that the INLINEFORM3 was relatively high for both models, this result hints that the models only predicted genres when the confidence was high, which resulted in fewer genres being predicted than the target. This can also be confirmed by looking at figures FIGREF40 and FIGREF41 , where the absolute number of reviews predicted for most genres was lower than the target. This disappointingly low INLINEFORM4 can be explained by the multi-label nature of the problem in this paper: even if the model correctly predicts two out of three genres, it is considered a misclassification. A reason for the low accuracy could be that the models appeared to be on the conservative side when predicting genres.
Another factor that affected the performance of the models was the INLINEFORM0 , which showed that over INLINEFORM1 of the reviews for the KNN model and over INLINEFORM2 of the reviews for the MLP model did not receive any predicted genre. Because no review had zero genres, all predictions with zero genres are misclassifications, and this could be a good place to start when improving the models.
Furthermore, the INLINEFORM0 shows that, at the level of individual genres across all reviews, the number of wrong predictions is very low, which is promising when trying to answer this paper's main question: whether it is possible to predict the genre of the movie associated with a text review. It should be taken into account that this paper only investigated about 7000 movie reviews, and the results could change significantly, for better or for worse, if a much larger data set were used. In this paper, some of the genres had very small amounts of training data, which could be why those genres were not predicted with the same frequency as the target. An example of that can be seen by looking at the genre Sci-Fi in figure FIGREF40 .
Conclusion
This paper demonstrates that by looking only at text reviews of a movie, there is enough information to predict its genre with an INLINEFORM0 of INLINEFORM1 . This result implies that movie reviews carry latent information about genres. This paper also shows the complexity of doing prediction on multi-label problems, both in implementation and data processing but also when it comes to evaluation. Standard metrics typically work, but they can mask the full picture of how good a model really is.
Finally, this paper provides an explanation of the whole process needed to conduct an experiment like this. The process includes downloading a data set, web scraping for extra information, data preprocessing, model tuning and evaluation of the results.
All genres
Action
Adult
Adventure
Animation
Biography
Comedy
Crime
Documentary
Drama
Family
Fantasy
Film-Noir
Game-Show
History
Horror
Music
Musical
Mystery
Reality-TV
Romance
Sci-Fi
Short
Sport
Talk-Show
Thriller
War
Western | There is no baseline. |
980568848cc8e7c43f767da616cf1e176f406b05 | 980568848cc8e7c43f767da616cf1e176f406b05_0 | Q: how many movie genres do they explore?
Text: Introduction
By reading only a single text review of a movie, it can be difficult to say what the genre of that movie is; but by applying text mining techniques to thousands of movie reviews, is it possible to predict the genre?
This paper explores the possibility of classifying genres of a movie based only on a text review of that movie. This is an interesting problem because to the naked eye it may seem difficult to predict the genre by only looking at a text review. One example of a review can be seen in the following example:
I liked the film. Some of the action scenes were very interesting, tense and well done. I especially liked the opening scene which had a semi truck in it. A very tense action scene that seemed well done. Some of the transitional scenes were filmed in interesting ways such as time lapse photography, unusual colors, or interesting angles. Also the film is funny is several parts. I also liked how the evil guy was portrayed too. I'd give the film an 8 out of 10.
http://www.imdb.com/title/tt0211938/reviews
From the quoted review, one could probably predict that the movie falls in the action genre; however, it would be difficult to predict all three of the genres (action, comedy, crime) that the Internet Movie Database (IMDb) lists. With the use of text mining techniques it is feasible to predict multiple genres based on a review.
There are numerous previous works on classifying the sentiment of reviews, e.g., maas-EtAl:2011:ACL-HLT2011 by BIBREF0 . There are fewer scientific papers available on specifically classifying movie genres based on reviews; therefore, inspiration for this paper comes from papers describing classification of text for other or general contexts. One of those papers is DBLP:journals/corr/cmp-lg-9707002 where BIBREF1 describe how to use a multilayer perceptron (MLP) for genre classification.
All data, in the form of reviews and genres, used in this paper originates from IMDb.
Theory
In this section all relevant theory and methodology is described. Table TABREF1 lists basic terminology and a short description of their meaning.
Preprocessing
Data preprocessing is important when working with text data because it can reduce the number of features and it formats the data into the desired form BIBREF2 .
Removing stop words is a common type of filtering in text mining. Stop words are words that usually contain little or no information by themselves, and therefore it is better to remove them. Generally, words that occur often can be considered stop words, such as the, a and it. BIBREF2
Lemmatization is the process of converting verbs into their infinitive tense form and nouns into their singular form. The reason for doing this is to reduce words into their basic forms and thus simplify the data. For example am, are and is are converted to be. BIBREF2
A way of representing a large corpus is to calculate the Term Frequency Inverse Document Frequency (tf-idf) of the corpus and then feed the models the tf-idf. As described in ramos2003using by BIBREF3 tf-idf is both efficient and simple for matching a query of words with a document in a corpus. Tf-idf is calculated by multiplying the Term Frequency (tf) with the Inverse Document Frequency (idf) , which is formulated as DISPLAYFORM0
where INLINEFORM0 is a document in corpus INLINEFORM1 and INLINEFORM2 is a term. INLINEFORM3 is defined as DISPLAYFORM0
and INLINEFORM0 is defined as DISPLAYFORM0
where INLINEFORM0 is the number of times INLINEFORM1 occurs in INLINEFORM2 and INLINEFORM3 total number of documents in the corpus.
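Since the display equations were lost in extraction, the standard formulation consistent with this description is:

```latex
\mathrm{tfidf}(t, d, D) = \mathrm{tf}(t, d)\cdot \mathrm{idf}(t, D),\qquad
\mathrm{tf}(t, d) = f_{t,d},\qquad
\mathrm{idf}(t, D) = \log\frac{N}{|\{d \in D : t \in d\}|},
```

where $f_{t,d}$ is the number of times $t$ occurs in $d$ and $N$ is the total number of documents in the corpus.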
Models
An MLP is a class of feedforward neural network built up as a layered acyclic graph. An MLP consists of at least three layers and non-linear activations. The first layer is called the input layer, the second layer is called the hidden layer and the third layer is called the output layer. The three layers are fully connected, which means that every node in the hidden layer is connected to every node in the other layers. An MLP is trained using backpropagation, where the weights are updated by gradient descent with respect to an error function. BIBREF4
K-nearest Neighbors (KNN) works by evaluating similarities between entities, where INLINEFORM0 stands for how many neighbors are taken into account during the classification. KNN is different from MLP in the sense that it does not require a computationally heavy training step; instead, all of the computation is done at the classification step. There are multiple ways of calculating the similarity, one way is to calculate the Minkowski distance. The Minkowski distance between two points DISPLAYFORM0
and DISPLAYFORM0
is defined by DISPLAYFORM0
where INLINEFORM0 which is equal to the Euclidean distance. BIBREF2
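A toy, single-label sketch of this distance-based classification is shown below; the actual experiments use scikit-learn's multi-label KNeighborsClassifier, so this is only meant to make the idea concrete.

```python
import numpy as np

def minkowski_distance(x, y, p=2):
    """Minkowski distance between two points; p = 2 gives the Euclidean distance."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum(np.abs(x - y) ** p) ** (1.0 / p))

def knn_predict(query, train_points, train_labels, k=3):
    """Classify `query` by majority vote over its k nearest neighbours."""
    distances = [minkowski_distance(query, point) for point in train_points]
    nearest = np.argsort(distances)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```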
Evaluation
When evaluating classifiers it is common to use accuracy, precision and recall as well as Hamming loss. Accuracy, precision and recall are defined by the four terms true positive ( INLINEFORM0 ), true negative ( INLINEFORM1 ), false positive ( INLINEFORM2 ) and false negative ( INLINEFORM3 ), which can be seen in table TABREF16 .
Accuracy is a measurement of how correct a model's predictions are and is defined as DISPLAYFORM0
.
Precision is the ratio of how often positive predictions actually are positive and is defined as DISPLAYFORM0
.
Recall is a measurement of how good the model is at finding all true positives and is defined as DISPLAYFORM0
. BIBREF5
It has been shown that when calculating precision and recall on multi-label classifiers, it can be advantageous to use micro averaged precision and recall BIBREF6 . The formulas for micro averaged precision are expressed as DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is label index and INLINEFORM1 is number of labels.
Hamming loss is different in the sense that it is a loss, and it is defined as the fraction of wrong labels to the total number of labels. Hamming loss can be a good measurement when it comes to evaluating multi-label classifiers. The Hamming loss is expressed as DISPLAYFORM0
where INLINEFORM0 is number of documents, INLINEFORM1 number of labels, INLINEFORM2 is the target value and INLINEFORM3 is predicted value. BIBREF7
For evaluation, the INLINEFORM0 and INLINEFORM1 were calculated as defined in section SECREF15 for both the MLP model and the KNN model. For precision and recall, formulas EQREF20 and EQREF21 were used because of their advantage in multi-label classification. The distribution of predicted genres was also shown in a histogram and compared to the target distribution of genres.
Furthermore, the ratio of reviews that got zero genres predicted was also calculated and can be expressed as DISPLAYFORM0
where INLINEFORM0 is the number of reviews without any predicted genre and INLINEFORM1 is the total amount of predicted reviews.
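The sketch below shows how these quantities could be computed with scikit-learn, given the true and predicted genre indicator matrices. Note that `accuracy_score` on an indicator matrix is the strict exact-match accuracy, which may differ from the element-wise definition above; the function and variable names are assumptions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss, precision_score, recall_score

def evaluate(Y_true, Y_pred):
    """Y_true and Y_pred are binary (documents x labels) indicator matrices."""
    return {
        "accuracy": accuracy_score(Y_true, Y_pred),
        "precision_micro": precision_score(Y_true, Y_pred, average="micro", zero_division=0),
        "recall_micro": recall_score(Y_true, Y_pred, average="micro", zero_division=0),
        "hamming_loss": hamming_loss(Y_true, Y_pred),
        # Ratio of reviews for which no genre at all was predicted.
        "zero_prediction_ratio": float(np.mean(np.asarray(Y_pred).sum(axis=1) == 0)),
    }
```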
Data
Data used in this paper comes from two separate sources. The first source was the Large Movie Review Dataset v1.0 BIBREF0, a dataset for binary sentiment analysis of movie reviews. The dataset contains a total of 50000 reviews in raw text, together with information on whether the review is positive or negative and a URL to the movie on IMDb. The sentiment information was not used in this paper. Out of the 50000 reviews, only 7000 were used because of limitations on computational power, resulting in a corpus of 7000 documents.
The second source of data was the genres for all reviews, which were scraped from the IMDb site. A total of 27 different genres were scraped. A list of all genres can be found in Appendix SECREF8 . A review can have one genre or multiple genres. For example, a review can be for a movie that is Action, Drama and Thriller at the same time, while another movie falls only into Drama.
Method
This section presents all steps needed to reproduce the results presented in this paper.
Data collection
In this paper the data comes from two sources, the first of which is a collection of text reviews. Those reviews were downloaded from the Large Movie Review Dataset website . Because only 7000 reviews were used in this paper, all of them were taken from the `train` folder and split evenly between positive and negative reviews.
The genres for the reviews were obtained by iterating through all reviews and performing the following steps:
Save the text of the review.
Retrieve IMDb URL to the movie from the Large Movie Review Datasets data.
Scrape that movie website for all genres and download the genres.
The distribution of genres was plotted in a histogram to check that the scraped data looked reasonable and can be seen in figure FIGREF27 . All genres with less than 50 reviews corresponding to that genre were removed.
The number of genres per review can be seen in figure FIGREF28 and it shows that it is most common for a review to have three different genres; furthermore, it shows that no review has more than three genres.
http://ai.stanford.edu/~amaas/data/sentiment
Data preprocessing
All reviews were preprocessed according to the following steps:
Remove all non-alphanumeric characters.
Lower case all tokens.
Remove all stopwords.
Lemmatize all tokens.
Both the removal of stopwords and lemmatization were done with Python's Natural Language Toolkit (NLTK). Next, the reviews and corresponding genres were split into a training set and a test set, with INLINEFORM0 divided into the train set and INLINEFORM1 into the test set.
The preprocessed corpus was then used to calculate a tf-idf representation of all reviews. The calculation of the tf-idf was done using scikit-learn's TfidfVectorizer module. Both fit and transform were run on the training set, and only transform was run on the test set. The decision to use tf-idf as a data representation is supported by BIBREF3 in ramos2003using, which concludes that tf-idf is both simple and effective at categorizing relevant words.
https://www.python.org http://www.nltk.org http://scikit-learn.org
Model
This paper experimented with two different models and compared them against each other. The inspiration for the first model comes from BIBREF1 in their paper DBLP:journals/corr/cmp-lg-9707002 where they used an MLP for text genre detection. The model used in this paper comes from scikit-learn's neural_network module and is called MLPClassifier. Table TABREF35 shows all parameters that were changed from the default values.
The second model was a KNN, which was chosen because it is simple and does not require the training phase that the MLP needs. The implementation of this model comes from scikit-learn's neighbors module and is called KNeighborsClassifier. The only parameter that was changed, after some trial and error, was the k-parameter, which was set to 3.
Both models were fitted using the train set and then predictions were done for the test set.
Result
Table TABREF38 shows the INLINEFORM0 , INLINEFORM1 and INLINEFORM2 for the models. The KNN model had a higher accuracy of INLINEFORM3 compared to the MLP's accuracy of INLINEFORM4 , and the KNN model had a higher recall but slightly lower precision than the MLP model.
Table TABREF39 shows the INLINEFORM0 and INLINEFORM1 for the models; it shows that the KNN model had lower values for both the INLINEFORM2 and the INLINEFORM3 compared to the MLP model.
Figure FIGREF40 shows the distribution of the genres for the predicted values when using MLP and the test set. The same comparison between KNN and the test set can be seen in figure FIGREF41 .
Discussion
When looking at the results it is apparent that KNN is better than MLP in these experiments. In particular, the INLINEFORM0 stands out between KNN and MLP, where KNN got INLINEFORM1 and MLP got INLINEFORM2 , which is considered a significant difference. Given that the INLINEFORM3 was relatively high for both models, this result hints that the models only predicted genres when the confidence was high, which resulted in fewer genres being predicted than the target. This can also be confirmed by looking at figures FIGREF40 and FIGREF41 , where the absolute number of reviews predicted for most genres was lower than the target. This disappointingly low INLINEFORM4 can be explained by the multi-label nature of the problem in this paper: even if the model correctly predicts two out of three genres, it is considered a misclassification. A reason for the low accuracy could be that the models appeared to be on the conservative side when predicting genres.
Another factor that affected the performance of the models was the INLINEFORM0 , which showed that over INLINEFORM1 of the reviews for the KNN model and over INLINEFORM2 of the reviews for the MLP model did not receive any predicted genre. Because no review had zero genres, all predictions with zero genres are misclassifications, and this could be a good place to start when improving the models.
Furthermore, the INLINEFORM0 shows that, at the level of individual genres across all reviews, the number of wrong predictions is very low, which is promising when trying to answer this paper's main question: whether it is possible to predict the genre of the movie associated with a text review. It should be taken into account that this paper only investigated about 7000 movie reviews, and the results could change significantly, for better or for worse, if a much larger data set were used. In this paper, some of the genres had very small amounts of training data, which could be why those genres were not predicted with the same frequency as the target. An example of that can be seen by looking at the genre Sci-Fi in figure FIGREF40 .
Conclusion
This paper demonstrates that by looking only at text reviews of a movie, there is enough information to predict its genre with an INLINEFORM0 of INLINEFORM1 . This result implies that movie reviews carry latent information about genres. This paper also shows the complexity of doing prediction on multi-label problems, both in implementation and data processing but also when it comes to evaluation. Standard metrics typically work, but they can mask the full picture of how good a model really is.
Finally, this paper provides an explanation of the whole process needed to conduct an experiment like this. The process includes downloading a data set, web scraping for extra information, data preprocessing, model tuning and evaluation of the results.
All genres
Action
Adult
Adventure
Animation
Biography
Comedy
Crime
Documentary
Drama
Family
Fantasy
Film-Noir
Game-Show
History
Horror
Music
Musical
Mystery
Reality-TV
Romance
Sci-Fi
Short
Sport
Talk-Show
Thriller
War
Western | 27 |
f1b738a7f118438663f9d77b4ccd3a2c4fd97c01 | f1b738a7f118438663f9d77b4ccd3a2c4fd97c01_0 | Q: what evaluation metrics are discussed?
Text: Introduction
By reading only a single text review of a movie, it can be difficult to say what the genre of that movie is; but by applying text mining techniques to thousands of movie reviews, is it possible to predict the genre?
This paper explores the possibility of classifying genres of a movie based only on a text review of that movie. This is an interesting problem because to the naked eye it may seem difficult to predict the genre by only looking at a text review. One example of a review can be seen in the following example:
I liked the film. Some of the action scenes were very interesting, tense and well done. I especially liked the opening scene which had a semi truck in it. A very tense action scene that seemed well done. Some of the transitional scenes were filmed in interesting ways such as time lapse photography, unusual colors, or interesting angles. Also the film is funny is several parts. I also liked how the evil guy was portrayed too. I'd give the film an 8 out of 10.
http://www.imdb.com/title/tt0211938/reviews
From the quoted review, one could probably predict that the movie falls in the action genre; however, it would be difficult to predict all three of the genres (action, comedy, crime) that the Internet Movie Database (IMDb) lists. With the use of text mining techniques it is feasible to predict multiple genres based on a review.
There are numerous previous works on classifying the sentiment of reviews, e.g., maas-EtAl:2011:ACL-HLT2011 by BIBREF0 . There are fewer scientific papers available on specifically classifying movie genres based on reviews; therefore, inspiration for this paper comes from papers describing classification of text for other or general contexts. One of those papers is DBLP:journals/corr/cmp-lg-9707002 where BIBREF1 describe how to use a multilayer perceptron (MLP) for genre classification.
All data, in the form of reviews and genres, used in this paper originates from IMDb.
Theory
In this section all relevant theory and methodology is described. Table TABREF1 lists basic terminology and a short description of their meaning.
Preprocessing
Data preprocessing is important when working with text data because it can reduce the number of features and it formats the data into the desired form BIBREF2 .
Removing stop words is a common type of filtering in text mining. Stop words are words that usually contain little or no information by themselves, and therefore it is better to remove them. Generally, words that occur often can be considered stop words, such as the, a and it. BIBREF2
Lemmatization is the process of converting verbs into their infinitive tense form and nouns into their singular form. The reason for doing this is to reduce words into their basic forms and thus simplify the data. For example am, are and is are converted to be. BIBREF2
A way of representing a large corpus is to calculate the Term Frequency Inverse Document Frequency (tf-idf) of the corpus and then feed the models the tf-idf. As described in ramos2003using by BIBREF3 tf-idf is both efficient and simple for matching a query of words with a document in a corpus. Tf-idf is calculated by multiplying the Term Frequency (tf) with the Inverse Document Frequency (idf) , which is formulated as DISPLAYFORM0
where INLINEFORM0 is a document in corpus INLINEFORM1 and INLINEFORM2 is a term. INLINEFORM3 is defined as DISPLAYFORM0
and INLINEFORM0 is defined as DISPLAYFORM0
where INLINEFORM0 is the number of times INLINEFORM1 occurs in INLINEFORM2 and INLINEFORM3 total number of documents in the corpus.
Models
An MLP is a class of feedforward neural network built up as a layered acyclic graph. An MLP consists of at least three layers and non-linear activations. The first layer is called the input layer, the second layer is called the hidden layer and the third layer is called the output layer. The three layers are fully connected, which means that every node in the hidden layer is connected to every node in the other layers. An MLP is trained using backpropagation, where the weights are updated by gradient descent with respect to an error function. BIBREF4
K-nearest Neighbors (KNN) works by evaluating similarities between entities, where INLINEFORM0 stands for how many neighbors are taken into account during the classification. KNN is different from MLP in the sense that it does not require a computationally heavy training step; instead, all of the computation is done at the classification step. There are multiple ways of calculating the similarity, one way is to calculate the Minkowski distance. The Minkowski distance between two points DISPLAYFORM0
and DISPLAYFORM0
is defined by DISPLAYFORM0
where INLINEFORM0 which is equal to the Euclidean distance. BIBREF2
Evaluation
When evaluating classifiers it is common to use accuracy, precision and recall as well as Hamming loss. Accuracy, precision and recall are defined by the four terms true positive ( INLINEFORM0 ), true negative ( INLINEFORM1 ), false positive ( INLINEFORM2 ) and false negative ( INLINEFORM3 ), which can be seen in table TABREF16 .
Accuracy is a measurement of how correct a model's predictions are and is defined as DISPLAYFORM0
.
Precision is the ratio of how often positive predictions actually are positive and is defined as DISPLAYFORM0
.
Recall is a measurement of how good the model is at finding all true positives and is defined as DISPLAYFORM0
. BIBREF5
It has been shown that when calculating precision and recall on multi-label classifiers, it can be advantageous to use micro averaged precision and recall BIBREF6 . The formulas for micro averaged precision are expressed as DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is label index and INLINEFORM1 is number of labels.
Hamming loss is different in the sense that it is a loss, and it is defined as the fraction of wrong labels to the total number of labels. Hamming loss can be a good measurement when it comes to evaluating multi-label classifiers. The Hamming loss is expressed as DISPLAYFORM0
where INLINEFORM0 is number of documents, INLINEFORM1 number of labels, INLINEFORM2 is the target value and INLINEFORM3 is predicted value. BIBREF7
For evaluation, the INLINEFORM0 and INLINEFORM1 were calculated as defined in section SECREF15 for both the MLP model and the KNN model. For precision and recall, formulas EQREF20 and EQREF21 were used because of their advantage in multi-label classification. The distribution of predicted genres was also shown in a histogram and compared to the target distribution of genres.
Furthermore, the ratio of reviews that got zero genres predicted was also calculated and can be expressed as DISPLAYFORM0
where INLINEFORM0 is the number of reviews without any predicted genre and INLINEFORM1 is the total amount of predicted reviews.
Data
Data used in this paper comes from two separate sources. The first source was the Large Movie Review Dataset v1.0 BIBREF0, a dataset for binary sentiment analysis of movie reviews. The dataset contains a total of 50000 reviews in raw text, together with information on whether the review is positive or negative and a URL to the movie on IMDb. The sentiment information was not used in this paper. Out of the 50000 reviews, only 7000 were used because of limitations on computational power, resulting in a corpus of 7000 documents.
The second source of data was the genres for all reviews, which were scraped from the IMDb site. A total of 27 different genres were scraped. A list of all genres can be found in Appendix SECREF8 . A review can have one genre or multiple genres. For example, a review can be for a movie that is Action, Drama and Thriller at the same time, while another movie falls only into Drama.
Method
This section presents all steps needed to reproduce the results presented in this paper.
Data collection
In this paper the data comes from two sources, the first of which is a collection of text reviews. Those reviews were downloaded from the Large Movie Review Dataset website . Because only 7000 reviews were used in this paper, all of them were taken from the `train` folder and split evenly between positive and negative reviews.
The genres for the reviews were obtained by iterating through all reviews and performing the following steps:
Save the text of the review.
Retrieve IMDb URL to the movie from the Large Movie Review Datasets data.
Scrape that movie website for all genres and download the genres.
The distribution of genres was plotted in a histogram to check that the scraped data looked reasonable and can be seen in figure FIGREF27 . All genres with less than 50 reviews corresponding to that genre were removed.
The number of genres per review can be seen in figure FIGREF28 and it shows that it is most common for a review to have three different genres; furthermore, it shows that no review has more than three genres.
http://ai.stanford.edu/~amaas/data/sentiment
Data preprocessing
All reviews were preprocessed according to the following steps:
Remove all non-alphanumeric characters.
Lower case all tokens.
Remove all stopwords.
Lemmatize all tokens.
Both the removal of stopwords and lemmatization were done with Python's Natural Language Toolkit (NLTK). Next, the reviews and corresponding genres were split into a training set and a test set, with INLINEFORM0 divided into the train set and INLINEFORM1 into the test set.
The preprocessed corpus was then used to calculate a tf-idf representation of all reviews. The calculation of the tf-idf was done using scikit-learn's TfidfVectorizer module. Both fit and transform were run on the training set, and only transform was run on the test set. The decision to use tf-idf as a data representation is supported by BIBREF3 in ramos2003using, which concludes that tf-idf is both simple and effective at categorizing relevant words.
https://www.python.org http://www.nltk.org http://scikit-learn.org
Model
This paper experimented with two different models and compared them against each other. The inspiration for the first model comes from BIBREF1 in their paper DBLP:journals/corr/cmp-lg-9707002 where they used an MLP for text genre detection. The model used in this paper comes from scikit-learn's neural_network module and is called MLPClassifier. Table TABREF35 shows all parameters that were changed from the default values.
The second model was a KNN, which was chosen because it is simple and does not require the training phase that the MLP needs. The implementation of this model comes from scikit-learn's neighbors module and is called KNeighborsClassifier. The only parameter that was changed, after some trial and error, was the k-parameter, which was set to 3.
Both models were fitted using the train set and then predictions were done for the test set.
Result
Table TABREF38 shows the INLINEFORM0 , INLINEFORM1 and INLINEFORM2 for the models. The KNN model had a higher accuracy of INLINEFORM3 compared to the MLP's accuracy of INLINEFORM4 , and the KNN model had a higher recall but slightly lower precision than the MLP model.
Table TABREF39 shows the INLINEFORM0 and INLINEFORM1 for the models; it shows that the KNN model had lower values for both the INLINEFORM2 and the INLINEFORM3 compared to the MLP model.
Figure FIGREF40 shows the distribution of the genres for the predicted values when using MLP and the test set. The same comparison between KNN and the test set can be seen in figure FIGREF41 .
Discussion
When looking at the results it is apparent that KNN is better than MLP in these experiments. In particular, the INLINEFORM0 stands out between KNN and MLP, where KNN got INLINEFORM1 and MLP got INLINEFORM2 , which is considered a significant difference. Given that the INLINEFORM3 was relatively high for both models, this result hints that the models only predicted genres when the confidence was high, which resulted in fewer genres being predicted than the target. This can also be confirmed by looking at figures FIGREF40 and FIGREF41 , where the absolute number of reviews predicted for most genres was lower than the target. This disappointingly low INLINEFORM4 can be explained by the multi-label nature of the problem in this paper: even if the model correctly predicts two out of three genres, it is considered a misclassification. A reason for the low accuracy could be that the models appeared to be on the conservative side when predicting genres.
Another factor that affected the performance of the models was the INLINEFORM0 , which showed that over INLINEFORM1 of the reviews for the KNN model and over INLINEFORM2 of the reviews for the MLP model did not receive any predicted genre. Because no review had zero genres, all predictions with zero genres are misclassifications, and this could be a good place to start when improving the models.
Furthermore, the INLINEFORM0 shows that, at the level of individual genres across all reviews, the number of wrong predictions is very low, which is promising when trying to answer this paper's main question: whether it is possible to predict the genre of the movie associated with a text review. It should be taken into account that this paper only investigated about 7000 movie reviews, and the results could change significantly, for better or for worse, if a much larger data set were used. In this paper, some of the genres had very small amounts of training data, which could be why those genres were not predicted with the same frequency as the target. An example of that can be seen by looking at the genre Sci-Fi in figure FIGREF40 .
Conclusion
This paper demonstrates that by looking only at text reviews of a movie, there is enough information to predict its genre with an INLINEFORM0 of INLINEFORM1 . This result implies that movie reviews carry latent information about genres. This paper also shows the complexity of doing prediction on multi-label problems, both in implementation and data processing but also when it comes to evaluation. Standard metrics typically work, but they can mask the full picture of how good a model really is.
Finally, this paper provides an explanation of the whole process needed to conduct an experiment like this. The process includes downloading a data set, web scraping for extra information, data preprocessing, model tuning and evaluation of the results.
All genres
Action
Adult
Adventure
Animation
Biography
Comedy
Crime
Documentary
Drama
Family
Fantasy
Film-Noir
Game-Show
History
Horror
Music
Musical
Mystery
Reality-TV
Romance
Sci-Fi
Short
Sport
Talk-Show
Thriller
War
Western | precision , recall , Hamming loss, micro averaged precision and recall |
5a23f436a7e0c33e4842425cf86d5fd8ba78ac92 | 5a23f436a7e0c33e4842425cf86d5fd8ba78ac92_0 | Q: How big is dataset used?
Text: Introduction
Stock movement prediction is a central task in computational and quantitative finance. With recent advances in deep learning and natural language processing technology, event-driven stock prediction has received increasing research attention BIBREF0, BIBREF1. The goal is to predict the movement of stock prices according to financial news. Existing work has investigated news representation using bag-of-words BIBREF2, named entities BIBREF3, event structures BIBREF4 or deep learning BIBREF1, BIBREF5.
Most previous work focuses on enhancing news representations, while adopting a relatively simple model on the stock movement process, casting it as a simple response to a set of historical news. The prediction model can therefore be viewed as variations of a classifier that takes news as input and yields stock movement predictions. In contrast, work on time-series based stock prediction BIBREF6, BIBREF7, BIBREF5, BIBREF8, aims to capture continuous movements of prices themselves.
We aim to introduce underlying price movement trends into news-driven stock movement prediction by casting the underlying stock value as a recurrent state, integrating the influence of news events and random noise simultaneously into the recurrent state transitions. In particular, we take an LSTM with peephole connections BIBREF9 for modeling a stock value state over time, which can reflect the fundamentals of a stock. The influence of news over a time window is captured in each recurrent state transition by using neural attention to aggregate representations of individual news. In addition, all other factors affecting the stock price are modeled using a random factor component, so that sentiments, expectations and noise can be dealt with explicitly.
Compared with existing work, our method has three salient advantages. First, the process in which the influence of news events is absorbed into stock price changes is explicitly modeled. Though previous work has attempted this goal BIBREF1, existing models predict each stock movement independently, only modeling the correlation between news in historical news sequences. As shown in Figure FIGREF1, our method can better capture a continuous process of stock movement by modeling the correlation between past and future stock values directly. In addition, non-linear compositional effects of multiple events in a time window can be captured.
Second, to our knowledge, our method allows noise to be explicitly addressed in a model, therefore separating the effects of news and other factors. In contrast, existing work trains a stock prediction model by fitting stock movements to events, and therefore can suffer from overfitting due to external factors and noise.
Third, our model is also more explainable thanks to the use of attention over news events, which is similar to the work of BIBREF10 and BIBREF11. Due to the use of recurrent states, we can visualize past events over a large time window. In addition, we propose a novel future event prediction module to factor in likely next events according to natural event consequences. The future event module is trained on gold “future” data over historical events. Therefore, it can also deal with insider trading factors to some extent.
Experiments over the benchmark of BIBREF1 show that our method outperforms strong baselines, giving the best reported results in the literature. To our knowledge, we are the first to explicitly model both events and noise over a fundamental stock value state for news-driven stock movement prediction. Note that unlike time-series stock prediction models BIBREF12, BIBREF5, we do not take explicit historical prices as part of model inputs, and therefore our research still focuses on the influence of news information alone, and are directly comparable to existing work on news-driven stock prediction.
Related Work
There has been a line of work predicting stock markets using text information from daily news. We compare this paper with previous work from the following two perspectives.
Modeling Price Movements Correlation
Most existing work treats the modeling of each stock movement independently using bag-of-words BIBREF2, named entities BIBREF3, semantic frames BIBREF0, event structures BIBREF4, event embeddings BIBREF1 or knowledge bases BIBREF13. Differently, we study modeling the correlation between past and future stock value movements.
There is also some work modeling the correlations between samples by sparse matrix factorization BIBREF14, hidden Markov models BIBREF8 and Bi-RNNs BIBREF5, BIBREF11, using both news and historical price data. Some work models the correlations among different stocks by a pre-defined correlation graph BIBREF15 and tensor factorization BIBREF12. Our work is different from this line of work in that we use only news events as inputs, and our recurrent states are combined with impact-related noise.
Explainable Prediction
Rationalization, which is to find the most important news event along with the model's prediction, is an important problem for news-driven stock price movement prediction. Factorization, such as sparse matrix factorization BIBREF14 and tensor factorization BIBREF12, is a popular method whose results can be traced back to the input features. While this type of method is limited by the dimensionality of the input features, our attention-based module has linear time complexity in the feature size.
BIBREF11 apply dual-layer attention to predict the stock movement by using news published in the previous six days. Each day's news embeddings and seven days' embeddings are summed by the layer. Our work is different from BIBREF11 in that our news events attention is query-based, which is more strongly related to the noisy recurrent states. In contrast, their attention is not query-based and tends to output the same result for each day even if the previous day's decision is changed.
Task Definition
Following previous work BIBREF4, BIBREF1, the task is formalized as a binary classification task for each trading day. Formally, given a history news set about a targeted stock or index, the input of the task is a trading day $x$ and the output is a label $y \in \lbrace +1, -1\rbrace $ indicating whether the adjusted closing price $p_x$ will be greater than $p_{x-1}$ ($y=+1$) or not ($y=-1$).
Method
The framework of our model is shown in Figure FIGREF2. We explicitly model both events and noise over a recurrent stock value state, which is modeled using LSTM. For each trading day, we consider the news events happened in that day as well as the past news events using neural attention BIBREF16. Considering the impacts of insider trading, we also involve future news in the training procedure. To model the high stochasticity of stock market, we sample an additive noise using a neural module. Our model is named attention-based noisy recurrent states transition (ANRES).
Considering the general principle of sample independence, building temporal connections between individual trading days in the training set is not suitable for training BIBREF5, and we find it easy to overfit. We notice that an LSTM usually takes several steps to generate a more stable hidden state. As an alternative, we extend the time span of one sample to $T$ continuous previous trading days (${t-T+1, t-T+2, ..., t-1, t}$), which we call a trading sequence and use as the basic training element in this paper.
Method ::: LSTM-based Recurrent State Transition
ANRES uses LSTM with peephole connections BIBREF9. The underlying stock value trends are represented as a recurrent state $z$ transited over time, which can reflect the fundamentals of a stock. In each trading day, we consider the impact of corresponding news events and a random noise as:
where $v_t$ is the news events impact vector on the trading day $t$ and $f$ is a function in which random noise will be integrated.
By using this basic framework, the non-linear compositional effects of multiple events can also be captured in a time window. Then we use the sequential state $z_t$ to make binary classification as:
where $\hat{p}_t$ is the estimated probabilities, $\hat{y}_t$ is the predicted label and $x_t$ is the input trading day.
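The paper does not name an implementation framework; the PyTorch-style sketch below only illustrates the shape of this computation (note that PyTorch's LSTMCell has no peephole connections, unlike the cell described here), with $v_t$ standing for the aggregated news impact vector of each trading day.

```python
import torch
import torch.nn as nn

class RecurrentStateClassifier(nn.Module):
    """Carries the stock-value state z_t with an LSTM cell and turns it into
    rise/fall probabilities with a linear layer."""
    def __init__(self, news_dim, state_dim):
        super().__init__()
        self.cell = nn.LSTMCell(news_dim, state_dim)
        self.classifier = nn.Linear(state_dim, 2)

    def forward(self, v_seq):                     # v_seq: (T, batch, news_dim)
        batch = v_seq.size(1)
        h = torch.zeros(batch, self.cell.hidden_size)
        c = torch.zeros(batch, self.cell.hidden_size)
        probs = []
        for v_t in v_seq:                         # one transition per trading day
            h, c = self.cell(v_t, (h, c))
            probs.append(torch.softmax(self.classifier(h), dim=-1))
        return torch.stack(probs)                 # (T, batch, 2)
```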
Method ::: Modeling News Events
For a trading day $t$ in a trading sequence, we model both long-term and short-term impact of news events. For short-term impact, we use the news published after the previous trading day $t-1$ and before the trading day $t$ as the present news set. Similarly, for long-term impact, we use the news published no more than thirty calendar days ago as the past news set.
For each news event, we extract its headline and use ELMo BIBREF17 to transform it to $V$-dim hidden state by concatenating the output bidirectional hidden states of the last words as the basic representation of a news event. By stacking those vectors accordingly, we obtain two embedding matrices $C^{\prime }_t$ and $B^{\prime }_t$ for the present and past news events as:
where ${hc}^i_t$ is one of the news event headline in the present news set, ${ec}^i_t$ is the headline representation of ${hc}^i_t$, $L_c$ is the size of present news set; while ${hb}^j_t$, ${eb}^j_t$ and $L_b$ are for the past news set.
To make the model more numerically stable and to avoid overfitting, we apply the over-parameterized component of BIBREF18 to the news events embedding matrices, where
$\odot $ is element-wise multiplication and $\sigma (\cdot )$ is the sigmoid function.
Because news events contribute unequally to the stock price movement on day $t$, we use scaled dot-product attention BIBREF16 to capture the influence of news over a period for the recurrent state transition. In practice, we first transform the last trading day's stock value $z_{t-1}$ to a query vector $q_t$, and then calculate two attention score vectors $\gamma _t$ and $\beta _t$ for the present and past news events as:
We sum the news events embedding matrices to obtain news events impact vectors $c_t$ and $b_t$ on the trading day $t$ according to the weights $\gamma _t$ and $\beta _t$, respectively:
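A minimal sketch of this query-based aggregation is shown below; the query projection `w_query` and the tensor shapes are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def aggregate_news(z_prev, news_emb, w_query):
    """Scaled dot-product attention that turns one day's news embeddings into
    a single impact vector, queried by the previous recurrent state.
    z_prev: (batch, state_dim); news_emb: (batch, L, V); w_query: (state_dim, V)."""
    q = z_prev @ w_query                                  # query from the last state
    scores = torch.einsum("bv,blv->bl", q, news_emb)      # one score per headline
    weights = F.softmax(scores / news_emb.size(-1) ** 0.5, dim=-1)
    return torch.einsum("bl,blv->bv", weights, news_emb)  # weighted sum = impact vector
```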
Method ::: Modeling Future News
In addition to the long-term and short-term impact, we find that some short-term future news events exert an influence on the stock price movement before the news release, which can be attributed to news delay or insider trading BIBREF19 factors to some extent.
We propose a novel future event prediction module to consider likely next events according to natural consequences. In this paper, we define future news events as those that are published within seven calendar days after the trading day $t$.
Similarly to the past and present news events, we stack the headline ELMo embeddings of future news events into an embedding matrix $A^{\prime }_t$. We then apply the over-parameterized component and sum the stacked embedding vectors by scaled dot-product attention, calculating the future news events impact vector $a_t$ on the trading day $t$ as:
Although the above steps can work in the training procedure, where the future event module is trained over gold “future” data over historical events, at test time, future news events are not accessible. To address this issue, we use a non-linear transformation to estimate a future news events impact vector $\hat{a}_t$ with the past and present news events impact vectors $b_t$ and $c_t$ as:
where $[,]$ is the vector concatenation operation.
We concatenate the above-mentioned three types of news events impact vectors to obtain the input $v_t$ for LSTM-based recurrent state transition on trading day $t$ as:
where $[,]$ is the vector concatenation operation.
Method ::: Modeling Noise
In this model, all other factors affecting the stock price, such as sentiments, expectations and noise, are explicitly modeled as noise using a random factor. We sample a random factor from a normal distribution $\mathcal {N}(\textbf {0}, \sigma _t)$ parameterized by $z^{\prime }_t$ as:
However, in practice, the model can face difficulty in backpropagating gradients if we directly sample a random factor from $\mathcal {N}(\textbf {0}, \sigma _t)$. We use re-parameterization BIBREF20 for normal distributions to address this problem and combine the transition result $z^{\prime }_t$ with the sampled random factor to obtain the noisy recurrent state $z_t$ as:
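A sketch of this re-parameterized sampling is given below; treating the noise as an additive term on the transition result is an interpretation of the description above rather than a detail stated explicitly.

```python
import torch

def noisy_state(z_transition, sigma):
    """Re-parameterization trick: sample eps ~ N(0, I) and scale it by the
    state-dependent sigma, so gradients can flow through the noise term.
    z_transition, sigma: (batch, state_dim)."""
    eps = torch.randn_like(sigma)
    return z_transition + sigma * eps   # noisy recurrent state z_t
```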
Method ::: Training Objective
For training, there are two main terms in our loss function. The first term is a cross entropy loss for the predicted probabilities $\hat{p}_t$ and gold labels $y_t$, and the second term is the mean squared error between the estimated future impact vector $\hat{a}_t$ and the true future impact vector $a_t$.
The total loss for a trading sequence containing $T$ trading days with standard $L_2$ regularization is calculated as:
where $\theta $ is a hyper-parameter which indicates how much important $L_{mse}$ is comparing to $L_{ce}$, $\Phi $ is the set of trainable parameters in the entire ANRES model and $\lambda $ is the regularization weight.
Experiments
We use the public financial news dataset released by BIBREF4, which is crawled from Reuters and Bloomberg over the period from October 2006 to November 2013. We conduct our experiments on predicting the Standard & Poor’s 500 stock (S&P 500) index and its selected individual stocks, obtaining indices and prices from Yahoo Finance. Detailed statistics of the training, development and test sets are shown in Table TABREF8. We report the final results on test set after using development set to tune some hyper-parameters.
Experiments ::: Settings
The hyper-parameters of our ANRES model are shown in Table TABREF11. We use mini-batches and stochastic gradient descent (SGD) with momentum to update the parameters. Most of the hyper-parameters are chosen according to development experiments, while others like dropout rate $r$ and SGD momentum $\mu $ are set according to common values.
Following previous work BIBREF0, BIBREF4, BIBREF5, we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) to evaluate S&P 500 index prediction and selected individual stock prediction. MCC is applied because it avoids bias due to data skew. Given the confusion matrix which contains true positive, false positive, true negative and false negative values, MCC is calculated as:
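The display formula was lost in extraction; the standard Matthews Correlation Coefficient referred to here is

```latex
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}
{\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}}.
```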
Experiments ::: Initializing Noisy Recurrent States
As the first set of development experiments, we try different ways to initialize the noisy recurrent states of our ANRES model to find a suitable approach. For each trading day, we compare the results whether states transitions are modeled or not. Besides, we also compare the methods of random initialization and zero initialization. Note that the random initialization method we use here returns a tensor filled with random numbers from the standard normal distribution $\mathcal {N}(0, 1)$. In summary, the following four baselines are designed:
ANRES_Sing_R: randomly initializing the states for each single trading day.
ANRES_Sing_Z: initializing the states as zeros for each single trading day.
ANRES_Seq_R: randomly initializing the first states for each trading sequence only.
ANRES_Seq_Z: initializing the first states as zeros for each trading sequence only.
Development set results on predicting S&P 500 index are shown in Table TABREF13. We can see that modeling recurrent value sequences performs better than treating each trading day separately, which shows that modeling trading sequences can capture the correlations between trading days and the non-linear compositional effects of multiple events. From another perspective, the models ANRES_Sing_R and ANRES_Sing_Z also represent the strengths of our basic representations of news events in isolation. Therefore, we can also see that using only the basic news events representations is not sufficient for index prediction, while combining with our states transition module can achieve strong results.
By comparing the results of ANRES_Seq_R and ANRES_Seq_Z, we decide to use zero initialization for our ANRES models, including the noisy recurrent states also in the remaining experiments.
Experiments ::: Study on Trading Sequence Length
We use the development set to find a suitable length $T$ for trading sequence, which is searched from $\lbrace 1, 3, 5, 7, 9, 11, 13, 15\rbrace $. The S&P 500 index prediction results of accuracy, MCC and consumed minutes per training epoch on the development set are shown in Figure FIGREF19.
We can see that the accuracy and MCC are positively correlated with the growth of $T$, while the change of accuracy is smaller than MCC. When $T \ge 7$, the growth of MCC becomes slower than that when $T < 7$. Also considering the running time per training epoch, which is nearly linear w.r.t. $T$, we choose the hyper-parameter $T=7$ and use it in the remaining experiments.
Experiments ::: Predicting S&P 500 Index
We compare our approach with the following strong baselines on predicting the S&P 500 index, which also only use financial news:
BIBREF21 uses bags-of-words to represent news documents, and constructs the prediction model by using Support Vector Machines (SVMs).
BIBREF1 uses event embeddings as input and convolutional neural network prediction model.
BIBREF13 empowers event embeddings with knowledge bases like YAGO and also adopts convolutional neural networks as the basic prediction framework.
BIBREF22 uses fully connected model and character-level embedding input with LSTM to encode news texts.
BIBREF23 uses recurrent neural networks with skip-thought vectors to represent news text.
Table TABREF26 shows the test set results on predicting the S&P 500 index. From the table we can see that our ANRES model achieves the best results on the test sets. By comparing with BIBREF21, we can find that using news event embeddings and deep learning modules can be better representative and also flexible when dealing with high-dimension features.
When comparing with BIBREF1 and the knowledge-enhanced BIBREF13, we find that extracting structured events may suffer from error propagation. And more importantly, modeling the correlations between trading days can better capture the compositional effects of multiple news events.
By comparing with BIBREF22 and BIBREF23, we find that, besides the gains from modeling the correlations between trading days, modeling the noise by using a state-related random factor may also be effective because of the high market stochasticity.
Experiments ::: Ablation Study on News and Noise
We explore the effects of the different types of news events and the introduced random noise factor with an ablation study on the test set. More specifically, we disable the past news, the present news, the future news and the noise factor, respectively. The S&P 500 index prediction results of the ablated models are shown in Table TABREF28. First, without using the past news events, the result becomes the lowest. The reason may be that the historical news set contains the largest number of news events. In addition, considering the trading sequence length and the time windows of future news, if we disable the past news, most of these events will never be involved in our model, while the present or the past news will be input on adjacent trading days.
Second, it is worth noticing that using the future news events is more effective than using the present news events. On the one hand, this confirms the importance of involving the future news in our ANRES model, which can deal with insider trading factors to some extent. On the other hand, the reason may be the news impact redundancy in the sequence, as the future news impact on the $t-1$-th day should be carried over to the $t$-th day to compensate for the removed present news events.
The effect of modeling the noise factor is second only to that of modeling the past news events, but higher than the other ablated components, which demonstrates the effectiveness of the noise factor module. We think the reason may be that modeling such an additive noise can separate the effects of news event impacts from other factors, which makes the stock price movement trends clearer to model.
Experiments ::: Predicting Individual Stock Movements
Other than predicting the S&P 500 index, we also investigate the effectiveness of our approach on the problem of individual stock prediction using the test set. We count the amounts of individual company related news events for each company by name matching, and select five well known companies with sufficient news, Apple, Citigroup, Boeing Company, Google and Wells Fargo from four different sectors, which is classified by the Global Industry Classification Standard. For each company, we prepare not only news events about itself, but also news events about the whole companies in the sector. We use company news, sector news and all financial news to predict individual stock price movements, respectively. The experimental results and news statistics are listed in Table TABREF30.
The result of individual stock prediction using only company news dramatically outperforms that of sector news and all news, which indicates a negative correlation between the total amount of news events used and model performance. The main reason may be that company-related news events can more directly affect the volatility of company shares, while sector news and all news contain many irrelevant news events, which would obstruct our ANRES model from learning the underlying stock price movement trends.
Note that BIBREF1, BIBREF13 and BIBREF11 also reported results on individual stocks. But we cannot directly compare our results with them because the existing methods used different individual stocks on different data split to report results, and BIBREF1, BIBREF13 reported only development set results. This is reasonable since the performance of each model can vary from stock to stock over the S&P 500 chart and comparison over the whole index is more indicative.
Experiments ::: Case Study
To look into what news event contributes the most to our prediction result, we further analyze the test set results of predicting Apple Inc.'s stock price movements only using company news, which achieves the best results among the five selected companies mentioned before.
As shown in Figure FIGREF31, we take the example trading sequence from 07/15/2013 to 07/23/2013 for illustration. The table on the left shows the selected top-ten news events, while attention visualization and results are shown on the right chart. Note that there are almost fifty different past news events in total for the trading sequence, and the news events listed on the left table are selected by ranking attention scores from the past news events, which are the most effective news according to the ablation study. There are some zeros in the attention heat map because these news do not belong to the corresponding trading days.
We can find that the news event No. 1 has been correlated with the stock price rises on 07/15/2013, but for the next two trading days, its impact fades out. On 07/18/2013, the news event No. 7 begins to show its impact. However, our ANRES model pays too much attention in it and makes the incorrect prediction that the stock price decreases. On the next trading day, our model infers that the impact of the news event No. 2 is bigger than that of the news event No. 7, which makes an incorrect prediction again. From these findings, we can see that our ANRES model tends to pay more attention to a new event when it first occurs, which offers us a potential improving direction in the future.
Conclusion
We investigated explicit modeling of stock value sequences in news-driven stock prediction by using an LSTM state to model the fundamentals, adding news impact and noise impact by using attention and noise sampling, respectively. Results show that our method is highly effective, giving the best performance on a standard benchmark. To our knowledge, we are the first to explicitly model both events and noise over a fundamental stock value state for news-driven stock movement prediction. | 553,451 documents |
2f4acd34eb2d09db9b5ad9b1eb82cb4a88c13f5b | 2f4acd34eb2d09db9b5ad9b1eb82cb4a88c13f5b_0 | Q: What is dataset used for news-driven stock movement prediction?
Text: Introduction
Stock movement prediction is a central task in computational and quantitative finance. With recent advances in deep learning and natural language processing technology, event-driven stock prediction has received increasing research attention BIBREF0, BIBREF1. The goal is to predict the movement of stock prices according to financial news. Existing work has investigated news representation using bag-of-words BIBREF2, named entities BIBREF3, event structures BIBREF4 or deep learning BIBREF1, BIBREF5.
Most previous work focuses on enhancing news representations, while adopting a relatively simple model on the stock movement process, casting it as a simple response to a set of historical news. The prediction model can therefore be viewed as variations of a classifier that takes news as input and yields stock movement predictions. In contrast, work on time-series based stock prediction BIBREF6, BIBREF7, BIBREF5, BIBREF8, aims to capture continuous movements of prices themselves.
We aim to introduce underlying price movement trends into news-driven stock movement prediction by casting the underlying stock value as a recurrent state, integrating the influence of news events and random noise simultaneously into the recurrent state transitions. In particular, we take an LSTM with peephole connections BIBREF9 for modeling a stock value state over time, which can reflect the fundamentals of a stock. The influence of news over a time window is captured in each recurrent state transition by using neural attention to aggregate representations of individual news events. In addition, all other factors affecting the stock price are modeled using a random factor component, so that sentiments, expectations and noise can be dealt with explicitly.
Compared with existing work, our method has three salient advantages. First, the process in which the influence of news events is absorbed into stock price changes is explicitly modeled. Though previous work has worked towards this goal BIBREF1, existing models predict each stock movement independently, only modeling the correlation between news in historical news sequences. As shown in Figure FIGREF1, our method can better capture a continuous process of stock movement by modeling the correlation between past and future stock values directly. In addition, non-linear compositional effects of multiple events in a time window can be captured.
Second, to our knowledge, our method allows noise to be explicitly addressed in a model, therefore separating the effects of news and other factors. In contrast, existing work trains a stock prediction model by fitting stock movements to events, and therefore can suffer from overfitting due to external factors and noise.
Third, our model is also more explainable thanks to the use of attention over news events, which is similar to the work of BIBREF10 and BIBREF11. Due to the use of recurrent states, we can visualize past events over a large time window. In addition, we propose a novel future event prediction module to factor in likely next events according to natural events consequences. The future event module is trained over gold “future” data over historical events. Therefore, it can also deal with insider trading factors to some extent.
Experiments over the benchmark of BIBREF1 show that our method outperforms strong baselines, giving the best reported results in the literature. To our knowledge, we are the first to explicitly model both events and noise over a fundamental stock value state for news-driven stock movement prediction. Note that unlike time-series stock prediction models BIBREF12, BIBREF5, we do not take explicit historical prices as part of model inputs, and therefore our research still focuses on the influence of news information alone, and are directly comparable to existing work on news-driven stock prediction.
Related Work
There has been a line of work predicting stock markets using text information from daily news. We compare this paper with previous work from the following two perspectives.
Modeling Price Movements Correlation
Most existing work treats the modeling of each stock movement independently using bag-of-words BIBREF2, named entities BIBREF3, semantic frames BIBREF0, event structures BIBREF4, event embeddings BIBREF1 or knowledge bases BIBREF13. Differently, we study modeling the correlation between past and future stock value movements.
There is also some work modeling the correlations between samples by sparse matrix factorization BIBREF14, hidden Markov models BIBREF8 and Bi-RNNs BIBREF5, BIBREF11 using both news and historical price data. Some work models the correlations among different stocks with a pre-defined correlation graph BIBREF15 or tensor factorization BIBREF12. Our work is different from this line of work in that we use only news events as inputs, and our recurrent states are combined with impact-related noise.
Explainable Prediction
Rationalization is an important problem for news-driven stock price movement prediction: finding the most important news event behind the model's prediction. Factorization, such as sparse matrix factorization BIBREF14 and tensor factorization BIBREF12, is a popular method whose results can be traced back to the input features. While this type of method is limited by the dimensionality of the input features, our attention-based module has time complexity linear in the feature size.
BIBREF11 apply dual-layer attention to predict the stock movement by using news published in the previous six days. Each day's news embeddings and seven days' embeddings are summed by the layer. Our work is different from BIBREF11 in that our news events attention is query-based, which is more strongly related to the noisy recurrent states. In contrast, their attention is not query-based and tends to output the same result for each day even if the previous day's decision is changed.
Task Definition
Following previous work BIBREF4, BIBREF1, the task is formalized as a binary classification task for each trading day. Formally, given a history news set about a targeted stock or index, the input of the task is a trading day $x$ and the output is a label $y \in \lbrace +1, -1\rbrace $ indicating whether the adjusted closing price $p_x$ will be greater than $p_{x-1}$ ($y=+1$) or not ($y=-1$).
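As a concrete illustration of this labeling scheme, the following sketch derives the $\lbrace +1, -1\rbrace $ labels from a series of adjusted closing prices; the dates and prices are made-up examples, not values from the dataset.

```python
# Minimal sketch of the labeling rule above (not the authors' code).
# `adj_close` maps trading days (ISO date strings, chronological when sorted)
# to hypothetical adjusted closing prices.

def make_labels(adj_close):
    """Return {day: +1 if the price rose over the previous trading day, else -1}."""
    days = sorted(adj_close)
    labels = {}
    for prev, cur in zip(days, days[1:]):
        labels[cur] = 1 if adj_close[cur] > adj_close[prev] else -1
    return labels

if __name__ == "__main__":
    prices = {"2013-07-15": 1682.5, "2013-07-16": 1676.3, "2013-07-17": 1680.9}
    print(make_labels(prices))  # {'2013-07-16': -1, '2013-07-17': 1}
```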
Method
The framework of our model is shown in Figure FIGREF2. We explicitly model both events and noise over a recurrent stock value state, which is modeled using an LSTM. For each trading day, we consider the news events that happened on that day as well as the past news events, using neural attention BIBREF16. Considering the impact of insider trading, we also involve future news in the training procedure. To model the high stochasticity of the stock market, we sample an additive noise using a neural module. Our model is named attention-based noisy recurrent states transition (ANRES).
Considering the general principle of sample independence, building temporal connections between individual trading days in the training set is not suitable for training BIBREF5, and we find it prone to overfitting. We notice that an LSTM usually takes several steps to generate a stable hidden state. As an alternative, we extend the time span of one sample to $T$ continuous previous trading days (${t-T+1, t-T+2, ..., t-1, t}$), which we call a trading sequence and use as the basic training element in this paper.
Method ::: LSTM-based Recurrent State Transition
ANRES uses LSTM with peephole connections BIBREF9. The underlying stock value trends are represented as a recurrent state $z$ transited over time, which can reflect the fundamentals of a stock. In each trading day, we consider the impact of corresponding news events and a random noise as:
where $v_t$ is the news events impact vector on the trading day $t$ and $f$ is a function in which random noise will be integrated.
By using this basic framework, the non-linear compositional effects of multiple events can also be captured in a time window. Then we use the sequential state $z_t$ to make binary classification as:
where $\hat{p}_t$ is the estimated probability, $\hat{y}_t$ is the predicted label and $x_t$ is the input trading day.
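The transition and prediction equations themselves are not reproduced above, so the following is only a schematic NumPy sketch of one state-transition and prediction step; it uses a plain tanh recurrence instead of the paper's peephole LSTM, and all dimensions and weights are illustrative assumptions.

```python
import numpy as np

# Schematic sketch of one recurrent value-state transition and the prediction
# head described above. A plain tanh recurrence stands in for the paper's
# peephole LSTM; all sizes and weights are illustrative placeholders.

rng = np.random.default_rng(0)
D_Z, D_V = 8, 12                          # state size and news-impact size (assumed)
W_z = rng.normal(scale=0.1, size=(D_Z, D_Z))
W_v = rng.normal(scale=0.1, size=(D_Z, D_V))
w_p = rng.normal(scale=0.1, size=D_Z)
b_p = 0.0

def transit(z_prev, v_t):
    """Fold the day's news impact v_t into the value state: z'_t = f(z_{t-1}, v_t)."""
    return np.tanh(W_z @ z_prev + W_v @ v_t)

def predict(z_t):
    """p_t = sigmoid(w . z_t + b); predicted label is +1 if p_t > 0.5 else -1."""
    p = 1.0 / (1.0 + np.exp(-(w_p @ z_t + b_p)))
    return p, (1 if p > 0.5 else -1)

z = np.zeros(D_Z)                         # zero-initialized state, as chosen later in the paper
v = rng.normal(size=D_V)                  # stand-in news-impact vector for one trading day
z = transit(z, v)
print(predict(z))
```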
Method ::: Modeling News Events
For a trading day $t$ in a trading sequence, we model both long-term and short-term impact of news events. For short-term impact, we use the news published after the previous trading day $t-1$ and before the trading day $t$ as the present news set. Similarly, for long-term impact, we use the news published no more than thirty calendar days ago as the past news set.
For each news event, we extract its headline and use ELMo BIBREF17 to transform it into a $V$-dimensional hidden state by concatenating the output bidirectional hidden states of the last words, which serves as the basic representation of a news event. By stacking those vectors accordingly, we obtain two embedding matrices $C^{\prime }_t$ and $B^{\prime }_t$ for the present and past news events as:
where ${hc}^i_t$ is one of the news event headline in the present news set, ${ec}^i_t$ is the headline representation of ${hc}^i_t$, $L_c$ is the size of present news set; while ${hb}^j_t$, ${eb}^j_t$ and $L_b$ are for the past news set.
To make the model more numerically stable and to avoid overfitting, we apply the over-parameterized component of BIBREF18 to the news events embedding matrices, where
$\odot $ is element-wise multiplication and $\sigma (\cdot )$ is the sigmoid function.
Due to the unequal importance that news events contribute to the stock price movement on day $t$, we use scaled dot-product attention BIBREF16 to capture the influence of news over a period for the recurrent state transition. In practice, we first transform the last trading day's stock value $z_{t-1}$ into a query vector $q_t$, and then calculate two attention score vectors $\gamma _t$ and $\beta _t$ for the present and past news events as:
We sum the news events embedding matrices to obtain news events impact vectors $c_t$ and $b_t$ on the trading day $t$ according to the weights $\gamma _t$ and $\beta _t$, respectively:
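Since the displayed attention equations are omitted above, the sketch below illustrates the query-based, scaled dot-product pooling of headline embeddings into an impact vector; the projection matrix $W_q$ and all dimensions are assumptions for illustration.

```python
import numpy as np

# Sketch of query-based scaled dot-product attention over news headlines for one
# trading day. Rows of C stand for headline embeddings; the query is derived
# from the previous state z_{t-1} through an assumed projection W_q.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def news_impact(z_prev, C, W_q):
    """Return the attention-weighted sum of the headline embeddings in C (L x V)."""
    q = W_q @ z_prev                        # query vector, shape (V,)
    scores = C @ q / np.sqrt(C.shape[1])    # scaled dot products, shape (L,)
    weights = softmax(scores)               # attention weights over headlines
    return weights @ C                      # impact vector, shape (V,)

rng = np.random.default_rng(1)
V, D_Z, L = 16, 8, 5                        # embedding dim, state dim, number of headlines
c_t = news_impact(rng.normal(size=D_Z),
                  rng.normal(size=(L, V)),
                  rng.normal(scale=0.1, size=(V, D_Z)))
print(c_t.shape)                            # (16,)
```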
Method ::: Modeling Future News
Beyond the long-term and short-term impact, we find that some short-term future news events exert an influence on the stock price movement before the news release, which can be attributed to news delay or insider trading BIBREF19 factors to some extent.
We propose a novel future event prediction module to consider likely next events according to natural consequences. In this paper, we define future news events as those that are published within seven calendar days after the trading day $t$.
Similarly to the past and present news events, we stack the headline ELMo embeddings of future news events into an embedding matrix $A^{\prime }_t$. We then apply the over-parameterized component and sum the stacked embedding vectors by scaled dot-product attention. We calculate the future news events impact vector $a_t$ on the trading day $t$ as:
Although the above steps can work in the training procedure, where the future event module is trained over gold “future” data over historical events, at test time, future news events are not accessible. To address this issue, we use a non-linear transformation to estimate a future news events impact vector $\hat{a}_t$ with the past and present news events impact vectors $b_t$ and $c_t$ as:
where $[,]$ is the vector concatenation operation.
We concatenate the above-mentioned three types of news events impact vectors to obtain the input $v_t$ for LSTM-based recurrent state transition on trading day $t$ as:
where $[,]$ is the vector concatenation operation.
Method ::: Modeling Noise
In this model, all other factors to the stock price such as sentiments, expectations and noise are explicitly modeled as noise using a random factor. We sample a random factor from a normal distribution $\mathcal {N}(\textbf {0}, \sigma _t)$ parameterized by $z^{\prime }_t$ as:
However, in practice, the model can face difficulty in back-propagating gradients if we directly sample a random factor from $\mathcal {N}(\textbf {0}, \sigma _t)$. We use re-parameterization BIBREF20 for normal distributions to address this problem and enhance the transition result $z^{\prime }_t$ with the sampled random factor to obtain the noisy recurrent state $z_t$ as:
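A minimal sketch of the re-parameterization step described here is given below; the softplus parameterization of $\sigma _t$ and the weight matrix are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

# Sketch of the re-parameterization step: sample eps ~ N(0, I) and scale it by a
# sigma_t computed from the transition result z'_t, instead of sampling the
# noise directly. The softplus parameterization and W_sigma are assumptions.

def softplus(x):
    return np.log1p(np.exp(x))

def add_noise(z_prime, W_sigma, rng):
    sigma_t = softplus(W_sigma @ z_prime)      # per-dimension std, strictly positive
    eps = rng.standard_normal(z_prime.shape)   # eps ~ N(0, I)
    return z_prime + sigma_t * eps             # noisy recurrent state z_t

rng = np.random.default_rng(2)
z_prime = rng.normal(size=8)
z_t = add_noise(z_prime, rng.normal(scale=0.1, size=(8, 8)), rng)
print(z_t.shape)                               # (8,)
```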
Method ::: Training Objective
For training, there are two main terms in our loss function. The first term is a cross entropy loss for the predicted probabilities $\hat{p}_t$ and gold labels $y_t$, and the second term is the mean squared error between the estimated future impact vector $\hat{a}_t$ and the true future impact vector $a_t$.
The total loss for a trading sequence containing $T$ trading days with standard $L_2$ regularization is calculated as:
where $\theta $ is a hyper-parameter which indicates how important $L_{mse}$ is compared to $L_{ce}$, $\Phi $ is the set of trainable parameters in the entire ANRES model and $\lambda $ is the regularization weight.
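The sketch below spells out this objective for one trading sequence; the exact averaging and the way the paper combines the terms inside the sum may differ, so treat it as an illustrative reading of the loss rather than the reference implementation.

```python
import numpy as np

# Sketch of the training objective for one trading sequence: cross-entropy on the
# movement labels, MSE between estimated and true future impact vectors, and an
# L2 penalty, weighted by theta and lambda as described above.

def sequence_loss(p_hat, y, a_hat, a_true, params, theta=1.0, lam=1e-4):
    y01 = (np.asarray(y) + 1) / 2                       # map {-1, +1} -> {0, 1}
    p_hat = np.clip(np.asarray(p_hat), 1e-7, 1 - 1e-7)
    l_ce = -np.mean(y01 * np.log(p_hat) + (1 - y01) * np.log(1 - p_hat))
    l_mse = np.mean((np.asarray(a_hat) - np.asarray(a_true)) ** 2)
    l_reg = lam * sum(np.sum(w ** 2) for w in params)
    return l_ce + theta * l_mse + l_reg

# Toy usage with made-up predictions over a T=3 sequence.
print(sequence_loss(p_hat=[0.7, 0.4, 0.9], y=[+1, -1, +1],
                    a_hat=np.zeros((3, 4)), a_true=np.ones((3, 4)) * 0.1,
                    params=[np.ones((2, 2))]))
```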
Experiments
We use the public financial news dataset released by BIBREF4, which is crawled from Reuters and Bloomberg over the period from October 2006 to November 2013. We conduct our experiments on predicting the Standard & Poor’s 500 stock (S&P 500) index and its selected individual stocks, obtaining indices and prices from Yahoo Finance. Detailed statistics of the training, development and test sets are shown in Table TABREF8. We report the final results on test set after using development set to tune some hyper-parameters.
Experiments ::: Settings
The hyper-parameters of our ANRES model are shown in Table TABREF11. We use mini-batches and stochastic gradient descent (SGD) with momentum to update the parameters. Most of the hyper-parameters are chosen according to development experiments, while others like dropout rate $r$ and SGD momentum $\mu $ are set according to common values.
Following previous work BIBREF0, BIBREF4, BIBREF5, we adopt the standard measure of accuracy and Matthews Correlation Coefficient (MCC) to evaluate S&P 500 index prediction and selected individual stock prediction. MCC is applied because it avoids bias due to data skew. Given the confusion matrix which contains true positive, false positive, true negative and false negative values, MCC is calculated as:
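The MCC formula itself is omitted above; for reference, the standard definition from the confusion-matrix counts can be computed as follows (toy counts, independent of this paper's code).

```python
import math

# Reference sketch of the Matthews Correlation Coefficient computed from
# confusion-matrix counts (standard definition).

def mcc(tp, fp, tn, fn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

print(mcc(tp=60, fp=40, tn=55, fn=45))  # toy counts, about 0.15
```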
Experiments ::: Initializing Noisy Recurrent States
As the first set of development experiments, we try different ways to initialize the noisy recurrent states of our ANRES model to find a suitable approach. For each trading day, we compare the results whether states transitions are modeled or not. Besides, we also compare the methods of random initialization and zero initialization. Note that the random initialization method we use here returns a tensor filled with random numbers from the standard normal distribution $\mathcal {N}(0, 1)$. In summary, the following four baselines are designed:
ANRES_Sing_R: randomly initializing the states for each single trading day.
ANRES_Sing_Z: initializing the states as zeros for each single trading day.
ANRES_Seq_R: randomly initializing the first states for each trading sequence only.
ANRES_Seq_Z: initializing the first states as zeros for each trading sequence only.
Development set results on predicting S&P 500 index are shown in Table TABREF13. We can see that modeling recurrent value sequences performs better than treating each trading day separately, which shows that modeling trading sequences can capture the correlations between trading days and the non-linear compositional effects of multiple events. From another perspective, the models ANRES_Sing_R and ANRES_Sing_Z also represent the strengths of our basic representations of news events in isolation. Therefore, we can also see that using only the basic news events representations is not sufficient for index prediction, while combining with our states transition module can achieve strong results.
By comparing the results of ANRES_Seq_R and ANRES_Seq_Z, we decide to use zero initialization for the noisy recurrent states of our ANRES models in the remaining experiments.
Experiments ::: Study on Trading Sequence Length
We use the development set to find a suitable length $T$ for trading sequence, which is searched from $\lbrace 1, 3, 5, 7, 9, 11, 13, 15\rbrace $. The S&P 500 index prediction results of accuracy, MCC and consumed minutes per training epoch on the development set are shown in Figure FIGREF19.
We can see that the accuracy and MCC are positively correlated with the growth of $T$, while the change of accuracy is smaller than MCC. When $T \ge 7$, the growth of MCC becomes slower than that when $T < 7$. Also considering the running time per training epoch, which is nearly linear w.r.t. $T$, we choose the hyper-parameter $T=7$ and use it in the remaining experiments.
Experiments ::: Predicting S&P 500 Index
We compare our approach with the following strong baselines on predicting the S&P 500 index, which also only use financial news:
BIBREF21 uses bags-of-words to represent news documents, and constructs the prediction model by using Support Vector Machines (SVMs).
BIBREF1 uses event embeddings as input and convolutional neural network prediction model.
BIBREF13 empowers event embeddings with knowledge bases like YAGO and also adopts convolutional neural networks as the basic prediction framework.
BIBREF22 uses fully connected model and character-level embedding input with LSTM to encode news texts.
BIBREF23 uses recurrent neural networks with skip-thought vectors to represent news text.
Table TABREF26 shows the test set results on predicting the S&P 500 index. From the table we can see that our ANRES model achieves the best results on the test set. By comparing with BIBREF21, we find that using news event embeddings and deep learning modules can be more representative and also more flexible when dealing with high-dimensional features.
When comparing with BIBREF1 and the knowledge-enhanced BIBREF13, we find that extracting structured events may suffer from error propagation. And more importantly, modeling the correlations between trading days can better capture the compositional effects of multiple news events.
By comparing with BIBREF22 and BIBREF23, we find that, besides the benefit of modeling the correlations between trading days, modeling the noise with a state-related random factor may also be effective because of the high market stochasticity.
Experiments ::: Ablation Study on News and Noise
We explore the effects of different types of news events and the introduced random noise factor with ablation on the test set. More specifically, we disable the past news, the present news, the future news and the noise factor, respectively. The S&P 500 index prediction results of the ablated models are shown in Table TABREF28. First, without using the past news events, the result becomes the lowest. The reason may be that the past news set contains the largest number of news events. In addition, considering the trading sequence length and the time window of future news, if we disable the past news, most of these events will not be involved in our model at all, while the present or the future news will still be input on adjacent trading days.
Second, it is worth noticing that using the future news events is more effective than using the present news events. On the one hand, this confirms the importance of involving future news in our ANRES model, which can deal with insider trading factors to some extent. On the other hand, the reason may be the redundancy of news impact in the sequence, as the future news impact on the $(t-1)$-th day should be transferred to the $t$-th day to compensate for the absence of the present news events.
The effect of modeling the noise factor is second only to that of modeling the past news events, and higher than that of the other ablated components, which demonstrates the effectiveness of the noise factor module. We think the reason may be that modeling such an additive noise separates the effects of news event impacts from other factors, which makes the stock price movement trends clearer to model.
Experiments ::: Predicting Individual Stock Movements
Other than predicting the S&P 500 index, we also investigate the effectiveness of our approach on the problem of individual stock prediction using the test set. We count the number of company-related news events for each company by name matching, and select five well-known companies with sufficient news (Apple, Citigroup, Boeing Company, Google and Wells Fargo) from four different sectors, as classified by the Global Industry Classification Standard. For each company, we prepare not only news events about the company itself, but also news events about all companies in its sector. We use company news, sector news and all financial news to predict individual stock price movements, respectively. The experimental results and news statistics are listed in Table TABREF30.
The result of individual stock prediction using only company news dramatically outperforms that of sector news and all news, which suggests a negative correlation between the total amount of news events used and model performance. The main reason may be that company-related news events can more directly affect the volatility of company shares, while sector news and all news contain many irrelevant news events, which would hinder our ANRES model from learning the underlying stock price movement trends.
Note that BIBREF1, BIBREF13 and BIBREF11 also reported results on individual stocks. However, we cannot directly compare our results with theirs because the existing methods used different individual stocks and different data splits to report results, and BIBREF1, BIBREF13 reported only development set results. This is reasonable since the performance of each model can vary from stock to stock over the S&P 500 chart, and comparison over the whole index is more indicative.
Experiments ::: Case Study
To look into what news event contributes the most to our prediction result, we further analyze the test set results of predicting Apple Inc.'s stock price movements only using company news, which achieves the best results among the five selected companies mentioned before.
As shown in Figure FIGREF31, we take the example trading sequence from 07/15/2013 to 07/23/2013 for illustration. The table on the left shows the selected top-ten news events, while the attention visualization and results are shown on the right chart. Note that there are almost fifty different past news events in total for the trading sequence; the news events listed in the left table are selected by ranking the attention scores over the past news events, which are the most effective type of news according to the ablation study. There are some zeros in the attention heat map because these news events do not belong to the corresponding trading days.
We can find that the news event No. 1 is correlated with the stock price rise on 07/15/2013, but for the next two trading days its impact fades out. On 07/18/2013, the news event No. 7 begins to show its impact. However, our ANRES model pays too much attention to it and makes the incorrect prediction that the stock price decreases. On the next trading day, our model infers that the impact of the news event No. 2 is bigger than that of the news event No. 7, which leads to an incorrect prediction again. From these findings, we can see that our ANRES model tends to pay more attention to a news event when it first occurs, which suggests a potential direction for future improvement.
Conclusion
We investigated explicit modeling of stock value sequences in news-driven stock prediction by using an LSTM state to model the fundamentals, adding news impact and noise impact by using attention and noise sampling, respectively. Results show that our method is highly effective, giving the best performance on a standard benchmark. To our knowledge, we are the first to explicitly model both events and noise over a fundamental stock value state for news-driven stock movement prediction. | the public financial news dataset released by BIBREF4 |
e7329c403af26b7e6eef8b60ba6fefbe40ccf8ce | e7329c403af26b7e6eef8b60ba6fefbe40ccf8ce_0 | Q: How much better does this baseline neural model do?
Text: Introduction
Open Information Extraction (OpenIE) is the NLP task of generating (subject, relation, object) tuples from unstructured text e.g. “Fed chair Powell indicates rate hike” outputs (Powell, indicates, rate hike). The modifier open is used to contrast IE research in which the relation belongs to a fixed set. OpenIE has been shown to be useful for several downstream applications such as knowledge base construction BIBREF0 , textual entailment BIBREF1 , and other natural language understanding tasks BIBREF2 . In our previous example an extraction was missing: (Powell, works for, Fed). Implicit extractions are our term for this type of tuple where the relation (“works for” in this example) is not contained in the input sentence. In both colloquial and formal language, many relations are evident without being explicitly stated. However, despite their pervasiveness, there has not been prior work targeted at implicit predicates in the general case. Implicit information extractors for some specific implicit relations such as noun-mediated relations, numerical relations, and others BIBREF3 , BIBREF4 , BIBREF5 have been researched. While specific extractors are important, there are a multiplicity of implicit relation types and it would be intractable to categorize and design extractors for each one.
Past general OpenIE systems have been plagued by low recall on implicit relations BIBREF6 . In OpenIE's original application – web-scale knowledge base construction – this low recall is tolerable because facts are often restated in many ways BIBREF7 . However, in downstream NLU applications an implied relationship may be significant and only stated once BIBREF2 .
The contribution of this work is twofold. In Section 4, we introduce our parse-based conversion tool and convert two large reading comprehension datasets into implicit OpenIE datasets. In Section 5 and 6, we train a simple neural model on this data and compare to previous systems on precision-recall curves using a new gold test set for implicit tuples.
Problem Statement
We suggest that OpenIE research focus on producing implicit relations where the predicate is not contained in the input span. Formally, we define implicit tuples as (subject, relation, object) tuples that:
These “implicit” or “common sense” tuples reproduce the relation explicitly, which may be important for downstream NLU applications using OpenIE as an intermediate schema. For example, in Figure 1, the input sentence tells us that the Norsemen swore fealty to Charles III under “their leader Rollo”. From this our model outputs (The Norse leader, was, Rollo) despite the relation never being contained in the input sentence. Our definition of implicit tuples corresponds to the “frequently occurring recall errors” identified in previous OpenIE systems BIBREF6 : noun-mediated, sentence-level inference, long sentence, nominalization, noisy informal, and PP-attachment. We use the term implicit tuple to collectively refer to all of these situations where the predicate is absent or very obfuscated.
Traditional Methods
Due to space constraints, see Niklaus et al. Survey for a survey of non-neural methods. Of these, several works have focused on pattern-based implicit information extractors for noun-mediated relations, numerical relations, and others BIBREF3 , BIBREF4 , BIBREF5 . In this work we compare to OpenIE-4, ClausIE BIBREF8 , ReVerb BIBREF9 , OLLIE BIBREF10 , Stanford OpenIE BIBREF11 , and PropS BIBREF12 .
Neural Network Methods
Stanovsky et al. SupervisedOIE frame OpenIE as a BIO-tagging problem and train an LSTM to tag an input sentence. Tuples can be derived from the tagger, input, and BIO CFG parser. This method outperforms traditional systems, though the tagging scheme inherently constrains the relations to be part of the input sentence, prohibiting implicit relation extraction. Cui et al. NeuralOpenIE bootstrap (sentence, tuple) pairs from OpenIE-4 and train a standard seq2seq with attention model using OpenNMT-py BIBREF13 . The system is inhibited by its synthetic training data which is bootstrapped from a rule-based system.
Dataset Conversion Methods
Due to the lack of large datasets for OpenIE, previous works have focused on generating datasets from other tasks. These have included QA-SRL datasets BIBREF14 and QAMR datasets BIBREF6 . These methods are limited by the size of the source training data which are an order of magnitude smaller than existing reading comprehension datasets.
Dataset Conversion Method
Span-based Question-Answer datasets are a type of reading comprehension dataset where each entry consists of a short passage, a question about the passage, and an answer contained in the passage. The datasets used in this work are the Stanford Question Answering Dataset (SQuADv1.1) BIBREF15 and NewsQA BIBREF16 . These QA datasets were built to require reasoning beyond simple pattern-recognition, which is exactly what we desire for implicit OpenIE. Our goal is to convert the QA schema to OpenIE, as was successfully done for NLI BIBREF17 . The repository of software and converted datasets is available at http://toAppear.
QA Pairs to OpenIE Tuples
We started by examining SQuAD and noticing that each answer, $A$ , corresponds to either the subject, relation, or object in an implicit extraction. The corresponding question, $Q$ , contains the other two parts, i.e. either the (1) subject and relation, (2) subject and object, or (3) relation and object. Which two pieces the question contains depends on the type of question. For example, “who was... factoid” type questions contain the relation (“was”) and object (the factoid), which means that the answer is the subject. In Figure 1, “Who was Rollo” is recognized as a who was question and caught by the whoParse() parser. Similarly, a question in the form of “When did person do action” expresses a subject and a relation, with the answer containing the object. For example, “When did Einstein emigrate to the US” with answer 1933 would convert to (Einstein, when did emigrate to the US, 1933). In cases like these the relation might not be grammatically ideal, but nevertheless captures the meaning of the input sentence.
In order to identify generic patterns, we build our parse-based tool on top of a dependency parser BIBREF18 . It uses fifteen rules, with the proper rule being identified and run based on the question type. The rule then uses its pre-specified pattern to parse the input QA pair and output a tuple. These fifteen rules are certainly not exhaustive, but cover around eighty percent of the inputs. The tool ignores questions greater than 60 characters and complex questions it cannot parse, leaving a dataset smaller than the original (see Table 1).
Each rule is on average forty lines of code that traverse a dependency parse tree according to a pre-specified pattern, extracting the matching spans at each step. A master function parse() determines which rule to apply based on the question type, which is categorized by nsubj presence and the question word (who/what/etc.). Most questions contain an nsubj, which makes the parse task easier, as this will also be the subject of the tuple. We allow the master parse() method to try multiple rules. It first tries very specific rules (e.g. a parser for how questions where no subject is identified), then falls back to more generic rules. If no output is returned after all the methods are tried, we throw the QA pair out. Otherwise, we find the appropriate sentence in the passage based on the index.
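The real tool walks a dependency parse with roughly fifteen rules; the toy sketch below only mimics the dispatch idea with regular expressions and a single-token subject, so the patterns and function names are illustrative stand-ins, not the actual parsers.

```python
import re

# Highly simplified illustration of the rule-dispatch idea: pick a pattern based
# on the question form, then slot the answer into the missing tuple position.

def qa_to_tuple(question, answer):
    q = question.strip().rstrip("?")
    m = re.match(r"(?i)^who was (.+)$", q)
    if m:                                  # "Who was X?" -> (answer, was, X)
        return (answer, "was", m.group(1))
    m = re.match(r"(?i)^when did (\w+) (.+)$", q)
    if m:                                  # "When did S do ...?" -> (S, when did ..., answer)
        return (m.group(1), "when did " + m.group(2), answer)
    return None                            # unparsed questions are thrown out

print(qa_to_tuple("Who was Rollo?", "The Norse leader"))
print(qa_to_tuple("When did Einstein emigrate to the US?", "1933"))
```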
Sentence Alignment
Following QA to tuple conversion, the tuple must be aligned with a sentence in the input passage. We segment the passage into sentences using periods as delimiters. The sentence containing the answer is taken as the input sentence for the tuple. Outputted sentences predominantly align with their tuple, but some exhibit partial misalignment in the case of some multi-sentence reasoning questions. 13.6% of questions require multi-sentence reasoning, so this is an upper bound on the number of partially misaligned tuples/sentences BIBREF15 . While there may be heuristics that can be used to check alignment, we didn't find a significant number of these misalignments and so left them in the corpus. Figure 1 demonstrates the conversion process.
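A minimal sketch of this alignment step is shown below; it locates the answer by substring search rather than by character index, and the passage is a paraphrase of the running example, not dataset text.

```python
# Sketch of the alignment step: split the passage on periods and return the
# sentence that contains the answer.

def align_sentence(passage, answer):
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    for sent in sentences:
        if answer in sent:
            return sent
    return None

passage = ("The Norsemen swore fealty to Charles III under their leader Rollo. "
           "In exchange they were granted lands in Normandy.")
print(align_sentence(passage, "Rollo"))
```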
Tuple Examination
Examining a random subset of one hundred generated tuples in the combined dataset, we find 12 noun-mediated, 33 sentence-level inference, 11 long sentence, 7 nominalization, 0 noisy informal, 3 pp-attachment, 24 explicit, and 10 partially misaligned. With 66% implicit relations, this dataset shows promise in improving OpenIE's recall on implicit relations.
Our model
Our implicit OpenIE extractor is implemented as a sequence to sequence model with attention BIBREF19 . We use a 2-Layer LSTM Encoder/Decoder with 500 parameters, general attention, SGD optimizer with adaptive learning rate, and 0.33 dropout BIBREF20 . The training objective is to maximize the likelihood of the output tuple given the input sentence. In the case of a sentence having multiple extractions, it appears in the dataset once for each output tuple. At test time, beam search is used for decoding to produce the top-10 outputs and an associated log likelihood value for each tuple (used to generate the precision-recall curves in Section 7).
Evaluation
We make use of the evaluation tool developed by Stanovsky and Dagan benchmark to test the precision and recall of our model against previous methods. We make two changes to the tool as described below.
Creating a Gold Dataset
The test corpus contained no implicit data, so we re-annotate 300 tuples from the CoNLL-2009 English training data to use as gold data. Both authors worked on different sentence sets and then pruned the other's set to ensure only implicit relations remained. We note that this is a different dataset than our training data, so it should be a good test of generalizability; the training data consist of Wikipedia and news articles, while the test data resemble corporate press release headlines.
Matching function for implicit tuples
We implement a new matching function (i.e., the function that decides if a generated tuple matches a gold tuple). The included matching functions used BoW overlap or BLEU, which aren't appropriate for implicit relations; our goal is to assess whether the meaning of the predicted tuple matches the gold, not only the tokens. For example, if the gold relation is “is employed by” we want to accept “works for”. Thus, we instead compute the cosine similarity of the subject, relation, and object embeddings to our gold tuple. All three must be above a threshold to evaluate as a match. The sequence embeddings are computed by taking the average of the GloVe embeddings of each word (i.e. BoW embedding) BIBREF21 .
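A sketch of this matching function is given below; the `toy_glove` lookup table, the 0.7 threshold and the toy vectors are assumptions for illustration (the threshold value is not stated here).

```python
import numpy as np

# Sketch of the embedding-based matching function: average the word vectors of
# each field (a bag-of-words embedding), then require the cosine similarity of
# subject, relation and object to all clear a threshold.

def bow_embed(text, glove, dim):
    vecs = [glove[w] for w in text.lower().split() if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def tuples_match(pred, gold, glove, dim, threshold=0.7):
    return all(cosine(bow_embed(p, glove, dim), bow_embed(g, glove, dim)) >= threshold
               for p, g in zip(pred, gold))

toy_glove = {"powell": np.array([0.5, 0.5]), "fed": np.array([0.3, 0.7]),
             "works": np.array([1.0, 0.2]), "for": np.array([0.1, 1.0]),
             "is": np.array([0.9, 0.3]), "employed": np.array([0.8, 0.4]),
             "by": np.array([0.2, 0.9])}
print(tuples_match(("Powell", "works for", "Fed"),
                   ("Powell", "is employed by", "Fed"), toy_glove, dim=2))  # True
```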
Results
The results on our implicit corpus are shown in Figure 2 (our method in blue). For continuity with prior work, we also compare our model on the original corpus but using our new matching function in Figure 3.
Our model outperforms at every point in the implicit-tuples PR curve, accomplishing our goal of increasing recall on implicit relations. Our system performs poorly on explicit tuples, as we would expect considering our training data. We tried creating a multi-task model, but found that the model learned to produce either implicit or explicit tuples. Creating a multi-task network would be ideal, though it is sufficient for production systems to use both systems in tandem.
Conclusion
We created a large training corpus for implicit OpenIE extractors based on SQuAD and NewsQA, trained a baseline on this dataset, and presented promising results on implicit extraction. We see this as part of a larger body of work in text-representation schemes which aim to represent meaning in a more structured form than free text. Implicit information extraction goes further than traditional OpenIE to elicit relations not contained in the original free text. This allows maximally-shortened tuples where common sense relations are made explicit. Our model should improve further as more QA datasets are released and converted to OpenIE data using our conversion tool. | The model outperforms at every point in the
implicit-tuples PR curve reaching almost 0.8 in recall |
e79a5e435fcf5587535f06c9215d19a66caadaff | e79a5e435fcf5587535f06c9215d19a66caadaff_0 | Q: What is the SemEval-2016 task 8?
Text: Introduction
Abstract Meaning Representation (AMR) BIBREF0 is a semantic formalism encoding the meaning of a sentence as a rooted, directed graph. AMR uses a graph to represent meaning, where nodes (such as “boy”, “want-01”) represent concepts, and edges (such as “ARG0”, “ARG1”) represent relations between concepts. Encoding many semantic phenomena into a graph structure, AMR is useful for NLP tasks such as machine translation BIBREF1 , BIBREF2 , question answering BIBREF3 , summarization BIBREF4 and event detection BIBREF5 .
AMR-to-text generation is challenging as function words and syntactic structures are abstracted away, making an AMR graph correspond to multiple realizations. Despite much literature so far on text-to-AMR parsing BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , there has been little work on AMR-to-text generation BIBREF16 , BIBREF17 , BIBREF18 .
jeff2016amrgen transform a given AMR graph into a spanning tree, before translating it to a sentence using a tree-to-string transducer. Their method leverages existing machine translation techniques, capturing hierarchical correspondences between the spanning tree and the surface string. However, it suffers from error propagation since the output is constrained given a spanning tree due to the projective correspondence between them. Information loss in the graph-to-tree transformation step cannot be recovered. song-EtAl:2016:EMNLP2016 directly generate sentences using graph-fragment-to-string rules. They cast the task of finding a sequence of disjoint rules to transduce an AMR graph into a sentence as a traveling salesman problem, using local features and a language model to rank candidate sentences. However, their method does not learn hierarchical structural correspondences between AMR graphs and strings.
We propose to leverage the advantages of hierarchical rules without suffering from graph-to-tree errors by directly learning graph-to-string rules. As shown in Figure 1 , we learn a synchronous node replacement grammar (NRG) from a corpus of aligned AMR and sentence pairs. At test time, we apply a graph transducer to collapse input AMR graphs and generate output strings according to the learned grammar. Our system makes use of a log-linear model with real-valued features, tuned using MERT BIBREF19 , and beam search decoding. It gives a BLEU score of 25.62 on LDC2015E86, which is the state-of-the-art on this dataset.
Grammar Definition
A synchronous node replacement grammar (NRG) is a rewriting formalism: $G=\langle N, \Sigma , \Delta , P, S \rangle $, where $N$ is a finite set of nonterminals, $\Sigma $ and $\Delta $ are finite sets of terminal symbols for the source and target sides, respectively. $S \in N$ is the start symbol, and $P$ is a finite set of productions. Each instance of $P$ takes the form $X_i \rightarrow (\langle F, E\rangle ,\sim )$, where $X_i \in N$ is a nonterminal node, $F$ is a rooted, connected AMR fragment with edge labels over $\Sigma $ and node labels over $N \cup \Sigma $, $E$ is a corresponding target string over $N \cup \Delta $ and $\sim $ denotes the alignment of nonterminal symbols between $F$ and $E$. A classic NRG BIBREF20 also defines $C$, which is an embedding mechanism defining how $F$ is connected to the rest of the graph when replacing $X_i$ with $F$ on the graph. Here we omit defining $C$ and allow arbitrary connections. Following chiang:2005:ACL, we use only one nonterminal $X$ in addition to $S$, and use subscripts to distinguish different non-terminal instances.
Figure 2 shows an example derivation process for the sentence “the boy wants to go” given the rule set in Table 1 . Given the start symbol $S$ , which is first replaced with $X_1$ , rule (c) is applied to generate “ $X_2$ to go” and its AMR counterpart. Then rule (b) is used to generate “ $X_3$ wants” and its AMR counterpart from $X_2$ . Finally, rule (a) is used to generate “the boy” and its AMR counterpart from $X_3$ . Our graph-to-string rules are inspired by synchronous grammars for machine translation BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 .
Induced Rules
Algorithm (Rule extraction). Input: training corpus $C$; Output: rule instances $R$.
1: $R \leftarrow []$
2: for $(Sent, AMR, \sim )$ in $C$ do
3:     $R_{cur} \leftarrow $ FragmentExtract($Sent$, $AMR$, $\sim $)
4:     for $r_i$ in $R_{cur}$ do
5:         $R$.append($r_i$)
6:         for $r_j$ in $R_{cur}$ do
7:             if $r_i$.Contains($r_j$) then
8:                 $r_{ij} \leftarrow r_i$.collapse($r_j$)
9:                 $R$.append($r_{ij}$)
There are three types of rules in our system, namely induced rules, concept rules and graph glue rules. Here we first introduce induced rules, which are obtained by a two-step procedure on a training corpus. As shown in Algorithm "Induced Rules", the first step is to extract a set of initial rules from training $\langle $ sentence, AMR, $\sim $ $\rangle $ pairs (Line 2) using the phrase-to-graph-fragment extraction algorithm of peng2015synchronous (Line 3). Here an initial rule contains only terminal symbols in both $F$ and $E$. As a next step, we match between pairs of initial rules $r_i$ and $r_j$, and generate $r_{ij}$ by collapsing $r_i$ with $r_j$ if $r_i$ contains $r_j$ (Lines 6-8). Here $r_i$ contains $r_j$ if $F_j$ is a subgraph of $F_i$ and $E_j$ is a sub-phrase of $E_i$. When collapsing $r_i$ with $r_j$, we replace the corresponding subgraph in $F_i$ with a new non-terminal node, and the sub-phrase in $E_i$ with the same non-terminal. For example, we obtain rule (b) by collapsing (d) with (a) in Table 1. All initial and generated rules are stored in a rule list $R$ (Lines 5 and 9), which will be further normalized to obtain the final induced rule set.
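In code, the two-step extraction loop corresponds to the following sketch; `fragment_extract`, `contains` and `collapse` are placeholders for the phrase-to-fragment extractor and the subgraph / sub-phrase operations described above.

```python
# Sketch of the two-step extraction loop, mirroring the pseudocode above.

def extract_rules(corpus, fragment_extract, contains, collapse):
    rules = []
    for sent, amr, align in corpus:                    # (sentence, AMR, ~) pairs
        initial = fragment_extract(sent, amr, align)   # initial, all-terminal rules
        for r_i in initial:
            rules.append(r_i)
            for r_j in initial:
                if r_i is not r_j and contains(r_i, r_j):
                    rules.append(collapse(r_i, r_j))   # e.g. (d) + (a) -> (b)
    return rules
```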
Concept Rules and Glue Rules
In addition to induced rules, we adopt concept rules BIBREF17 and graph glue rules to ensure existence of derivations. For a concept rule, $F$ is a single node in the input AMR graph, and $E$ is a morphological string of the node concept. A concept rule is used in case no induced rule can cover the node. We refer to the verbalization list and AMR guidelines for creating more complex concept rules. For example, one concept rule created from the verbalization list is “(k / keep-01 :ARG1 (p / peace)) $|||$ peacekeeping”.
Inspired by chiang:2005:ACL, we define graph glue rules to concatenate non-terminal nodes connected with an edge, when no induced rules can be applied. Three glue rules are defined for each type of edge label. Taking the edge label “ARG0” as an example, we create the following glue rules:
where for both $r_1$ and $r_2$, $F$ contains two non-terminal nodes with a directed edge connecting them, and $E$ is the concatenation of the two non-terminals in either the monotonic or the inverse order. For $r_3$, $F$ contains one non-terminal node with a self-pointing edge, and $E$ is the non-terminal. With concept rules and glue rules in our final rule set, it is easily guaranteed that there are legal derivations for any input AMR graph.
Model
We adopt a log-linear model for scoring search hypotheses. Given an input AMR graph, we find the highest scored derivation $t^{\ast }$ from all possible derivations $t$ :
$$t^{\ast } = \arg \!\max _{t} \exp \sum _i w_i f_i(g,t)\textrm {,}$$ (Eq. 11)
where $g$ denotes the input AMR, $f_i(\cdot ,\cdot )$ and $w_i$ represent a feature and the corresponding weight, respectively. The feature set that we adopt includes phrase-to-graph and graph-to-phrase translation probabilities and their corresponding lexicalized translation probabilities (section "Translation Probabilities" ), language model score, word count, rule count, reordering model score (section "Reordering Model" ) and moving distance (section "Moving Distance" ). The language model score, word count and phrase count features are adopted from SMT BIBREF30 , BIBREF24 .
We perform bottom-up search to transduce input AMRs to surface strings. Each hypothesis contains the current AMR graph, translations of collapsed subgraphs, the feature vector and the current model score. Beam search is adopted, where hypotheses with the same number of collapsed edges and nodes are put into the same beam.
Translation Probabilities
Production rules serve as a basis for scoring hypotheses. We associate each synchronous NRG rule $n \rightarrow (\langle F, E \rangle ,\sim )$ with a set of probabilities. First, phrase-to-fragment translation probabilities are defined based on maximum likelihood estimation (MLE), as shown in Equation 13 , where $c_{\langle F, E \rangle }$ is the fractional count of $\langle F, E \rangle $ .
$$p(F|E)=\frac{c_{\langle F,E \rangle }}{\sum _{F^{\prime }}c_{\langle F^{\prime },E \rangle }}$$ (Eq. 13)
In addition, lexicalized translation probabilities are defined as:
$$p_w(F|E)=\prod _{l \in F}{\sum _{w \in E} p(l|w)}$$ (Eq. 14)
Here $l$ is a label (including both edge labels such as “ARG0” and concept labels such as “want-01”) in the AMR fragment $F$ , and $w$ is a word in the phrase $E$ . Equation 14 can be regarded as a “soft” version of the lexicalized translation probabilities adopted by SMT, which picks the alignment yielding the maximum lexicalized probability for each translation rule. In addition to $p(F|E)$ and $p_w(F|E)$ , we use features in the reverse direction, namely $p(E|F)$ and $p_w(E|F)$ , the definitions of which are omitted as they are consistent with Equations 13 and 14 , respectively. The probabilities associated with concept rules and glue rules are manually set to 0.0001.
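The two probability estimates can be read off as in the sketch below; the fractional counts and the $p(l|w)$ table are toy placeholders, and a fragment is represented simply as a tuple of labels.

```python
from collections import defaultdict

# Sketch of the MLE phrase-to-fragment probability p(F|E) from fractional rule
# counts, and the "soft" lexicalized probability p_w(F|E) built from p(l|w).

def mle_prob(counts, F, E):
    """p(F|E) = c(F,E) / sum_F' c(F',E)."""
    total = sum(c for (f, e), c in counts.items() if e == E)
    return counts.get((F, E), 0.0) / total if total else 0.0

def lex_prob(labels, words, p_l_given_w):
    """p_w(F|E) = prod_{l in F} sum_{w in E} p(l|w)."""
    prob = 1.0
    for l in labels:
        prob *= sum(p_l_given_w.get((l, w), 0.0) for w in words)
    return prob

counts = defaultdict(float, {(("want-01",), "wants"): 3.0, (("go-01",), "wants"): 1.0})
print(mle_prob(counts, ("want-01",), "wants"))                        # 0.75
print(lex_prob(["want-01"], ["wants"], {("want-01", "wants"): 0.9}))  # 0.9
```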
Reordering Model
Although the word order is defined for induced rules, it is not the case for glue rules. We learn a reordering model that helps to decide whether the translations of the nodes should be monotonic or inverse given the directed connecting edge label. The probabilistic model using smoothed counts is defined as:
$$p(M|h,l,t)= \frac{1.0+\sum _{h}\sum _{t}c(h,l,t,M)}{2.0+\sum _{o\in \lbrace M,I\rbrace }\sum _{h}\sum _{t}c(h,l,t,o)}$$ (Eq. 16)
$c(h,l,t,M)$ is the count of monotonic translations of head $h$ and tail $t$ , connected by edge $l$ .
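A sketch of this smoothed estimate follows; for brevity it conditions only on the edge label (summing the counts over heads and tails, as the summations in Eq. 16 suggest), and the toy counts are illustrative.

```python
# Sketch of the smoothed reordering probability in Eq. 16: the chance that the
# head and tail translations stay monotonic given the connecting edge label.
# counts[(h, l, t, o)] holds training counts for orientation o in {"M", "I"}.

def reorder_prob(counts, label, orientation="M"):
    mono = sum(c for (h, l, t, o), c in counts.items()
               if l == label and o == orientation)
    both = sum(c for (h, l, t, o), c in counts.items() if l == label)
    return (1.0 + mono) / (2.0 + both)

toy = {("want-01", "ARG0", "boy", "M"): 8, ("want-01", "ARG0", "boy", "I"): 2}
print(reorder_prob(toy, "ARG0"))  # (1 + 8) / (2 + 10) = 0.75
```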
Moving Distance
The moving distance feature captures the distances between the subgraph roots of two consecutive rule matches in the decoding process, which controls a bias towards collapsing nearby subgraphs consecutively.
Setup
We use LDC2015E86 as our experimental dataset, which contains 16833 training, 1368 dev and 1371 test instances. Each instance contains a sentence, an AMR graph and the alignment generated by a heuristic aligner. Rules are extracted from the training data, and model parameters are tuned on the dev set. For tuning and testing, we filter out sentences with more than 30 words, resulting in 1103 dev instances and 1055 test instances. We train a 4-gram language model (LM) on gigaword (LDC2011T07), and use BLEU BIBREF31 as the evaluation metric. MERT BIBREF19 is used to tune model parameters on $k$-best outputs on the devset, where $k$ is set to 50.
We investigate the effectiveness of rules and features by ablation tests: “NoInducedRule” does not adopt induced rules, “NoConceptRule” does not adopt concept rules, “NoMovingDistance” does not adopt the moving distance feature, and “NoReorderModel” disables the reordering model. Given an AMR graph, if NoConceptRule cannot produce a legal derivation, we concatenate existing translation fragments into a final translation, and if a subgraph can not be translated, the empty string is used as the output. We also compare our method with previous works, in particular JAMR-gen BIBREF16 and TSP-gen BIBREF17 , on the same dataset.
Main results
The results are shown in Table 2 . First, All outperforms all baselines. NoInducedRule leads to the greatest performance drop compared with All, demonstrating that induced rules play a very important role in our system. On the other hand, NoConceptRule does not lead to much performance drop. This observation is consistent with that of song-EtAl:2016:EMNLP2016 for their TSP-based system. NoMovingDistance leads to a significant performance drop, empirically verifying the fact that the translations of nearby subgraphs are also close. Finally, NoReorderModel does not affect the performance significantly, which may be because the most important reordering patterns are already covered by the hierarchical induced rules. Compared with TSP-gen and JAMR-gen, our final model All improves the BLEU score from 22.44 and 23.00 to 25.62, showing the advantage of our model. To our knowledge, this is the best result reported so far on the task.
Grammar analysis
We have shown the effectiveness of our synchronous node replacement grammar (SNRG) on the AMR-to-text generation task. Here we further analyze our grammar as it is relatively less studied than the hyperedge replacement grammar (HRG) BIBREF32 .
Statistics on the whole rule set
We first categorize our rule set by the number of terminals and nonterminals in the AMR fragment $F$ , and show the percentages of each type in Figure 3 . Each rule contains at most 1 nonterminal, as we collapse each initial rule only once. First of all, the percentage of rules containing nonterminals is much higher than that of rules without nonterminals, as we collapse each pair of initial rules (in Algorithm "Induced Rules" ) and the number of resulting rules can be quadratic in the number of initial rules. In addition, most rules are small, containing 1 to 3 terminals, meaning that they represent small pieces of meaning and are easier to match on a new AMR graph. Finally, there are a few large rules, which represent complex meanings.
Statistics on the rules used for decoding
In addition, we collect the rules that our well-tuned system used for generating the 1-best output on the test set, and categorize them into 3 types: (1) glue rules, (2) nonterminal rules, which are not glue rules but contain nonterminals on the right-hand side, and (3) terminal rules, whose right-hand side contains only terminals. Of the rules used for the 1-best results, more than 30% are nonterminal rules, showing that the induced rules play an important role. On the other hand, 30% are glue rules. The reason is that the data sparsity for graph grammars is more severe than for string-based grammars (such as CFG), as graph structures are more complex than strings. Finally, terminal rules take the largest percentage; most of these are induced rules rather than concept rules.
Rule examples
Finally, we show some rules in Table 4 , where $F$ and $E$ are the right-hand-side AMR fragment and phrase, respectively. For the first rule, the root of $F$ is a verb (“give-01”) whose subject is a nonterminal and object is a AMR fragment “(p / person :ARG0-of (u / use-01))”, which means “user”. So it is easy to see that the corresponding phrase $E$ conveys the same meaning. For the second rule, “(s3 / stay-01 :accompanier (i / i))” means “stay with me”, which is also covered by its phrase.
Generation example
Finally, we show an example in Table 5 , where the top is the input AMR graph, and the bottom is the generation result. Generally, most of the meaning of the input AMR is correctly translated, such as “:example”, which means “such as”, and “thing”, which is an abstract concept and should not be translated, while there are a few errors; for example, “that” in the result should be “what”, and there should be an “in” between “tmt” and “fairfax”.
Conclusion
We showed that synchronous node replacement grammar is useful for AMR-to-text generation by developing a system that learns a synchronous NRG in the training time, and applies a graph transducer to collapse input AMR graphs and generate output strings according to the learned grammar at test time. Our method performs better than the previous systems, empirically proving the advantages of our graph-to-string rules.
Acknowledgement
This work was funded by a Google Faculty Research Award. Yue Zhang is funded by NSFC61572245 and T2MOE201301 from Singapore Ministry of Education. | Unanswerable |
f7d67d6c6fbc62b2953ab74db6871b122b3c92cc | f7d67d6c6fbc62b2953ab74db6871b122b3c92cc_0 | Q: How much faster is training time for MGNC-CNN over the baselines?
Text: Introduction
Neural models have recently gained popularity for Natural Language Processing (NLP) tasks BIBREF0 , BIBREF1 , BIBREF2 . For sentence classification, in particular, Convolution Neural Networks (CNN) have realized impressive performance BIBREF3 , BIBREF4 . These models operate over word embeddings, i.e., dense, low dimensional vector representations of words that aim to capture salient semantic and syntactic properties BIBREF1 .
An important consideration for such models is the specification of the word embeddings. Several options exist. For example, Kalchbrenner et al. kalchbrenner2014convolutional initialize word vectors to random low-dimensional vectors to be fit during training, while Johnson and Zhang johnson2014effective use fixed, one-hot encodings for each word. By contrast, Kim kim2014convolutional initializes word vectors to those estimated via the word2vec model trained on 100 billion words of Google News BIBREF5 ; these are then updated during training. Initializing embeddings to pre-trained word vectors is intuitively appealing because it allows transfer of learned distributional semantics. This has allowed a relatively simple CNN architecture to achieve remarkably strong results.
Many pre-trained word embeddings are now readily available on the web, induced using different models, corpora, and processing steps. Different embeddings may encode different aspects of language BIBREF6 , BIBREF7 , BIBREF8 : those based on bag-of-words (BoW) statistics tend to capture associations (doctor and hospital), while embeddings based on dependency-parses encode similarity in terms of use (doctor and surgeon). It is natural to consider how these embeddings might be combined to improve NLP models in general and CNNs in particular.
Contributions. We propose MGNC-CNN, a novel, simple, scalable CNN architecture that can accommodate multiple off-the-shelf embeddings of variable sizes. Our model treats different word embeddings as distinct groups, and applies CNNs independently to each, thus generating corresponding feature vectors (one per embedding) which are then concatenated at the classification layer. Inspired by prior work exploiting regularization to encode structure for NLP tasks BIBREF9 , BIBREF10 , we impose different regularization penalties on weights for features generated from the respective word embedding sets.
Our approach enjoys the following advantages compared to the only existing comparable model BIBREF11 : (i) It can leverage diverse, readily available word embeddings with different dimensions, thus providing flexibility. (ii) It is comparatively simple, and does not, for example, require mutual learning or pre-training. (iii) It is an order of magnitude more efficient in terms of training time.
Related Work
Prior work has considered combining latent representations of words that capture syntactic and semantic properties BIBREF12 , and inducing multi-modal embeddings BIBREF13 for general NLP tasks. And recently, Luo et al. luo2014pre proposed a framework that combines multiple word embeddings to measure text similarity, however their focus was not on classification.
More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement).
Model Description
We first review standard one-layer CNN (which exploits a single set of embeddings) for sentence classification BIBREF3 , and then propose our augmentations, which exploit multiple embedding sets.
Basic CNN. In this model we first replace each word in a sentence with its vector representation, resulting in a sentence matrix INLINEFORM0 , where INLINEFORM1 is the (zero-padded) sentence length, and INLINEFORM2 is the dimensionality of the embeddings. We apply a convolution operation between linear filters with parameters INLINEFORM3 and the sentence matrix. For each INLINEFORM4 , where INLINEFORM5 denotes `height', we slide filter INLINEFORM6 across INLINEFORM7 , considering `local regions' of INLINEFORM8 adjacent rows at a time. At each local region, we perform element-wise multiplication and then take the element-wise sum between the filter and the (flattened) sub-matrix of INLINEFORM9 , producing a scalar. We do this for each sub-region of INLINEFORM10 that the filter spans, resulting in a feature map vector INLINEFORM11 . We can use multiple filter sizes with different heights, and for each filter size we can have multiple filters. Thus the model comprises INLINEFORM12 weight vectors INLINEFORM13 , each of which is associated with an instantiation of a specific filter size. These in turn generate corresponding feature maps INLINEFORM14 , with dimensions varying with filter size. A 1-max pooling operation is applied to each feature map, extracting the largest number INLINEFORM15 from each feature map INLINEFORM16 . Finally, we combine all INLINEFORM17 together to form a feature vector INLINEFORM18 to be fed through a softmax function for classification. We regularize weights at this level in two ways. (1) Dropout, in which we randomly set elements in INLINEFORM19 to zero during the training phase with probability INLINEFORM20 , and multiply INLINEFORM21 with the parameters trained in INLINEFORM22 at test time. (2) An l2 norm penalty, for which we set a threshold INLINEFORM23 for the l2 norm of INLINEFORM24 during training; if this is exceeded, we rescale the vector accordingly. For more details, see BIBREF4 .
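For illustration, the pipeline above can be sketched in a few lines of PyTorch. This is not the authors' implementation; the vocabulary size, number of classes, and other hyperparameters below are placeholders, and sentences are assumed to be zero-padded to at least the largest filter height.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicCNN(nn.Module):
    """One-layer CNN for sentence classification: convolution over word rows,
    1-max pooling per feature map, dropout, then a softmax classifier."""
    def __init__(self, vocab_size=20000, emb_dim=300, num_classes=2,
                 filter_heights=(3, 4, 5), num_filters=100, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # One Conv2d per filter height; each filter spans `h` adjacent word rows.
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, num_filters, kernel_size=(h, emb_dim)) for h in filter_heights])
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(num_filters * len(filter_heights), num_classes)

    def forward(self, token_ids):                     # (batch, seq_len)
        x = self.embedding(token_ids).unsqueeze(1)    # (batch, 1, seq_len, emb_dim)
        feature_maps = [F.relu(conv(x)).squeeze(3) for conv in self.convs]
        pooled = [fm.max(dim=2).values for fm in feature_maps]   # 1-max pooling
        features = torch.cat(pooled, dim=1)           # the feature vector fed to softmax
        return self.fc(self.dropout(features))        # logits for a softmax loss
```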
MG-CNN. Assuming we have INLINEFORM0 word embeddings with corresponding dimensions INLINEFORM1 , we can simply treat each word embedding independently. In this case, the input to the CNN comprises multiple sentence matrices INLINEFORM2 , where each INLINEFORM3 may have its own width INLINEFORM4 . We then apply different groups of filters INLINEFORM5 independently to each INLINEFORM6 , where INLINEFORM7 denotes the set of filters for INLINEFORM8 . As in basic CNN, INLINEFORM9 may have multiple filter sizes, and multiple filters of each size may be introduced. At the classification layer we then obtain a feature vector INLINEFORM10 for each embedding set, and we can simply concatenate these together to form the final feature vector INLINEFORM11 to feed into the softmax function, where INLINEFORM12 . This representation contains feature vectors generated from all sets of embeddings under consideration. We call this method multiple group CNN (MG-CNN). Here groups refer to the features generated from different embeddings. Note that this differs from `multi-channel' models because at the convolution layer we use different filters on each word embedding matrix independently, whereas in a standard multi-channel approach each filter would consider all channels simultaneously and generate a scalar from all channels at each local region. As above, we impose a max l2 norm constraint on the final feature vector INLINEFORM13 for regularization. Figure FIGREF1 illustrates this approach.
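A corresponding sketch of MG-CNN, reusing the imports above: each embedding set is processed by its own branch of filters, and the pooled feature vectors are concatenated at the classification layer. For brevity the branch takes pre-built sentence matrices (one per embedding set) rather than token ids; all sizes are illustrative.

```python
class CNNBranch(nn.Module):
    """Convolution + 1-max pooling over one sentence matrix (one embedding set)."""
    def __init__(self, emb_dim, filter_heights=(3, 4, 5), num_filters=100):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, num_filters, kernel_size=(h, emb_dim)) for h in filter_heights])

    def forward(self, sent_matrix):                   # (batch, seq_len, emb_dim)
        x = sent_matrix.unsqueeze(1)
        pooled = [F.relu(conv(x)).squeeze(3).max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)               # per-embedding feature vector

class MGCNN(nn.Module):
    def __init__(self, emb_dims=(300, 300), num_classes=2, dropout=0.5):
        super().__init__()
        self.branches = nn.ModuleList([CNNBranch(d) for d in emb_dims])
        feat_dim = 100 * 3 * len(emb_dims)            # matches the CNNBranch defaults
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, sent_matrices):                 # list of (batch, seq_len, d_i)
        o = torch.cat([b(m) for b, m in zip(self.branches, sent_matrices)], dim=1)
        return self.fc(self.dropout(o))
```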
MGNC-CNN. We propose an augmentation of MG-CNN, Multi-Group Norm Constraint CNN (MGNC-CNN), which differs in its regularization strategy. Specifically, in this variant we impose grouped regularization constraints, independently regularizing subcomponents INLINEFORM0 derived from the respective embeddings, i.e., we impose separate max norm constraints INLINEFORM1 for each INLINEFORM2 (where INLINEFORM3 again indexes embedding sets); these INLINEFORM4 hyper-parameters are to be tuned on a validation set. Intuitively, this method aims to better capitalize on features derived from word embeddings that capture discriminative properties of text for the task at hand by penalizing larger weight estimates for features derived from less discriminative embeddings.
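The grouped norm constraint of MGNC-CNN can then be enforced by rescaling each per-embedding feature vector whenever its l2 norm exceeds its own threshold. The sketch below extends the MGCNN class above; the lambda values are placeholders to be tuned on a validation set.

```python
def group_max_norm(o_groups, lambdas, eps=1e-8):
    """Rescale each per-embedding feature vector o_i so that ||o_i||_2 <= lambda_i."""
    constrained = []
    for o_i, lam in zip(o_groups, lambdas):
        norms = o_i.norm(p=2, dim=1, keepdim=True)     # (batch, 1)
        scale = (lam / (norms + eps)).clamp(max=1.0)   # shrink only when the norm is exceeded
        constrained.append(o_i * scale)
    return torch.cat(constrained, dim=1)

class MGNCCNN(MGCNN):
    def __init__(self, emb_dims=(300, 300), lambdas=(3.0, 9.0), **kwargs):
        super().__init__(emb_dims=emb_dims, **kwargs)
        self.lambdas = lambdas                          # one constraint per embedding set

    def forward(self, sent_matrices):
        o_groups = [b(m) for b, m in zip(self.branches, sent_matrices)]
        o = group_max_norm(o_groups, self.lambdas)
        return self.fc(self.dropout(o))
```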
Datasets
Stanford Sentiment Treebank (SST) BIBREF14 . This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. For both, we remove phrases of length less than 4 from the training set.
Subj BIBREF15 . The aim here is to classify sentences as either subjective or objective. This comprises 5000 instances of each.
TREC BIBREF16 . A question classification dataset containing six classes: abbreviation, entity, description, human, location and numeric. There are 5500 training and 500 test instances.
Irony BIBREF17 . This dataset contains 16,006 sentences from reddit labeled as ironic (or not). The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make class sizes equal. Note that for this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced.
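The under-sampling step for the Irony data amounts to randomly keeping only as many non-ironic sentences as there are ironic ones before training; AUC is then computed on the untouched test portion. A small sketch (the label convention and field layout are assumptions):

```python
import random

def undersample_negatives(examples, seed=0):
    """examples: list of (sentence, label) pairs with label 1 = ironic, 0 = not ironic."""
    rng = random.Random(seed)
    pos = [ex for ex in examples if ex[1] == 1]
    neg = [ex for ex in examples if ex[1] == 0]
    balanced = pos + rng.sample(neg, k=len(pos))   # keep as many negatives as positives
    rng.shuffle(balanced)
    return balanced
```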
Pre-trained Word Embeddings
We consider three sets of word embeddings for our experiments: (i) word2vec is trained on 100 billion tokens of Google News dataset; (ii) GloVe BIBREF18 is trained on aggregated global word-word co-occurrence statistics from Common Crawl (840B tokens); and (iii) syntactic word embedding trained on dependency-parsed corpora. These three embedding sets happen to all be 300-dimensional, but our model could accommodate arbitrary and variable sizes. We pre-trained our own syntactic embeddings following BIBREF8 . We parsed the ukWaC corpus BIBREF19 using the Stanford Dependency Parser v3.5.2 with Stanford Dependencies BIBREF20 and extracted (word, relation+context) pairs from parse trees. We “collapsed" nodes with prepositions and notated inverse relations separately, e.g., “dog barks" emits two tuples: (barks, nsubj_dog) and (dog, nsubj INLINEFORM0 _barks). We filter words and contexts that appear fewer than 100 times, resulting in INLINEFORM1 173k words and 1M contexts. We trained 300d vectors using word2vecf with default parameters.
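The (word, relation+context) extraction can be pictured as follows: for every dependency arc one tuple is emitted for the head (relation plus dependent as context) and one inverse tuple for the dependent. The parser call and the preposition-collapsing detail are omitted; the sketch assumes arcs are already available as (head, relation, dependent) triples, and the inverse-relation marker is an assumption.

```python
def dependency_contexts(arcs):
    """arcs: iterable of (head, relation, dependent) triples from one parsed sentence.
    Yields (word, context) pairs in a word2vecf-style input format."""
    for head, rel, dep in arcs:
        yield head, f"{rel}_{dep}"      # e.g. ("barks", "nsubj_dog")
        yield dep, f"{rel}-1_{head}"    # inverse relation, e.g. ("dog", "nsubj-1_barks")

# The sentence "dog barks" with a single nsubj arc:
pairs = list(dependency_contexts([("barks", "nsubj", "dog")]))
# -> [("barks", "nsubj_dog"), ("dog", "nsubj-1_barks")]
```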
Setup
We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN (C-CNN). For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combinations of two embedding sets, word2vec+GloVe and word2vec+syntactic, and one combination of three embedding sets, word2vec+GloVe+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .
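Tuning the per-group constraints then reduces to a grid search over candidate lambda values on the validation split, taking the Cartesian product when two or three embedding sets are used. The candidate values and the train/eval callables below are placeholders, not the paper's actual search range.

```python
from itertools import product

def tune_lambdas(train_fn, eval_fn, candidates=(1.0, 3.0, 9.0, 27.0), num_groups=2):
    """train_fn(lambdas) -> trained model; eval_fn(model) -> validation score."""
    best_score, best_lambdas = float("-inf"), None
    for lambdas in product(candidates, repeat=num_groups):
        score = eval_fn(train_fn(lambdas))
        if score > best_score:
            best_score, best_lambdas = score, lambdas
    return best_lambdas, best_score
```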
We used standard train/test splits for those datasets that had them. Otherwise, we performed 10-fold cross validation, creating nested development sets with which to tune hyperparameters. For all experiments we used filter sizes of 3, 4, and 5, and we created 100 feature maps for each filter size. We applied 1-max pooling and dropout (rate: 0.5) at the classification layer. For training we used back-propagation in mini-batches with AdaDelta as the stochastic gradient descent (SGD) update rule, and set the mini-batch size to 50. In this work, we treat word embeddings as part of the parameters of the model, and update them as well during training. In all our experiments, we only tuned the max norm constraint(s), fixing all other hyperparameters.
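Put together, training looks roughly like the loop below (reusing the PyTorch imports and models from the sketches above): mini-batches of about 50 examples, AdaDelta updates, dropout handled inside the model, and the embeddings updated along with the other parameters because they are registered as model parameters. The batch iterator is assumed to yield padded inputs and labels.

```python
def train(model, batch_iter, num_epochs=10):
    """batch_iter(): yields (inputs, labels) mini-batches of size ~50."""
    optimizer = torch.optim.Adadelta(model.parameters())      # the SGD update rule used here
    model.train()
    for _ in range(num_epochs):
        for inputs, labels in batch_iter():
            optimizer.zero_grad()
            loss = F.cross_entropy(model(inputs), labels)     # softmax classification loss
            loss.backward()
            optimizer.step()
    return model
```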
Results and Discussion
We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . We also show results on Subj, SST-1 and SST-2 achieved by the more complex model of BIBREF11 for comparison; this represents the state-of-the-art on the three datasets other than TREC.
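Reporting the mean and range over repeated runs is straightforward once the per-run scores are collected; `run_experiment` below is a stand-in for one full train-and-evaluate cycle with a given random seed.

```python
import statistics

def replicate(run_experiment, repeats=10):
    scores = [run_experiment(seed=s) for s in range(repeats)]
    return statistics.mean(scores), (min(scores), max(scores))
```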
We can see that MGNC-CNN and MG-CNN always outperform baseline methods (including C-CNN), and MGNC-CNN is usually better than MG-CNN. And on the Subj dataset, MG-CNN actually achieves slightly better results than BIBREF11 , with far less complexity and required training time (MGNC-CNN performs comparably, although no better, here). On the TREC dataset, the best-ever accuracy we are aware of is 96.0% BIBREF21 , which falls within the range of the result of our MGNC-CNN model with three word embeddings. On the irony dataset, our model with three embeddings achieves 4% improvement (in terms of AUC) compared to the baseline model. On SST-1 and SST-2, our model performs slightly worse than BIBREF11 . However, we again note that their performance is achieved using a much more complex model which involves pre-training and mutual-learning steps. This model takes days to train, whereas our model requires on the order of an hour.
We note that the method proposed by Astudillo et al. astudillo2015learning is able to accommodate multiple embedding sets with different dimensions by projecting the original word embeddings into a lower-dimensional space. However, this work requires training the optimal projection matrix on labeled data first, which again incurs large overhead.
Of course, our model also has its own limitations: in MGNC-CNN, we need to tune the norm constraint hyperparameter for all the word embeddings. As the number of word embeddings increases, this will increase the running time. However, this tuning procedure is embarrassingly parallel.
Conclusions
We have proposed MGNC-CNN: a simple, flexible CNN architecture for sentence classification that can exploit multiple, variable sized word embeddings. We demonstrated that this consistently achieves better results than a baseline architecture that exploits only a single set of word embeddings, and also a naive concatenation approach to capitalizing on multiple embeddings. Furthermore, our results are comparable to those achieved with a recently proposed model BIBREF11 that is much more complex. However, our simple model is easy to implement and requires an order of magnitude less training time. Furthermore, our model is much more flexible than previous approaches, because it can accommodate variable-size word embeddings.
Acknowledgments
This work was supported in part by the Army Research Office (grant W911NF-14-1-0442) and by The Foundation for Science and Technology, Portugal (grant UTAP-EXPL/EEIESS/0031/2014). This work was also made possible by the support of the Texas Advanced Computer Center (TACC) at UT Austin. | It is an order of magnitude more efficient in terms of training time: the comparable model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour.
085147cd32153d46dd9901ab0f9195bfdbff6a85 | 085147cd32153d46dd9901ab0f9195bfdbff6a85_0 | Q: What are the baseline models?
Text: Introduction
Neural models have recently gained popularity for Natural Language Processing (NLP) tasks BIBREF0 , BIBREF1 , BIBREF2 . For sentence classification, in particular, Convolution Neural Networks (CNN) have realized impressive performance BIBREF3 , BIBREF4 . These models operate over word embeddings, i.e., dense, low dimensional vector representations of words that aim to capture salient semantic and syntactic properties BIBREF1 .
An important consideration for such models is the specification of the word embeddings. Several options exist. For example, Kalchbrenner et al. kalchbrenner2014convolutional initialize word vectors to random low-dimensional vectors to be fit during training, while Johnson and Zhang johnson2014effective use fixed, one-hot encodings for each word. By contrast, Kim kim2014convolutional initializes word vectors to those estimated via the word2vec model trained on 100 billion words of Google News BIBREF5 ; these are then updated during training. Initializing embeddings to pre-trained word vectors is intuitively appealing because it allows transfer of learned distributional semantics. This has allowed a relatively simple CNN architecture to achieve remarkably strong results.
Many pre-trained word embeddings are now readily available on the web, induced using different models, corpora, and processing steps. Different embeddings may encode different aspects of language BIBREF6 , BIBREF7 , BIBREF8 : those based on bag-of-words (BoW) statistics tend to capture associations (doctor and hospital), while embeddings based on dependency-parses encode similarity in terms of use (doctor and surgeon). It is natural to consider how these embeddings might be combined to improve NLP models in general and CNNs in particular.
Contributions. We propose MGNC-CNN, a novel, simple, scalable CNN architecture that can accommodate multiple off-the-shelf embeddings of variable sizes. Our model treats different word embeddings as distinct groups, and applies CNNs independently to each, thus generating corresponding feature vectors (one per embedding) which are then concatenated at the classification layer. Inspired by prior work exploiting regularization to encode structure for NLP tasks BIBREF9 , BIBREF10 , we impose different regularization penalties on weights for features generated from the respective word embedding sets.
Our approach enjoys the following advantages compared to the only existing comparable model BIBREF11 : (i) It can leverage diverse, readily available word embeddings with different dimensions, thus providing flexibility. (ii) It is comparatively simple, and does not, for example, require mutual learning or pre-training. (iii) It is an order of magnitude more efficient in terms of training time.
Related Work
Prior work has considered combining latent representations of words that capture syntactic and semantic properties BIBREF12 , and inducing multi-modal embeddings BIBREF13 for general NLP tasks. And recently, Luo et al. luo2014pre proposed a framework that combines multiple word embeddings to measure text similarity, however their focus was not on classification.
More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement).
Model Description
We first review standard one-layer CNN (which exploits a single set of embeddings) for sentence classification BIBREF3 , and then propose our augmentations, which exploit multiple embedding sets.
Basic CNN. In this model we first replace each word in a sentence with its vector representation, resulting in a sentence matrix INLINEFORM0 , where INLINEFORM1 is the (zero-padded) sentence length, and INLINEFORM2 is the dimensionality of the embeddings. We apply a convolution operation between linear filters with parameters INLINEFORM3 and the sentence matrix. For each INLINEFORM4 , where INLINEFORM5 denotes `height', we slide filter INLINEFORM6 across INLINEFORM7 , considering `local regions' of INLINEFORM8 adjacent rows at a time. At each local region, we perform element-wise multiplication and then take the element-wise sum between the filter and the (flattened) sub-matrix of INLINEFORM9 , producing a scalar. We do this for each sub-region of INLINEFORM10 that the filter spans, resulting in a feature map vector INLINEFORM11 . We can use multiple filter sizes with different heights, and for each filter size we can have multiple filters. Thus the model comprises INLINEFORM12 weight vectors INLINEFORM13 , each of which is associated with an instantiation of a specific filter size. These in turn generate corresponding feature maps INLINEFORM14 , with dimensions varying with filter size. A 1-max pooling operation is applied to each feature map, extracting the largest number INLINEFORM15 from each feature map INLINEFORM16 . Finally, we combine all INLINEFORM17 together to form a feature vector INLINEFORM18 to be fed through a softmax function for classification. We regularize weights at this level in two ways. (1) Dropout, in which we randomly set elements in INLINEFORM19 to zero during the training phase with probability INLINEFORM20 , and multiply INLINEFORM21 with the parameters trained in INLINEFORM22 at test time. (2) An l2 norm penalty, for which we set a threshold INLINEFORM23 for the l2 norm of INLINEFORM24 during training; if this is exceeded, we rescale the vector accordingly. For more details, see BIBREF4 .
MG-CNN. Assuming we have INLINEFORM0 word embeddings with corresponding dimensions INLINEFORM1 , we can simply treat each word embedding independently. In this case, the input to the CNN comprises multiple sentence matrices INLINEFORM2 , where each INLINEFORM3 may have its own width INLINEFORM4 . We then apply different groups of filters INLINEFORM5 independently to each INLINEFORM6 , where INLINEFORM7 denotes the set of filters for INLINEFORM8 . As in basic CNN, INLINEFORM9 may have multiple filter sizes, and multiple filters of each size may be introduced. At the classification layer we then obtain a feature vector INLINEFORM10 for each embedding set, and we can simply concatenate these together to form the final feature vector INLINEFORM11 to feed into the softmax function, where INLINEFORM12 . This representation contains feature vectors generated from all sets of embeddings under consideration. We call this method multiple group CNN (MG-CNN). Here groups refer to the features generated from different embeddings. Note that this differs from `multi-channel' models because at the convolution layer we use different filters on each word embedding matrix independently, whereas in a standard multi-channel approach each filter would consider all channels simultaneously and generate a scalar from all channels at each local region. As above, we impose a max l2 norm constraint on the final feature vector INLINEFORM13 for regularization. Figure FIGREF1 illustrates this approach.
MGNC-CNN. We propose an augmentation of MG-CNN, Multi-Group Norm Constraint CNN (MGNC-CNN), which differs in its regularization strategy. Specifically, in this variant we impose grouped regularization constraints, independently regularizing subcomponents INLINEFORM0 derived from the respective embeddings, i.e., we impose separate max norm constraints INLINEFORM1 for each INLINEFORM2 (where INLINEFORM3 again indexes embedding sets); these INLINEFORM4 hyper-parameters are to be tuned on a validation set. Intuitively, this method aims to better capitalize on features derived from word embeddings that capture discriminative properties of text for the task at hand by penalizing larger weight estimates for features derived from less discriminative embeddings.
Datasets
Stanford Sentiment Treebank (SST) BIBREF14 . This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. For both, we remove phrases of length less than 4 from the training set.
Subj BIBREF15 . The aim here is to classify sentences as either subjective or objective. This comprises 5000 instances of each.
TREC BIBREF16 . A question classification dataset containing six classes: abbreviation, entity, description, human, location and numeric. There are 5500 training and 500 test instances.
Irony BIBREF17 . This dataset contains 16,006 sentences from reddit labeled as ironic (or not). The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make class sizes equal. Note that for this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced.
Pre-trained Word Embeddings
We consider three sets of word embeddings for our experiments: (i) word2vec is trained on 100 billion tokens of Google News dataset; (ii) GloVe BIBREF18 is trained on aggregated global word-word co-occurrence statistics from Common Crawl (840B tokens); and (iii) syntactic word embedding trained on dependency-parsed corpora. These three embedding sets happen to all be 300-dimensional, but our model could accommodate arbitrary and variable sizes. We pre-trained our own syntactic embeddings following BIBREF8 . We parsed the ukWaC corpus BIBREF19 using the Stanford Dependency Parser v3.5.2 with Stanford Dependencies BIBREF20 and extracted (word, relation+context) pairs from parse trees. We “collapsed" nodes with prepositions and notated inverse relations separately, e.g., “dog barks" emits two tuples: (barks, nsubj_dog) and (dog, nsubj INLINEFORM0 _barks). We filter words and contexts that appear fewer than 100 times, resulting in INLINEFORM1 173k words and 1M contexts. We trained 300d vectors using word2vecf with default parameters.
Setup
We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN (C-CNN). For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combinations of two embedding sets, word2vec+GloVe and word2vec+syntactic, and one combination of three embedding sets, word2vec+GloVe+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .
We used standard train/test splits for those datasets that had them. Otherwise, we performed 10-fold cross validation, creating nested development sets with which to tune hyperparameters. For all experiments we used filter sizes of 3, 4, and 5, and we created 100 feature maps for each filter size. We applied 1-max pooling and dropout (rate: 0.5) at the classification layer. For training we used back-propagation in mini-batches with AdaDelta as the stochastic gradient descent (SGD) update rule, and set the mini-batch size to 50. In this work, we treat word embeddings as part of the parameters of the model, and update them as well during training. In all our experiments, we only tuned the max norm constraint(s), fixing all other hyperparameters.
Results and Discussion
We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . We also show results on Subj, SST-1 and SST-2 achieved by the more complex model of BIBREF11 for comparison; this represents the state-of-the-art on the three datasets other than TREC.
We can see that MGNC-CNN and MG-CNN always outperform baseline methods (including C-CNN), and MGNC-CNN is usually better than MG-CNN. And on the Subj dataset, MG-CNN actually achieves slightly better results than BIBREF11 , with far less complexity and required training time (MGNC-CNN performs comparably, although no better, here). On the TREC dataset, the best-ever accuracy we are aware of is 96.0% BIBREF21 , which falls within the range of the result of our MGNC-CNN model with three word embeddings. On the irony dataset, our model with three embeddings achieves 4% improvement (in terms of AUC) compared to the baseline model. On SST-1 and SST-2, our model performs slightly worse than BIBREF11 . However, we again note that their performance is achieved using a much more complex model which involves pre-training and mutual-learning steps. This model takes days to train, whereas our model requires on the order of an hour.
We note that the method proposed by Astudillo et al. astudillo2015learning is able to accommodate multiple embedding sets with different dimensions by projecting the original word embeddings into a lower-dimensional space. However, this work requires training the optimal projection matrix on labeled data first, which again incurs large overhead.
Of course, our model also has its own limitations: in MGNC-CNN, we need to tune the norm constraint hyperparameter for all the word embeddings. As the number of word embeddings increases, this will increase the running time. However, this tuning procedure is embarrassingly parallel.
Conclusions
We have proposed MGNC-CNN: a simple, flexible CNN architecture for sentence classification that can exploit multiple, variable sized word embeddings. We demonstrated that this consistently achieves better results than a baseline architecture that exploits only a single set of word embeddings, and also a naive concatenation approach to capitalizing on multiple embeddings. Furthermore, our results are comparable to those achieved with a recently proposed model BIBREF11 that is much more complex. However, our simple model is easy to implement and requires an order of magnitude less training time. Furthermore, our model is much more flexible than previous approaches, because it can accommodate variable-size word embeddings.
Acknowledgments
This work was supported in part by the Army Research Office (grant W911NF-14-1-0442) and by The Foundation for Science and Technology, Portugal (grant UTAP-EXPL/EEIESS/0031/2014). This work was also made possible by the support of the Texas Advanced Computer Center (TACC) at UT Austin. | MC-CNN
MVCNN
CNN |
c0035fb1c2b3de15146a7ce186ccd2e366fb4da2 | c0035fb1c2b3de15146a7ce186ccd2e366fb4da2_0 | Q: By how much of MGNC-CNN out perform the baselines?
Text: Introduction
Neural models have recently gained popularity for Natural Language Processing (NLP) tasks BIBREF0 , BIBREF1 , BIBREF2 . For sentence classification, in particular, Convolution Neural Networks (CNN) have realized impressive performance BIBREF3 , BIBREF4 . These models operate over word embeddings, i.e., dense, low dimensional vector representations of words that aim to capture salient semantic and syntactic properties BIBREF1 .
An important consideration for such models is the specification of the word embeddings. Several options exist. For example, Kalchbrenner et al. kalchbrenner2014convolutional initialize word vectors to random low-dimensional vectors to be fit during training, while Johnson and Zhang johnson2014effective use fixed, one-hot encodings for each word. By contrast, Kim kim2014convolutional initializes word vectors to those estimated via the word2vec model trained on 100 billion words of Google News BIBREF5 ; these are then updated during training. Initializing embeddings to pre-trained word vectors is intuitively appealing because it allows transfer of learned distributional semantics. This has allowed a relatively simple CNN architecture to achieve remarkably strong results.
Many pre-trained word embeddings are now readily available on the web, induced using different models, corpora, and processing steps. Different embeddings may encode different aspects of language BIBREF6 , BIBREF7 , BIBREF8 : those based on bag-of-words (BoW) statistics tend to capture associations (doctor and hospital), while embeddings based on dependency-parses encode similarity in terms of use (doctor and surgeon). It is natural to consider how these embeddings might be combined to improve NLP models in general and CNNs in particular.
Contributions. We propose MGNC-CNN, a novel, simple, scalable CNN architecture that can accommodate multiple off-the-shelf embeddings of variable sizes. Our model treats different word embeddings as distinct groups, and applies CNNs independently to each, thus generating corresponding feature vectors (one per embedding) which are then concatenated at the classification layer. Inspired by prior work exploiting regularization to encode structure for NLP tasks BIBREF9 , BIBREF10 , we impose different regularization penalties on weights for features generated from the respective word embedding sets.
Our approach enjoys the following advantages compared to the only existing comparable model BIBREF11 : (i) It can leverage diverse, readily available word embeddings with different dimensions, thus providing flexibility. (ii) It is comparatively simple, and does not, for example, require mutual learning or pre-training. (iii) It is an order of magnitude more efficient in terms of training time.
Related Work
Prior work has considered combining latent representations of words that capture syntactic and semantic properties BIBREF12 , and inducing multi-modal embeddings BIBREF13 for general NLP tasks. And recently, Luo et al. luo2014pre proposed a framework that combines multiple word embeddings to measure text similarity, however their focus was not on classification.
More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement).
Model Description
We first review standard one-layer CNN (which exploits a single set of embeddings) for sentence classification BIBREF3 , and then propose our augmentations, which exploit multiple embedding sets.
Basic CNN. In this model we first replace each word in a sentence with its vector representation, resulting in a sentence matrix INLINEFORM0 , where INLINEFORM1 is the (zero-padded) sentence length, and INLINEFORM2 is the dimensionality of the embeddings. We apply a convolution operation between linear filters with parameters INLINEFORM3 and the sentence matrix. For each INLINEFORM4 , where INLINEFORM5 denotes `height', we slide filter INLINEFORM6 across INLINEFORM7 , considering `local regions' of INLINEFORM8 adjacent rows at a time. At each local region, we perform element-wise multiplication and then take the element-wise sum between the filter and the (flattened) sub-matrix of INLINEFORM9 , producing a scalar. We do this for each sub-region of INLINEFORM10 that the filter spans, resulting in a feature map vector INLINEFORM11 . We can use multiple filter sizes with different heights, and for each filter size we can have multiple filters. Thus the model comprises INLINEFORM12 weight vectors INLINEFORM13 , each of which is associated with an instantiation of a specific filter size. These in turn generate corresponding feature maps INLINEFORM14 , with dimensions varying with filter size. A 1-max pooling operation is applied to each feature map, extracting the largest number INLINEFORM15 from each feature map INLINEFORM16 . Finally, we combine all INLINEFORM17 together to form a feature vector INLINEFORM18 to be fed through a softmax function for classification. We regularize weights at this level in two ways. (1) Dropout, in which we randomly set elements in INLINEFORM19 to zero during the training phase with probability INLINEFORM20 , and multiply INLINEFORM21 with the parameters trained in INLINEFORM22 at test time. (2) An l2 norm penalty, for which we set a threshold INLINEFORM23 for the l2 norm of INLINEFORM24 during training; if this is exceeded, we rescale the vector accordingly. For more details, see BIBREF4 .
MG-CNN. Assuming we have INLINEFORM0 word embeddings with corresponding dimensions INLINEFORM1 , we can simply treat each word embedding independently. In this case, the input to the CNN comprises multiple sentence matrices INLINEFORM2 , where each INLINEFORM3 may have its own width INLINEFORM4 . We then apply different groups of filters INLINEFORM5 independently to each INLINEFORM6 , where INLINEFORM7 denotes the set of filters for INLINEFORM8 . As in basic CNN, INLINEFORM9 may have multiple filter sizes, and multiple filters of each size may be introduced. At the classification layer we then obtain a feature vector INLINEFORM10 for each embedding set, and we can simply concatenate these together to form the final feature vector INLINEFORM11 to feed into the softmax function, where INLINEFORM12 . This representation contains feature vectors generated from all sets of embeddings under consideration. We call this method multiple group CNN (MG-CNN). Here groups refer to the features generated from different embeddings. Note that this differs from `multi-channel' models because at the convolution layer we use different filters on each word embedding matrix independently, whereas in a standard multi-channel approach each filter would consider all channels simultaneously and generate a scalar from all channels at each local region. As above, we impose a max l2 norm constraint on the final feature vector INLINEFORM13 for regularization. Figure FIGREF1 illustrates this approach.
MGNC-CNN. We propose an augmentation of MG-CNN, Multi-Group Norm Constraint CNN (MGNC-CNN), which differs in its regularization strategy. Specifically, in this variant we impose grouped regularization constraints, independently regularizing subcomponents INLINEFORM0 derived from the respective embeddings, i.e., we impose separate max norm constraints INLINEFORM1 for each INLINEFORM2 (where INLINEFORM3 again indexes embedding sets); these INLINEFORM4 hyper-parameters are to be tuned on a validation set. Intuitively, this method aims to better capitalize on features derived from word embeddings that capture discriminative properties of text for the task at hand by penalizing larger weight estimates for features derived from less discriminative embeddings.
Datasets
Stanford Sentiment Treebank (SST) BIBREF14 . This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. For both, we remove phrases of length less than 4 from the training set.
Subj BIBREF15 . The aim here is to classify sentences as either subjective or objective. This comprises 5000 instances of each.
TREC BIBREF16 . A question classification dataset containing six classes: abbreviation, entity, description, human, location and numeric. There are 5500 training and 500 test instances.
Irony BIBREF17 . This dataset contains 16,006 sentences from reddit labeled as ironic (or not). The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make class sizes equal. Note that for this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced.
Pre-trained Word Embeddings
We consider three sets of word embeddings for our experiments: (i) word2vec is trained on 100 billion tokens of Google News dataset; (ii) GloVe BIBREF18 is trained on aggregated global word-word co-occurrence statistics from Common Crawl (840B tokens); and (iii) syntactic word embedding trained on dependency-parsed corpora. These three embedding sets happen to all be 300-dimensional, but our model could accommodate arbitrary and variable sizes. We pre-trained our own syntactic embeddings following BIBREF8 . We parsed the ukWaC corpus BIBREF19 using the Stanford Dependency Parser v3.5.2 with Stanford Dependencies BIBREF20 and extracted (word, relation+context) pairs from parse trees. We “collapsed" nodes with prepositions and notated inverse relations separately, e.g., “dog barks" emits two tuples: (barks, nsubj_dog) and (dog, nsubj INLINEFORM0 _barks). We filter words and contexts that appear fewer than 100 times, resulting in INLINEFORM1 173k words and 1M contexts. We trained 300d vectors using word2vecf with default parameters.
Setup
We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN (C-CNN). For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combinations of two embedding sets, word2vec+GloVe and word2vec+syntactic, and one combination of three embedding sets, word2vec+GloVe+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .
We used standard train/test splits for those datasets that had them. Otherwise, we performed 10-fold cross validation, creating nested development sets with which to tune hyperparameters. For all experiments we used filter sizes of 3, 4, and 5, and we created 100 feature maps for each filter size. We applied 1-max pooling and dropout (rate: 0.5) at the classification layer. For training we used back-propagation in mini-batches with AdaDelta as the stochastic gradient descent (SGD) update rule, and set the mini-batch size to 50. In this work, we treat word embeddings as part of the parameters of the model, and update them as well during training. In all our experiments, we only tuned the max norm constraint(s), fixing all other hyperparameters.
Results and Discussion
We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . We also show results on Subj, SST-1 and SST-2 achieved by the more complex model of BIBREF11 for comparison; this represents the state-of-the-art on the three datasets other than TREC.
We can see that MGNC-CNN and MG-CNN always outperform baseline methods (including C-CNN), and MGNC-CNN is usually better than MG-CNN. And on the Subj dataset, MG-CNN actually achieves slightly better results than BIBREF11 , with far less complexity and required training time (MGNC-CNN performs comparably, although no better, here). On the TREC dataset, the best-ever accuracy we are aware of is 96.0% BIBREF21 , which falls within the range of the result of our MGNC-CNN model with three word embeddings. On the irony dataset, our model with three embeddings achieves 4% improvement (in terms of AUC) compared to the baseline model. On SST-1 and SST-2, our model performs slightly worse than BIBREF11 . However, we again note that their performance is achieved using a much more complex model which involves pre-training and mutual-learning steps. This model takes days to train, whereas our model requires on the order of an hour.
We note that the method proposed by Astudillo et al. astudillo2015learning is able to accommodate multiple embedding sets with different dimensions by projecting the original word embeddings into a lower-dimensional space. However, this work requires training the optimal projection matrix on labeled data first, which again incurs large overhead.
Of course, our model also has its own limitations: in MGNC-CNN, we need to tune the norm constraint hyperparameter for all the word embeddings. As the number of word embeddings increases, this will increase the running time. However, this tuning procedure is embarrassingly parallel.
Conclusions
We have proposed MGNC-CNN: a simple, flexible CNN architecture for sentence classification that can exploit multiple, variable sized word embeddings. We demonstrated that this consistently achieves better results than a baseline architecture that exploits only a single set of word embeddings, and also a naive concatenation approach to capitalizing on multiple embeddings. Furthermore, our results are comparable to those achieved with a recently proposed model BIBREF11 that is much more complex. However, our simple model is easy to implement and requires an order of magnitude less training time. Furthermore, our model is much more flexible than previous approaches, because it can accommodate variable-size word embeddings.
Acknowledgments
This work was supported in part by the Army Research Office (grant W911NF-14-1-0442) and by The Foundation for Science and Technology, Portugal (grant UTAP-EXPL/EEIESS/0031/2014). This work was also made possible by the support of the Texas Advanced Computer Center (TACC) at UT Austin. | In terms of Subj, the average MGNC-CNN score is better than the average baseline score by 0.5; SST-1, SST-2, and TREC show similar improvements.
In the case of Irony, the difference is about 2.0.
|
a8e4a67dd67ae4a9ebf983a90b0d256f4b9ff6c6 | a8e4a67dd67ae4a9ebf983a90b0d256f4b9ff6c6_0 | Q: What dataset/corpus is this evaluated over?
Text: Introduction
Neural models have recently gained popularity for Natural Language Processing (NLP) tasks BIBREF0 , BIBREF1 , BIBREF2 . For sentence classification, in particular, Convolution Neural Networks (CNN) have realized impressive performance BIBREF3 , BIBREF4 . These models operate over word embeddings, i.e., dense, low dimensional vector representations of words that aim to capture salient semantic and syntactic properties BIBREF1 .
An important consideration for such models is the specification of the word embeddings. Several options exist. For example, Kalchbrenner et al. kalchbrenner2014convolutional initialize word vectors to random low-dimensional vectors to be fit during training, while Johnson and Zhang johnson2014effective use fixed, one-hot encodings for each word. By contrast, Kim kim2014convolutional initializes word vectors to those estimated via the word2vec model trained on 100 billion words of Google News BIBREF5 ; these are then updated during training. Initializing embeddings to pre-trained word vectors is intuitively appealing because it allows transfer of learned distributional semantics. This has allowed a relatively simple CNN architecture to achieve remarkably strong results.
Many pre-trained word embeddings are now readily available on the web, induced using different models, corpora, and processing steps. Different embeddings may encode different aspects of language BIBREF6 , BIBREF7 , BIBREF8 : those based on bag-of-words (BoW) statistics tend to capture associations (doctor and hospital), while embeddings based on dependency-parses encode similarity in terms of use (doctor and surgeon). It is natural to consider how these embeddings might be combined to improve NLP models in general and CNNs in particular.
Contributions. We propose MGNC-CNN, a novel, simple, scalable CNN architecture that can accommodate multiple off-the-shelf embeddings of variable sizes. Our model treats different word embeddings as distinct groups, and applies CNNs independently to each, thus generating corresponding feature vectors (one per embedding) which are then concatenated at the classification layer. Inspired by prior work exploiting regularization to encode structure for NLP tasks BIBREF9 , BIBREF10 , we impose different regularization penalties on weights for features generated from the respective word embedding sets.
Our approach enjoys the following advantages compared to the only existing comparable model BIBREF11 : (i) It can leverage diverse, readily available word embeddings with different dimensions, thus providing flexibility. (ii) It is comparatively simple, and does not, for example, require mutual learning or pre-training. (iii) It is an order of magnitude more efficient in terms of training time.
Related Work
Prior work has considered combining latent representations of words that capture syntactic and semantic properties BIBREF12 , and inducing multi-modal embeddings BIBREF13 for general NLP tasks. And recently, Luo et al. luo2014pre proposed a framework that combines multiple word embeddings to measure text similarity, however their focus was not on classification.
More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement).
Model Description
We first review standard one-layer CNN (which exploits a single set of embeddings) for sentence classification BIBREF3 , and then propose our augmentations, which exploit multiple embedding sets.
Basic CNN. In this model we first replace each word in a sentence with its vector representation, resulting in a sentence matrix INLINEFORM0 , where INLINEFORM1 is the (zero-padded) sentence length, and INLINEFORM2 is the dimensionality of the embeddings. We apply a convolution operation between linear filters with parameters INLINEFORM3 and the sentence matrix. For each INLINEFORM4 , where INLINEFORM5 denotes `height', we slide filter INLINEFORM6 across INLINEFORM7 , considering `local regions' of INLINEFORM8 adjacent rows at a time. At each local region, we perform element-wise multiplication and then take the element-wise sum between the filter and the (flattened) sub-matrix of INLINEFORM9 , producing a scalar. We do this for each sub-region of INLINEFORM10 that the filter spans, resulting in a feature map vector INLINEFORM11 . We can use multiple filter sizes with different heights, and for each filter size we can have multiple filters. Thus the model comprises INLINEFORM12 weight vectors INLINEFORM13 , each of which is associated with an instantiation of a specific filter size. These in turn generate corresponding feature maps INLINEFORM14 , with dimensions varying with filter size. A 1-max pooling operation is applied to each feature map, extracting the largest number INLINEFORM15 from each feature map INLINEFORM16 . Finally, we combine all INLINEFORM17 together to form a feature vector INLINEFORM18 to be fed through a softmax function for classification. We regularize weights at this level in two ways. (1) Dropout, in which we randomly set elements in INLINEFORM19 to zero during the training phase with probability INLINEFORM20 , and multiply INLINEFORM21 with the parameters trained in INLINEFORM22 at test time. (2) An l2 norm penalty, for which we set a threshold INLINEFORM23 for the l2 norm of INLINEFORM24 during training; if this is exceeded, we rescale the vector accordingly. For more details, see BIBREF4 .
MG-CNN. Assuming we have INLINEFORM0 word embeddings with corresponding dimensions INLINEFORM1 , we can simply treat each word embedding independently. In this case, the input to the CNN comprises multiple sentence matrices INLINEFORM2 , where each INLINEFORM3 may have its own width INLINEFORM4 . We then apply different groups of filters INLINEFORM5 independently to each INLINEFORM6 , where INLINEFORM7 denotes the set of filters for INLINEFORM8 . As in basic CNN, INLINEFORM9 may have multiple filter sizes, and multiple filters of each size may be introduced. At the classification layer we then obtain a feature vector INLINEFORM10 for each embedding set, and we can simply concatenate these together to form the final feature vector INLINEFORM11 to feed into the softmax function, where INLINEFORM12 . This representation contains feature vectors generated from all sets of embeddings under consideration. We call this method multiple group CNN (MG-CNN). Here groups refer to the features generated from different embeddings. Note that this differs from `multi-channel' models because at the convolution layer we use different filters on each word embedding matrix independently, whereas in a standard multi-channel approach each filter would consider all channels simultaneously and generate a scalar from all channels at each local region. As above, we impose a max l2 norm constraint on the final feature vector INLINEFORM13 for regularization. Figure FIGREF1 illustrates this approach.
MGNC-CNN. We propose an augmentation of MG-CNN, Multi-Group Norm Constraint CNN (MGNC-CNN), which differs in its regularization strategy. Specifically, in this variant we impose grouped regularization constraints, independently regularizing subcomponents INLINEFORM0 derived from the respective embeddings, i.e., we impose separate max norm constraints INLINEFORM1 for each INLINEFORM2 (where INLINEFORM3 again indexes embedding sets); these INLINEFORM4 hyper-parameters are to be tuned on a validation set. Intuitively, this method aims to better capitalize on features derived from word embeddings that capture discriminative properties of text for the task at hand by penalizing larger weight estimates for features derived from less discriminative embeddings.
Datasets
Stanford Sentiment Treebank (SST) BIBREF14 . This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. For both, we remove phrases of length less than 4 from the training set.
Subj BIBREF15 . The aim here is to classify sentences as either subjective or objective. This comprises 5000 instances of each.
TREC BIBREF16 . A question classification dataset containing six classes: abbreviation, entity, description, human, location and numeric. There are 5500 training and 500 test instances.
Irony BIBREF17 . This dataset contains 16,006 sentences from reddit labeled as ironic (or not). The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make class sizes equal. Note that for this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced.
Pre-trained Word Embeddings
We consider three sets of word embeddings for our experiments: (i) word2vec is trained on 100 billion tokens of Google News dataset; (ii) GloVe BIBREF18 is trained on aggregated global word-word co-occurrence statistics from Common Crawl (840B tokens); and (iii) syntactic word embedding trained on dependency-parsed corpora. These three embedding sets happen to all be 300-dimensional, but our model could accommodate arbitrary and variable sizes. We pre-trained our own syntactic embeddings following BIBREF8 . We parsed the ukWaC corpus BIBREF19 using the Stanford Dependency Parser v3.5.2 with Stanford Dependencies BIBREF20 and extracted (word, relation+context) pairs from parse trees. We “collapsed" nodes with prepositions and notated inverse relations separately, e.g., “dog barks" emits two tuples: (barks, nsubj_dog) and (dog, nsubj INLINEFORM0 _barks). We filter words and contexts that appear fewer than 100 times, resulting in INLINEFORM1 173k words and 1M contexts. We trained 300d vectors using word2vecf with default parameters.
Setup
We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating embeddings for each word to form long vector inputs. We refer to this as Concatenation-CNN (C-CNN). For all multiple embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combinations of two embedding sets, word2vec+GloVe and word2vec+syntactic, and one combination of three embedding sets, word2vec+GloVe+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .
We used standard train/test splits for those datasets that had them. Otherwise, we performed 10-fold cross validation, creating nested development sets with which to tune hyperparameters. For all experiments we used filter sizes of 3, 4, and 5, and we created 100 feature maps for each filter size. We applied 1-max pooling and dropout (rate: 0.5) at the classification layer. For training we used back-propagation in mini-batches with AdaDelta as the stochastic gradient descent (SGD) update rule, and set the mini-batch size to 50. In this work, we treat word embeddings as part of the parameters of the model, and update them as well during training. In all our experiments, we only tuned the max norm constraint(s), fixing all other hyperparameters.
Results and Discussion
We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . We also show results on Subj, SST-1 and SST-2 achieved by the more complex model of BIBREF11 for comparison; this represents the state-of-the-art on the three datasets other than TREC.
We can see that MGNC-CNN and MG-CNN always outperform baseline methods (including C-CNN), and MGNC-CNN is usually better than MG-CNN. And on the Subj dataset, MG-CNN actually achieves slightly better results than BIBREF11 , with far less complexity and required training time (MGNC-CNN performs comparably, although no better, here). On the TREC dataset, the best-ever accuracy we are aware of is 96.0% BIBREF21 , which falls within the range of the result of our MGNC-CNN model with three word embeddings. On the irony dataset, our model with three embeddings achieves 4% improvement (in terms of AUC) compared to the baseline model. On SST-1 and SST-2, our model performs slightly worse than BIBREF11 . However, we again note that their performance is achieved using a much more complex model which involves pre-training and mutual-learning steps. This model takes days to train, whereas our model requires on the order of an hour.
We note that the method proposed by Astudillo et al. astudillo2015learning is able to accommodate multiple embedding sets with different dimensions by projecting the original word embeddings into a lower-dimensional space. However, this work requires training the optimal projection matrix on labeled data first, which again incurs large overhead.
Of course, our model also has its own limitations: in MGNC-CNN, we need to tune the norm constraint hyperparameter for each set of word embeddings. As the number of word embeddings increases, this will increase the running time. However, this tuning procedure is embarrassingly parallel.
Conclusions
We have proposed MGNC-CNN: a simple, flexible CNN architecture for sentence classification that can exploit multiple, variable-sized word embeddings. We demonstrated that this consistently achieves better results than a baseline architecture that exploits only a single set of word embeddings, as well as a naive concatenation approach to capitalizing on multiple embeddings. Furthermore, our results are comparable to those achieved with a recently proposed model BIBREF11 that is much more complex. However, our simple model is easy to implement and requires an order of magnitude less training time. Finally, our model is much more flexible than previous approaches, because it can accommodate variable-sized word embeddings.
Acknowledgments
This work was supported in part by the Army Research Office (grant W911NF-14-1-0442) and by The Foundation for Science and Technology, Portugal (grant UTAP-EXPL/EEIESS/0031/2014). This work was also made possible by the support of the Texas Advanced Computer Center (TACC) at UT Austin. | SST-1, SST-2, Subj , TREC , Irony |
34dd0ee1374a3afd16cf8b0c803f4ef4c6fec8ac | 34dd0ee1374a3afd16cf8b0c803f4ef4c6fec8ac_0 | Q: What are the comparable alternative architectures?
Text: Introduction
Neural models have recently gained popularity for Natural Language Processing (NLP) tasks BIBREF0 , BIBREF1 , BIBREF2 . For sentence classification, in particular, Convolution Neural Networks (CNN) have realized impressive performance BIBREF3 , BIBREF4 . These models operate over word embeddings, i.e., dense, low dimensional vector representations of words that aim to capture salient semantic and syntactic properties BIBREF1 .
An important consideration for such models is the specification of the word embeddings. Several options exist. For example, Kalchbrenner et al. kalchbrenner2014convolutional initialize word vectors to random low-dimensional vectors to be fit during training, while Johnson and Zhang johnson2014effective use fixed, one-hot encodings for each word. By contrast, Kim kim2014convolutional initializes word vectors to those estimated via the word2vec model trained on 100 billion words of Google News BIBREF5 ; these are then updated during training. Initializing embeddings to pre-trained word vectors is intuitively appealing because it allows transfer of learned distributional semantics. This has allowed a relatively simple CNN architecture to achieve remarkably strong results.
Many pre-trained word embeddings are now readily available on the web, induced using different models, corpora, and processing steps. Different embeddings may encode different aspects of language BIBREF6 , BIBREF7 , BIBREF8 : those based on bag-of-words (BoW) statistics tend to capture associations (doctor and hospital), while embeddings based on dependency-parses encode similarity in terms of use (doctor and surgeon). It is natural to consider how these embeddings might be combined to improve NLP models in general and CNNs in particular.
Contributions. We propose MGNC-CNN, a novel, simple, scalable CNN architecture that can accommodate multiple off-the-shelf embeddings of variable sizes. Our model treats different word embeddings as distinct groups, and applies CNNs independently to each, thus generating corresponding feature vectors (one per embedding) which are then concatenated at the classification layer. Inspired by prior work exploiting regularization to encode structure for NLP tasks BIBREF9 , BIBREF10 , we impose different regularization penalties on weights for features generated from the respective word embedding sets.
Our approach enjoys the following advantages compared to the only existing comparable model BIBREF11 : (i) It can leverage diverse, readily available word embeddings with different dimensions, thus providing flexibility. (ii) It is comparatively simple, and does not, for example, require mutual learning or pre-training. (iii) It is an order of magnitude more efficient in terms of training time.
Related Work
Prior work has considered combining latent representations of words that capture syntactic and semantic properties BIBREF12 , and inducing multi-modal embeddings BIBREF13 for general NLP tasks. More recently, Luo et al. luo2014pre proposed a framework that combines multiple word embeddings to measure text similarity; however, their focus was not on classification.
More similar to our work, Yin and Schütze yin-schutze:2015:CoNLL proposed MVCNN for sentence classification. This CNN-based architecture accepts multiple word embeddings as inputs. These are then treated as separate `channels', analogous to RGB channels in images. Filters consider all channels simultaneously. MVCNN achieved state-of-the-art performance on multiple sentence classification tasks. However, this model has practical drawbacks. (i) MVCNN requires that input word embeddings have the same dimensionality. Thus to incorporate a second set of word vectors trained on a corpus (or using a model) of interest, one needs to either find embeddings that happen to have a set number of dimensions or to estimate embeddings from scratch. (ii) The model is complex, both in terms of implementation and run-time. Indeed, this model requires pre-training and mutual-learning and requires days of training time, whereas the simple architecture we propose requires on the order of an hour (and is easy to implement).
Model Description
We first review standard one-layer CNN (which exploits a single set of embeddings) for sentence classification BIBREF3 , and then propose our augmentations, which exploit multiple embedding sets.
Basic CNN. In this model we first replace each word in a sentence with its vector representation, resulting in a sentence matrix INLINEFORM0 , where INLINEFORM1 is the (zero-padded) sentence length, and INLINEFORM2 is the dimensionality of the embeddings. We apply a convolution operation between linear filters with parameters INLINEFORM3 and the sentence matrix. For each INLINEFORM4 , where INLINEFORM5 denotes `height', we slide filter INLINEFORM6 across INLINEFORM7 , considering `local regions' of INLINEFORM8 adjacent rows at a time. At each local region, we perform element-wise multiplication and then take the element-wise sum between the filter and the (flattened) sub-matrix of INLINEFORM9 , producing a scalar. We do this for each sub-region of INLINEFORM10 that the filter spans, resulting in a feature map vector INLINEFORM11 . We can use multiple filter sizes with different heights, and for each filter size we can have multiple filters. Thus the model comprises INLINEFORM12 weight vectors INLINEFORM13 , each of which is associated with an instantiation of a specific filter size. These in turn generate corresponding feature maps INLINEFORM14 , with dimensions varying with filter size. A 1-max pooling operation is applied to each feature map, extracting the largest number INLINEFORM15 from each feature map INLINEFORM16 . Finally, we combine all INLINEFORM17 together to form a feature vector INLINEFORM18 to be fed through a softmax function for classification. We regularize weights at this level in two ways. (1) Dropout, in which we randomly set elements in INLINEFORM19 to zero during the training phase with probability INLINEFORM20 , and multiply INLINEFORM21 with the parameters trained in INLINEFORM22 at test time. (2) An l2 norm penalty, for which we set a threshold INLINEFORM23 for the l2 norm of INLINEFORM24 during training; if this is exceeded, we rescale the vector accordingly. For more details, see BIBREF4 .
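As a concrete reference, the following is a minimal PyTorch sketch of the one-layer CNN just described (filter heights 3/4/5 with 100 feature maps each, 1-max pooling, dropout, and a linear classification layer); it is an illustrative re-implementation under these assumptions, not the authors' code, and the l2 max-norm rescaling applied during training is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicCNN(nn.Module):
    """One-layer CNN for sentence classification (Kim-style), sketched."""
    def __init__(self, vocab_size, emb_dim=300, n_classes=2,
                 heights=(3, 4, 5), n_maps=100, dropout=0.5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)        # init from word2vec in practice
        self.convs = nn.ModuleList(
            nn.Conv2d(1, n_maps, (h, emb_dim)) for h in heights)
        self.drop = nn.Dropout(dropout)
        self.fc = nn.Linear(n_maps * len(heights), n_classes)

    def forward(self, x):                                   # x: (batch, seq_len) token ids,
        a = self.emb(x).unsqueeze(1)                        # zero-padded to >= max height
        feats = []
        for conv in self.convs:
            c = F.relu(conv(a)).squeeze(3)                  # feature map: (batch, n_maps, L')
            feats.append(F.max_pool1d(c, c.size(2)).squeeze(2))   # 1-max pooling
        o = torch.cat(feats, dim=1)                         # final feature vector o
        return self.fc(self.drop(o))                        # softmax applied in the loss
```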
MG-CNN. Assuming we have INLINEFORM0 word embeddings with corresponding dimensions INLINEFORM1 , we can simply treat each word embedding independently. In this case, the input to the CNN comprises multiple sentence matrices INLINEFORM2 , where each INLINEFORM3 may have its own width INLINEFORM4 . We then apply different groups of filters INLINEFORM5 independently to each INLINEFORM6 , where INLINEFORM7 denotes the set of filters for INLINEFORM8 . As in basic CNN, INLINEFORM9 may have multiple filter sizes, and multiple filters of each size may be introduced. At the classification layer we then obtain a feature vector INLINEFORM10 for each embedding set, and we can simply concatenate these together to form the final feature vector INLINEFORM11 to feed into the softmax function, where INLINEFORM12 . This representation contains feature vectors generated from all sets of embeddings under consideration. We call this method multiple group CNN (MG-CNN). Here groups refer to the features generated from different embeddings. Note that this differs from `multi-channel' models because at the convolution layer we use different filters on each word embedding matrix independently, whereas in a standard multi-channel approach each filter would consider all channels simultaneously and generate a scalar from all channels at each local region. As above, we impose a max l2 norm constraint on the final feature vector INLINEFORM13 for regularization. Figure FIGREF1 illustrates this approach.
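A sketch of MG-CNN under the same assumptions: each embedding set gets its own group of filters, and the per-group feature vectors are concatenated at the classification layer. Here `emb_tables` is assumed to be a list of pretrained embedding matrices (NumPy arrays, possibly of different widths).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MGCNN(nn.Module):
    """Multiple Group CNN: independent filter groups per embedding set."""
    def __init__(self, emb_tables, n_classes=2, heights=(3, 4, 5), n_maps=100):
        super().__init__()
        self.embs = nn.ModuleList(
            nn.Embedding.from_pretrained(torch.as_tensor(E, dtype=torch.float32),
                                         freeze=False)
            for E in emb_tables)
        self.groups = nn.ModuleList(
            nn.ModuleList(nn.Conv2d(1, n_maps, (h, E.shape[1])) for h in heights)
            for E in emb_tables)
        self.fc = nn.Linear(n_maps * len(heights) * len(emb_tables), n_classes)

    def group_features(self, x):
        feats = []
        for emb, convs in zip(self.embs, self.groups):
            a = emb(x).unsqueeze(1)                        # (batch, 1, seq, d_i)
            pooled = []
            for conv in convs:
                c = F.relu(conv(a)).squeeze(3)             # (batch, n_maps, L')
                pooled.append(F.max_pool1d(c, c.size(2)).squeeze(2))
            feats.append(torch.cat(pooled, dim=1))         # group feature vector o_i
        return feats

    def forward(self, x):
        return self.fc(torch.cat(self.group_features(x), dim=1))   # o = [o_1; ...; o_K]
```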
MGNC-CNN. We propose an augmentation of MG-CNN, Multi-Group Norm Constraint CNN (MGNC-CNN), which differs in its regularization strategy. Specifically, in this variant we impose grouped regularization constraints, independently regularizing subcomponents INLINEFORM0 derived from the respective embeddings, i.e., we impose separate max norm constraints INLINEFORM1 for each INLINEFORM2 (where INLINEFORM3 again indexes embedding sets); these INLINEFORM4 hyper-parameters are to be tuned on a validation set. Intuitively, this method aims to better capitalize on features derived from word embeddings that capture discriminative properties of text for the task at hand by penalizing larger weight estimates for features derived from less discriminative embeddings.
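The grouped regularization of MGNC-CNN can then be sketched as a per-group max-norm rescaling applied after each parameter update; treating each group's block of classifier weights as a single vector is a simplification, and the thresholds below are placeholders for the values tuned on validation data.

```python
import torch

def apply_group_norm_constraints(fc_weight, group_sizes, lambdas):
    """Rescale each embedding group's block of classifier weights independently.

    fc_weight: (n_classes, sum(group_sizes)); columns are ordered by group.
    group_sizes: number of features contributed by each group
                 (e.g. 3 filter sizes x 100 maps = 300 per group).
    lambdas: one max-norm threshold per group, tuned on validation data.
    """
    with torch.no_grad():
        start = 0
        for size, lam in zip(group_sizes, lambdas):
            block = fc_weight[:, start:start + size]
            norm = block.norm(p=2)
            if norm > lam:
                block.mul_(lam / norm)
            start += size

# Hypothetical usage with two embedding sets (word2vec, GloVe):
# apply_group_norm_constraints(model.fc.weight, [300, 300], [3.0, 9.0])
```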
Datasets
Stanford Sentiment Treebank Stanford Sentiment Treebank (SST) BIBREF14 . This concerns predicting movie review sentiment. Two datasets are derived from this corpus: (1) SST-1, containing five classes: very negative, negative, neutral, positive, and very positive. (2) SST-2, which has only two classes: negative and positive. For both, we remove phrases of length less than 4 from the training set.
Subj BIBREF15 . The aim here is to classify sentences as either subjective or objective. This comprises 5000 instances of each.
TREC BIBREF16 . A question classification dataset containing six classes: abbreviation, entity, description, human, location and numeric. There are 5500 training and 500 test instances.
Irony BIBREF17 . This dataset contains 16,006 sentences from reddit labeled as ironic (or not). The dataset is imbalanced (relatively few sentences are ironic). Thus before training, we under-sampled negative instances to make classes sizes equal. Note that for this dataset we report the Area Under Curve (AUC), rather than accuracy, because it is imbalanced.
Pre-trained Word Embeddings
We consider three sets of word embeddings for our experiments: (i) word2vec is trained on 100 billion tokens of the Google News dataset; (ii) GloVe BIBREF18 is trained on aggregated global word-word co-occurrence statistics from Common Crawl (840B tokens); and (iii) syntactic word embeddings are trained on dependency-parsed corpora. These three embedding sets happen to all be 300-dimensional, but our model could accommodate arbitrary and variable sizes. We pre-trained our own syntactic embeddings following BIBREF8 . We parsed the ukWaC corpus BIBREF19 using the Stanford Dependency Parser v3.5.2 with Stanford Dependencies BIBREF20 and extracted (word, relation+context) pairs from parse trees. We “collapsed” nodes with prepositions and notated inverse relations separately, e.g., “dog barks” emits two tuples: (barks, nsubj_dog) and (dog, nsubj INLINEFORM0 _barks). We filtered words and contexts that appear fewer than 100 times, resulting in INLINEFORM1 173k words and 1M contexts. We trained 300d vectors using word2vecf with default parameters.
Setup
We compared our proposed approaches to a standard CNN that exploits a single set of word embeddings BIBREF3 . We also compared to a baseline of simply concatenating the embeddings for each word to form long vector inputs, which we refer to as Concatenation-CNN (C-CNN). For all multiple-embedding approaches (C-CNN, MG-CNN and MGNC-CNN), we explored two combinations of two embedding sets, word2vec+GloVe and word2vec+syntactic, and one combination of three embedding sets: word2vec+GloVe+syntactic. For all models, we tuned the l2 norm constraint INLINEFORM0 over the range INLINEFORM1 on a validation set. For instantiations of MGNC-CNN in which we exploited two embeddings, we tuned both INLINEFORM2 and INLINEFORM3 ; where we used three embedding sets, we tuned INLINEFORM4 and INLINEFORM5 .
We used standard train/test splits for those datasets that had them. Otherwise, we performed 10-fold cross validation, creating nested development sets with which to tune hyperparameters. For all experiments we used filter sizes of 3, 4 and 5, and we created 100 feature maps for each filter size. We applied 1-max pooling and dropout (rate: 0.5) at the classification layer. For training, we used back-propagation in mini-batches with AdaDelta as the stochastic gradient descent (SGD) update rule and a mini-batch size of 50. In this work, we treat word embeddings as part of the model parameters and update them during training. In all our experiments, we only tuned the max norm constraint(s), fixing all other hyperparameters.
Results and Discussion
We repeated each experiment 10 times and report the mean and ranges across these. This replication is important because training is stochastic and thus introduces variance in performance BIBREF4 . Results are shown in Table TABREF2 , and the corresponding best norm constraint value is shown in Table TABREF2 . We also show results on Subj, SST-1 and SST-2 achieved by the more complex model of BIBREF11 for comparison; this represents the state-of-the-art on the three datasets other than TREC.
We can see that MGNC-CNN and MG-CNN always outperform baseline methods (including C-CNN), and MGNC-CNN is usually better than MG-CNN. On the Subj dataset, MG-CNN actually achieves slightly better results than BIBREF11 , with far less complexity and required training time (MGNC-CNN performs comparably, although no better, here). On the TREC dataset, the best reported accuracy we are aware of is 96.0% BIBREF21 , which falls within the range of results of our MGNC-CNN model with three word embeddings. On the irony dataset, our model with three embeddings achieves a 4% improvement (in terms of AUC) compared to the baseline model. On SST-1 and SST-2, our model performs slightly worse than BIBREF11 . However, we again note that their performance is achieved using a much more complex model which involves pre-training and mutual-learning steps. This model takes days to train, whereas our model requires on the order of an hour.
We note that the method proposed by Astudillo et al. astudillo2015learning is able to accommodate multiple embedding sets with different dimensions by projecting the original word embeddings into a lower-dimensional space. However, this approach requires first training the optimal projection matrix on labeled data, which again incurs large overhead.
Of course, our model also has its own limitations: in MGNC-CNN, we need to tune the norm constraint hyperparameter for each set of word embeddings. As the number of word embeddings increases, this will increase the running time. However, this tuning procedure is embarrassingly parallel.
Conclusions
We have proposed MGNC-CNN: a simple, flexible CNN architecture for sentence classification that can exploit multiple, variable-sized word embeddings. We demonstrated that this consistently achieves better results than a baseline architecture that exploits only a single set of word embeddings, as well as a naive concatenation approach to capitalizing on multiple embeddings. Furthermore, our results are comparable to those achieved with a recently proposed model BIBREF11 that is much more complex. However, our simple model is easy to implement and requires an order of magnitude less training time. Finally, our model is much more flexible than previous approaches, because it can accommodate variable-sized word embeddings.
Acknowledgments
This work was supported in part by the Army Research Office (grant W911NF-14-1-0442) and by The Foundation for Science and Technology, Portugal (grant UTAP-EXPL/EEIESS/0031/2014). This work was also made possible by the support of the Texas Advanced Computer Center (TACC) at UT Austin. | standard CNN, C-CNN, MVCNN |
53377f1c5eda961e438424d71d16150e669f7072 | 53377f1c5eda961e438424d71d16150e669f7072_0 | Q: Which state-of-the-art model is surpassed by 9.68% attraction score?
Text: Introduction
Every good article needs a good title, which should not only condense the core meaning of the text but also sound appealing to readers for more exposure and memorability. However, even the best current Headline Generation (HG) systems can only fulfill the former requirement while performing poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.”
To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others.
SHG is a highly skilled creative process, usually possessed only by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the training data comprise a mixture of styles (e.g., the Gigaword dataset BIBREF5), preventing the models from learning a distinct style.
In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. In particular, based on the transformer architecture BIBREF6, we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure FIGREF2.
The main contributions of our paper are listed below:
To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data.
Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones.
Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for references and inspiring them to think out of the box.
Related Work
Our work is related to summarization and text style transfer.
Related Work ::: Headline Generation as Summarization
Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, BIBREF14 proposed attention-based summarization (ABS) to make BIBREF15's framework of summarization more powerful. Many recent works extended ABS by utilizing additional features BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Other variants of the standard headline generation setting include headlines for community question answering BIBREF23, multiple headline generation BIBREF24, user-specific generation using user embeddings in recommendation systems BIBREF25, bilingual headline generation BIBREF26 and question-style headline generation BIBREF27.
Only a few works have recently started to focus on increasing the attractiveness of generated headlines BIBREF28, BIBREF29. BIBREF28 focuses on controlling several features of the summary text, such as text length and the style of two different news outlets, CNN and DailyMail. These controls serve as a way to boost the model performance, and the CNN- and DailyMail-style control shows a negligible improvement. BIBREF29 utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines by using the readers' comment rate as the reward, which, however, cannot explicitly control or manipulate the styles of headlines. BIBREF30 proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one. This method requires paired news article-headline data for the target style; however, for many styles such as humor and romance, there are no available headlines. Our model does not have this limitation, thus enabling transfer to many more styles.
Related Work ::: Text Style Transfer
Our work is also related to text style transfer, which aims to change the style attribute of the text while preserving its content. First proposed by BIBREF31, it has achieved great progress in recent years BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38. However, all these methods demand a text corpus in the target style; in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods inapplicable to our problem.
Methods ::: Problem Formulation
The model is trained on a source dataset $S$ and target dataset $T$. The source dataset $S=\lbrace (\mathbf {a^{(i)}},\mathbf {h^{(i)}})\rbrace _{i=1}^N$ consists of pairs of a news article $\mathbf {a}$ and its plain headline $\mathbf {h}$. We assume that the source corpus has a distribution $P(A, H)$, where $A=\lbrace \mathbf {a^{(i)}}\rbrace _{i=1}^N$, and $H=\lbrace \mathbf {h^{(i)}}\rbrace _{i=1}^N$. The target corpus $T=\lbrace \mathbf {t^{(i)}}\rbrace _{i=1}^{M}$ comprises sentences $\mathbf {t}$ written in a specific style (e.g., humor). We assume that it conforms to the distribution $P(T)$.
Note that the target corpus $T$ only contains style-carrying sentences, not necessarily headlines — it can be just book text. Also no sentence $\mathbf {t}$ is paired with a news article. Overall, our task is to learn the conditional distribution $P(T|A)$ using only $S$ and $T$. This task is fully unsupervised because there is no sample from the joint distribution $P(A, T)$.
Methods ::: Seq2Seq Model Architecture
For summarization, we adopt a sequence-to-sequence (Seq2Seq) model based on the Transformer architecture BIBREF6. As in Figure FIGREF8, it consists of a 6-layer encoder $E(\mathbf {\cdot }; \mathbf {\theta _E})$ and a 6-layer decoder $G(\mathbf {\cdot }; \mathbf {\theta _G})$ with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model BIBREF3. MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG.
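For orientation, the layer/width configuration described above corresponds to a transformer of the following shape (shown with torch.nn.Transformer purely for illustration); the number of attention heads and the dropout rate are assumptions not stated here, and in the paper the weights come from the pretrained MASS checkpoint rather than random initialization.

```python
import torch.nn as nn

# Illustrative configuration matching the described architecture:
# 6 encoder layers, 6 decoder layers, hidden size 1024, feed-forward size 4096.
seq2seq = nn.Transformer(
    d_model=1024,
    nhead=16,                 # assumption: not specified in the text
    num_encoder_layers=6,
    num_decoder_layers=6,
    dim_feedforward=4096,
    dropout=0.1,              # assumption: standard transformer dropout
)
```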
Methods ::: Multitask Training Scheme
To disentangle the latent style from the text, we adopt a multitask learning framework BIBREF39, training on summarization and DAE simultaneously (as shown in Figure FIGREF10).
Methods ::: Multitask Training Scheme ::: Supervised Seq2Seq Training for $E_S$ and $G_S$
With the source domain dataset $S$, based on the encoder-decoder architecture, we can learn the conditional distribution $P(H|A)$ by training $\mathbf {z}_S=E_S(A)$ and $H_S=G_S(\mathbf {z_S})$ to solve the supervised Seq2Seq learning task, where $\mathbf {z_S}$ is the learned latent representation in the source domain. The loss function of this task is
where $\mathbf {\theta _{E_S}}$ and $\mathbf {\theta _{G_S}}$ are the set of model parameters of the encoder and decoder in the source domain and $p(\mathbf {h}|\mathbf {a})$ denotes the overall probability of generating an output sequence $\mathbf {h}$ given the input article $\mathbf {a}$, which can be further expanded as follows:
where $L$ is the sequence length.
Methods ::: Multitask Training Scheme ::: DAE Training for $\mathbf {\theta _{E_T}}$ and $\mathbf {\theta _{G_T}}$
For the target style corpus $T$, since we only have the sentence $\mathbf {t}$ without paired news articles, we train $\mathbf {z_T}=E_T(\mathbf {\tilde{t}})$ and $\mathbf {t}=G_T(\mathbf {z_T})$ by solving an unsupervised reconstruction learning task, where $\mathbf {z_T}$ is the learned latent representation in the target domain, and $\mathbf {\tilde{t}}$ is the corrupted version of $\mathbf {t}$ by randomly deleting or blanking some words and shuffling the word orders. To train the model, we minimize the reconstruction error $\mathcal {L}_T$:
where $\mathbf {\theta _{E_T}}$ and $\mathbf {\theta _{G_T}}$ are the set of model parameters for the encoder and generator in the target domain. We train the whole model by jointly minimizing the supervised Seq2Seq training loss $\mathcal {L}_S$ and the unsupervised denoised auto-encoding loss $\mathcal {L}_T$ via multitask learning, so the total loss becomes
where $\lambda $ is a hyper-parameter.
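A hedged sketch of how the combined objective can be realized during training: each step draws either a summarization batch or a style-corpus batch and minimizes the corresponding term, so that on average the model optimizes $\mathcal {L}_S + \lambda \mathcal {L}_T$. The loss functions and data iterators are assumed interfaces passed in as arguments, not the paper's actual code.

```python
import random

def training_step(model, src_iter, style_iter, seq2seq_loss, dae_loss,
                  optimizer, lam=0.5):
    """One multitask update. With probability `lam` draw a style-corpus batch and
    minimize the denoising term; otherwise draw an article-headline batch and
    minimize the supervised term. With lam = 0.5 (as in the paper), the sampling
    frequency itself realizes the relative weighting of the two loss terms."""
    optimizer.zero_grad()
    if random.random() < lam:
        corrupted, target = next(style_iter)              # (t~, t) pair for the DAE task
        loss = dae_loss(model, corrupted, target)         # contributes to L_T
    else:
        article, headline = next(src_iter)                # (a, h) pair for summarization
        loss = seq2seq_loss(model, article, headline)     # contributes to L_S
    loss.backward()
    optimizer.step()
    return loss.item()
```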
Methods ::: Parameter-Sharing Scheme
More constraints are necessary in the multitask training process. We aim to infer the conditional distribution as $ P(T|A)=G_T(E_S(A))$. However, without samples from $P(A, T)$, this is a challenging or even impossible task if $E_S$ and $E_T$, or $G_S$ and $G_T$ are completely independent of each other. Hence, we need to add some constraints to the network by relating $E_S$ and $E_T$, and $G_S$ and $G_T$. The simplest design is to share all parameters between $E_S$ and $E_T$, and apply the same strategy to $G_S$ and $G_T$. The intuition behind this design is that by exposing the model to both summarization task and style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article. However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus $T$, we share all parameters of the encoder between two domains, i.e., between $E_S$ and $E_T$, whereas we divide the parameters of the decoder into two types: style-independent parameters $\mathbf {\theta _{\mathrm {ind}}}$ and style-dependent parameters $\mathbf {\theta _{\mathrm {dep}}}$. This means that only the style-independent parameters are shared between $G_S$ and $G_T$ while the style-dependent parameters are not. More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below.
Methods ::: Parameter-Sharing Scheme ::: Type 1. Style Layer Normalization
Inspired by previous work on image style transfer BIBREF40, we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style. This style layer normalization approach aims to transform a layer’s activation $\mathbf {x}$ into a normalized activation $\mathbf {z}$ specific to the style $s$:
where $\mu $ and $\sigma $ are the mean and standard deviation of the batch of $\mathbf {x}$, and $\gamma _s$ and $\beta _s$ are style-specific parameters learned from data.
Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers.
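A minimal PyTorch sketch of style layer normalization: the normalization itself is shared across styles, while the scale and shift parameters are looked up per style. Normalizing over the feature dimension follows standard layer normalization; the module and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class StyleLayerNorm(nn.Module):
    """Layer normalization with style-specific scale (gamma) and shift (beta)."""
    def __init__(self, hidden_size, n_styles, eps=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(n_styles, hidden_size))
        self.beta = nn.Parameter(torch.zeros(n_styles, hidden_size))
        self.eps = eps

    def forward(self, x, style_id):
        mu = x.mean(dim=-1, keepdim=True)
        sigma = x.std(dim=-1, keepdim=True)
        z = (x - mu) / (sigma + self.eps)                 # shared normalization
        return self.gamma[style_id] * z + self.beta[style_id]   # style-specific affine
```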
Methods ::: Parameter-Sharing Scheme ::: Type 2. Style-Guided Encoder Attention
Our model architecture contains the attention mechanism, where the decoder infers the probability of the next word conditioned not only on the previous words but also on the encoded input hidden states. The attention patterns should be different for the summarization and the reconstruction tasks due to their different inherent nature. We incorporate this intuition into the model by introducing style-guided encoder attention into the multi-head attention module, which is defined as follows:
where $\mathbf {\mathrm {query}}$, $\mathbf {\mathrm {key}}$, and $\mathbf {\mathrm {value}}$ denote the triple of inputs into the multi-head attention module; $\mathbf {W_q^s}$, $\mathbf {W_k}$, and $\mathbf {W_v}$ denote the scaled dot-product matrix for affine transformation; $d_{\mathrm {model}}$ is the dimension of the hidden states. We specialize the dot-product matrix $\mathbf {W_q^s}$ of the query for different styles, so that $\mathbf {Q}$ can be different to induce diverse attention patterns.
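A single-head simplification of the style-guided encoder attention, in which only the query projection is specialized per style while the key and value projections are shared; the multi-head splitting and masking of the real module are omitted, and the class is a sketch rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class StyleGuidedAttention(nn.Module):
    """Encoder attention with a style-specific query projection W_q^s."""
    def __init__(self, d_model, n_styles):
        super().__init__()
        self.w_q = nn.ModuleList(
            nn.Linear(d_model, d_model, bias=False) for _ in range(n_styles))
        self.w_k = nn.Linear(d_model, d_model, bias=False)   # shared across styles
        self.w_v = nn.Linear(d_model, d_model, bias=False)   # shared across styles
        self.scale = d_model ** 0.5

    def forward(self, query, key, value, style_id):
        q = self.w_q[style_id](query)                         # style-specific Q
        k, v = self.w_k(key), self.w_v(value)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        return attn @ v
```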
Experiments ::: Datasets
We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text. The average sentence lengths in the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6, and 8.7 words, respectively.
Experiments ::: Datasets ::: Source Dataset
The source dataset contains news articles paired with corresponding headlines. To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K). After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set.
We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus BIBREF41 and treated the abstracts as the news articles. Following the standard pre-processing procedures BIBREF42, we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstract-headline pairs.
We then added to our source set the CNN summarization dataset, which is widely used for training abstractive summarization models BIBREF43. We used the short summaries in the original dataset as the news abstracts and automatically parsed the headline for each news item from the dumped news web pages, collecting 90,236 news abstract-headline pairs in total.
Experiments ::: Datasets ::: Three Target Style Corpora ::: Humor and Romance
For the target style datasets, we follow BIBREF44 to use humor and romance novel collections in BookCorpus BIBREF45 as the Humor and Romance datasets. We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets.
Experiments ::: Datasets ::: Three Target Style Corpora ::: Clickbait
We also aimed to learn the writing style of click-baity headlines, since they have proven particularly attractive to readers. To this end, we used The Examiner - SpamClickBait News dataset, denoted as the Clickbait dataset, from which we collected 500K headlines.
Some examples from each style corpus are listed in Table TABREF32.
Experiments ::: Baselines
We compared the proposed TitleStylist against the following five strong baseline approaches.
Experiments ::: Baselines ::: Neural Headline Generation (NHG)
We train the state-of-the-art summarization model, MASS BIBREF3, on our collected news abstracts-headlines paired data.
Experiments ::: Baselines ::: Gigaword-MASS
We test an off-the-shelf headline generation model, MASS from BIBREF3, which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles.
Experiments ::: Baselines ::: Neural Story Teller (NST)
It breaks the task down into two steps: first generating headlines with the aforementioned NHG model, then applying style shift techniques to generate style-specific headlines BIBREF46. In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation. Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated. More details of this method can be found on the official website.
Experiments ::: Baselines ::: Fine-Tuned
We first train the NHG model as mentioned above, and then further fine-tune it on the target style corpus via DAE training.
Experiments ::: Baselines ::: Multitask
We share all parameters between $E_S$ and $E_T$, and between $G_S$ and $G_T$, and trained the model on both the summarization and DAE tasks. The model architecture is the same as NHG.
Experiments ::: Evaluation Metrics
To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation.
Experiments ::: Evaluation Metrics ::: Setup of Human Evaluation
We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators to score the generated headlines. Specifically, we conduct two tasks to evaluate four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate the outputs on the first three aspects (relevance, attractiveness, and language fluency) on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After collecting the human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task on style strength: we present the generated headlines from TitleStylist and the baselines to the human judges and let them choose the one that most conforms to the target style, such as humor. We then define the style strength score as the proportion of choices.
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation
Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency. Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range. Good automatic evaluation performance is a necessary proof that complements the human evaluation of model effectiveness.
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Summarization Quality
We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU BIBREF47, METEOR BIBREF48, ROUGE BIBREF49 and CIDEr BIBREF50. For ROUGE, we used the Files2ROUGE toolkit, and for other metrics, we used the pycocoeval toolkit.
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Language Fluency
We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs.
Experiments ::: Experimental Details
We used the fairseq code base BIBREF52. During training, we use the Adam optimizer with an initial learning rate of $5\times 10^{-4}$, and the batch size is set to 3072 tokens per GPU with the parameter update frequency set to 4. For the random corruption used in DAE training, we follow the standard practice of randomly deleting or blanking each word with a uniform probability of $0.2$ and randomly shuffling the word order within 5 tokens. All datasets are lower-cased. $\lambda $ is set to 0.5 in the experiments. For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus, with the sampling probability equal to $\lambda $.
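A minimal sketch of the corruption described above; how the 0.2 probability is split between deletion and blanking, and the bounded-offset shuffle, are implementation assumptions rather than details given in the text.

```python
import random

def corrupt(tokens, p=0.2, max_shift=5, blank="<blank>"):
    """Delete or blank each token with total probability p, then lightly shuffle
    word order so that no token moves more than max_shift positions."""
    noisy = []
    for tok in tokens:
        r = random.random()
        if r < p / 2:
            continue                                  # delete (assumed half of p)
        noisy.append(blank if r < p else tok)         # blank (other half) or keep
    # local shuffle: sort by original position plus a bounded random offset
    keys = [i + random.uniform(0, max_shift) for i in range(len(noisy))]
    return [tok for _, tok in sorted(zip(keys, noisy))]

print(corrupt("the new frog species was found in new york city".split()))
```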
Results and Discussion ::: Human Evaluation Results
The human evaluation provides a comprehensive measurement of the performance. We conduct experiments on four criteria: relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and on the last criterion in Table TABREF57. Note that under automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform worse than the other methods (Section SECREF58), so we excluded them from the human evaluation to save the raters unnecessary work.
Results and Discussion ::: Human Evaluation Results ::: Relevance
We first look at the relevance scores in Table TABREF51. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score. The outputs from NHG are usually like an organic reorganization of several keywords in the source context (as shown in Table TABREF52), thus appearing most relevant. It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original human-written headlines in terms of relevance, validating that our generation results are qualified in this aspect. Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity.
Results and Discussion ::: Human Evaluation Results ::: Attraction
In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist generates more attractive headlines than the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles improves attraction, and that specializing some model parameters for different styles further enhances it. (3) Adapting the model to the “Clickbait” style creates the most attractive headlines, even outweighing the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. Notably, although we incorporated the “Clickbait” style into our summarization system, we still made sure to generate relevant headlines rather than overly exaggerated ones, which can be verified by our relevance scores.
Results and Discussion ::: Human Evaluation Results ::: Fluency
The human-annotated fluency scores in Table TABREF51 verified that our TitleStylist generated headlines are comparable or superior to the human-written headlines in terms of readability.
Results and Discussion ::: Human Evaluation Results ::: Style Strength
We also validated that our TitleStylist carries stronger styles than the Multitask and NHG baselines by summarizing, in Table TABREF57, the percentage of times humans chose its headlines as the most humorous or romantic.
Results and Discussion ::: Automatic Evaluation Results
Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality. This evaluation does not take other measures such as the style strength into consideration, but it serves as important complimentary proof to ensure that the model has an acceptable level of summarization ability.
Table TABREF59 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines. We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines. In Table TABREF59, the first row “NHG” shows the performance of the current state-of-the-art summarization model on our data, and Table TABREF52 provides two examples of its generation output. Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body.
From Table TABREF59, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, even though this model has been trained on a dataset more than 20 times larger. Both the NST and Fine-tuned baselines present very poor summarization performance, and the reason could be that both of them cast the problem into two steps, summarization and style transfer, where the latter step involves no summarization objective, which prevents the model from maintaining its summarization capability.
In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG. This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task. More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that are entirely different from the news data. However, unsupervised reconstruction training on both types of data can contribute to the summarization task, which sheds light on potential future work in summarization that incorporates unsupervised learning as augmentation.
We find that in Table TABREF59 TitleStylist-F achieves the best summarization performance. This implicates that, compared with the Multitask baseline where the two tasks share all parameters, specialization of layer normalization and encoder-attention parameters can make $G_S$ focus more on summarization.
It is noteworthy that the summarization scores for TitleStylist are lower than those for TitleStylist-F but still comparable to NHG. This agrees with the fact that the $G_T$ branch focuses more on bringing stylistic linguistic patterns into the generated summaries, so the outputs deviate from pure summarization to some degree. However, their relevance remains close to the baseline NHG, which is the starting point we want to improve on. In the next section, we further validate through human evaluation that these headlines are faithful to the news article.
We also report the perplexity (PPL) of the generated headlines to evaluate language fluency, as shown in Table TABREF59. The outputs from the baselines NHG and Multitask and from our proposed TitleStylist all show PPL similar to that of the test set used in the fine-tuning stage (42.5), indicating that they are all fluent news headlines.
Results and Discussion ::: Extension to Multi-Style
We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model. That is, we simultaneously trained the summarization task on the headline data and the DAE task on the three target style corpora. We made the layer normalization and encoder-attention parameters specialized for these four styles (fact, humor, romance, and clickbait) and shared the other parameters. We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table TABREF61. From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to those of TitleStylist for all three styles. In addition, we conducted another human study to determine the better headline between the two models in terms of attraction, allowing human annotators to choose both options if they deem them equivalent. The result is presented in the last column of Table TABREF61, which shows that the attraction of TitleStylist-Versatile outputs is competitive with TitleStylist. TitleStylist-Versatile thus generates multiple headlines in different styles at once, which is a novel and efficient feature.
Conclusion
We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction. To this end, we presented a multitask framework to induce styles into summarization and proposed a parameter-sharing scheme to enhance both summarization and stylization capabilities. Through experiments, we validated that our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models.
Acknowledgement
We appreciate all the volunteer native speakers (Shreya Karpoor, Lisa Orii, Abhishek Mohan, Paloma Quiroga, etc.) for the human evaluation of our study, and thank the reviewers for their inspiring comments. Joey Tianyi Zhou is partially supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project No. A18A1b0045). | pure summarization model NHG |
f37ed011e7eb259360170de027c1e8557371f002 | f37ed011e7eb259360170de027c1e8557371f002_0 | Q: What is increase in percentage of humor contained in headlines generated with TitleStylist method (w.r.t. baselines)?
Text: Introduction
Every good article needs a good title, which should not only condense the core meaning of the text but also sound appealing to readers for more exposure and memorability. However, even the best current Headline Generation (HG) systems can only fulfill the former requirement while performing poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.”
To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others.
SHG is a highly skilled creative process, usually possessed only by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the training data comprise a mixture of styles (e.g., the Gigaword dataset BIBREF5), preventing the models from learning a distinct style.
In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. In particular, based on the transformer architecture BIBREF6, we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure FIGREF2.
The main contributions of our paper are listed below:
To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data.
Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones.
Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for references and inspiring them to think out of the box.
Related Work
Our work is related to summarization and text style transfer.
Related Work ::: Headline Generation as Summarization
Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, BIBREF14 proposed attention-based summarization (ABS) to make BIBREF15's framework of summarization more powerful. Many recent works extended ABS by utilizing additional features BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Other variants of the standard headline generation setting include headlines for community question answering BIBREF23, multiple headline generation BIBREF24, user-specific generation using user embeddings in recommendation systems BIBREF25, bilingual headline generation BIBREF26 and question-style headline generation BIBREF27.
Only a few works have recently started to focus on increasing the attractiveness of generated headlines BIBREF28, BIBREF29. BIBREF28 focuses on controlling several features of the summary text, such as text length and the style of two different news outlets, CNN and DailyMail. These controls serve as a way to boost the model performance, and the CNN- and DailyMail-style control shows a negligible improvement. BIBREF29 utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines by using the readers' comment rate as the reward, which, however, cannot explicitly control or manipulate the styles of headlines. BIBREF30 proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one. This method requires paired news article-headline data for the target style; however, for many styles such as humor and romance, there are no available headlines. Our model does not have this limitation, thus enabling transfer to many more styles.
Related Work ::: Text Style Transfer
Our work is also related to text style transfer, which aims to change the style attribute of the text while preserving its content. First proposed by BIBREF31, it has achieved great progress in recent years BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38. However, all these methods demand a text corpus in the target style; in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods inapplicable to our problem.
Methods ::: Problem Formulation
The model is trained on a source dataset $S$ and target dataset $T$. The source dataset $S=\lbrace (\mathbf {a^{(i)}},\mathbf {h^{(i)}})\rbrace _{i=1}^N$ consists of pairs of a news article $\mathbf {a}$ and its plain headline $\mathbf {h}$. We assume that the source corpus has a distribution $P(A, H)$, where $A=\lbrace \mathbf {a^{(i)}}\rbrace _{i=1}^N$, and $H=\lbrace \mathbf {h^{(i)}}\rbrace _{i=1}^N$. The target corpus $T=\lbrace \mathbf {t^{(i)}}\rbrace _{i=1}^{M}$ comprises sentences $\mathbf {t}$ written in a specific style (e.g., humor). We assume that it conforms to the distribution $P(T)$.
Note that the target corpus $T$ only contains style-carrying sentences, not necessarily headlines — it can be just book text. Also no sentence $\mathbf {t}$ is paired with a news article. Overall, our task is to learn the conditional distribution $P(T|A)$ using only $S$ and $T$. This task is fully unsupervised because there is no sample from the joint distribution $P(A, T)$.
Methods ::: Seq2Seq Model Architecture
For summarization, we adopt a sequence-to-sequence (Seq2Seq) model based on the Transformer architecture BIBREF6. As in Figure FIGREF8, it consists of a 6-layer encoder $E(\mathbf {\cdot }; \mathbf {\theta _E})$ and a 6-layer decoder $G(\mathbf {\cdot }; \mathbf {\theta _G})$ with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model BIBREF3. MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG.
Methods ::: Multitask Training Scheme
To disentangle the latent style from the text, we adopt a multitask learning framework BIBREF39, training on summarization and DAE simultaneously (as shown in Figure FIGREF10).
Methods ::: Multitask Training Scheme ::: Supervised Seq2Seq Training for $E_S$ and $G_S$
With the source domain dataset $S$, based on the encoder-decoder architecture, we can learn the conditional distribution $P(H|A)$ by training $\mathbf {z}_S=E_S(A)$ and $H_S=G_S(\mathbf {z_S})$ to solve the supervised Seq2Seq learning task, where $\mathbf {z_S}$ is the learned latent representation in the source domain. The loss function of this task is
where $\mathbf {\theta _{E_S}}$ and $\mathbf {\theta _{G_S}}$ are the set of model parameters of the encoder and decoder in the source domain and $p(\mathbf {h}|\mathbf {a})$ denotes the overall probability of generating an output sequence $\mathbf {h}$ given the input article $\mathbf {a}$, which can be further expanded as follows:
where $L$ is the sequence length.
Methods ::: Multitask Training Scheme ::: DAE Training for $\mathbf {\theta _{E_T}}$ and $\mathbf {\theta _{G_T}}$
For the target style corpus $T$, since we only have the sentence $\mathbf {t}$ without paired news articles, we train $\mathbf {z_T}=E_T(\mathbf {\tilde{t}})$ and $\mathbf {t}=G_T(\mathbf {z_T})$ by solving an unsupervised reconstruction learning task, where $\mathbf {z_T}$ is the learned latent representation in the target domain, and $\mathbf {\tilde{t}}$ is the corrupted version of $\mathbf {t}$ by randomly deleting or blanking some words and shuffling the word orders. To train the model, we minimize the reconstruction error $\mathcal {L}_T$:
where $\mathbf {\theta _{E_T}}$ and $\mathbf {\theta _{G_T}}$ are the set of model parameters for the encoder and generator in the target domain. We train the whole model by jointly minimizing the supervised Seq2Seq training loss $\mathcal {L}_S$ and the unsupervised denoised auto-encoding loss $\mathcal {L}_T$ via multitask learning, so the total loss becomes
where $\lambda $ is a hyper-parameter.
Methods ::: Parameter-Sharing Scheme
More constraints are necessary in the multitask training process. We aim to infer the conditional distribution as $ P(T|A)=G_T(E_S(A))$. However, without samples from $P(A, T)$, this is a challenging or even impossible task if $E_S$ and $E_T$, or $G_S$ and $G_T$ are completely independent of each other. Hence, we need to add some constraints to the network by relating $E_S$ and $E_T$, and $G_S$ and $G_T$. The simplest design is to share all parameters between $E_S$ and $E_T$, and apply the same strategy to $G_S$ and $G_T$. The intuition behind this design is that by exposing the model to both summarization task and style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article. However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus $T$, we share all parameters of the encoder between two domains, i.e., between $E_S$ and $E_T$, whereas we divide the parameters of the decoder into two types: style-independent parameters $\mathbf {\theta _{\mathrm {ind}}}$ and style-dependent parameters $\mathbf {\theta _{\mathrm {dep}}}$. This means that only the style-independent parameters are shared between $G_S$ and $G_T$ while the style-dependent parameters are not. More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below.
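The split can be pictured as a simple partition of the decoder's parameters; a minimal sketch is given below. The parameter-name substrings are hypothetical (fairseq-style) and serve only to illustrate grouping the layer-normalization and encoder-attention query parameters into the style-dependent set; the concrete modules are detailed under Type 1 and Type 2 below.

```python
# Illustrative only: partition decoder parameters into style-dependent ones
# (duplicated per style) and style-independent ones (shared across styles).
STYLE_DEPENDENT_KEYS = ("layer_norm", "encoder_attn.q_proj")  # hypothetical names

def partition_decoder_params(decoder):
    style_dependent, shared = [], []
    for name, param in decoder.named_parameters():
        if any(key in name for key in STYLE_DEPENDENT_KEYS):
            style_dependent.append((name, param))  # one copy per style
        else:
            shared.append((name, param))           # shared between G_S and G_T
    return style_dependent, shared
```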
Methods ::: Parameter-Sharing Scheme ::: Type 1. Style Layer Normalization
Inspired by previous work on image style transfer BIBREF40, we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style. This style layer normalization approach aims to transform a layer’s activation $\mathbf {x}$ into a normalized activation $\mathbf {z}$ specific to the style $s$:
where $\mu $ and $\sigma $ are the mean and standard deviation of the batch of $\mathbf {x}$, and $\gamma _s$ and $\beta _s$ are style-specific parameters learned from data.
Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers.
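A minimal PyTorch sketch of such a style-specific layer normalization is shown below, keeping one $(\gamma _s, \beta _s)$ pair per style; it follows the standard practice of normalizing over the hidden dimension and is not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class StyleLayerNorm(nn.Module):
    """Layer normalization with un-shared scale/shift per style (sketch)."""
    def __init__(self, hidden_size, num_styles, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_styles, hidden_size))   # gamma_s
        self.beta = nn.Parameter(torch.zeros(num_styles, hidden_size))   # beta_s

    def forward(self, x, style_id):
        mu = x.mean(dim=-1, keepdim=True)
        sigma = x.std(dim=-1, keepdim=True)
        z = (x - mu) / (sigma + self.eps)
        return self.gamma[style_id] * z + self.beta[style_id]
```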
Methods ::: Parameter-Sharing Scheme ::: Type 2. Style-Guided Encoder Attention
Our model architecture contains the attention mechanism, where the decoder infers the probability of the next word not only conditioned on the previous words but also on the encoded input hidden states. The attention patterns should be different for the summarization and the reconstruction tasks due to their different inherent nature. We incorporate this intuition into the model by introducing the style-guided encoder attention into the multi-head attention module, which is defined as follows:
where $\mathbf {\mathrm {query}}$, $\mathbf {\mathrm {key}}$, and $\mathbf {\mathrm {value}}$ denote the triple of inputs into the multi-head attention module; $\mathbf {W_q^s}$, $\mathbf {W_k}$, and $\mathbf {W_v}$ denote the scaled dot-product matrix for affine transformation; $d_{\mathrm {model}}$ is the dimension of the hidden states. We specialize the dot-product matrix $\mathbf {W_q^s}$ of the query for different styles, so that $\mathbf {Q}$ can be different to induce diverse attention patterns.
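A single-head sketch of this style-guided encoder attention is given below: only the query projection is specialized per style, while the key and value projections are shared. The real module is multi-head; one head is shown for brevity, and this is an illustration rather than the authors' code.

```python
import math
import torch
import torch.nn as nn

class StyleGuidedEncoderAttention(nn.Module):
    """Encoder attention with a style-specific query projection W_q^s (sketch)."""
    def __init__(self, d_model, num_styles):
        super().__init__()
        self.d_model = d_model
        self.w_q = nn.ModuleList(nn.Linear(d_model, d_model, bias=False)
                                 for _ in range(num_styles))   # style-specific
        self.w_k = nn.Linear(d_model, d_model, bias=False)      # shared
        self.w_v = nn.Linear(d_model, d_model, bias=False)      # shared

    def forward(self, query, key, value, style_id):
        q = self.w_q[style_id](query)
        k, v = self.w_k(key), self.w_v(value)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_model)
        return torch.softmax(scores, dim=-1) @ v
```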
Experiments ::: Datasets
We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text. The average sentence lengths in the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6, and 8.7 words, respectively.
Experiments ::: Datasets ::: Source Dataset
The source dataset contains news articles paired with corresponding headlines. To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K). After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set.
We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus BIBREF41 and treated the abstracts as the news articles. Following the standard pre-processing procedures BIBREF42, we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstract-headline pairs.
We then added to our source set the CNN summarization dataset, which is widely used for training abstractive summarization models BIBREF43. We used the short summaries in the original dataset as the news abstracts and automatically parsed the headline for each news article from the dumped news web pages, collecting 90,236 news abstract-headline pairs in total.
Experiments ::: Datasets ::: Three Target Style Corpora ::: Humor and Romance
For the target style datasets, we follow BIBREF44 to use humor and romance novel collections in BookCorpus BIBREF45 as the Humor and Romance datasets. We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets.
Experiments ::: Datasets ::: Three Target Style Corpora ::: Clickbait
We also tried to learn the writing style of click-baity headlines, since they have been shown to be highly effective at attracting readers. Thus we used The Examiner - SpamClickBait News dataset, denoted as the Clickbait dataset. We collected 500K headlines for our use.
Some examples from each style corpus are listed in Table TABREF32.
Experiments ::: Baselines
We compared the proposed TitleStylist against the following five strong baseline approaches.
Experiments ::: Baselines ::: Neural Headline Generation (NHG)
We train the state-of-the-art summarization model, MASS BIBREF3, on our collected news abstracts-headlines paired data.
Experiments ::: Baselines ::: Gigaword-MASS
We test an off-the-shelf headline generation model, MASS from BIBREF3, which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles.
Experiments ::: Baselines ::: Neural Story Teller (NST)
It breaks the task down into two steps: it first generates headlines from the aforementioned NHG model, then applies style shift techniques to generate style-specific headlines BIBREF46. In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation. Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated. More details of this method can be found on its official website.
Experiments ::: Baselines ::: Fine-Tuned
We first train the NHG model as mentioned above, then further fine-tune it on the target style corpus via DAE training.
Experiments ::: Baselines ::: Multitask
We share all parameters between $E_S$ and $E_T$, and between $G_S$ and $G_T$, and train the model on both the summarization and DAE tasks. The model architecture is the same as NHG.
Experiments ::: Evaluation Metrics
To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation.
Experiments ::: Evaluation Metrics ::: Setup of Human Evaluation
We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices.
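The aggregation described above amounts to simple averaging and counting; a small sketch (with hypothetical data structures) is shown below.

```python
from statistics import mean

def average_likert(ratings_per_annotator):
    # ratings_per_annotator: one list of 1-10 scores per annotator, for one criterion
    return mean(mean(scores) for scores in ratings_per_annotator)

def style_strength(choices, system_name):
    # choices: for each test item, the name of the system judged most stylistic
    return sum(choice == system_name for choice in choices) / len(choices)
```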
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation
Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency. Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range. Good automatic evaluation performance is necessary supporting evidence to complement the human evaluation of model effectiveness.
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Summarization Quality
We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU BIBREF47, METEOR BIBREF48, ROUGE BIBREF49 and CIDEr BIBREF50. For ROUGE, we used the Files2ROUGE toolkit, and for other metrics, we used the pycocoeval toolkit.
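For illustration, the snippet below scores hypotheses against the original headlines with the commonly used pycocoevalcap package; whether this is exactly the toolkit referred to above as "pycocoeval" is an assumption, and ROUGE (computed with Files2ROUGE) is omitted here.

```python
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.meteor.meteor import Meteor
from pycocoevalcap.cider.cider import Cider

def score_headlines(references, hypotheses):
    # references / hypotheses: dicts mapping an example id to a list with one string
    scores = {}
    scores["BLEU"], _ = Bleu(4).compute_score(references, hypotheses)   # BLEU-1..4
    scores["METEOR"], _ = Meteor().compute_score(references, hypotheses)
    scores["CIDEr"], _ = Cider().compute_score(references, hypotheses)
    return scores
```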
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Language Fluency
We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs.
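A sketch of this perplexity measurement with the Hugging Face transformers library is shown below; the fine-tuning step itself is omitted, and the exact scripts used are not specified in the text.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").eval()  # fine-tuned weights in practice

def perplexity(headline: str) -> float:
    ids = tokenizer(headline, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean token-level cross-entropy
    return math.exp(loss.item())
```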
Experiments ::: Experimental Details
We used the fairseq code base BIBREF52. During training, we use the Adam optimizer with an initial learning rate of $5\times 10^{-4}$, and the batch size is set to 3072 tokens for each GPU with the parameter update frequency set to 4. For the random corruption for DAE training, we follow the standard practice of randomly deleting or blanking words with a uniform probability of $0.2$ and randomly shuffling the word order within a window of 5 tokens. All datasets are lower-cased. $\lambda $ is set to 0.5 in our experiments. For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus, and the sampling strategy follows a uniform distribution with the probability equal to $\lambda $.
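The corruption and task-sampling procedure described above can be sketched as follows. How the 0.2 probability is split between deletion and blanking is an assumption, and "<blank>" is an illustrative placeholder token.

```python
import random

def corrupt(tokens, p=0.2, window=5):
    """Randomly delete or blank words with probability p, then shuffle locally."""
    noisy = []
    for tok in tokens:
        if random.random() < p:
            if random.random() < 0.5:     # assumed 50/50 split between delete and blank
                continue                  # delete the word
            noisy.append("<blank>")       # blank the word
        else:
            noisy.append(tok)
    # Local shuffle: add uniform noise in [0, window) to each index and re-sort,
    # so tokens are displaced by at most about `window` positions.
    keys = [i + random.uniform(0, window) for i in range(len(noisy))]
    return [tok for _, tok in sorted(zip(keys, noisy))]

def sample_task(lmbda=0.5):
    """Draw the next training batch from the DAE task with probability lambda."""
    return "dae" if random.random() < lmbda else "summarization"
```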
Results and Discussion ::: Human Evaluation Results
The human evaluation provides a comprehensive measurement of the performance. We conduct experiments on four criteria: relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criterion in Table TABREF57. Note that in the automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform worse than the other methods (Section SECREF58), so we excluded them from the human evaluation to save unnecessary work for the human raters.
Results and Discussion ::: Human Evaluation Results ::: Relevance
We first look at the relevance scores in Table TABREF51. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score. The outputs from NHG are usually like an organic reorganization of several keywords in the source context (as shown in Table TABREF52), thus appearing most relevant. It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original human-written headlines in terms of relevance, validating that our generation results are qualified in this aspect. Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity.
Results and Discussion ::: Human Evaluation Results ::: Attraction
In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist can generate more attractive headlines than the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles could improve the attraction, and that specializing some parameters in the model for different styles can further enhance the attraction. (3) Adapting the model to the “Clickbait” style could create the most attractive headlines, even outweighing the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. Notably, although we introduced the “Clickbait” style into our summarization system, we still made sure that we are generating relevant headlines rather than overly exaggerated ones, which can be verified by our relevance scores.
Results and Discussion ::: Human Evaluation Results ::: Fluency
The human-annotated fluency scores in Table TABREF51 verified that our TitleStylist generated headlines are comparable or superior to the human-written headlines in terms of readability.
Results and Discussion ::: Human Evaluation Results ::: Style Strength
We also validated that our TitleStylist can carry more styles compared with the Multitask and NHG baselines by summarizing the percentage of choices by humans for the most humorous or romantic headlines in Table TABREF57.
Results and Discussion ::: Automatic Evaluation Results
Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality. This evaluation does not take other measures such as the style strength into consideration, but it serves as important complementary evidence to ensure that the model has an acceptable level of summarization ability.
Table TABREF59 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines. We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines. In Table TABREF59, the first row “NHG” shows the performance of the current state-of-the-art summarization model on our data, and Table TABREF52 provides two examples of its generation output. Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body.
From Table TABREF59, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, although this model has been trained on a dataset more than 20 times larger. Both NST and Fine-tuned baselines present very poor summarization performance, and the reason could be that both of them cast the problem into two steps: summarization and style transfer, and the latter step does not involve the summarization objective, which prevents the model from maintaining its summarization capability.
In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG. This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task. More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that are entirely different from the news data. However, unsupervised reconstruction training on both types of data can contribute to the summarization task, which throws light on the potential future work in summarization by incorporating unsupervised learning as augmentation.
We find in Table TABREF59 that TitleStylist-F achieves the best summarization performance. This implies that, compared with the Multitask baseline where the two tasks share all parameters, specialization of layer normalization and encoder-attention parameters can make $G_S$ focus more on summarization.
It is noteworthy that the summarization scores for TitleStylist are lower than TitleStylist-F but still comparable to NHG. This agrees with the fact that the $G_T$ branch focuses more on bringing stylistic linguistic patterns into the generated summaries, so the outputs deviate from pure summarization to some degree. However, their relevance remains close to that of the baseline NHG, which is the starting point we want to improve on. Later in the next section, we will further validate that these headlines are faithful to the news article through human evaluation.
We also reported the perplexity (PPL) of the generated headlines to evaluate the language fluency, as shown in Table TABREF59. All outputs from the baselines NHG and Multitask and from our proposed TitleStylist show a PPL similar to the PPL of the test set used in the fine-tuning stage (42.5), indicating that they are all fluent expressions for news headlines.
Results and Discussion ::: Extension to Multi-Style
We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model. That is, we simultaneously trained the summarization task on the headlines data and the DAE task on the three target style corpora. And we made the layer normalization and encoder-attention parameters specialized for these four styles (fact, humor, romance, and clickbait) and shared the other parameters. We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table TABREF61. From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to TitleStylist for all three styles. Besides, we conducted another human study to determine the better headline between the two models in terms of attraction, and we allow human annotators to choose both options if they deem them as equivalent. The result is presented in the last column of Table TABREF61, which shows that the attraction of TitleStylist-Versatile outputs is competitive to TitleStylist. TitleStylist-Versatile thus generates multiple headlines in different styles altogether, which is a novel and efficient feature.
Conclusion
We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction. To this end, we presented a multitask framework to induce styles into summarization, and proposed a parameter-sharing scheme to enhance both summarization and stylization capabilities. Through experiments, we validated that our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models.
Acknowledgement
We appreciate all the volunteer native speakers (Shreya Karpoor, Lisa Orii, Abhishek Mohan, Paloma Quiroga, etc.) for the human evaluation of our study, and thank the reviewers for their inspiring comments. Joey Tianyi Zhou is partially supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project No. A18A1b0045). | Humor in headlines (TitleStylist vs Multitask baseline):
Relevance: +6.53% (5.87 vs 5.51)
Attraction: +3.72% (8.93 vs 8.61)
Fluency: +1.98% (9.29 vs 9.11) |
41d3750ae666ea5a9cea498ddfb973a8366cccd6 | 41d3750ae666ea5a9cea498ddfb973a8366cccd6_0 | Q: How is attraction score measured?
Text: Introduction
Every good article needs a good title, which should not only be able to condense the core meaning of the text, but also sound appealing to the readers for more exposure and memorability. However, currently even the best Headline Generation (HG) system can only fulfill the former requirement and performs poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.”
To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others.
SHG is a highly skilled creative process that is usually mastered only by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the training data comprise a mixture of styles (e.g., the Gigaword dataset BIBREF5), obstructing the models from learning a distinct style.
In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. In particular, based on the transformer architecture BIBREF6, we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure FIGREF2.
The main contributions of our paper are listed below:
To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data.
Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones.
Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for references and inspiring them to think out of the box.
Related Work
Our work is related to summarization and text style transfer.
Related Work ::: Headline Generation as Summarization
Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, BIBREF14 proposed attention-based summarization (ABS) to make BIBREF15's framework of summarization more powerful. Many recent works extended ABS by utilizing additional features BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Other variants of the standard headline generation setting include headlines for community question answering BIBREF23, multiple headline generation BIBREF24, user-specific generation using user embeddings in recommendation systems BIBREF25, bilingual headline generation BIBREF26 and question-style headline generation BIBREF27.
Only a few works have recently started to focus on increasing the attractiveness of generated headlines BIBREF28, BIBREF29. BIBREF28 focuses on controlling several features of the summary text such as text length, and the style of two different news outlets, CNN and DailyMail. These controls serve as a way to boost the model performance, and the CNN- and DailyMail-style control shows a negligible improvement. BIBREF29 utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines via using the readers' comment rate as the reward, which however cannot explicitly control or manipulate the styles of headlines. BIBREF30 proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one. This method requires paired news articles-headlines data for the target style; however, for many styles such as humor and romance, there are no available headlines. Our model does not have this limitation, thus enabling transferring to many more styles.
Related Work ::: Text Style Transfer
Our work is also related to text style transfer, which aims to change the style attribute of the text while preserving its content. First proposed by BIBREF31, it has achieved great progress in recent years BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38. However, all these methods demand a text corpus for the target style; in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods not applicable to our problem.
Methods ::: Problem Formulation
The model is trained on a source dataset $S$ and target dataset $T$. The source dataset $S=\lbrace (\mathbf {a^{(i)}},\mathbf {h^{(i)}})\rbrace _{i=1}^N$ consists of pairs of a news article $\mathbf {a}$ and its plain headline $\mathbf {h}$. We assume that the source corpus has a distribution $P(A, H)$, where $A=\lbrace \mathbf {a^{(i)}}\rbrace _{i=1}^N$, and $H=\lbrace \mathbf {h^{(i)}}\rbrace _{i=1}^N$. The target corpus $T=\lbrace \mathbf {t^{(i)}}\rbrace _{i=1}^{M}$ comprises sentences $\mathbf {t}$ written in a specific style (e.g., humor). We assume that it conforms to the distribution $P(T)$.
Note that the target corpus $T$ only contains style-carrying sentences, not necessarily headlines — it can be just book text. Also no sentence $\mathbf {t}$ is paired with a news article. Overall, our task is to learn the conditional distribution $P(T|A)$ using only $S$ and $T$. This task is fully unsupervised because there is no sample from the joint distribution $P(A, T)$.
Methods ::: Seq2Seq Model Architecture
For summarization, we adopt a sequence-to-sequence (Seq2Seq) model based on the Transformer architecture BIBREF6. As in Figure FIGREF8, it consists of a 6-layer encoder $E(\mathbf {\cdot }; \mathbf {\theta _E})$ and a 6-layer decoder $G(\mathbf {\cdot }; \mathbf {\theta _G})$ with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model BIBREF3. MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG.
Methods ::: Multitask Training Scheme
To disentangle the latent style from the text, we adopt a multitask learning framework BIBREF39, training on summarization and DAE simultaneously (as shown in Figure FIGREF10).
Methods ::: Multitask Training Scheme ::: Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@
With the source domain dataset $S$, based on the encoder-decoder architecture, we can learn the conditional distribution $P(H|A)$ by training $\mathbf {z}_S=E_S(A)$ and $H_S=G_S(\mathbf {z_S})$ to solve the supervised Seq2Seq learning task, where $\mathbf {z_S}$ is the learned latent representation in the source domain. The loss function of this task is
where $\mathbf {\theta _{E_S}}$ and $\mathbf {\theta _{G_S}}$ are the set of model parameters of the encoder and decoder in the source domain and $p(\mathbf {h}|\mathbf {a})$ denotes the overall probability of generating an output sequence $\mathbf {h}$ given the input article $\mathbf {a}$, which can be further expanded as follows:
where $L$ is the sequence length.
Methods ::: Multitask Training Scheme ::: DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@
For the target style corpus $T$, since we only have the sentence $\mathbf {t}$ without paired news articles, we train $\mathbf {z_T}=E_T(\mathbf {\tilde{t}})$ and $\mathbf {t}=G_T(\mathbf {z_T})$ by solving an unsupervised reconstruction learning task, where $\mathbf {z_T}$ is the learned latent representation in the target domain, and $\mathbf {\tilde{t}}$ is the corrupted version of $\mathbf {t}$ by randomly deleting or blanking some words and shuffling the word orders. To train the model, we minimize the reconstruction error $\mathcal {L}_T$:
where $\mathbf {\theta _{E_T}}$ and $\mathbf {\theta _{G_T}}$ are the set of model parameters for the encoder and generator in the target domain. We train the whole model by jointly minimizing the supervised Seq2Seq training loss $\mathcal {L}_S$ and the unsupervised denoised auto-encoding loss $\mathcal {L}_T$ via multitask learning, so the total loss becomes
where $\lambda $ is a hyper-parameter.
Methods ::: Parameter-Sharing Scheme
More constraints are necessary in the multitask training process. We aim to infer the conditional distribution as $ P(T|A)=G_T(E_S(A))$. However, without samples from $P(A, T)$, this is a challenging or even impossible task if $E_S$ and $E_T$, or $G_S$ and $G_T$ are completely independent of each other. Hence, we need to add some constraints to the network by relating $E_S$ and $E_T$, and $G_S$ and $G_T$. The simplest design is to share all parameters between $E_S$ and $E_T$, and apply the same strategy to $G_S$ and $G_T$. The intuition behind this design is that by exposing the model to both summarization task and style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article. However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus $T$, we share all parameters of the encoder between two domains, i.e., between $E_S$ and $E_T$, whereas we divide the parameters of the decoder into two types: style-independent parameters $\mathbf {\theta _{\mathrm {ind}}}$ and style-dependent parameters $\mathbf {\theta _{\mathrm {dep}}}$. This means that only the style-independent parameters are shared between $G_S$ and $G_T$ while the style-dependent parameters are not. More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below.
Methods ::: Parameter-Sharing Scheme ::: Type 1. Style Layer Normalization
Inspired by previous work on image style transfer BIBREF40, we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style. This style layer normalization approach aims to transform a layer’s activation $\mathbf {x}$ into a normalized activation $\mathbf {z}$ specific to the style $s$:
where $\mu $ and $\sigma $ are the mean and standard deviation of the batch of $\mathbf {x}$, and $\gamma _s$ and $\beta _s$ are style-specific parameters learned from data.
Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers.
Methods ::: Parameter-Sharing Scheme ::: Type 2. Style-Guided Encoder Attention
Our model architecture contains the attention mechanism, where the decoder infers the probability of the next word not only conditioned on the previous words but also on the encoded input hidden states. The attention patterns should be different for the summarization and the reconstruction tasks due to their different inherent nature. We incorporate this intuition into the model by introducing the style-guided encoder attention into the multi-head attention module, which is defined as follows:
where $\mathbf {\mathrm {query}}$, $\mathbf {\mathrm {key}}$, and $\mathbf {\mathrm {value}}$ denote the triple of inputs into the multi-head attention module; $\mathbf {W_q^s}$, $\mathbf {W_k}$, and $\mathbf {W_v}$ denote the scaled dot-product matrix for affine transformation; $d_{\mathrm {model}}$ is the dimension of the hidden states. We specialize the dot-product matrix $\mathbf {W_q^s}$ of the query for different styles, so that $\mathbf {Q}$ can be different to induce diverse attention patterns.
Experiments ::: Datasets
We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text. The average sentence lengths in the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6, and 8.7 words, respectively.
Experiments ::: Datasets ::: Source Dataset
The source dataset contains news articles paired with corresponding headlines. To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K). After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set.
We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus BIBREF41 and treated the abstracts as the news articles. Following the standard pre-processing procedures BIBREF42, we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstract-headline pairs.
We then added to our source set the CNN summarization dataset, which is widely used for training abstractive summarization models BIBREF43. We used the short summaries in the original dataset as the news abstracts and automatically parsed the headline for each news article from the dumped news web pages, collecting 90,236 news abstract-headline pairs in total.
Experiments ::: Datasets ::: Three Target Style Corpora ::: Humor and Romance
For the target style datasets, we follow BIBREF44 to use humor and romance novel collections in BookCorpus BIBREF45 as the Humor and Romance datasets. We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets.
Experiments ::: Datasets ::: Three Target Style Corpora ::: Clickbait
We also tried to learn the writing style of click-baity headlines, since they have been shown to be highly effective at attracting readers. Thus we used The Examiner - SpamClickBait News dataset, denoted as the Clickbait dataset. We collected 500K headlines for our use.
Some examples from each style corpus are listed in Table TABREF32.
Experiments ::: Baselines
We compared the proposed TitleStylist against the following five strong baseline approaches.
Experiments ::: Baselines ::: Neural Headline Generation (NHG)
We train the state-of-the-art summarization model, MASS BIBREF3, on our collected news abstracts-headlines paired data.
Experiments ::: Baselines ::: Gigaword-MASS
We test an off-the-shelf headline generation model, MASS from BIBREF3, which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles.
Experiments ::: Baselines ::: Neural Story Teller (NST)
It breaks the task down into two steps: it first generates headlines from the aforementioned NHG model, then applies style shift techniques to generate style-specific headlines BIBREF46. In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation. Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated. More details of this method can be found on its official website.
Experiments ::: Baselines ::: Fine-Tuned
We first train the NHG model as mentioned above, then further fine-tune it on the target style corpus via DAE training.
Experiments ::: Baselines ::: Multitask
We share all parameters between $E_S$ and $E_T$, and between $G_S$ and $G_T$, and train the model on both the summarization and DAE tasks. The model architecture is the same as NHG.
Experiments ::: Evaluation Metrics
To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation.
Experiments ::: Evaluation Metrics ::: Setup of Human Evaluation
We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices.
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation
Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency. Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range. Good automatic evaluation performance is necessary supporting evidence to complement the human evaluation of model effectiveness.
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Summarization Quality
We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU BIBREF47, METEOR BIBREF48, ROUGE BIBREF49 and CIDEr BIBREF50. For ROUGE, we used the Files2ROUGE toolkit, and for other metrics, we used the pycocoeval toolkit.
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Language Fluency
We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs.
Experiments ::: Experimental Details
We used the fairseq code base BIBREF52. During training, we use the Adam optimizer with an initial learning rate of $5\times 10^{-4}$, and the batch size is set to 3072 tokens for each GPU with the parameter update frequency set to 4. For the random corruption for DAE training, we follow the standard practice of randomly deleting or blanking words with a uniform probability of $0.2$ and randomly shuffling the word order within a window of 5 tokens. All datasets are lower-cased. $\lambda $ is set to 0.5 in our experiments. For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus, and the sampling strategy follows a uniform distribution with the probability equal to $\lambda $.
Results and Discussion ::: Human Evaluation Results
The human evaluation provides a comprehensive measurement of the performance. We conduct experiments on four criteria: relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and the last criterion in Table TABREF57. Note that in the automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform worse than the other methods (Section SECREF58), so we excluded them from the human evaluation to save unnecessary work for the human raters.
Results and Discussion ::: Human Evaluation Results ::: Relevance
We first look at the relevance scores in Table TABREF51. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score. The outputs from NHG are usually like an organic reorganization of several keywords in the source context (as shown in Table TABREF52), thus appearing most relevant. It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original human-written headlines in terms of relevance, validating that our generation results are qualified in this aspect. Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity.
Results and Discussion ::: Human Evaluation Results ::: Attraction
In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist can generate more attractive headlines than the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles could improve the attraction, and that specializing some parameters in the model for different styles can further enhance the attraction. (3) Adapting the model to the “Clickbait” style could create the most attractive headlines, even outweighing the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. Notably, although we introduced the “Clickbait” style into our summarization system, we still made sure that we are generating relevant headlines rather than overly exaggerated ones, which can be verified by our relevance scores.
Results and Discussion ::: Human Evaluation Results ::: Fluency
The human-annotated fluency scores in Table TABREF51 verified that our TitleStylist generated headlines are comparable or superior to the human-written headlines in terms of readability.
Results and Discussion ::: Human Evaluation Results ::: Style Strength
We also validated that our TitleStylist can carry more styles compared with the Multitask and NHG baselines by summarizing the percentage of choices by humans for the most humorous or romantic headlines in Table TABREF57.
Results and Discussion ::: Automatic Evaluation Results
Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality. This evaluation does not take other measures such as the style strength into consideration, but it serves as important complementary evidence to ensure that the model has an acceptable level of summarization ability.
Table TABREF59 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines. We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines. In Table TABREF59, the first row “NHG” shows the performance of the current state-of-the-art summarization model on our data, and Table TABREF52 provides two examples of its generation output. Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body.
From Table TABREF59, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, although this model has been trained on a dataset more than 20 times larger. Both NST and Fine-tuned baselines present very poor summarization performance, and the reason could be that both of them cast the problem into two steps: summarization and style transfer, and the latter step does not involve the summarization objective, which prevents the model from maintaining its summarization capability.
In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG. This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task. More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that are entirely different from the news data. However, unsupervised reconstruction training on both types of data can contribute to the summarization task, which throws light on the potential future work in summarization by incorporating unsupervised learning as augmentation.
We find in Table TABREF59 that TitleStylist-F achieves the best summarization performance. This implies that, compared with the Multitask baseline where the two tasks share all parameters, specialization of layer normalization and encoder-attention parameters can make $G_S$ focus more on summarization.
It is noteworthy that the summarization scores for TitleStylist are lower than TitleStylist-F but still comparable to NHG. This agrees with the fact that the $G_T$ branch focuses more on bringing stylistic linguistic patterns into the generated summaries, so the outputs deviate from pure summarization to some degree. However, their relevance remains close to that of the baseline NHG, which is the starting point we want to improve on. Later in the next section, we will further validate that these headlines are faithful to the news article through human evaluation.
We also reported the perplexity (PPL) of the generated headlines to evaluate the language fluency, as shown in Table TABREF59. All outputs from the baselines NHG and Multitask and from our proposed TitleStylist show a PPL similar to the PPL of the test set used in the fine-tuning stage (42.5), indicating that they are all fluent expressions for news headlines.
Results and Discussion ::: Extension to Multi-Style
We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model. That is, we simultaneously trained the summarization task on the headlines data and the DAE task on the three target style corpora. And we made the layer normalization and encoder-attention parameters specialized for these four styles (fact, humor, romance, and clickbait) and shared the other parameters. We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table TABREF61. From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to TitleStylist for all three styles. Besides, we conducted another human study to determine the better headline between the two models in terms of attraction, and we allow human annotators to choose both options if they deem them as equivalent. The result is presented in the last column of Table TABREF61, which shows that the attraction of TitleStylist-Versatile outputs is competitive to TitleStylist. TitleStylist-Versatile thus generates multiple headlines in different styles altogether, which is a novel and efficient feature.
Conclusion
We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction. To this end, we presented a multitask framework to induce styles into summarization, and proposed a parameter-sharing scheme to enhance both summarization and stylization capabilities. Through experiments, we validated that our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models.
Acknowledgement
We appreciate all the volunteer native speakers (Shreya Karpoor, Lisa Orii, Abhishek Mohan, Paloma Quiroga, etc.) for the human evaluation of our study, and thank the reviewers for their inspiring comments. Joey Tianyi Zhou is partially supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project No. A18A1b0045). | annotators are asked how attractive the headlines are, Likert scale from 1 to 10 (integer values) |
90b2154ec3723f770c74d255ddfcf7972fe136a2 | 90b2154ec3723f770c74d255ddfcf7972fe136a2_0 | Q: How is presence of three target styles detected?
Text: Introduction
Every good article needs a good title, which should not only be able to condense the core meaning of the text, but also sound appealing to the readers for more exposure and memorability. However, currently even the best Headline Generation (HG) system can only fulfill the former requirement and performs poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.”
To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others.
SHG is a highly skilled creative process that is usually mastered only by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the training data comprise a mixture of styles (e.g., the Gigaword dataset BIBREF5), obstructing the models from learning a distinct style.
In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. In particular, based on the transformer architecture BIBREF6, we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure FIGREF2.
The main contributions of our paper are listed below:
To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data.
Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones.
Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for references and inspiring them to think out of the box.
Related Work
Our work is related to summarization and text style transfer.
Related Work ::: Headline Generation as Summarization
Headline generation is a very popular area of research. Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13. To enrich the diversity of the extractive summarization, abstractive models were then proposed. With the help of neural networks, BIBREF14 proposed attention-based summarization (ABS) to make BIBREF15's framework of summarization more powerful. Many recent works extended ABS by utilizing additional features BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Other variants of the standard headline generation setting include headlines for community question answering BIBREF23, multiple headline generation BIBREF24, user-specific generation using user embeddings in recommendation systems BIBREF25, bilingual headline generation BIBREF26 and question-style headline generation BIBREF27.
Only a few works have recently started to focus on increasing the attractiveness of generated headlines BIBREF28, BIBREF29. BIBREF28 focuses on controlling several features of the summary text such as text length, and the style of two different news outlets, CNN and DailyMail. These controls serve as a way to boost the model performance, and the CNN- and DailyMail-style control shows a negligible improvement. BIBREF29 utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines via using the readers' comment rate as the reward, which however cannot explicitly control or manipulate the styles of headlines. BIBREF30 proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one. This method requires paired news articles-headlines data for the target style; however, for many styles such as humor and romance, there are no available headlines. Our model does not have this limitation, thus enabling transferring to many more styles.
Related Work ::: Text Style Transfer
Our work is also related to text style transfer, which aims to change the style attribute of the text while preserving its content. First proposed by BIBREF31, it has achieved great progress in recent years BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38. However, all these methods demand a text corpus for the target style; in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods not applicable to our problem.
Methods ::: Problem Formulation
The model is trained on a source dataset $S$ and target dataset $T$. The source dataset $S=\lbrace (\mathbf {a^{(i)}},\mathbf {h^{(i)}})\rbrace _{i=1}^N$ consists of pairs of a news article $\mathbf {a}$ and its plain headline $\mathbf {h}$. We assume that the source corpus has a distribution $P(A, H)$, where $A=\lbrace \mathbf {a^{(i)}}\rbrace _{i=1}^N$, and $H=\lbrace \mathbf {h^{(i)}}\rbrace _{i=1}^N$. The target corpus $T=\lbrace \mathbf {t^{(i)}}\rbrace _{i=1}^{M}$ comprises sentences $\mathbf {t}$ written in a specific style (e.g., humor). We assume that it conforms to the distribution $P(T)$.
Note that the target corpus $T$ only contains style-carrying sentences, not necessarily headlines — it can be just book text. Also no sentence $\mathbf {t}$ is paired with a news article. Overall, our task is to learn the conditional distribution $P(T|A)$ using only $S$ and $T$. This task is fully unsupervised because there is no sample from the joint distribution $P(A, T)$.
Methods ::: Seq2Seq Model Architecture
For summarization, we adopt a sequence-to-sequence (Seq2Seq) model based on the Transformer architecture BIBREF6. As in Figure FIGREF8, it consists of a 6-layer encoder $E(\mathbf {\cdot }; \mathbf {\theta _E})$ and a 6-layer decoder $G(\mathbf {\cdot }; \mathbf {\theta _G})$ with a hidden size of 1024 and a feed-forward filter size of 4096. For better generation quality, we initialize with the MASS model BIBREF3. MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data. This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG.
Methods ::: Multitask Training Scheme
To disentangle the latent style from the text, we adopt a multitask learning framework BIBREF39, training on summarization and DAE simultaneously (as shown in Figure FIGREF10).
Methods ::: Multitask Training Scheme ::: Supervised Seq2Seq Training for @!START@$E_S$@!END@ and @!START@$G_S$@!END@
With the source domain dataset $S$, based on the encoder-decoder architecture, we can learn the conditional distribution $P(H|A)$ by training $\mathbf {z}_S=E_S(A)$ and $H_S=G_S(\mathbf {z_S})$ to solve the supervised Seq2Seq learning task, where $\mathbf {z_S}$ is the learned latent representation in the source domain. The loss function of this task is $\mathcal {L}_S(\mathbf {\theta _{E_S}}, \mathbf {\theta _{G_S}}) = -\mathbb {E}_{(\mathbf {a},\mathbf {h})\sim S}\left[\log p(\mathbf {h}|\mathbf {a}; \mathbf {\theta _{E_S}}, \mathbf {\theta _{G_S}})\right],$
where $\mathbf {\theta _{E_S}}$ and $\mathbf {\theta _{G_S}}$ are the sets of model parameters of the encoder and decoder in the source domain and $p(\mathbf {h}|\mathbf {a})$ denotes the overall probability of generating an output sequence $\mathbf {h}$ given the input article $\mathbf {a}$, which can be further expanded as follows: $p(\mathbf {h}|\mathbf {a}) = \prod _{l=1}^{L} p(h_l|h_{<l}, \mathbf {a}),$
where $L$ is the sequence length.
Methods ::: Multitask Training Scheme ::: DAE Training for @!START@$\mathbf {\theta _{E_T}}$@!END@ and @!START@$\mathbf {\theta _{G_T}}$@!END@
For the target style corpus $T$, since we only have the sentence $\mathbf {t}$ without paired news articles, we train $\mathbf {z_T}=E_T(\mathbf {\tilde{t}})$ and $\mathbf {t}=G_T(\mathbf {z_T})$ by solving an unsupervised reconstruction learning task, where $\mathbf {z_T}$ is the learned latent representation in the target domain, and $\mathbf {\tilde{t}}$ is a corrupted version of $\mathbf {t}$ obtained by randomly deleting or blanking some words and shuffling the word order. To train the model, we minimize the reconstruction error $\mathcal {L}_T$: $\mathcal {L}_T(\mathbf {\theta _{E_T}}, \mathbf {\theta _{G_T}}) = -\mathbb {E}_{\mathbf {t}\sim T}\left[\log p(\mathbf {t}|\mathbf {\tilde{t}}; \mathbf {\theta _{E_T}}, \mathbf {\theta _{G_T}})\right],$
where $\mathbf {\theta _{E_T}}$ and $\mathbf {\theta _{G_T}}$ are the sets of model parameters for the encoder and generator in the target domain. We train the whole model by jointly minimizing the supervised Seq2Seq training loss $\mathcal {L}_S$ and the unsupervised denoising auto-encoding loss $\mathcal {L}_T$ via multitask learning, so the total loss becomes $\mathcal {L} = \mathcal {L}_S + \lambda \mathcal {L}_T,$
where $\lambda $ is a hyper-parameter.
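A minimal sketch of how the two objectives could be combined in a training step is given below, assuming token-level cross-entropy for both the summarization branch and the denoising reconstruction branch; the model callables and variable names are illustrative, not taken from the released implementation.

```python
import torch.nn.functional as F

def seq2seq_nll(logits, targets, pad_id=0):
    # logits: (batch, len, vocab); targets: (batch, len)
    return F.cross_entropy(logits.transpose(1, 2), targets, ignore_index=pad_id)

def multitask_loss(model_s, model_t, article, headline, noisy_t, clean_t, lam=0.5):
    # Supervised summarization loss L_S on (article, headline) pairs (teacher forcing)
    loss_s = seq2seq_nll(model_s(article, headline[:, :-1]), headline[:, 1:])
    # Denoising auto-encoding loss L_T on corrupted/clean style sentences
    loss_t = seq2seq_nll(model_t(noisy_t, clean_t[:, :-1]), clean_t[:, 1:])
    return loss_s + lam * loss_t  # total loss L = L_S + lambda * L_T
```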
Methods ::: Parameter-Sharing Scheme
More constraints are necessary in the multitask training process. We aim to infer the conditional distribution as $ P(T|A)=G_T(E_S(A))$. However, without samples from $P(A, T)$, this is a challenging or even impossible task if $E_S$ and $E_T$, or $G_S$ and $G_T$ are completely independent of each other. Hence, we need to add some constraints to the network by relating $E_S$ and $E_T$, and $G_S$ and $G_T$. The simplest design is to share all parameters between $E_S$ and $E_T$, and apply the same strategy to $G_S$ and $G_T$. The intuition behind this design is that by exposing the model to both the summarization task and the style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article. However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus $T$, we share all parameters of the encoder between the two domains, i.e., between $E_S$ and $E_T$, whereas we divide the parameters of the decoder into two types: style-independent parameters $\mathbf {\theta _{\mathrm {ind}}}$ and style-dependent parameters $\mathbf {\theta _{\mathrm {dep}}}$. This means that only the style-independent parameters are shared between $G_S$ and $G_T$ while the style-dependent parameters are not. More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below.
Methods ::: Parameter-Sharing Scheme ::: Type 1. Style Layer Normalization
Inspired by previous work on image style transfer BIBREF40, we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style. This style layer normalization approach aims to transform a layer’s activation $\mathbf {x}$ into a normalized activation $\mathbf {z}$ specific to the style $s$: $\mathbf {z} = \gamma _s \odot \frac{\mathbf {x} - \mu }{\sigma } + \beta _s,$
where $\mu $ and $\sigma $ are the mean and standard deviation of the batch of $\mathbf {x}$, and $\gamma _s$ and $\beta _s$ are style-specific parameters learned from data.
Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers.
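A possible realization of this style layer normalization is sketched below, assuming the usual layer-norm statistics over the hidden dimension and one (gamma_s, beta_s) pair per style; module and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class StyleLayerNorm(nn.Module):
    """Layer normalization with one (gamma_s, beta_s) pair per style."""
    def __init__(self, d_model, num_styles, eps=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_styles, d_model))   # style-specific scale
        self.beta = nn.Parameter(torch.zeros(num_styles, d_model))   # style-specific shift
        self.eps = eps

    def forward(self, x, style_id):
        # x: (batch, len, d_model); style_id selects which style's parameters to use
        mu = x.mean(dim=-1, keepdim=True)
        sigma = x.std(dim=-1, keepdim=True)
        z = (x - mu) / (sigma + self.eps)
        return self.gamma[style_id] * z + self.beta[style_id]
```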
Methods ::: Parameter-Sharing Scheme ::: Type 2. Style-Guided Encoder Attention
Our model architecture contains the attention mechanism, where the decoder infers the probability of the next word not only conditioned on the previous words but also on the encoded input hidden states. The attention patterns should be different for the summarization and the reconstruction tasks due to their different inherent nature. We incorporate this intuition into the model by introducing the style-guided encoder attention into the multi-head attention module, which is defined as follows: $\mathbf {Q} = \mathbf {\mathrm {query}} \cdot \mathbf {W_q^s}, \;\; \mathbf {K} = \mathbf {\mathrm {key}} \cdot \mathbf {W_k}, \;\; \mathbf {V} = \mathbf {\mathrm {value}} \cdot \mathbf {W_v}, \;\; \mathrm {Attention}(\mathbf {Q}, \mathbf {K}, \mathbf {V}) = \mathrm {softmax}\left(\frac{\mathbf {Q}\mathbf {K}^{\top }}{\sqrt{d_{\mathrm {model}}}}\right)\mathbf {V},$
where $\mathbf {\mathrm {query}}$, $\mathbf {\mathrm {key}}$, and $\mathbf {\mathrm {value}}$ denote the triple of inputs into the multi-head attention module; $\mathbf {W_q^s}$, $\mathbf {W_k}$, and $\mathbf {W_v}$ denote the affine projection matrices used in the scaled dot-product attention; $d_{\mathrm {model}}$ is the dimension of the hidden states. We specialize the query projection matrix $\mathbf {W_q^s}$ for different styles, so that $\mathbf {Q}$ can be different to induce diverse attention patterns.
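The idea can be sketched as a simplified single-head attention in which only the query projection is selected per style, as below; the actual model uses multi-head attention inside the Transformer decoder, so this is only an illustration with made-up names.

```python
import math
import torch
import torch.nn as nn

class StyleGuidedAttention(nn.Module):
    """Encoder attention whose query projection W_q^s is style-specific."""
    def __init__(self, d_model, num_styles):
        super().__init__()
        self.w_q = nn.ModuleList([nn.Linear(d_model, d_model, bias=False)
                                  for _ in range(num_styles)])       # one W_q per style
        self.w_k = nn.Linear(d_model, d_model, bias=False)           # shared key projection
        self.w_v = nn.Linear(d_model, d_model, bias=False)           # shared value projection
        self.d_model = d_model

    def forward(self, query, key, value, style_id):
        q = self.w_q[style_id](query)            # style-specific query
        k, v = self.w_k(key), self.w_v(value)    # shared across styles
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_model)
        return scores.softmax(dim=-1) @ v
```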
Experiments ::: Datasets
We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text. The average sentence lengths in the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6 and 8.7 words, respectively.
Experiments ::: Datasets ::: Source Dataset
The source dataset contains news articles paired with corresponding headlines. To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K). After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set.
We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus BIBREF41 and treated the abstracts as the news articles. Following the standard pre-processing procedures BIBREF42, we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstract-headline pairs.
We then add into our source set the CNN summarization dataset, which is widely used for training abstractive summarization models BIBREF43. We used the short summaries in the original dataset as the news abstracts and automatically parsed the headlines for each news item from the dumped news web pages, collecting 90,236 news abstract-headline pairs in total.
Experiments ::: Datasets ::: Three Target Style Corpora ::: Humor and Romance
For the target style datasets, we follow BIBREF44 to use humor and romance novel collections in BookCorpus BIBREF45 as the Humor and Romance datasets. We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets.
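A naive version of this sentence-collection step might look as follows; the regex-based splitting and lower-casing are assumptions, since the exact preprocessing pipeline is not specified beyond the description above.

```python
import re

def collect_style_sentences(documents, max_sentences=500_000):
    """Split style-corpus documents into sentences until the target count is reached."""
    sentences = []
    for doc in documents:
        # naive sentence split on ., !, ? followed by whitespace
        for sent in re.split(r"(?<=[.!?])\s+", doc.strip()):
            if sent:
                sentences.append(sent.lower())
            if len(sentences) >= max_sentences:
                return sentences
    return sentences

print(collect_style_sentences(["He laughed. She smiled! A new day began."]))
```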
Experiments ::: Datasets ::: Three Target Style Corpora ::: Clickbait
We also tried to learn the writing style of click-baity headlines, since they have been shown to be highly attractive to readers. Thus we used The Examiner - SpamClickBait News dataset, denoted as the Clickbait dataset, from which we collected 500K headlines.
Some examples from each style corpus are listed in Table TABREF32.
Experiments ::: Baselines
We compared the proposed TitleStylist against the following five strong baseline approaches.
Experiments ::: Baselines ::: Neural Headline Generation (NHG)
We train the state-of-the-art summarization model, MASS BIBREF3, on our collected news abstracts-headlines paired data.
Experiments ::: Baselines ::: Gigaword-MASS
We test an off-the-shelf headline generation model, MASS from BIBREF3, which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles.
Experiments ::: Baselines ::: Neural Story Teller (NST)
It breaks the task down into two steps: it first generates headlines with the aforementioned NHG model, then applies style shift techniques to generate style-specific headlines BIBREF46. In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation. Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated. More details of this method can be found on the official website.
Experiments ::: Baselines ::: Fine-Tuned
We first train the NHG model as mentioned above, then further fine-tune it on the target style corpus via DAE training.
Experiments ::: Baselines ::: Multitask
We share all parameters between $E_S$ and $E_T$, and between $G_S$ and $G_T$, and train the model on both the summarization and DAE tasks. The model architecture is the same as that of NHG.
Experiments ::: Evaluation Metrics
To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation.
Experiments ::: Evaluation Metrics ::: Setup of Human Evaluation
We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators for evaluation to score the generated headlines. Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength. For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values). For relevance, human annotators are asked to evaluate how semantically relevant the headline is to the news body. For attractiveness, annotators are asked how attractive the headlines are. For fluency, we ask the annotators to evaluate how fluent and readable the text is. After the collection of human evaluation results, we averaged the scores as the final score. In addition, we have another independent human evaluation task about the style strength – we present the generated headlines from TitleStylist and baselines to the human judges and let them choose the one that most conforms to the target style such as humor. Then we define the style strength score as the proportion of choices.
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation
Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency. Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range. Good automatic evaluation results are necessary complementary evidence for the human evaluation of model effectiveness.
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Summarization Quality
We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU BIBREF47, METEOR BIBREF48, ROUGE BIBREF49 and CIDEr BIBREF50. For ROUGE, we used the Files2ROUGE toolkit, and for other metrics, we used the pycocoeval toolkit.
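As one possible way to compute these metrics, the sketch below uses the pycocoevalcap packaging of the COCO caption scorers (BLEU, ROUGE-L, CIDEr); the exact toolkit invocation used by the authors may differ, and METEOR (which requires a Java runtime) is omitted here.

```python
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider

def score_headlines(references, hypotheses):
    # Both arguments are lists of tokenized strings; the scorers expect dicts id -> list of strings.
    gts = {i: [r] for i, r in enumerate(references)}
    res = {i: [h] for i, h in enumerate(hypotheses)}
    scores = {}
    scores["BLEU-4"] = Bleu(4).compute_score(gts, res)[0][3]
    scores["ROUGE-L"] = Rouge().compute_score(gts, res)[0]
    scores["CIDEr"] = Cider().compute_score(gts, res)[0]
    return scores

print(score_headlines(["new frog species found in new york"],
                      ["a new frog species is found in nyc"]))
```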
Experiments ::: Evaluation Metrics ::: Setup of Automatic Evaluation ::: Language Fluency
We fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs.
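A sketch of this perplexity measurement with the Hugging Face transformers library is shown below; "gpt2-medium" stands in for the model after fine-tuning on the collected headlines.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Assumed setup: in practice the checkpoint would be the GPT-2 medium model
# fine-tuned on the headline corpus rather than the vanilla pretrained weights.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium").eval()

@torch.no_grad()
def perplexity(sentence):
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss   # mean token-level negative log-likelihood
    return math.exp(loss.item())

print(perplexity("What's that chuckle you hear? It may be the new frog from NYC."))
```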
Experiments ::: Experimental Details
We used the fairseq code base BIBREF52. During training, we use the Adam optimizer with an initial learning rate of $5\times 10^{-4}$, and the batch size is set to 3072 tokens per GPU with the parameter update frequency set to 4. For the random corruption in DAE training, we follow the standard practice of randomly deleting or blanking words with a uniform probability of $0.2$ and randomly shuffling the word order within 5 tokens. All datasets are lower-cased. $\lambda $ is set to 0.5 in the experiments. For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus, and the sampling strategy follows a uniform distribution with probability equal to $\lambda $.
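The corruption and batch-sampling procedure described above could be sketched as follows; splitting the 0.2 corruption probability evenly between deletion and blanking, and the key-based local shuffle, are assumptions rather than details taken from the paper.

```python
import random

def corrupt(tokens, p=0.2, shuffle_window=5, blank="<blank>"):
    """Randomly delete or blank words, then locally shuffle the word order."""
    noisy = []
    for tok in tokens:
        r = random.random()
        if r < p / 2:
            continue                              # delete the word
        noisy.append(blank if r < p else tok)     # blank it or keep it
    # local shuffle: each token moves at most `shuffle_window` positions
    keys = [i + random.uniform(0, shuffle_window) for i in range(len(noisy))]
    return [tok for _, tok in sorted(zip(keys, noisy))]

def sample_batch(source_iter, style_iter, lam=0.5):
    """Draw the next batch from the source (summarization) or target (DAE) stream."""
    return next(style_iter) if random.random() < lam else next(source_iter)

print(corrupt("what a lovely day it is in new york".split()))
```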
Results and Discussion ::: Human Evaluation Results
The human evaluation provides a comprehensive measurement of performance. We conduct experiments on four criteria: relevance, attraction, fluency, and style strength. We summarize the human evaluation results on the first three criteria in Table TABREF51, and on the last criterion in Table TABREF57. Note that in the automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform worse than the other methods (Section SECREF58), so we excluded them from the human evaluation to spare the raters unnecessary work.
Results and Discussion ::: Human Evaluation Results ::: Relevance
We first look at the relevance scores in Table TABREF51. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score. The outputs from NHG are usually like an organic reorganization of several keywords in the source context (as shown in Table TABREF52), thus appearing most relevant. It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original human-written headlines in terms of relevance, validating that our generation results are qualified in this aspect. Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity.
Results and Discussion ::: Human Evaluation Results ::: Attraction
In terms of attraction scores in Table TABREF51, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section SECREF1. (2) Our TitleStylist can generate more attractive headlines than the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles improves attraction, and that specializing some parameters of the model for different styles can further enhance it. (3) Adapting the model to the “Clickbait” style creates the most attractive headlines, even outweighing the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention. Notably, although we incorporated the “Clickbait” style into our summarization system, we still made sure to generate relevant headlines rather than overly exaggerated ones, which can be verified by our relevance scores.
Results and Discussion ::: Human Evaluation Results ::: Fluency
The human-annotated fluency scores in Table TABREF51 verified that our TitleStylist generated headlines are comparable or superior to the human-written headlines in terms of readability.
Results and Discussion ::: Human Evaluation Results ::: Style Strength
We also validated that our TitleStylist can carry more styles compared with the Multitask and NHG baselines by summarizing the percentage of choices by humans for the most humorous or romantic headlines in Table TABREF57.
Results and Discussion ::: Automatic Evaluation Results
Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality. This evaluation does not take other measures such as the style strength into consideration, but it serves as important complementary proof to ensure that the model has an acceptable level of summarization ability.
Table TABREF59 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines. We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines. In Table TABREF59, the first row “NHG” shows the performance of the current state-of-the-art summarization model on our data, and Table TABREF52 provides two examples of its generation output. Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body.
From Table TABREF59, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, although this model has been trained on a dataset more than 20 times larger. Both the NST and Fine-tuned baselines present very poor summarization performance, and the reason could be that both of them cast the problem into two steps, summarization and style transfer, where the latter step involves no summarization objective, which prevents the model from maintaining its summarization capability.
In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG. This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task. More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that are entirely different from the news data. However, unsupervised reconstruction training on both types of data can contribute to the summarization task, which throws light on the potential future work in summarization by incorporating unsupervised learning as augmentation.
We find in Table TABREF59 that TitleStylist-F achieves the best summarization performance. This indicates that, compared with the Multitask baseline, where the two tasks share all parameters, specializing the layer normalization and encoder-attention parameters can make $G_S$ focus more on summarization.
It is noteworthy that the summarization scores for TitleStylist are lower than those of TitleStylist-F but still comparable to NHG. This agrees with the fact that the $G_T$ branch focuses more on bringing stylistic linguistic patterns into the generated summaries, so the outputs deviate from pure summarization to some degree. However, their relevance remains close to the baseline NHG, which is the starting point we want to improve on. In the next section, we further validate through human evaluation that these headlines are faithful to the news article.
We also reported the perplexity (PPL) of the generated headlines to evaluate the language fluency, as shown in Table TABREF59. All outputs from baselines NHG and Multitask and our proposed TitleStylist show similar PPL compared with the test set (used in the fine-tuning stage) PPL 42.5, indicating that they are all fluent expressions for news headlines.
Results and Discussion ::: Extension to Multi-Style
We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model. That is, we simultaneously trained the summarization task on the headlines data and the DAE task on the three target style corpora. And we made the layer normalization and encoder-attention parameters specialized for these four styles (fact, humor, romance, and clickbait) and shared the other parameters. We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table TABREF61. From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to TitleStylist for all three styles. Besides, we conducted another human study to determine the better headline between the two models in terms of attraction, and we allow human annotators to choose both options if they deem them as equivalent. The result is presented in the last column of Table TABREF61, which shows that the attraction of TitleStylist-Versatile outputs is competitive to TitleStylist. TitleStylist-Versatile thus generates multiple headlines in different styles altogether, which is a novel and efficient feature.
Conclusion
We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction. To this end, we presented a multitask framework to induce styles into summarization, and proposed the parameters sharing scheme to enhance both summarization and stylization capabilities. Through experiments, we validated our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models.
Acknowledgement
We appreciate all the volunteer native speakers (Shreya Karpoor, Lisa Orii, Abhishek Mohan, Paloma Quiroga, etc.) for the human evaluation of our study, and thank the reviewers for their inspiring comments. Joey Tianyi Zhou is partially supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project No. A18A1b0045). | human evaluation task about the style strength |
f3766c6937a4c8c8d5e954b4753701a023e3da74 | f3766c6937a4c8c8d5e954b4753701a023e3da74_0 | Q: How is fluency automatically evaluated?
Text: Introduction
Every good article needs a good title, which should not only be able to condense the core meaning of the text, but also sound appealing to the readers for more exposure and memorability. However, even the best current Headline Generation (HG) systems can only fulfill the former requirement and perform poorly on the latter. For example, in Figure FIGREF2, the plain headline by an HG model “Summ: Leopard Frog Found in New York City” is less eye-catching than the style-carrying ones such as “What's That Chuckle You Hear? It May Be the New Frog From NYC.”
To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG). Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity. It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others.
SHG is a highly skilled creative process, usually possessed only by expert writers. One of the most famous headlines in American publications, “Sticks Nix Hick Pix,” could be such an example. In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly. More fundamentally, the training data comprise a mixture of styles (e.g., the Gigaword dataset BIBREF5), obstructing the models from learning a distinct style.
In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation. We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity. Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus. In particular, based on the transformer architecture BIBREF6, we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text. This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters. We validate the model on three tasks: humorous, romantic, and click-baity headline generation. Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure FIGREF2.
The main contributions of our paper are listed below:
To the best of our knowledge, it is the first research on the generation of attractive news headlines with styles without any supervised style-specific article-headline paired data.
Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones.
Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for reference and inspiring them to think out of the box.
Related Work
Our work is related to summarization and text style transfer.
Related Work ::: Headline Generation as Summarization
| fine-tuned the GPT-2 medium model BIBREF51 on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs |
2898e4aa7a3496c628e7ddf2985b48fb11aa3bba | 2898e4aa7a3496c628e7ddf2985b48fb11aa3bba_0 | Q: What are the measures of "performance" used in this paper?
Text: Introduction
The emergence of web services such as blogs, microblogs and social networking websites allows people to contribute information publicly. This user-generated information is generally more personal, informal and often contains personal opinions. In aggregate, it can be useful for reputation analysis of entities and products, natural disaster detection, obtaining first-hand news, or even demographic analysis. Twitter, an easily accessible source of information, allows users to voice their opinions and thoughts in short texts known as tweets.
Latent Dirichlet allocation (LDA) BIBREF0 is a popular form of topic model. Unfortunately, a direct application of LDA to tweets yields poor results as tweets are short and often noisy BIBREF1, i.e., tweets are unstructured and often contain grammatical and spelling errors, as well as informal words such as user-defined abbreviations due to the 140-character limit. LDA fails on short tweets since it is heavily dependent on word co-occurrence. Also notable is that text in tweets may contain special tokens known as hashtags; they are used as keywords and allow users to link their tweets with other tweets tagged with the same hashtag. Nevertheless, hashtags are informal since they have no standards. Hashtags can be used both as inline words and as categorical labels. Hence, instead of being hard labels, hashtags are best treated as special words which can be the themes of the tweets. Tweets are thus challenging for topic models, and ad hoc alternatives are used instead. In other text analysis applications, tweets are often `cleansed' by NLP methods such as lexical normalization BIBREF2. However, the use of normalization is also criticized BIBREF3.
In this paper, we propose a novel method for short text modeling by leveraging the auxiliary information that accompanies tweets. This information, complementing word co-occurrence, allows us to model the tweets better, as well as opening the door to more applications, such as user recommendation and hashtag suggestion. Our main contributions include: 1) a fully Bayesian nonparametric model called Twitter-Network (TN) topic model that models tweets very well; and 2) a combination of both the hierarchical Poisson Dirichlet process (HPDP) and the Gaussian process (GP) to jointly model text, hashtags, authors and the followers network. We also develop a flexible framework for arbitrary PDP networks, which allows quick deployment (including inference) of new variants of HPDP topic models. Despite the complexity of the TN topic model, its implementation is made relatively straightforward with the use of the framework.
Background and Related Work
LDA is often extended for different types of data; notable examples that use auxiliary information are the author-topic model BIBREF4, the tag-topic model BIBREF5, and Topic-Link LDA BIBREF6. However, these models deal with only one kind of additional information and do not work well with tweets since they are designed for other types of text data. Note that the tag-topic model treats tags as hard labels and uses them to group text documents, which is not appropriate for tweets due to the noisy nature of hashtags. Twitter-LDA BIBREF1 and the behavior-topic model BIBREF7 were designed to explicitly model tweets. Neither is an admixture model, since they restrict each tweet to a single topic. The behavior-topic model analyzes the “posting behavior” associated with each topic for user recommendation. On the other hand, the biterm topic model BIBREF8 uses only biterm co-occurrence to model tweets, discarding document-level information. Neither the biterm topic model nor Twitter-LDA incorporates any auxiliary information. All the above topic models also share the limitation that the number of topics needs to be chosen in advance, which is difficult since this number is not known.
To sidestep the need of choosing the number of topics, BIBREF9 proposed Hierarchical Dirichlet process (HDP) LDA, which utilizes the Dirichlet process (DP) as nonparametric prior. Furthermore, one can replace the DP with the Poisson-Dirichlet process (PDP, also known as the Pitman-Yor process), which models the power-law of word frequencies distributions in natural languages. In natural languages, the distribution of word frequencies exhibits a power-law BIBREF10 . For topic models, replacing the Dirichlet distribution with the PDP can yield great improvement BIBREF11 .
Some recent work models text data with network information (BIBREF6, BIBREF12, BIBREF13); however, these models are parametric in nature and can be restrictive. In contrast, Miller et al. BIBREF14 and Lloyd et al. BIBREF15 model network data directly with nonparametric priors, i.e., with the Indian Buffet process and the Gaussian process respectively, but do not model text.
Model Summary
The TN topic model makes use of the accompanying hashtags, authors, and followers network to model tweets better. The TN topic model is composed of two main components: a HPDP topic model for the text and hashtags, and a GP based random function model for the followers network. The authorship information serves to connect the two together.
We design our HPDP topic model for text as follows. First, generate the global topic distribution $\mu _0$ that serves as a prior. Then generate the respective authors' topic distributions $\nu $ for each author, and a miscellaneous topic distribution $\mu _1$ to capture topics that deviate from the authors' usual topics. Given $\nu $ and $\mu _1$, we generate the topic distributions for the documents and words ($\eta $, $\theta ^{\prime }$, $\theta $). We also explicitly model the influence of hashtags on words. Hashtag and word generation follows standard LDA and is not discussed here. Note that the tokens of hashtags are shared with the words, i.e., the hashtag #happy shares the same token as the word happy. Also note that all distributions on probability vectors are modeled by the PDP, making the model a network of PDP nodes.
The network modeling is connected to the HPDP topic model via the author topic distributions $\nu $ , which we treat as inputs to the GP in the network model. The GP, denoted as $\mathcal {F}$ , determines the links between the authors ( $x$ ). Figure 1 displays the graphical model of TN, where regions a and b show the network model and the topic model respectively. See the supplementary material for a detailed description. We emphasize that our treatment of the network model is different from that of BIBREF15 . We define a new kernel function based on cosine similarity in our network model, which provides a significant improvement over the original kernel function. We also derive a new sampling procedure for inference due to the additive coupling of topic distributions and network connections.
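The kernel itself is specified in the supplementary material; the sketch below shows one plausible cosine-similarity kernel over author topic distributions, together with a heavily simplified stand-in for the random function model that draws one latent value per author and combines them additively before squashing into link probabilities. All settings here are illustrative assumptions.

import numpy as np

def cosine_kernel(Nu, scale=1.0, jitter=1e-6):
    """Gram matrix over author topic distributions using cosine similarity."""
    Nu = np.asarray(Nu, dtype=float)
    norms = np.linalg.norm(Nu, axis=1, keepdims=True)
    gram = scale * (Nu @ Nu.T) / (norms @ norms.T)
    return gram + jitter * np.eye(len(Nu))      # jitter keeps the matrix positive definite

def sample_link_probabilities(Nu, seed=2):
    rng = np.random.default_rng(seed)
    gram = cosine_kernel(Nu)
    f = rng.multivariate_normal(np.zeros(len(gram)), gram)   # one GP draw per author
    logits = f[:, None] + f[None, :]                         # additive pairwise score
    return 1.0 / (1.0 + np.exp(-logits))                     # sigmoid -> follow probability

Nu = np.random.default_rng(3).dirichlet(np.ones(20), size=10)   # 10 authors, 20 topics
print(sample_link_probabilities(Nu).round(2))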
Posterior Inference
We alternately perform Markov chain Monte Carlo (MCMC) sampling on the topic model and the network model, conditioned on each other. We derive a collapsed Gibbs sampler for the topic model and a Metropolis-Hastings (MH) algorithm for the network model. We also develop a framework to perform collapsed Gibbs sampling on any Bayesian network of PDPs, built upon the work of BIBREF16 , BIBREF17 , which allows quick prototyping and development of new variants of topic models. We refer the reader to the supplementary material for the technical details.
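The actual conditionals and proposals are derived in the supplementary material; as a minimal, self-contained illustration of the alternating scheme, the toy example below interleaves a collapsed-Gibbs-style resampling of discrete topic labels with a Metropolis-Hastings update of a single network parameter. The model here is a deliberately crude stand-in, not the TN sampler.

import numpy as np

rng = np.random.default_rng(4)
docs = rng.integers(0, 5, size=(8, 6))                 # toy word-count "documents"
links = rng.integers(0, 2, size=(8, 8))                # toy follower links

def gibbs_topic_sweep(z, alpha=0.5, beta=0.5, n_topics=2):
    """Collapsed-Gibbs-style resampling of one topic label per document."""
    for d in range(len(docs)):
        log_p = np.zeros(n_topics)
        for k in range(n_topics):
            others = [i for i in range(len(docs)) if i != d and z[i] == k]
            word_counts = docs[others].sum(axis=0) + beta
            log_p[k] = np.log(len(others) + alpha) + (docs[d] * np.log(word_counts / word_counts.sum())).sum()
        p = np.exp(log_p - log_p.max())
        z[d] = rng.choice(n_topics, p=p / p.sum())
    return z

def mh_network_step(rho, z, step=0.1):
    """Metropolis-Hastings update of a single link-probability parameter."""
    def log_lik(r):
        same = (z[:, None] == z[None, :]).astype(float)
        p = np.clip(r * same + (1 - r) * (1 - same), 1e-6, 1 - 1e-6)
        return (links * np.log(p) + (1 - links) * np.log(1 - p)).sum()
    prop = float(np.clip(rho + rng.normal(0, step), 1e-3, 1 - 1e-3))
    return prop if np.log(rng.uniform()) < log_lik(prop) - log_lik(rho) else rho

z, rho = rng.integers(0, 2, size=len(docs)), 0.5
for _ in range(200):                                   # alternate the two samplers
    z = gibbs_topic_sweep(z)
    rho = mh_network_step(rho, z)
print(z, round(rho, 3))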
Experiments and Applications
We evaluate the TN topic model quantitatively with standard topic model measures such as test-set perplexity, likelihood convergence, and clustering measures. Qualitatively, we evaluate the model by visualizing the topic summaries and the authors' topic distributions, and by performing an automatic labeling task. We compare our model with HDP-LDA, a nonparametric variant of the author-topic model (ATM), and the original random function network model. We also perform ablation studies to show the importance of each component of the model. The results of the comparison and the ablation studies are shown in Table 1 . We use two tweet corpora for the experiments. The first is a subset of the Twitter7 dataset BIBREF18 , obtained by querying with certain keywords (e.g. finance, sports, politics). We remove tweets that are not in English with langid.py BIBREF19 and filter out authors who have no network information or who authored fewer than 100 tweets. The corpus consists of 60370 tweets by 94 authors. We then randomly select 90% of the dataset as training documents and use the rest for testing. The second corpus is obtained from BIBREF20 and contains a total of 781186 tweets. We note that we perform no word normalization to prevent any loss of meaning of the noisy text.
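A preprocessing pipeline along these lines might look like the sketch below; the tweet record fields are assumptions, while the langid.classify call follows the library's documented interface.

import langid
from collections import defaultdict

def filter_tweets(tweets, min_per_author=100):
    """Keep English tweets from authors with network information and enough tweets."""
    by_author = defaultdict(list)
    for t in tweets:                                   # each t assumed to be a dict-like record
        lang, _score = langid.classify(t["text"])
        if lang != "en" or not t.get("followers"):
            continue
        by_author[t["author"]].append(t)
    return [t for ts in by_author.values() if len(ts) >= min_per_author for t in ts]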
Conclusion and Future Work
We propose the fully Bayesian nonparametric Twitter-Network (TN) topic model, which jointly models tweets and the associated social network information. Our model employs a nonparametric Bayesian approach using the PDP and GP, and achieves flexible modeling by performing inference on a network of PDPs. Our experiments with Twitter datasets show that the TN topic model achieves significant improvement over existing baselines. Furthermore, our ablation study demonstrates the usefulness of each component of the TN model. Our model also enables interesting applications such as author recommendation, as well as providing additional informative inferences.
We also engineered a framework for rapid topic model development, which is important due to the complexity of the model. While we could have used Adaptor Grammars BIBREF21 , our framework yields more efficient computation for topic models.
Future work includes speeding up the posterior inference algorithm, especially for the network model, as well as incorporating other auxiliary information available in social media, such as location, hyperlinks, and multimedia content. We also intend to explore other applications that can be addressed with the TN topic model, such as hashtag recommendation. It would also be interesting to apply the TN topic model to other types of data, such as blog and publication data.
Acknowledgement
We would like to thank the anonymous reviewers for their helpful feedback and comments.
NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program. | test-set perplexity, likelihood convergence and clustering measures, visualizing the topic summaries, authors' topic distributions and by performing an automatic labeling task |
fa9df782d743ce0ce1a7a5de6a3de226a7e423df | fa9df782d743ce0ce1a7a5de6a3de226a7e423df_0 | Q: What are the languages they consider in this paper?
Text: Introduction
Anecdotally speaking, fluent bilingual speakers rarely face trouble translating a task learned in one language to another. For example, a bilingual speaker who is taught a math problem in English will trivially generalize to other known languages. Furthermore, there is a large body of evidence in linguistics arguing that although separate lexicons exist in multilingual speakers, the core representations of concepts and theories are shared in memory BIBREF2 , BIBREF3 , BIBREF4 . The fundamental question we are interested in answering is the learnability of these shared representations within a statistical framework.
We approached this problem from a linguistics perspective. Languages have vastly varying syntactic features and rules. Linguistic Relativity studies the impact of these syntactic variations on the formation of concepts and theories BIBREF5 . Within this framework of study, the two schools of thought are linguistic determinism and weak linguistic influence. Linguistic determinism argues that language entirely shapes the range of cognitive processes, including the creation of various concepts, but is generally agreed to be false BIBREF6 , BIBREF5 . Although there exists some weak linguistic influence, it is by no means fundamental BIBREF7 . The superfluous nature of syntactic variations across languages brings forward the argument of principles and parameters (PnP), which hypothesizes the existence of a small distributed parameter representation that captures the syntactic variance between languages, denoted by parameters (e.g. head-first or head-final syntax), as well as common principles shared across all languages BIBREF8 . Universal Grammar (UG) is the study of the principles and parameters that are universal across languages BIBREF1 .
The ability to learn these universalities would allow us to learn representations of language that are fundamentally agnostic of the specific language itself. Doing so would allow us to learn a task in one language and reap the benefits of all other languages without needing multilingual datasets. Our attempt to learn these representations begins by taking inspiration from linguistics and formalizing UG as an optimization problem.
We train downstream models using language agnostic universal representations on a set of tasks and show the ability for the downstream models to generalize to languages that we did not train on.
Related Work
Our work attempts to unite universal (task agnostic) representations with multilingual (language agnostic) representations BIBREF9 , BIBREF10 . The recent trend in universal representations has been moving away from context-less unsupervised word embeddings to context-rich representations. Deep contextualized word representations (ELMo) trains an unsupervised language model on a large corpus of data and applies it to a large set of auxiliary tasks BIBREF9 . These unsupervised representations boosted the performance of models on a wide array of tasks. Along the same lines, BIBREF10 showed the power of using latent representations of translation models as features across other non-translation tasks. In general, initializing models with pre-trained language models shows promise over the standard initialization with word embeddings. Even further, BIBREF11 show that an unsupervised language model trained on a large corpus will contain a neuron that strongly correlates with sentiment without ever training on a sentiment task, implying that unsupervised language models may be picking up informative and structured signals.
In the field of multilingual representations, a fair bit of work has been done on multilingual word embeddings. BIBREF12 explored the possibility of training massive amounts of word embeddings utilizing either parallel data or bilingual dictionaries via the SkipGram paradigm. Later on, an unsupervised approach to multilingual word representations was proposed by BIBREF13 , which utilized an adversarial training regimen to place word embeddings into a shared latent space. Although word embeddings show great utility, they fall behind methods that exploit sentence structure as well as words. Less work has been done on multilingual sentence representations. Most notably, both BIBREF14 and BIBREF15 propose ways to learn multilingual sentence representations through a translation task.
We propose learning language agnostic representations through constrained language modeling to capture the power of both multilingual and universal representations. By decoupling language from our representations we can train downstream models on monolingual data and automatically apply the models to other languages.
Universal Grammar as an Optimization Problem
Statistical language models approximate the probability distribution of a series of words by predicting the next word given a sequence of previous words. $ p(w_0,...,w_n) = \prod _{i=1}^n p(w_i \mid w_0,...,w_{i-1}) $
where $w_i$ are indices representing words in an arbitrary vocabulary.
Learning grammar is equivalent to language modeling, as the support of $p$ will represent the set of all grammatically correct sentences. Furthermore, let $p_j(\cdot )$ represent the language model for the jth language and $w^j$ represents a word from the jth language. Let $k_j$ represent a distributed representation of a specific language along the lines of the PnP argument BIBREF8 . UG, through the lens of statistical language modeling, hypothesizes the existence of a factorization of $p_j(\cdot )$ containing a language agnostic segment. The factorization used throughout this paper is the following:
$$b = u(e_j(w^j_0,\ldots ,w^j_i))$$
$$p_j(w_i \mid w_0,\ldots ,w_{i-1}) = e_j^{-1}(h(b,k_j))$$
$$\text{s.t.}\;\; d(\mathbf {p}(b\mid j_\alpha ) \mid \mid \mathbf {p}(b\mid j_\beta )) < \epsilon \;\;\; \forall \, j_\alpha , j_\beta $$
The distribution matching constraint $d$ ensures that the representations across languages are common, as hypothesized by the UG argument.
Function $e_j: \mathbb {N}^i \rightarrow \mathbb {R}^{i \times d}$ is a language specific function which takes an ordered set of integers representing tokens and outputs a vector of size $d$ per token. Function $u: \mathbb {R}^{i \times d} \rightarrow \mathbb {R}^{i \times d}$ takes the language specific representation and attempts to embed into a language agnostic representation. Function $h: (\mathbb {R}^{i \times d}, \mathbb {R}^{f}) \rightarrow \mathbb {R}^{i \times d}$ takes the universal representation as well as a distributed representation of the language of size $f$ and returns a language specific decoded representation. $e^{-1}$ maps our decoded representation back to the token space.
For the purposes of distribution matching we utilize the GAN framework. Following recent successes we use Wasserstein-1 as our distance function $d$ BIBREF16 .
Given two languages $j_\alpha $ and $j_\beta $ , the distribution of the universal representations should be within $\epsilon $ of each other with respect to the $W_1$ distance. Using the Kantorovich-Rubinstein duality, we define
$$d(\mathbf {p}(b\mid j_\alpha ) \mid \mid \mathbf {p}(b\mid j_\beta )) = \sup _{||f_{\alpha ,\beta }||_L \le 1} \mathbb {E}_{x\sim \mathbf {p}(b\mid j_\alpha )}\left[f_{\alpha ,\beta }(x)\right] - \mathbb {E}_{x\sim \mathbf {p}(b\mid j_\beta )}\left[f_{\alpha ,\beta }(x)\right]$$ (Eq. 2)
where $L$ is the Lipschitz constant of $f$ . Throughout this paper we satisfy the Lipschitz constraint by clamping the parameters to a compact space, as done in the original WGAN paper BIBREF16 . Therefore the complete loss function for $m$ languages each containing $N$ documents becomes:
$$\max _{\theta } \sum _{\alpha =0}^m \sum _{i=0}^N \log p_{j_\alpha }(w_{i,0}^\alpha ,...,w_{i,n}^\alpha ; \theta )\nonumber - \frac{\lambda }{m^2}\sum _{\alpha =0}^m \sum _{\beta =0}^m d(\mathbf {p}(b\mid j_\alpha ) \mid \mid \mathbf {p}(b\mid j_\beta ))$$ (Eq. 3)
$\lambda $ is a scaling factor for the distribution constraint loss.
UG-WGAN
We denote our specific implementation of this optimization problem as UG-WGAN. We implement each function described in the previous section using neural networks. For $e_j$ in the factorization above we use a language-specific embedding table followed by an LSTM BIBREF17 . Function $u$ is simply a stack of LSTMs. Function $h$ takes input from $u$ as well as a PnP representation of the language via an embedding table. Calculating the true inverse $e^{-1}$ is non-trivial, so we use another language-specific LSTM whose outputs we multiply by the transpose of the embedding table of $e$ to obtain token probabilities. For regularization we use dropout and locked dropout where appropriate BIBREF18 .
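A minimal PyTorch sketch of this architecture is given below; the hidden sizes, the language-embedding dimension, and the weight tying between the decoder output and the embedding table are assumptions based on the description above rather than the authors' implementation (regularization such as locked dropout is omitted).

import torch
import torch.nn as nn

class UGLanguageModel(nn.Module):
    def __init__(self, vocab_sizes, n_langs, d_emb=300, d_hid=512, d_lang=32):
        super().__init__()
        # language-specific encoders e_j: embedding table + LSTM
        self.embs = nn.ModuleList(nn.Embedding(v, d_emb) for v in vocab_sizes)
        self.enc = nn.ModuleList(nn.LSTM(d_emb, d_hid, batch_first=True) for _ in vocab_sizes)
        # universal encoder u: stacked LSTM shared across all languages
        self.u = nn.LSTM(d_hid, d_hid, num_layers=2, batch_first=True)
        # distributed language ("PnP") representation k_j fed to the decoder h
        self.lang_emb = nn.Embedding(n_langs, d_lang)
        self.dec = nn.ModuleList(nn.LSTM(d_hid + d_lang, d_emb, batch_first=True) for _ in vocab_sizes)

    def forward(self, tokens, lang):
        x, _ = self.enc[lang](self.embs[lang](tokens))
        b, _ = self.u(x)                                       # universal representation
        k = self.lang_emb(torch.tensor([lang])).expand(b.size(0), b.size(1), -1)
        h, _ = self.dec[lang](torch.cat([b, k], dim=-1))
        logits = h @ self.embs[lang].weight.t()                # tie output to the embedding table
        return logits, b                                       # b is also fed to the critic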
The critic, adopting the terminology of BIBREF16 , takes the input from $u$ , feeds it through a stacked LSTM, and aggregates the hidden states using linear sequence attention as described in DrQA BIBREF19 . Once we have the aggregated state, we map it to an $m \times m$ matrix from which we can compute the total Wasserstein loss. A Batch Normalization layer BIBREF20 is appended to the end of the critic. The $(\alpha , \beta )$ -th index of the matrix corresponds to the output of $f$ in calculating $W_1(\mathbf {p}(b\mid j_\alpha ) \mid \mid \mathbf {p}(b\mid j_\beta ))$ .
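One plausible way to wire up the pairwise Wasserstein estimate and the weight clipping is sketched below; the clipping range, the assumed critic output shape (batch, m, m), and the aggregation over language pairs are illustrative choices, not taken from the authors' code.

import torch

CLIP = 0.01  # assumed clipping range for the Lipschitz constraint

def critic_loss(critic, b_by_lang):
    """Summed pairwise W1 estimates between universal representations.

    b_by_lang maps a language index to a batch of universal representations b;
    critic(b) is assumed to return a (batch, m, m) tensor of pairwise scores f.
    """
    loss = 0.0
    langs = list(b_by_lang)
    for a in langs:
        for c in langs:
            if a == c:
                continue
            # E_{x ~ p(b|j_a)}[f_ac(x)] - E_{x ~ p(b|j_c)}[f_ac(x)]
            loss = loss + critic(b_by_lang[a])[:, a, c].mean() - critic(b_by_lang[c])[:, a, c].mean()
    return loss

def clip_critic(critic):
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-CLIP, CLIP)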
We trained UG-WGAN with a variety of languages depending on the downstream task. For each language we used the respective Wikipedia dump, from which we extract all pages using the wiki2text utility and build language-specific vocabularies consisting of 16k BPE tokens BIBREF21 . During each batch we sample documents from our set of languages that are of approximately the same length. We train our language model via BPTT, where the truncation length progressively grows from 15 to 50 throughout training. The critic is updated 10 times for every update of the language model. We trained each language model for 14 days on an NVidia Titan X. For each language model we performed a sweep over $\lambda $ , but in general we found that $\lambda =0.1$ works sufficiently well for minimizing both perplexity and the Wasserstein distance.
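Putting the pieces together, a single training step could look like the following sketch, reusing the model and critic helpers sketched above; only the 10:1 critic-to-language-model update ratio and $\lambda =0.1$ come from the text, while the optimizer and batch-sampling details are assumptions.

import torch
import torch.nn.functional as F

LAMBDA, CRITIC_STEPS = 0.1, 10  # lambda = 0.1 and the 10:1 update ratio come from the text

def train_step(model, critic, opt_lm, opt_critic, sample_batch):
    # 1) Several critic updates with the language model frozen.
    for _ in range(CRITIC_STEPS):
        batches = sample_batch()                               # {lang index: (B, T) token tensor}
        b_by_lang = {l: model(x, l)[1].detach() for l, x in batches.items()}
        opt_critic.zero_grad()
        (-critic_loss(critic, b_by_lang)).backward()           # the critic maximizes the W1 estimate
        opt_critic.step()
        clip_critic(critic)                                    # enforce the Lipschitz constraint

    # 2) One language-model update: negative log-likelihood plus the scaled W1 penalty.
    batches = sample_batch()
    nll, b_by_lang = 0.0, {}
    for l, x in batches.items():
        logits, b = model(x[:, :-1], l)
        nll = nll + F.cross_entropy(logits.reshape(-1, logits.size(-1)), x[:, 1:].reshape(-1))
        b_by_lang[l] = b
    m = len(batches)
    loss = nll + (LAMBDA / m ** 2) * critic_loss(critic, b_by_lang)
    opt_lm.zero_grad()
    loss.backward()
    opt_lm.step()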
Exploration
A couple of interesting questions arise from the described training procedure. Is the distribution matching constraint necessary, or will simple joint language model training exhibit the properties we are interested in? Can this optimization process fundamentally learn individual languages' grammars while being constrained by a universal channel? What commonalities between languages can we learn, and are they informative enough to be exploited?
We can test the usefulness of the distribution matching constraint by running an ablation study on the $\lambda $ hyper-parameter. We trained UG-WGAN on English, Spanish, and Arabic Wikipedia dumps following the procedure described above. We kept all the hyper-parameters consistent apart from varying $\lambda $ from 0 to 10. The results are shown in Figure 2 . Without any weight on the distribution matching term, the critic trivially learns to separate the various languages and no further training reduces the Wasserstein distance. The joint language model internally learns individual language models which are partitioned in the latent space. We can see this by running t-SNE on the universal ( $u(\cdot )$ ) representation of our model and observing clusters of the same language, as we did in Figure 3 BIBREF22 . A universal model satisfying the distribution matching constraint would mix all languages uniformly within its latent space.
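The clustering check can be reproduced with an off-the-shelf t-SNE along these lines; pooling the per-token universal representations into one vector per sentence beforehand is an assumption.

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_universal_space(reps_by_lang):
    """reps_by_lang: {language name: (n_sentences, d) array of pooled u(.) outputs}."""
    X = np.concatenate(list(reps_by_lang.values()))
    coords = TSNE(n_components=2, perplexity=30).fit_transform(X)
    start = 0
    for lang, reps in reps_by_lang.items():
        end = start + len(reps)
        plt.scatter(coords[start:end, 0], coords[start:end, 1], s=4, label=lang)
        start = end
    plt.legend()
    plt.show()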
To test the universality of UG-WGAN representations we will apply them to a set of orthogonal NLP tasks. We will leave the discussion on the learnability of grammar to the Discussion section of this paper.
Experiments
By introducing a universal channel into our language model we reduce a representation's dependence on a single language. Therefore we can utilize an arbitrary set of languages in training an auxiliary task over UG encodings. For example, we can train a downstream model on only one language's data and transfer the model trivially to any other language that UG-WGAN was trained on.
Sentiment Analysis
To test this hypothesis we first trained UG-WGAN on English, Chinese, and German following the procedure described in Section "UG-WGAN" . The embedding size was 300 and the internal LSTM hidden size was 512. A dropout rate of $0.1$ was used, and the model was trained with the ADAM optimization method BIBREF23 . Since we are interested in the zero-shot capabilities of our representation, we trained our sentiment analysis model only on the English IMDB Large Movie Review dataset and tested it on the Chinese ChnSentiCorp dataset and the German SB-10K dataset BIBREF24 , BIBREF25 . We binarize the labels for all datasets.
Our sentiment analysis model runs a bi-directional LSTM on top of fixed UG representations, takes the last hidden state, and computes a logistic regression over it. It was trained using standard SGD with momentum.
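Concretely, the downstream classifier can be as small as the following sketch; the hidden size and optimizer settings are assumptions, and the UG encoder is kept frozen.

import torch
import torch.nn as nn

class SentimentHead(nn.Module):
    def __init__(self, d_ug=512, d_hid=256):
        super().__init__()
        self.lstm = nn.LSTM(d_ug, d_hid, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * d_hid, 1)       # logistic regression on the last hidden state

    def forward(self, b):                        # b: frozen UG representations, shape (B, T, d_ug)
        states, _ = self.lstm(b)
        return torch.sigmoid(self.out(states[:, -1]))

head = SentimentHead()
optimizer = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)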
We also compare against encodings learned as a by-product of multi-encoder and decoder neural machine translation as a baseline BIBREF28 . We see that UG representations are useful in situations where there is a lack of data in a specific language. The language-agnostic properties of UG embeddings allow us to do successful zero-shot learning without needing any parallel corpus; furthermore, the ability to generalize from language modeling to sentiment attests to the universal properties of these representations. Although we are not able to improve over the state of the art in a single language, we are able to learn a model that does surprisingly well on a set of languages without multilingual data.
NLI
A natural language inference task consists of two sentences, a premise and a hypothesis, which are either contradictions, entailments, or neutral. Learning an NLI task takes a certain nuanced understanding of language. Therefore it is of interest whether or not UG-WGAN captures the necessary linguistic features. For this task we use the Stanford NLI (sNLI) dataset as our training data in English BIBREF29 . To test the zero-shot learning capabilities, we created a Russian sNLI test set by randomly sampling 400 sNLI test samples and having a native Russian speaker translate both premise and hypothesis to Russian. The label was kept the same.
For this experiment we trained UG-WGAN on English and Russian following the procedure described in Section "UG-WGAN" . We kept the hyper-parameters equivalent to those of the sentiment analysis experiment. All of the NLI models tested were run over the fixed UG embeddings. We trained two different models from the literature, the Densely-Connected Recurrent and Co-Attentive Network by BIBREF30 and the Multiway Attention Network by BIBREF31 . Please refer to these papers for further implementation details.
UG representations contain enough information to non-trivially generalize the NLI task to unseen languages. That being said, we do see a relatively large drop in performance moving across languages which hints that either our calculation of the Wasserstein distance may not be sufficiently accurate or the universal representations are biased toward specific languages or tasks.
One hypothesis might be that as we increase $\lambda $ the cross lingual generalization gap (difference in test error on a task across languages) will vanish. To test this hypothesis we conducted the same experiment where UG-WGAN was trained with a $\lambda $ ranging from 0 to 10. From each of the experiments we picked the model epoch which showed the best perplexity. The NLI specific model was the Densely-Connected Recurrent and Co-Attentive Network.
Increasing $\lambda $ does not seem to have a significant impact on the generalization gap but has a large impact on test error. Our hypothesis is that a large $\lambda $ does not provide the model with enough freedom to learn useful representations, since the optimization's focus would largely be on minimizing the Wasserstein distance, while a small $\lambda $ permits this freedom. One reason we might be seeing this generalization gap is the way we satisfy the Lipschitz constraint. It has been shown that there are better constraints than clipping parameters to a compact space, such as a gradient penalty BIBREF32 . This is a future direction that can be explored.
Discussion
Universal Grammar also comments on the learnability of grammar, stating that statistical information alone is not enough to learn grammar and some form of native language faculty must exist, sometimes titled the poverty of stimulus (POS) argument BIBREF33 , BIBREF34 . From a machine learning perspective, we're interested in extracting informative features and not necessarily a completely grammatical language model. That being said it is of interest to what extent language models capture grammar and furthermore the extent to which models trained toward the universal grammar objective learn grammar.
One way to measure universality is to study the perplexity of our multi-lingual language model as we increase the number of languages. To do so we trained 6 UG-WGAN models on the following languages: English, Russian, Arabic, Chinese, German, Spanish, and French. We maintain the same procedure as described above. The hidden size of the language model was increased to 1024, with 16K BPE tokens being used. The first model was trained on English and Russian, the second on English, Russian, and Arabic, and so on. For Arabic we still trained from left to right even though the language is naturally read from right to left. We report the results in Figure 5 . As the number of languages increases, the gap between a UG-WGAN without any distribution matching and one with it diminishes. This implies that the efficiency and representative power of UG-WGAN grows as we increase the number of languages it has to model.
We see from Figure 2 that perplexity worsens in proportion to $\lambda $ . We explore the differences by sampling sentences from an unconstrained language model and a $\lambda =0.1$ language model trained on English and Spanish in Table 3 . In general there is a very small difference between a language model trained with a Universal Grammar objective and one without. The Universal Grammar model tends to make more gender mistakes and more singular-plural form mistakes in Spanish. In English we saw virtually no fundamental differences between the language models. This seems to hint at the existence of a universal set of representations for languages, as hypothesized by Universal Grammar. And although completely learning grammar from statistical signals might be improbable, we can still extract useful information.
Conclusion
In this paper we introduced an unsupervised approach toward learning language agnostic universal representations by formalizing Universal Grammar as an optimization problem. We showed that we can use these representations to learn tasks in one language and automatically transfer them to others with no additional training. Furthermore we studied the importance of the Wasserstein constraint through the $\lambda $ hyper-parameter. And lastly we explored the difference between a standard multi-lingual language model and UG-WGAN by studying the generated outputs of the respective language models as well as the perplexity gap growth with respect to the number of languages. | The languages considered were English, Chinese, German, Russian, Arabic, Spanish, French |
6270d5247f788c4627be57de6cf30112560c863f | 6270d5247f788c4627be57de6cf30112560c863f_0 | Q: Did they experiment with tasks other than word problems in math?
In this paper we introduced an unsupervised approach toward learning language agnostic universal representations by formalizing Universal Grammar as an optimization problem. We showed that we can use these representations to learn tasks in one language and automatically transfer them to others with no additional training. Furthermore we studied the importance of the Wasserstein constraint through the $\lambda $ hyper-parameter. And lastly we explored the difference between a standard multi-lingual language model and UG-WGAN by studying the generated outputs of the respective language models as well as the perplexity gap growth with respect to the number of languages. | They experimented with sentiment analysis and natural language inference task |