Dataset fields (string length ranges):
id: string, length 40
pid: string, length 42
input: string, length 8.37k to 169k
output: string, length 1 to 1.63k
f6202100cfb83286dc51f57c68cffdbf5cf50a3f
f6202100cfb83286dc51f57c68cffdbf5cf50a3f_0
Q: What are 3 novel fusion techniques that are proposed? Text: Introduction A number of works have explored integrating the visual modality into Neural Machine Translation (NMT) models, though there have been relatively modest gains or no gains at all from incorporating the visual modality in the translation pipeline BIBREF0. In particular, BIBREF1 leverage multi-task learning, BIBREF2 use visual adaptive training, while BIBREF3, BIBREF4, BIBREF5 use a number of fusion techniques to incorporate features obtained from the visual modality. Regarding the seemingly low utility of the visual modality in machine translation, BIBREF6 hypothesize that highly relevant visual properties are often not represented by linguistic models because they are too obvious to be explicitly mentioned in text (e.g., birds have wings, violins are brown). Similarly, BIBREF7 argue that perceptual information is already sufficiently encoded in textual cues. However, recently BIBREF0 have demonstrated that neural models are capable of leveraging the visual modality for translation, and posit that it is the nature of the Multi30k dataset (the only multimodal machine translation dataset at the time) which inhibits gains from the visual modality from emerging, due to the presence of short, simple and repetitive sentences, which renders the source text sufficient context for translation. In this work, we further investigate this hypothesis on a large-scale multimodal machine translation (MMT) dataset, named How2 BIBREF2, which has 1.57 times longer sentences, in terms of mean sentence length, when compared to Multi30k. To this end, we restrict ourselves to the Sequence-to-Sequence (Seq2Seq) framework and propose three simple but novel fusion techniques to ensure the utilization of visual context during different stages (Input Context Encoding, Attention and Supervision) of the Sequence-to-Sequence transduction pipeline. We then evaluate and analyze the results for further insights, with the goal of testing the utility of the visual modality for NMT under full source-side linguistic context. Proposed Fusion Techniques In this section, we describe three additions to the Seq2Seq model to ensure that the visual context is utilized at different stages, namely when computing context during each step of the decoder, during attention, as well as when computing the supervision signal in the Sequence-to-Sequence pipeline. This is done to encourage the Seq2Seq NMT model to make use of the visual features under full linguistic context. In each case, we assume that the visual features are fine-tuned using a visual encoder, which is trained jointly alongside the Seq2Seq model. Proposed Fusion Techniques ::: Step-Wise Decoder Fusion Our first proposed technique is the step-wise decoder fusion of visual features during every prediction step, i.e., we concatenate the visual encoding as context at each step of the decoding process. This differs from the usual practice of passing the visual feature only at the beginning of the decoding process BIBREF5. Proposed Fusion Techniques ::: Multimodal Attention Modulation Similar to general attention BIBREF8, wherein a variable-length alignment vector $a_{th}(s)$, whose size equals the number of time steps on the source side, is derived by comparing the current target hidden state $h_{t}$ with each source hidden state $\overline{h_{s}}$, we consider a variant wherein the visual encoding $v_{t}$ is used to calculate an attention distribution $a_{tv}(s)$ over the source encodings as well. 
Then, the true attention distribution $a_{t}(s)$ is computed as an interpolation between the visual and text-based attention scores. The score function is a content-based scoring mechanism, as usual. This formulation differs from BIBREF3 in that we use both the natural language and the visual modality to compute attention over the source sentence, rather than having attention over images. Since attention is computed over the same source embeddings (arising from a single encoder) using two different modalities, our approach also differs from BIBREF4, which focuses on combining the attention scores of multiple source encoders. Proposed Fusion Techniques ::: Visual-Semantic (VS) Regularizer In terms of leveraging the visual modality for supervision, BIBREF1 use multi-task learning to learn grounded representations through image representation prediction. However, to our knowledge, visual-semantic supervision hasn't been much explored for multimodal translation in terms of loss functions. Our proposed technique is the inclusion of visual-semantic supervision in the machine translation model. Recently, BIBREF9 proposed an optimal transport based loss function which computes the distance between the word embeddings of the predicted sentence and the target sentence and uses it as a regularizer $L_{\text{ot}}^{\text{tgt}}$. The purpose of this term is to provide the model with sequence-level supervision. We leverage this idea by including a cosine distance term, $L_{\text{cosine}}^{\text{visual}}$, between the visual encoding (which is at the sentence level) and the target/predicted sentence embeddings (computed as the average of the target/predicted word embeddings). The purpose of this distance term is to provide sequence-level supervision by aligning the visual and text embeddings. In practice, as in BIBREF9, we introduce a hyperparameter in the loss function: where $\gamma $ is a hyper-parameter balancing the effect of the loss components (a hyperparameter separate from the one in Section 2.2). Results and Analysis Throughout our experiments, we use the 300-hour subset of the How2 dataset BIBREF10, which contains 300 hours of videos, sentence-level time alignments to the ground-truth English subtitles, and Portuguese translations of the English subtitles. The How2 dataset has 2048-dimensional pre-trained ResNeXt embeddings BIBREF11 available for each of the video clips aligned to the sentences. Further, our baseline model is the canonical Seq2Seq model BIBREF12 consisting of bidirectional LSTMs as the encoder and decoder, general attention BIBREF8 and length normalization BIBREF13. In all cases, we use an embedding size of 300 and a hidden size of 512. Whenever the visual modality is used, we encode each of the visual features to a 300-dimensional vector through an encoder (consisting of a linear layer followed by batch normalization and a ReLU non-linearity), which is also trained end-to-end with the Seq2Seq model. Further, to integrate sequence-level supervision as in BIBREF9, we utilize the Geomloss library, which provides a batched implementation of the Sinkhorn algorithm for the optimal transport computation. For all the translation experiments, we preprocess the data by lowercasing and removing punctuation BIBREF2, and construct the vocabulary at the word level. The Adam optimizer with a learning rate of 0.001 and a learning rate decay of 0.5 is used throughout to train our models. 
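As a concrete illustration of the visual-semantic regularizer described above, here is a minimal PyTorch-style sketch. The combined objective (cross-entropy plus the $\gamma$-weighted OT and cosine terms) is our reading of the description, not the paper's verbatim equation, and all function and tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def vs_cosine_term(visual_enc, word_embeddings, mask):
    """Cosine-distance term L_cosine^visual between the sentence-level
    visual encoding and the mean target/predicted word embedding.

    visual_enc:      (batch, d)    visual encoding per sentence
    word_embeddings: (batch, T, d) target or predicted word embeddings
    mask:            (batch, T)    1 for real tokens, 0 for padding
    """
    mask = mask.float()
    lengths = mask.sum(dim=1, keepdim=True).clamp(min=1.0)
    sent_emb = (word_embeddings * mask.unsqueeze(-1)).sum(dim=1) / lengths
    # Cosine distance = 1 - cosine similarity, averaged over the batch.
    return (1.0 - F.cosine_similarity(visual_enc, sent_emb, dim=-1)).mean()

def total_loss(ce_loss, ot_loss, cosine_loss, gamma=0.1):
    # Assumed combination: cross-entropy plus gamma-weighted
    # sequence-level regularizers (OT term and the VS cosine term).
    return ce_loss + gamma * (ot_loss + cosine_loss)
```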
Results and Analysis ::: Experimental Results The performance of the models is summarized in Table TABREF9, along with the gains in BLEU points. From Table TABREF9, we can make a few observations: The visual modality leads to modest gains in BLEU scores. The proposed VS regularizer leads to a slightly higher gain when compared to the Decoder-Fusion and Attention Modulation techniques for the En-Pt language pair. Further, the gains from incorporating the visual modality are smaller for Multimodal Attention and VS Regularization in the case of the reversed language pair of Pt-En (Table TABREF10), even though the visual modality is common to both languages. This can possibly be attributed to the How2 dataset creation process, wherein the videos were first aligned with English sentences and the Portuguese translations were created afterwards, implying a reduction in correspondence with the visual modality due to errors introduced in the translation process. Results and Analysis ::: Discussion To analyze the reasons for the modest gains, despite incorporating multiple techniques to effectively leverage the visual modality for machine translation, we inspect the dataset as well as the proposed mechanisms. Results and Analysis ::: Discussion ::: PCA of Visual Features We first investigate and compare the visual feature quality of the How2 dataset with respect to that of the Multi30k dataset. To analyze the discriminativeness of the visual features for both of these datasets, we leverage an analysis mechanism used in BIBREF14 in the context of analyzing word embedding discriminativeness. We analyze the variance of the visual features corresponding to each sentence in the training set. Since the visual features semantically represent the sentence as well, we can analyze how well the features are able to discriminate between the sentences, and consequently between the individual words, as a measure of their utility for NMT. Figure FIGREF14 (Top) shows the variance explained by the top 100 principal components, obtained by applying PCA to the How2 and Multi30k training set visual features. The original feature dimension is 2048 in both cases. It is clear from Figure FIGREF14 that most of the energy of the visual feature space resides in a low-dimensional subspace BIBREF14. In other words, there exist a few directions in the embedding space which disproportionately explain the variance. These "common" directions affect all of the embeddings in the same way, rendering them less discriminative. Figure FIGREF14 also shows the cumulative variance explained by the top 10, 20, 50 and 100 principal components, respectively. It is clear that the visual features in the case of the How2 dataset are much more dominated by the "common" dimensions when compared to the Multi30k dataset. Further, this analysis is still at the sentence level, i.e., the visual features are not very discriminative among individual sentences, which further aggravates the problem at the token level. This suggests that the existing visual features aren't discriminative enough to expect benefits from the visual modality in NMT, since they won't provide discriminativeness among the vocabulary elements at the token level during prediction. Further, this also indicates that under a subword vocabulary such as BPE BIBREF15 or SentencePiece BIBREF16, the utility of such visual embeddings will only degrade further. Results and Analysis ::: Discussion ::: Comparison of Attention Components In this section, we analyze the visual and text-based attention mechanisms. 
We find that the visual attention is very sparse, in that just one source encoding is attended to (the maximum visual attention over source encodings, across the test set, has mean 0.99 and standard deviation 0.015), thereby limiting the use of modulation. Thus, in practice, we find that a small weight ($\gamma =0.1$) is necessary to prevent degradation due to this sparse visual attention component. Figures FIGREF18 & FIGREF19 show the comparison of visual and text-based attention for two sentences, one long source sentence of length 21 and one short source sentence of length 7. In both cases, we find that the visual component of the attention hasn't learnt any variation over the source encodings, again suggesting that the visual embeddings do not lend themselves to enhancing token-level discriminativeness during prediction. We find this to be consistent across sentences of different lengths. Conclusions and Future Work To conclude, we investigated the utility of the visual modality for NMT under full linguistic context on a new large-scale MMT dataset named How2. Our results on the How2 dataset confirm the general consensus that the visual modality does not lead to any significant gains for NMT; however, unlike BIBREF0, we attribute the relatively modest gains to the limited discriminativeness offered by the existing visual features, rather than to the length of the sentences in the dataset. We validate this hypothesis quantitatively through a PCA-based analysis of the visual features, as well as qualitatively by analyzing the attention components. We hope that our work will lead to more useful techniques and better visual features for MMT. An immediate future direction to explore would be to construct more discriminative features for utilizing the visual modality in NMT.
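To illustrate the PCA-based discriminativeness analysis discussed in the text above, the following sketch computes the cumulative variance explained by the top principal components of a matrix of sentence-level visual features. The random feature matrix is only a stand-in for the actual 2048-dimensional ResNeXt features; this is not the authors' analysis code.

```python
import numpy as np

def explained_variance_topk(features, ks=(10, 20, 50, 100)):
    """Cumulative fraction of variance explained by the top-k principal
    components of a (num_sentences, feature_dim) visual feature matrix."""
    centered = features - features.mean(axis=0, keepdims=True)
    # Singular values of the centered matrix give the PCA spectrum.
    singular_values = np.linalg.svd(centered, compute_uv=False)
    variance = singular_values ** 2
    cumulative = np.cumsum(variance) / variance.sum()
    return {k: float(cumulative[k - 1]) for k in ks}

# Stand-in for the 2048-dimensional sentence-level visual features.
fake_how2_features = np.random.randn(1000, 2048).astype(np.float32)
print(explained_variance_topk(fake_how2_features))
```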
Step-Wise Decoder Fusion, Multimodal Attention Modulation, Visual-Semantic (VS) Regularizer
bd7039f81a5417474efa36f703ebddcf51835254
bd7039f81a5417474efa36f703ebddcf51835254_0
Q: What are two models' architectures in proposed solution? Text: Introduction NLP tasks that require multi-hop reasoning have recently enjoyed rapid progress, especially on multi-hop question answering BIBREF0, BIBREF1, BIBREF2. Advances have benefited from rich annotations of supporting evidence, as in the popular multi-hop QA and relation extraction benchmarks, e.g., HotpotQA BIBREF3 and DocRED BIBREF4, where the evidence sentences for the reasoning process were labeled by human annotators. Such evidence annotations are crucial for modern model training, since they provide finer-grained supervision for better guiding the model learning. Furthermore, they allow a pipeline fashion of model training, with each step, such as passage ranking and answer extraction, trained as a supervised learning sub-task. This is crucial from a practical perspective, in order to reduce the memory usage when handling a large amount of inputs with advanced, large pre-trained models BIBREF5, BIBREF6, BIBREF7. Manual evidence annotation is expensive, so there are only a few benchmarks with supporting evidence annotated. Even for these datasets, the structures of the annotations are still limited, as new model designs keep emerging and they may require different forms of evidence annotations. As a result, the supervision from these datasets can still be insufficient for training accurate models. Taking question answering with multi-hop reasoning as an example, annotating only supporting passages is not sufficient to show the reasoning processes due to the lack of necessary structural information (Figure FIGREF1). One example is the order of annotated evidence, which is crucial in logic reasoning and the importance of which has also been demonstrated in text-based QA BIBREF8. The other example is how the annotated evidence pieces are connected, which requires at least the definition of arguments, such as a linking entity, concept, or event. Such information has proved useful by the recently popular entity-centric methods BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF0, BIBREF2 and intuitively will be a benefit to these methods if available. We propose a cooperative game approach to recovering the reasoning chains with the aforementioned necessary structural information for multi-hop QA. Each recovered chain corresponds to a list of ordered passages and each pair of adjacent passages is connected with a linking entity. Specifically, we start with a model, the Ranker, which selects a sequence of passages arriving at the answers, with the restriction that each adjacent passage pair shares at least an entity. This is essentially an unsupervised task and the selection suffers from noise and ambiguity. Therefore we introduce another model, the Reasoner, which predicts the exact linking entity that points to the next passage. The two models play a cooperative game and are rewarded when they find a consistent chain. In this way, we restrict the selection to satisfy not only the format constraints (i.e., ordered passages with connected adjacencies) but also the semantic constraints (i.e., finding the next passage given that the partial selection can be effectively modeled by a Reasoner). Therefore, the selection can be less noisy. We evaluate the proposed method on datasets with different properties, i.e., HotpotQA and MedHop BIBREF13, to cover cases with both 2-hop and 3-hop reasoning. We created labeled reasoning chains for both datasets. 
Experimental results demonstrate the significant advantage of our proposed approach. Task Definition Reasoning Chains Examples of reasoning chains in HotpotQA and MedHop are shown in Figure FIGREF1. Formally, we aim at recovering the reasoning chain in the form of $(p_1 \rightarrow e_{1,2} \rightarrow p_2 \rightarrow e_{2,3} \rightarrow \cdots \rightarrow e_{n-1,n} \rightarrow p_n)$, where each $p_i$ is a passage and each $e_{i,i+1}$ is an entity that connects $p_i$ and $p_{i+1}$, i.e., appearing in both passages. The last passage $p_n$ in the chain contains the correct answer. We say $p_i$ connects $e_{i-1,i}$ and $e_{i,i+1}$ in the sense that it describes a relationship between the two entities. Our Task Given a QA pair $(q,a)$ and all its candidate passages $\mathcal {P}$, we can extract all possible candidate chains that satisfy the conditions mentioned above, denoted as $\mathcal {C}$. The goal of reasoning chain recovery is to extract the correct chains from all the candidates, given $q,a$ and $\mathcal {P}$ as inputs. Related Work Although there are recent interests on predicting reasoning chains for multi-hop QA BIBREF0, BIBREF14, BIBREF2, they all consider a fully supervised setting; i.e., annotated reasoning chains are available. Our work is the first to recover reasoning chains in a more general unsupervised setting, thus falling into the direction of denoising over distant supervised signals. From this perspective, the most relevant studies in the NLP field includes BIBREF15, BIBREF16 for evidence identification in open-domain QA and BIBREF17, BIBREF18, BIBREF19 for rationale recovery. Method The task of recovering reasoning chains is essentially an unsupervised problem, as we have no access to annotated reasoning chains. Therefore, we resort to the noisy training signal from chains obtained by distant supervision. We first propose a conditional selection model that optimizes the passage selection by considering their orders (Section SECREF4). We then propose a cooperative Reasoner-Ranker game (Section SECREF12) in which the Reasoner recovers the linking entities that point to the next passage. This enhancement encourages the Ranker to select the chains such that their distribution is easier for a linking entity prediction model (Reasoner) to capture. Therefore, it enables our model to denoise the supervision signals while recovering chains with entity information. Figure FIGREF3 gives our overall framework, with a flow describing how the Reasoner passes additional rewards to the Ranker. Method ::: Passage Ranking Model The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\mathcal {P} = \lbrace p_1, p_2 ... p_K\rbrace $ from a pool of candidates, and outputs a chain of selected passages. Method ::: Passage Ranking Model ::: Passage Scoring For each step of the chain, the Ranker estimates a distribution of the selection of each passage. To this end we first encode the question and passage with a 2-layer bi-directional GRU network, resulting in an encoded question $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ for each passage $p_i \in P$ of length $M_i$. 
Then we use the MatchLSTM model BIBREF20 to get the matching score between $\mathbf {Q}$ and each $\mathbf {H}_i$ and derive the distribution of passage selection $P(p_i|q)$ (see Appendix SECREF6 for details). We denote $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity. Method ::: Passage Ranking Model ::: Conditional Selection To model passage dependency along the chain of reasoning, we use a hard selection model that builds a chain incrementally. Provided with the $K$ passages, at each step $t$ the Ranker computes $P^t(p_i|\mathbf {Q}^{t-1}), i = 0, ..., K$, which is the probability of selecting passage $p_i$ conditioned on the query and previous states representation $\mathbf {Q}^{t-1}$. Then we sample one passage $p^t_{\tau }$ according to the predicted selection probability. The first step starts with the original question $\mathbf {Q}^0$. A feed-forward network is used to project the concatenation of query encoding and selected passage encoding $\tilde{\mathbf {m}}^t_{p_{\tau }}$ back to the query space, and the new query $\mathbf {Q}^{t+1}$ is used to select the next passage. Method ::: Passage Ranking Model ::: Reward via Distant Supervision We use policy gradient BIBREF21 to optimize our model. As we have no access to annotated reasoning chains during training, the reward comes from distant supervision. Specifically, we reward the Ranker if a selected passage appears as the corresponding part of a distant supervised chain in $\mathcal {C}$. The model receives immediate reward at each step of selection. In this paper we only consider chains consist of $\le 3$ passages (2-hop and 3-hop chains). For the 2-hop cases, our model predicts a chain of two passages from the candidate set $\mathcal {C}$ in the form of $p_h\rightarrow e \rightarrow p_t$. Each candidate chain satisfies that $p_t$ contains the answer, while $p_h$ and $p_t$ contain a shared entity $e$. We call $p_h$ the head passage and $p_t$ the tail passage. Let $\mathcal {P}_{T}/\mathcal {P}_{H}$ denote the set of all tail/head passages from $\mathcal {C}$. Our model receives rewards $r_h, r_t$ according to its selections: For the 3-hop cases, we need to select an additional intermediate passage $p_m$ between $p_h$ and $p_t$. If we reward any $p_m$ selection that appears in the middle of a chain in candidate chain set $\mathcal {C}$, the number of feasible options can be very large. Therefore, we make our model first select the head passage $p_h$ and the tail passage $p_t$ independently and then select $p_m$ conditioned on $(p_h,p_t)$. We further restrict that each path in $\mathcal {C}$ must have the head passage containing an entity from $q$. Then the selected $p_m$ is only rewarded if it appears in a chain in $\mathcal {C}$ that starts with $p_h$ and ends with $p_t$: Method ::: Cooperative Reasoner To alleviate the noise in the distant supervision signal $\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. 
Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game: Reasoner Step: Given the first passage $p_t$ selected by the trained Ranker, the Reasoner predicts the probability of each entity $e$ appearing in $p_t$. The Reasoner is trained with the cross-entropy loss: Ranker Step: Given the Reasoner's top-1 predicted linking entity $e$, the reward for Ranker at the $2^{\textrm {nd}}$ step is defined as: The extension to 3-hop cases is straightforward; the only difference is that the Reasoner reads both the selected $p_h$ and $p_t$ to output two entities. The Ranker receives one extra reward if the Reasoner picks the correct linking entity from $p_h$, so does $p_t$. Experiments ::: Settings ::: Datasets We evaluate our path selection model on HotpotQA bridge type questions and on the MedHop dataset. In HotpotQA, the entities are pre-processed Wiki anchor link objects and in MedHop they are drug/protein database identifiers. For HotpotQA, two supporting passages are provided along with each question. We ignore the support annotations during training and use them to create ground truth on development set: following BIBREF8, we determine the order of passages according to whether a passage contains the answer. We discard ambiguous instances. For MedHop, there is no evidence annotated. Therefore we created a new evaluation dataset by manually annotating the correct paths for part of the development set: we first extract all candidate paths in form of passage triplets $(p_h, p_m, p_t)$, such that $p_h$ contains the query drug and $p_t$ contains the answer drug, and $p_h/p_m$ and $p_m/p_t$ are connected by shared proteins. We label a chain as positive if all the drug-protein or protein-protein interactions are described in the corresponding passages. Note that the positive paths are not unique for a question. During training we select chains based on the full passage set $\mathcal {P}$; at inference time we extract the chains from the candidate set $\mathcal {C}$ (see Section SECREF2). Experiments ::: Settings ::: Baselines and Evaluation Metric We compare our model with (1) random baseline, which randomly selects a candidate chain from the distant supervision chain set $\mathcal {C}$; and (2) distant supervised MatchLSTM, which uses the same base model as ours but scores and selects the passages independently. We use accuracy as our evaluation metric. As HotpotQA does not provide ground-truth linking entities, we only evaluate whether the supporting passages are fully recovered (yet our model still output the full chains). For MedHop we evaluate whether the whole predicted chain is correct. More details can be found in Appendix SECREF7. We use BIBREF24 as word embedding for HotpotQA, and BIBREF25 for MedHop. Experiments ::: Results ::: HotpotQA We first evaluate on the 2-hop HotpotQA task. Our best performed model first selects the tail passage $p_t$ and then the head passage $p_h$, because the number of candidates of tail is smaller ($\sim $2 per question). Table TABREF21 shows the results. First, training a ranker with distant supervision performs significantly better than the random baseline, showing that the training process itself has a certain degree of denoising ability to distinguish the more informative signals from distant supervision labels. By introducing additional inductive bias of orders, the conditional selection model further improves with a large margin. 
Finally, our cooperative game gives the best performance, showing that a trained Reasoner has the ability of ignoring entity links that are irrelevant to the reasoning chain. Table TABREF22 demonstrates the effect of selecting directions, together with the methods' recall on head passages and tail passages. The latter is evaluated on a subset of bridge-type questions in HotpotQA which has no ambiguous support annotations in passage orders; i.e., among the two human-labeled supporting passages, only one contains the answer and thus must be a tail. The results show that selecting tail first performs better. The cooperative game mainly improves the head selection. Experiments ::: Results ::: MedHop Results in table TABREF21 show that recovering chains from MedHop is a much harder task: first, the large number of distant supervision chains in $\mathcal {C}$ introduce too much noise so the Distant Supervised Ranker improves only 3%; second, the dependent model leads to no improvement because $\mathcal {C}$ is strictly ordered given our data construction. Our cooperative game manages to remain effective and gives further improvement. Conclusions In this paper we propose the problem of recovering reasoning chains in multi-hop QA from weak supervision signals. Our model adopts an cooperative game approach where a ranker and a reasoner cooperate to select the most confident chains. Experiments on the HotpotQA and MedHop benchmarks show the effectiveness of the proposed approach. Details of MatchLSTMs for Passage Scoring and Reasoner ::: MatchLSTM for Passage Scoring Given the embeddings $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ of the question $q$, and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ of each passage $p_i \in P$, we use the MatchLSTM BIBREF20 to match $\mathbf {Q}$ and $\mathbf {H}_i$ as follows: The final vector $\tilde{\mathbf {m}}_i$ represents the matching state between $q$ and $p_i$. All the $\tilde{\mathbf {m}}_i$s are then passed to a linear layer that outputs the ranking score of each passage. We apply softmax over the scores to get the probability of passage selection $P(p_i|q)$. We denote the above computation as $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity. Details of MatchLSTMs for Passage Scoring and Reasoner ::: MatchLSTM for Reasoner Given the question embedding $\mathbf {Q}^r = \lbrace \vec{\mathbf {q}^r_0}, \vec{\mathbf {q}^r_1}, ..., \vec{\mathbf {q}^r_N}\rbrace $ and the input passage embedding $\mathbf {H}^r = \lbrace \vec{\mathbf {h}^r_{0}}, \vec{\mathbf {h}^r_{1}}, ..., \vec{\mathbf {h}^r_{M}}\rbrace $ of $p$, the Reasoner predicts the probability of each entity in the passage being the linking entity of the next passage in the chain. We use a reader model similar to BIBREF3 as our Reasoner network. We first describe an attention sub-module. Given input sequence embedding $\mathbf {A} = \lbrace \vec{\mathbf {a}_0}, \vec{\mathbf {a}_1}, ..., \vec{\mathbf {a}_N}\rbrace $ and $\mathbf {B} = \lbrace \vec{\mathbf {b}_{0}}, \vec{\mathbf {b}_{1}}, ..., \vec{\mathbf {b}_{M}}\rbrace $, we define $\tilde{\mathcal {M}} = \text{Attention}(\mathbf {A}, \mathbf {B})$: where FFN denotes a feed forward layer which projects the concatenated embedding back to the original space. The Reasoner network consists of multiple attention layers, together with a bidirectional GRU encoder and skip connection. 
For each token $e_k, k = 0, 1, ..., M$ represented by $h^r_{p,k}$ at the corresponding location, we have: where $g$ is the classification layer and softmax is applied across all entities to get the probabilities. We denote the computation above as $P^r(e_k| \mathbf {p}) = \textrm {MatchLSTM.Reader}(e_k, \mathbf {p})$ for simplicity. Definition of Chain Accuracy In HotpotQA, on average we can find 6 candidate chains (2-hop) in an instance, and the human-labeled true reasoning chain is unique. A predicted chain is correct if it contains exactly the supporting passages (exact match of passages). In MedHop, on average we can find 30 candidate chains (3-hop). For each candidate chain, our human annotators labeled whether it is correct or not, and the correct reasoning chain is not unique. A predicted chain is correct if it is one of the chains that the humans labeled as correct. The accuracy is defined as the ratio:
Reasoner model, also implemented with the MatchLSTM architecture, Ranker model
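The conditional passage selection described in the passage above can be sketched roughly as follows: at each step, the Ranker scores all candidate passages against the current query state, samples one passage, and projects the concatenation of the query and selected-passage representations back to the query space for the next step. The bilinear scorer below is a stand-in for the MatchLSTM scorer, and all names and dimensions are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ConditionalSelector(nn.Module):
    """Sketch of the Ranker's conditional (step-wise) passage selection."""

    def __init__(self, dim):
        super().__init__()
        # Stand-in for the MatchLSTM scorer: one score per passage.
        self.scorer = nn.Bilinear(dim, dim, 1)
        # Projects [query; selected passage] back to the query space.
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, query, passages, n_steps=2):
        """query: (dim,), passages: (K, dim). Returns the selected passage
        indices and the log-probabilities needed for policy gradient."""
        selections, log_probs = [], []
        q = query
        for _ in range(n_steps):
            scores = self.scorer(q.expand_as(passages), passages).squeeze(-1)
            dist = torch.distributions.Categorical(logits=scores)
            idx = dist.sample()                      # hard selection
            selections.append(idx.item())
            log_probs.append(dist.log_prob(idx))
            q = self.update(torch.cat([q, passages[idx]], dim=-1))
        return selections, torch.stack(log_probs)
```

A REINFORCE-style update would then weight the returned log-probabilities by the distant-supervision rewards described in the passage.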
022e5c996a72aeab890401a7fdb925ecd0570529
022e5c996a72aeab890401a7fdb925ecd0570529_0
Q: How do two models cooperate to select the most confident chains? Text: Introduction NLP tasks that require multi-hop reasoning have recently enjoyed rapid progress, especially on multi-hop question answering BIBREF0, BIBREF1, BIBREF2. Advances have benefited from rich annotations of supporting evidence, as in the popular multi-hop QA and relation extraction benchmarks, e.g., HotpotQA BIBREF3 and DocRED BIBREF4, where the evidence sentences for the reasoning process were labeled by human annotators. Such evidence annotations are crucial for modern model training, since they provide finer-grained supervision for better guiding the model learning. Furthermore, they allow a pipeline fashion of model training, with each step, such as passage ranking and answer extraction, trained as a supervised learning sub-task. This is crucial from a practical perspective, in order to reduce the memory usage when handling a large amount of inputs with advanced, large pre-trained models BIBREF5, BIBREF6, BIBREF7. Manual evidence annotation is expensive, so there are only a few benchmarks with supporting evidence annotated. Even for these datasets, the structures of the annotations are still limited, as new model designs keep emerging and they may require different forms of evidence annotations. As a result, the supervision from these datasets can still be insufficient for training accurate models. Taking question answering with multi-hop reasoning as an example, annotating only supporting passages is not sufficient to show the reasoning processes due to the lack of necessary structural information (Figure FIGREF1). One example is the order of annotated evidence, which is crucial in logic reasoning and the importance of which has also been demonstrated in text-based QA BIBREF8. The other example is how the annotated evidence pieces are connected, which requires at least the definition of arguments, such as a linking entity, concept, or event. Such information has proved useful by the recently popular entity-centric methods BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF0, BIBREF2 and intuitively will be a benefit to these methods if available. We propose a cooperative game approach to recovering the reasoning chains with the aforementioned necessary structural information for multi-hop QA. Each recovered chain corresponds to a list of ordered passages and each pair of adjacent passages is connected with a linking entity. Specifically, we start with a model, the Ranker, which selects a sequence of passages arriving at the answers, with the restriction that each adjacent passage pair shares at least an entity. This is essentially an unsupervised task and the selection suffers from noise and ambiguity. Therefore we introduce another model, the Reasoner, which predicts the exact linking entity that points to the next passage. The two models play a cooperative game and are rewarded when they find a consistent chain. In this way, we restrict the selection to satisfy not only the format constraints (i.e., ordered passages with connected adjacencies) but also the semantic constraints (i.e., finding the next passage given that the partial selection can be effectively modeled by a Reasoner). Therefore, the selection can be less noisy. We evaluate the proposed method on datasets with different properties, i.e., HotpotQA and MedHop BIBREF13, to cover cases with both 2-hop and 3-hop reasoning. We created labeled reasoning chains for both datasets. 
Experimental results demonstrate the significant advantage of our proposed approach. Task Definition Reasoning Chains Examples of reasoning chains in HotpotQA and MedHop are shown in Figure FIGREF1. Formally, we aim at recovering the reasoning chain in the form of $(p_1 \rightarrow e_{1,2} \rightarrow p_2 \rightarrow e_{2,3} \rightarrow \cdots \rightarrow e_{n-1,n} \rightarrow p_n)$, where each $p_i$ is a passage and each $e_{i,i+1}$ is an entity that connects $p_i$ and $p_{i+1}$, i.e., appearing in both passages. The last passage $p_n$ in the chain contains the correct answer. We say $p_i$ connects $e_{i-1,i}$ and $e_{i,i+1}$ in the sense that it describes a relationship between the two entities. Our Task Given a QA pair $(q,a)$ and all its candidate passages $\mathcal {P}$, we can extract all possible candidate chains that satisfy the conditions mentioned above, denoted as $\mathcal {C}$. The goal of reasoning chain recovery is to extract the correct chains from all the candidates, given $q,a$ and $\mathcal {P}$ as inputs. Related Work Although there are recent interests on predicting reasoning chains for multi-hop QA BIBREF0, BIBREF14, BIBREF2, they all consider a fully supervised setting; i.e., annotated reasoning chains are available. Our work is the first to recover reasoning chains in a more general unsupervised setting, thus falling into the direction of denoising over distant supervised signals. From this perspective, the most relevant studies in the NLP field includes BIBREF15, BIBREF16 for evidence identification in open-domain QA and BIBREF17, BIBREF18, BIBREF19 for rationale recovery. Method The task of recovering reasoning chains is essentially an unsupervised problem, as we have no access to annotated reasoning chains. Therefore, we resort to the noisy training signal from chains obtained by distant supervision. We first propose a conditional selection model that optimizes the passage selection by considering their orders (Section SECREF4). We then propose a cooperative Reasoner-Ranker game (Section SECREF12) in which the Reasoner recovers the linking entities that point to the next passage. This enhancement encourages the Ranker to select the chains such that their distribution is easier for a linking entity prediction model (Reasoner) to capture. Therefore, it enables our model to denoise the supervision signals while recovering chains with entity information. Figure FIGREF3 gives our overall framework, with a flow describing how the Reasoner passes additional rewards to the Ranker. Method ::: Passage Ranking Model The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\mathcal {P} = \lbrace p_1, p_2 ... p_K\rbrace $ from a pool of candidates, and outputs a chain of selected passages. Method ::: Passage Ranking Model ::: Passage Scoring For each step of the chain, the Ranker estimates a distribution of the selection of each passage. To this end we first encode the question and passage with a 2-layer bi-directional GRU network, resulting in an encoded question $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ for each passage $p_i \in P$ of length $M_i$. 
Then we use the MatchLSTM model BIBREF20 to get the matching score between $\mathbf {Q}$ and each $\mathbf {H}_i$ and derive the distribution of passage selection $P(p_i|q)$ (see Appendix SECREF6 for details). We denote $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity. Method ::: Passage Ranking Model ::: Conditional Selection To model passage dependency along the chain of reasoning, we use a hard selection model that builds a chain incrementally. Provided with the $K$ passages, at each step $t$ the Ranker computes $P^t(p_i|\mathbf {Q}^{t-1}), i = 0, ..., K$, which is the probability of selecting passage $p_i$ conditioned on the query and previous states representation $\mathbf {Q}^{t-1}$. Then we sample one passage $p^t_{\tau }$ according to the predicted selection probability. The first step starts with the original question $\mathbf {Q}^0$. A feed-forward network is used to project the concatenation of query encoding and selected passage encoding $\tilde{\mathbf {m}}^t_{p_{\tau }}$ back to the query space, and the new query $\mathbf {Q}^{t+1}$ is used to select the next passage. Method ::: Passage Ranking Model ::: Reward via Distant Supervision We use policy gradient BIBREF21 to optimize our model. As we have no access to annotated reasoning chains during training, the reward comes from distant supervision. Specifically, we reward the Ranker if a selected passage appears as the corresponding part of a distant supervised chain in $\mathcal {C}$. The model receives immediate reward at each step of selection. In this paper we only consider chains consist of $\le 3$ passages (2-hop and 3-hop chains). For the 2-hop cases, our model predicts a chain of two passages from the candidate set $\mathcal {C}$ in the form of $p_h\rightarrow e \rightarrow p_t$. Each candidate chain satisfies that $p_t$ contains the answer, while $p_h$ and $p_t$ contain a shared entity $e$. We call $p_h$ the head passage and $p_t$ the tail passage. Let $\mathcal {P}_{T}/\mathcal {P}_{H}$ denote the set of all tail/head passages from $\mathcal {C}$. Our model receives rewards $r_h, r_t$ according to its selections: For the 3-hop cases, we need to select an additional intermediate passage $p_m$ between $p_h$ and $p_t$. If we reward any $p_m$ selection that appears in the middle of a chain in candidate chain set $\mathcal {C}$, the number of feasible options can be very large. Therefore, we make our model first select the head passage $p_h$ and the tail passage $p_t$ independently and then select $p_m$ conditioned on $(p_h,p_t)$. We further restrict that each path in $\mathcal {C}$ must have the head passage containing an entity from $q$. Then the selected $p_m$ is only rewarded if it appears in a chain in $\mathcal {C}$ that starts with $p_h$ and ends with $p_t$: Method ::: Cooperative Reasoner To alleviate the noise in the distant supervision signal $\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. 
Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game: Reasoner Step: Given the first passage $p_t$ selected by the trained Ranker, the Reasoner predicts the probability of each entity $e$ appearing in $p_t$. The Reasoner is trained with the cross-entropy loss: Ranker Step: Given the Reasoner's top-1 predicted linking entity $e$, the reward for Ranker at the $2^{\textrm {nd}}$ step is defined as: The extension to 3-hop cases is straightforward; the only difference is that the Reasoner reads both the selected $p_h$ and $p_t$ to output two entities. The Ranker receives one extra reward if the Reasoner picks the correct linking entity from $p_h$, so does $p_t$. Experiments ::: Settings ::: Datasets We evaluate our path selection model on HotpotQA bridge type questions and on the MedHop dataset. In HotpotQA, the entities are pre-processed Wiki anchor link objects and in MedHop they are drug/protein database identifiers. For HotpotQA, two supporting passages are provided along with each question. We ignore the support annotations during training and use them to create ground truth on development set: following BIBREF8, we determine the order of passages according to whether a passage contains the answer. We discard ambiguous instances. For MedHop, there is no evidence annotated. Therefore we created a new evaluation dataset by manually annotating the correct paths for part of the development set: we first extract all candidate paths in form of passage triplets $(p_h, p_m, p_t)$, such that $p_h$ contains the query drug and $p_t$ contains the answer drug, and $p_h/p_m$ and $p_m/p_t$ are connected by shared proteins. We label a chain as positive if all the drug-protein or protein-protein interactions are described in the corresponding passages. Note that the positive paths are not unique for a question. During training we select chains based on the full passage set $\mathcal {P}$; at inference time we extract the chains from the candidate set $\mathcal {C}$ (see Section SECREF2). Experiments ::: Settings ::: Baselines and Evaluation Metric We compare our model with (1) random baseline, which randomly selects a candidate chain from the distant supervision chain set $\mathcal {C}$; and (2) distant supervised MatchLSTM, which uses the same base model as ours but scores and selects the passages independently. We use accuracy as our evaluation metric. As HotpotQA does not provide ground-truth linking entities, we only evaluate whether the supporting passages are fully recovered (yet our model still output the full chains). For MedHop we evaluate whether the whole predicted chain is correct. More details can be found in Appendix SECREF7. We use BIBREF24 as word embedding for HotpotQA, and BIBREF25 for MedHop. Experiments ::: Results ::: HotpotQA We first evaluate on the 2-hop HotpotQA task. Our best performed model first selects the tail passage $p_t$ and then the head passage $p_h$, because the number of candidates of tail is smaller ($\sim $2 per question). Table TABREF21 shows the results. First, training a ranker with distant supervision performs significantly better than the random baseline, showing that the training process itself has a certain degree of denoising ability to distinguish the more informative signals from distant supervision labels. By introducing additional inductive bias of orders, the conditional selection model further improves with a large margin. 
Finally, our cooperative game gives the best performance, showing that a trained Reasoner has the ability of ignoring entity links that are irrelevant to the reasoning chain. Table TABREF22 demonstrates the effect of selecting directions, together with the methods' recall on head passages and tail passages. The latter is evaluated on a subset of bridge-type questions in HotpotQA which has no ambiguous support annotations in passage orders; i.e., among the two human-labeled supporting passages, only one contains the answer and thus must be a tail. The results show that selecting tail first performs better. The cooperative game mainly improves the head selection. Experiments ::: Results ::: MedHop Results in table TABREF21 show that recovering chains from MedHop is a much harder task: first, the large number of distant supervision chains in $\mathcal {C}$ introduce too much noise so the Distant Supervised Ranker improves only 3%; second, the dependent model leads to no improvement because $\mathcal {C}$ is strictly ordered given our data construction. Our cooperative game manages to remain effective and gives further improvement. Conclusions In this paper we propose the problem of recovering reasoning chains in multi-hop QA from weak supervision signals. Our model adopts an cooperative game approach where a ranker and a reasoner cooperate to select the most confident chains. Experiments on the HotpotQA and MedHop benchmarks show the effectiveness of the proposed approach. Details of MatchLSTMs for Passage Scoring and Reasoner ::: MatchLSTM for Passage Scoring Given the embeddings $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ of the question $q$, and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ of each passage $p_i \in P$, we use the MatchLSTM BIBREF20 to match $\mathbf {Q}$ and $\mathbf {H}_i$ as follows: The final vector $\tilde{\mathbf {m}}_i$ represents the matching state between $q$ and $p_i$. All the $\tilde{\mathbf {m}}_i$s are then passed to a linear layer that outputs the ranking score of each passage. We apply softmax over the scores to get the probability of passage selection $P(p_i|q)$. We denote the above computation as $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity. Details of MatchLSTMs for Passage Scoring and Reasoner ::: MatchLSTM for Reasoner Given the question embedding $\mathbf {Q}^r = \lbrace \vec{\mathbf {q}^r_0}, \vec{\mathbf {q}^r_1}, ..., \vec{\mathbf {q}^r_N}\rbrace $ and the input passage embedding $\mathbf {H}^r = \lbrace \vec{\mathbf {h}^r_{0}}, \vec{\mathbf {h}^r_{1}}, ..., \vec{\mathbf {h}^r_{M}}\rbrace $ of $p$, the Reasoner predicts the probability of each entity in the passage being the linking entity of the next passage in the chain. We use a reader model similar to BIBREF3 as our Reasoner network. We first describe an attention sub-module. Given input sequence embedding $\mathbf {A} = \lbrace \vec{\mathbf {a}_0}, \vec{\mathbf {a}_1}, ..., \vec{\mathbf {a}_N}\rbrace $ and $\mathbf {B} = \lbrace \vec{\mathbf {b}_{0}}, \vec{\mathbf {b}_{1}}, ..., \vec{\mathbf {b}_{M}}\rbrace $, we define $\tilde{\mathcal {M}} = \text{Attention}(\mathbf {A}, \mathbf {B})$: where FFN denotes a feed forward layer which projects the concatenated embedding back to the original space. The Reasoner network consists of multiple attention layers, together with a bidirectional GRU encoder and skip connection. 
For each token $e_k, k = 0, 1, ..., M$ represented by $h^r_{p,k}$ at the corresponding location, we have: where $g$ is the classification layer and softmax is applied across all entities to get the probabilities. We denote the computation above as $P^r(e_k| \mathbf {p}) = \textrm {MatchLSTM.Reader}(e_k, \mathbf {p})$ for simplicity. Definition of Chain Accuracy In HotpotQA, on average we can find 6 candidate chains (2-hop) in an instance, and the human-labeled true reasoning chain is unique. A predicted chain is correct if it contains exactly the supporting passages (exact match of passages). In MedHop, on average we can find 30 candidate chains (3-hop). For each candidate chain, our human annotators labeled whether it is correct or not, and the correct reasoning chain is not unique. A predicted chain is correct if it is one of the chains that the humans labeled as correct. The accuracy is defined as the ratio:
Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards
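As a rough sketch of the cooperative Ranker-Reasoner reward flow summarized above: the Ranker's policy-gradient loss combines the per-step distant-supervision reward with an extra reward when the Reasoner's top-1 predicted linking entity is consistent with the selected passages. The exact reward definitions are not given in the excerpt, so the shapes and names below are simplified assumptions.

```python
import torch

def ranker_policy_loss(log_probs, in_distant_chain, reasoner_entity,
                       gold_linking_entities, extra_reward=1.0):
    """REINFORCE-style loss for the Ranker (sketch).

    log_probs:             (n_steps,) log-probs of the selected passages
    in_distant_chain:      (n_steps,) 1.0 if the selection matches a
                           distant-supervision chain at that step, else 0.0
    reasoner_entity:       entity string predicted by the Reasoner (top-1)
    gold_linking_entities: entities shared by the selected passage pair
    """
    rewards = in_distant_chain.clone()
    # Cooperative bonus: reward the 2nd selection step if the Reasoner's
    # top-1 linking entity actually connects the selected passages.
    if reasoner_entity in gold_linking_entities:
        rewards[-1] = rewards[-1] + extra_reward
    # Policy gradient: maximize expected reward = minimize -E[r * log pi].
    return -(rewards * log_probs).sum()

# Illustrative usage with hypothetical entity names:
# loss = ranker_policy_loss(log_probs, torch.tensor([1.0, 1.0]),
#                           "Entity_X", {"Entity_X", "Entity_Y"})
```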
2a950ede24b26a45613169348d5db9176fda4f82
2a950ede24b26a45613169348d5db9176fda4f82_0
Q: How many hand-labeled reasoning chains have been created? Text: Introduction NLP tasks that require multi-hop reasoning have recently enjoyed rapid progress, especially on multi-hop question answering BIBREF0, BIBREF1, BIBREF2. Advances have benefited from rich annotations of supporting evidence, as in the popular multi-hop QA and relation extraction benchmarks, e.g., HotpotQA BIBREF3 and DocRED BIBREF4, where the evidence sentences for the reasoning process were labeled by human annotators. Such evidence annotations are crucial for modern model training, since they provide finer-grained supervision for better guiding the model learning. Furthermore, they allow a pipeline fashion of model training, with each step, such as passage ranking and answer extraction, trained as a supervised learning sub-task. This is crucial from a practical perspective, in order to reduce the memory usage when handling a large amount of inputs with advanced, large pre-trained models BIBREF5, BIBREF6, BIBREF7. Manual evidence annotation is expensive, so there are only a few benchmarks with supporting evidence annotated. Even for these datasets, the structures of the annotations are still limited, as new model designs keep emerging and they may require different forms of evidence annotations. As a result, the supervision from these datasets can still be insufficient for training accurate models. Taking question answering with multi-hop reasoning as an example, annotating only supporting passages is not sufficient to show the reasoning processes due to the lack of necessary structural information (Figure FIGREF1). One example is the order of annotated evidence, which is crucial in logic reasoning and the importance of which has also been demonstrated in text-based QA BIBREF8. The other example is how the annotated evidence pieces are connected, which requires at least the definition of arguments, such as a linking entity, concept, or event. Such information has proved useful by the recently popular entity-centric methods BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF0, BIBREF2 and intuitively will be a benefit to these methods if available. We propose a cooperative game approach to recovering the reasoning chains with the aforementioned necessary structural information for multi-hop QA. Each recovered chain corresponds to a list of ordered passages and each pair of adjacent passages is connected with a linking entity. Specifically, we start with a model, the Ranker, which selects a sequence of passages arriving at the answers, with the restriction that each adjacent passage pair shares at least an entity. This is essentially an unsupervised task and the selection suffers from noise and ambiguity. Therefore we introduce another model, the Reasoner, which predicts the exact linking entity that points to the next passage. The two models play a cooperative game and are rewarded when they find a consistent chain. In this way, we restrict the selection to satisfy not only the format constraints (i.e., ordered passages with connected adjacencies) but also the semantic constraints (i.e., finding the next passage given that the partial selection can be effectively modeled by a Reasoner). Therefore, the selection can be less noisy. We evaluate the proposed method on datasets with different properties, i.e., HotpotQA and MedHop BIBREF13, to cover cases with both 2-hop and 3-hop reasoning. We created labeled reasoning chains for both datasets. 
Experimental results demonstrate the significant advantage of our proposed approach. Task Definition Reasoning Chains Examples of reasoning chains in HotpotQA and MedHop are shown in Figure FIGREF1. Formally, we aim at recovering the reasoning chain in the form of $(p_1 \rightarrow e_{1,2} \rightarrow p_2 \rightarrow e_{2,3} \rightarrow \cdots \rightarrow e_{n-1,n} \rightarrow p_n)$, where each $p_i$ is a passage and each $e_{i,i+1}$ is an entity that connects $p_i$ and $p_{i+1}$, i.e., appearing in both passages. The last passage $p_n$ in the chain contains the correct answer. We say $p_i$ connects $e_{i-1,i}$ and $e_{i,i+1}$ in the sense that it describes a relationship between the two entities. Our Task Given a QA pair $(q,a)$ and all its candidate passages $\mathcal {P}$, we can extract all possible candidate chains that satisfy the conditions mentioned above, denoted as $\mathcal {C}$. The goal of reasoning chain recovery is to extract the correct chains from all the candidates, given $q,a$ and $\mathcal {P}$ as inputs. Related Work Although there are recent interests on predicting reasoning chains for multi-hop QA BIBREF0, BIBREF14, BIBREF2, they all consider a fully supervised setting; i.e., annotated reasoning chains are available. Our work is the first to recover reasoning chains in a more general unsupervised setting, thus falling into the direction of denoising over distant supervised signals. From this perspective, the most relevant studies in the NLP field includes BIBREF15, BIBREF16 for evidence identification in open-domain QA and BIBREF17, BIBREF18, BIBREF19 for rationale recovery. Method The task of recovering reasoning chains is essentially an unsupervised problem, as we have no access to annotated reasoning chains. Therefore, we resort to the noisy training signal from chains obtained by distant supervision. We first propose a conditional selection model that optimizes the passage selection by considering their orders (Section SECREF4). We then propose a cooperative Reasoner-Ranker game (Section SECREF12) in which the Reasoner recovers the linking entities that point to the next passage. This enhancement encourages the Ranker to select the chains such that their distribution is easier for a linking entity prediction model (Reasoner) to capture. Therefore, it enables our model to denoise the supervision signals while recovering chains with entity information. Figure FIGREF3 gives our overall framework, with a flow describing how the Reasoner passes additional rewards to the Ranker. Method ::: Passage Ranking Model The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\mathcal {P} = \lbrace p_1, p_2 ... p_K\rbrace $ from a pool of candidates, and outputs a chain of selected passages. Method ::: Passage Ranking Model ::: Passage Scoring For each step of the chain, the Ranker estimates a distribution of the selection of each passage. To this end we first encode the question and passage with a 2-layer bi-directional GRU network, resulting in an encoded question $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ for each passage $p_i \in P$ of length $M_i$. 
Then we use the MatchLSTM model BIBREF20 to get the matching score between $\mathbf {Q}$ and each $\mathbf {H}_i$ and derive the distribution of passage selection $P(p_i|q)$ (see Appendix SECREF6 for details). We denote $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity. Method ::: Passage Ranking Model ::: Conditional Selection To model passage dependency along the chain of reasoning, we use a hard selection model that builds a chain incrementally. Provided with the $K$ passages, at each step $t$ the Ranker computes $P^t(p_i|\mathbf {Q}^{t-1}), i = 0, ..., K$, which is the probability of selecting passage $p_i$ conditioned on the query and previous states representation $\mathbf {Q}^{t-1}$. Then we sample one passage $p^t_{\tau }$ according to the predicted selection probability. The first step starts with the original question $\mathbf {Q}^0$. A feed-forward network is used to project the concatenation of query encoding and selected passage encoding $\tilde{\mathbf {m}}^t_{p_{\tau }}$ back to the query space, and the new query $\mathbf {Q}^{t+1}$ is used to select the next passage. Method ::: Passage Ranking Model ::: Reward via Distant Supervision We use policy gradient BIBREF21 to optimize our model. As we have no access to annotated reasoning chains during training, the reward comes from distant supervision. Specifically, we reward the Ranker if a selected passage appears as the corresponding part of a distant supervised chain in $\mathcal {C}$. The model receives immediate reward at each step of selection. In this paper we only consider chains consist of $\le 3$ passages (2-hop and 3-hop chains). For the 2-hop cases, our model predicts a chain of two passages from the candidate set $\mathcal {C}$ in the form of $p_h\rightarrow e \rightarrow p_t$. Each candidate chain satisfies that $p_t$ contains the answer, while $p_h$ and $p_t$ contain a shared entity $e$. We call $p_h$ the head passage and $p_t$ the tail passage. Let $\mathcal {P}_{T}/\mathcal {P}_{H}$ denote the set of all tail/head passages from $\mathcal {C}$. Our model receives rewards $r_h, r_t$ according to its selections: For the 3-hop cases, we need to select an additional intermediate passage $p_m$ between $p_h$ and $p_t$. If we reward any $p_m$ selection that appears in the middle of a chain in candidate chain set $\mathcal {C}$, the number of feasible options can be very large. Therefore, we make our model first select the head passage $p_h$ and the tail passage $p_t$ independently and then select $p_m$ conditioned on $(p_h,p_t)$. We further restrict that each path in $\mathcal {C}$ must have the head passage containing an entity from $q$. Then the selected $p_m$ is only rewarded if it appears in a chain in $\mathcal {C}$ that starts with $p_h$ and ends with $p_t$: Method ::: Cooperative Reasoner To alleviate the noise in the distant supervision signal $\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. 
Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game: Reasoner Step: Given the first passage $p_t$ selected by the trained Ranker, the Reasoner predicts the probability of each entity $e$ appearing in $p_t$. The Reasoner is trained with the cross-entropy loss: Ranker Step: Given the Reasoner's top-1 predicted linking entity $e$, the reward for Ranker at the $2^{\textrm {nd}}$ step is defined as: The extension to 3-hop cases is straightforward; the only difference is that the Reasoner reads both the selected $p_h$ and $p_t$ to output two entities. The Ranker receives one extra reward if the Reasoner picks the correct linking entity from $p_h$, so does $p_t$. Experiments ::: Settings ::: Datasets We evaluate our path selection model on HotpotQA bridge type questions and on the MedHop dataset. In HotpotQA, the entities are pre-processed Wiki anchor link objects and in MedHop they are drug/protein database identifiers. For HotpotQA, two supporting passages are provided along with each question. We ignore the support annotations during training and use them to create ground truth on development set: following BIBREF8, we determine the order of passages according to whether a passage contains the answer. We discard ambiguous instances. For MedHop, there is no evidence annotated. Therefore we created a new evaluation dataset by manually annotating the correct paths for part of the development set: we first extract all candidate paths in form of passage triplets $(p_h, p_m, p_t)$, such that $p_h$ contains the query drug and $p_t$ contains the answer drug, and $p_h/p_m$ and $p_m/p_t$ are connected by shared proteins. We label a chain as positive if all the drug-protein or protein-protein interactions are described in the corresponding passages. Note that the positive paths are not unique for a question. During training we select chains based on the full passage set $\mathcal {P}$; at inference time we extract the chains from the candidate set $\mathcal {C}$ (see Section SECREF2). Experiments ::: Settings ::: Baselines and Evaluation Metric We compare our model with (1) random baseline, which randomly selects a candidate chain from the distant supervision chain set $\mathcal {C}$; and (2) distant supervised MatchLSTM, which uses the same base model as ours but scores and selects the passages independently. We use accuracy as our evaluation metric. As HotpotQA does not provide ground-truth linking entities, we only evaluate whether the supporting passages are fully recovered (yet our model still output the full chains). For MedHop we evaluate whether the whole predicted chain is correct. More details can be found in Appendix SECREF7. We use BIBREF24 as word embedding for HotpotQA, and BIBREF25 for MedHop. Experiments ::: Results ::: HotpotQA We first evaluate on the 2-hop HotpotQA task. Our best performed model first selects the tail passage $p_t$ and then the head passage $p_h$, because the number of candidates of tail is smaller ($\sim $2 per question). Table TABREF21 shows the results. First, training a ranker with distant supervision performs significantly better than the random baseline, showing that the training process itself has a certain degree of denoising ability to distinguish the more informative signals from distant supervision labels. By introducing additional inductive bias of orders, the conditional selection model further improves with a large margin. 
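The alternating Reasoner/Ranker optimisation described above can be sketched schematically as follows. This is not the authors' exact procedure: the Ranker and Reasoner are replaced by linear stand-ins over pre-computed features, the omitted reward formulas are assumed to be binary, and no variance-reduction baseline is used for the policy gradient.

```python
# Schematic sketch of one alternating Ranker/Reasoner round (2-hop case):
# cross-entropy for the Reasoner, REINFORCE for the Ranker, plus a bonus reward
# when the Reasoner's top-1 entity matches the distant-supervision linking entity.
import torch
import torch.nn as nn
import torch.nn.functional as F

K, E, D = 10, 5, 64                 # passages, candidate entities, feature dim
ranker = nn.Linear(D, 1)            # stand-in for MatchLSTM passage scoring
reasoner = nn.Linear(D, 1)          # stand-in for the entity Reader
opt_ranker = torch.optim.Adam(ranker.parameters(), lr=1e-3)
opt_reasoner = torch.optim.Adam(reasoner.parameters(), lr=1e-3)

passage_feats = torch.randn(K, D)   # features of the K candidate passages
entity_feats = torch.randn(E, D)    # features of entities in the selected passage
gold_entity = torch.tensor(2)       # distant-supervision linking entity (toy value)

# --- Reasoner step: cross-entropy on the linking entity of the selected passage ---
entity_logits = reasoner(entity_feats).squeeze(-1)
loss_reasoner = F.cross_entropy(entity_logits.unsqueeze(0), gold_entity.unsqueeze(0))
opt_reasoner.zero_grad()
loss_reasoner.backward()
opt_reasoner.step()

# --- Ranker step: REINFORCE with distant-supervision reward + Reasoner bonus ---
logits = ranker(passage_feats).squeeze(-1)
probs = F.softmax(logits, dim=-1)
dist = torch.distributions.Categorical(probs)
action = dist.sample()                               # sample one passage

r_distant = 1.0 if int(action) in {3, 7} else 0.0    # hypothetical passages found in C
with torch.no_grad():
    predicted_entity = reasoner(entity_feats).squeeze(-1).argmax()
r_bonus = 1.0 if int(predicted_entity) == int(gold_entity) else 0.0
reward = r_distant + r_bonus

loss_ranker = -dist.log_prob(action) * reward        # policy gradient, no baseline
opt_ranker.zero_grad()
loss_ranker.backward()
opt_ranker.step()
```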
Finally, our cooperative game gives the best performance, showing that a trained Reasoner has the ability of ignoring entity links that are irrelevant to the reasoning chain. Table TABREF22 demonstrates the effect of selecting directions, together with the methods' recall on head passages and tail passages. The latter is evaluated on a subset of bridge-type questions in HotpotQA which has no ambiguous support annotations in passage orders; i.e., among the two human-labeled supporting passages, only one contains the answer and thus must be a tail. The results show that selecting tail first performs better. The cooperative game mainly improves the head selection. Experiments ::: Results ::: MedHop Results in table TABREF21 show that recovering chains from MedHop is a much harder task: first, the large number of distant supervision chains in $\mathcal {C}$ introduce too much noise so the Distant Supervised Ranker improves only 3%; second, the dependent model leads to no improvement because $\mathcal {C}$ is strictly ordered given our data construction. Our cooperative game manages to remain effective and gives further improvement. Conclusions In this paper we propose the problem of recovering reasoning chains in multi-hop QA from weak supervision signals. Our model adopts an cooperative game approach where a ranker and a reasoner cooperate to select the most confident chains. Experiments on the HotpotQA and MedHop benchmarks show the effectiveness of the proposed approach. Details of MatchLSTMs for Passage Scoring and Reasoner ::: MatchLSTM for Passage Scoring Given the embeddings $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ of the question $q$, and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ of each passage $p_i \in P$, we use the MatchLSTM BIBREF20 to match $\mathbf {Q}$ and $\mathbf {H}_i$ as follows: The final vector $\tilde{\mathbf {m}}_i$ represents the matching state between $q$ and $p_i$. All the $\tilde{\mathbf {m}}_i$s are then passed to a linear layer that outputs the ranking score of each passage. We apply softmax over the scores to get the probability of passage selection $P(p_i|q)$. We denote the above computation as $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity. Details of MatchLSTMs for Passage Scoring and Reasoner ::: MatchLSTM for Reasoner Given the question embedding $\mathbf {Q}^r = \lbrace \vec{\mathbf {q}^r_0}, \vec{\mathbf {q}^r_1}, ..., \vec{\mathbf {q}^r_N}\rbrace $ and the input passage embedding $\mathbf {H}^r = \lbrace \vec{\mathbf {h}^r_{0}}, \vec{\mathbf {h}^r_{1}}, ..., \vec{\mathbf {h}^r_{M}}\rbrace $ of $p$, the Reasoner predicts the probability of each entity in the passage being the linking entity of the next passage in the chain. We use a reader model similar to BIBREF3 as our Reasoner network. We first describe an attention sub-module. Given input sequence embedding $\mathbf {A} = \lbrace \vec{\mathbf {a}_0}, \vec{\mathbf {a}_1}, ..., \vec{\mathbf {a}_N}\rbrace $ and $\mathbf {B} = \lbrace \vec{\mathbf {b}_{0}}, \vec{\mathbf {b}_{1}}, ..., \vec{\mathbf {b}_{M}}\rbrace $, we define $\tilde{\mathcal {M}} = \text{Attention}(\mathbf {A}, \mathbf {B})$: where FFN denotes a feed forward layer which projects the concatenated embedding back to the original space. The Reasoner network consists of multiple attention layers, together with a bidirectional GRU encoder and skip connection. 
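The attention sub-module $\tilde{\mathcal {M}} = \text{Attention}(\mathbf {A}, \mathbf {B})$ can be sketched as follows. Because its equations are not reproduced in this excerpt, standard dot-product attention is assumed, with the attended context concatenated to $\mathbf {B}$ and projected back to the original space by the FFN.

```python
# Sketch of the Attention(A, B) sub-module: each position of B attends over A,
# the attended context is concatenated to B, and a feed-forward layer (FFN)
# projects the result back to the original dimension. The attention form is an
# assumption, since the equations are omitted in the text above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.ffn = nn.Linear(2 * dim, dim)

    def forward(self, A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
        # A: (N, dim), B: (M, dim)
        scores = B @ A.transpose(0, 1)            # (M, N) similarity scores
        weights = F.softmax(scores, dim=-1)       # attention of B over A
        context = weights @ A                     # (M, dim) attended representation
        fused = torch.cat([B, context], dim=-1)   # (M, 2 * dim)
        return self.ffn(fused)                    # project back to (M, dim)


block = AttentionBlock(dim=256)
A = torch.randn(20, 256)   # e.g., encoded question tokens
B = torch.randn(40, 256)   # e.g., encoded passage tokens
M_tilde = block(A, B)
print(M_tilde.shape)       # torch.Size([40, 256])
```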
For each token $e_k, k = 0, 1, ..., M$ represented by $h^r_{p,k}$ at the corresponding location, we have: where $g$ is the classification layer and softmax is applied across all entities to get the probability. We denote the computation above as $P^r(e_k| \mathbf {p}) = \textrm {MatchLSTM.Reader}(e_k, \mathbf {p})$ for simplicity. Definition of Chain Accuracy In HotpotQA, on average we can find 6 candidate chains (2-hop) per instance, and the human-labeled true reasoning chain is unique. A predicted chain is correct if it contains exactly the supporting passages (exact match of passages). In MedHop, on average we can find 30 candidate chains (3-hop). For each candidate chain, our human annotators labeled whether it is correct or not, and the correct reasoning chain is not unique. A predicted chain is correct if it is one of the chains that humans labeled as correct. The accuracy is defined as the ratio:
Unanswerable
34af2c512ec38483754e94e1ea814aa76552d60a
34af2c512ec38483754e94e1ea814aa76552d60a_0
Q: What benchmarks are created? Text: Introduction NLP tasks that require multi-hop reasoning have recently enjoyed rapid progress, especially on multi-hop question answering BIBREF0, BIBREF1, BIBREF2. Advances have benefited from rich annotations of supporting evidence, as in the popular multi-hop QA and relation extraction benchmarks, e.g., HotpotQA BIBREF3 and DocRED BIBREF4, where the evidence sentences for the reasoning process were labeled by human annotators. Such evidence annotations are crucial for modern model training, since they provide finer-grained supervision for better guiding the model learning. Furthermore, they allow a pipeline fashion of model training, with each step, such as passage ranking and answer extraction, trained as a supervised learning sub-task. This is crucial from a practical perspective, in order to reduce the memory usage when handling a large amount of inputs with advanced, large pre-trained models BIBREF5, BIBREF6, BIBREF7. Manual evidence annotation is expensive, so there are only a few benchmarks with supporting evidence annotated. Even for these datasets, the structures of the annotations are still limited, as new model designs keep emerging and they may require different forms of evidence annotations. As a result, the supervision from these datasets can still be insufficient for training accurate models. Taking question answering with multi-hop reasoning as an example, annotating only supporting passages is not sufficient to show the reasoning processes due to the lack of necessary structural information (Figure FIGREF1). One example is the order of annotated evidence, which is crucial in logic reasoning and the importance of which has also been demonstrated in text-based QA BIBREF8. The other example is how the annotated evidence pieces are connected, which requires at least the definition of arguments, such as a linking entity, concept, or event. Such information has proved useful by the recently popular entity-centric methods BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF0, BIBREF2 and intuitively will be a benefit to these methods if available. We propose a cooperative game approach to recovering the reasoning chains with the aforementioned necessary structural information for multi-hop QA. Each recovered chain corresponds to a list of ordered passages and each pair of adjacent passages is connected with a linking entity. Specifically, we start with a model, the Ranker, which selects a sequence of passages arriving at the answers, with the restriction that each adjacent passage pair shares at least an entity. This is essentially an unsupervised task and the selection suffers from noise and ambiguity. Therefore we introduce another model, the Reasoner, which predicts the exact linking entity that points to the next passage. The two models play a cooperative game and are rewarded when they find a consistent chain. In this way, we restrict the selection to satisfy not only the format constraints (i.e., ordered passages with connected adjacencies) but also the semantic constraints (i.e., finding the next passage given that the partial selection can be effectively modeled by a Reasoner). Therefore, the selection can be less noisy. We evaluate the proposed method on datasets with different properties, i.e., HotpotQA and MedHop BIBREF13, to cover cases with both 2-hop and 3-hop reasoning. We created labeled reasoning chains for both datasets. Experimental results demonstrate the significant advantage of our proposed approach. 
Task Definition Reasoning Chains Examples of reasoning chains in HotpotQA and MedHop are shown in Figure FIGREF1. Formally, we aim at recovering the reasoning chain in the form of $(p_1 \rightarrow e_{1,2} \rightarrow p_2 \rightarrow e_{2,3} \rightarrow \cdots \rightarrow e_{n-1,n} \rightarrow p_n)$, where each $p_i$ is a passage and each $e_{i,i+1}$ is an entity that connects $p_i$ and $p_{i+1}$, i.e., appearing in both passages. The last passage $p_n$ in the chain contains the correct answer. We say $p_i$ connects $e_{i-1,i}$ and $e_{i,i+1}$ in the sense that it describes a relationship between the two entities. Our Task Given a QA pair $(q,a)$ and all its candidate passages $\mathcal {P}$, we can extract all possible candidate chains that satisfy the conditions mentioned above, denoted as $\mathcal {C}$. The goal of reasoning chain recovery is to extract the correct chains from all the candidates, given $q,a$ and $\mathcal {P}$ as inputs. Related Work Although there are recent interests on predicting reasoning chains for multi-hop QA BIBREF0, BIBREF14, BIBREF2, they all consider a fully supervised setting; i.e., annotated reasoning chains are available. Our work is the first to recover reasoning chains in a more general unsupervised setting, thus falling into the direction of denoising over distant supervised signals. From this perspective, the most relevant studies in the NLP field includes BIBREF15, BIBREF16 for evidence identification in open-domain QA and BIBREF17, BIBREF18, BIBREF19 for rationale recovery. Method The task of recovering reasoning chains is essentially an unsupervised problem, as we have no access to annotated reasoning chains. Therefore, we resort to the noisy training signal from chains obtained by distant supervision. We first propose a conditional selection model that optimizes the passage selection by considering their orders (Section SECREF4). We then propose a cooperative Reasoner-Ranker game (Section SECREF12) in which the Reasoner recovers the linking entities that point to the next passage. This enhancement encourages the Ranker to select the chains such that their distribution is easier for a linking entity prediction model (Reasoner) to capture. Therefore, it enables our model to denoise the supervision signals while recovering chains with entity information. Figure FIGREF3 gives our overall framework, with a flow describing how the Reasoner passes additional rewards to the Ranker. Method ::: Passage Ranking Model The key component of our framework is the Ranker model, which is provided with a question $q$ and $K$ passages $\mathcal {P} = \lbrace p_1, p_2 ... p_K\rbrace $ from a pool of candidates, and outputs a chain of selected passages. Method ::: Passage Ranking Model ::: Passage Scoring For each step of the chain, the Ranker estimates a distribution of the selection of each passage. To this end we first encode the question and passage with a 2-layer bi-directional GRU network, resulting in an encoded question $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ for each passage $p_i \in P$ of length $M_i$. Then we use the MatchLSTM model BIBREF20 to get the matching score between $\mathbf {Q}$ and each $\mathbf {H}_i$ and derive the distribution of passage selection $P(p_i|q)$ (see Appendix SECREF6 for details). 
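Stepping back to the task definition above, a minimal sketch of how the candidate chain set $\mathcal {C}$ might be extracted for the 2-hop case is shown below; per-passage entity sets are assumed to be pre-extracted (e.g., Wiki anchor links in HotpotQA).

```python
# Sketch of distant-supervision candidate chain extraction for the 2-hop case:
# every pair (p_h, p_t) such that p_t contains the answer and the two passages
# share an entity e yields a candidate chain p_h -> e -> p_t. Entity sets per
# passage are assumed to be pre-extracted.
from typing import Dict, List, Set, Tuple


def extract_2hop_chains(passages: Dict[int, Set[str]],
                        answer_passages: Set[int]) -> List[Tuple[int, str, int]]:
    chains = []
    for p_t in answer_passages:
        for p_h, ents_h in passages.items():
            if p_h == p_t:
                continue
            for e in ents_h & passages[p_t]:   # shared linking entity
                chains.append((p_h, e, p_t))
    return chains


# Toy example: passage id -> entities mentioned in it.
passages = {
    0: {"Barack Obama", "Honolulu"},
    1: {"Honolulu", "Hawaii"},
    2: {"Hawaii", "Pacific Ocean"},
}
print(extract_2hop_chains(passages, answer_passages={1}))
# [(0, 'Honolulu', 1), (2, 'Hawaii', 1)]
```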
We denote $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity. Method ::: Passage Ranking Model ::: Conditional Selection To model passage dependency along the chain of reasoning, we use a hard selection model that builds a chain incrementally. Provided with the $K$ passages, at each step $t$ the Ranker computes $P^t(p_i|\mathbf {Q}^{t-1}), i = 0, ..., K$, which is the probability of selecting passage $p_i$ conditioned on the query and previous states representation $\mathbf {Q}^{t-1}$. Then we sample one passage $p^t_{\tau }$ according to the predicted selection probability. The first step starts with the original question $\mathbf {Q}^0$. A feed-forward network is used to project the concatenation of query encoding and selected passage encoding $\tilde{\mathbf {m}}^t_{p_{\tau }}$ back to the query space, and the new query $\mathbf {Q}^{t+1}$ is used to select the next passage. Method ::: Passage Ranking Model ::: Reward via Distant Supervision We use policy gradient BIBREF21 to optimize our model. As we have no access to annotated reasoning chains during training, the reward comes from distant supervision. Specifically, we reward the Ranker if a selected passage appears as the corresponding part of a distant supervised chain in $\mathcal {C}$. The model receives immediate reward at each step of selection. In this paper we only consider chains consist of $\le 3$ passages (2-hop and 3-hop chains). For the 2-hop cases, our model predicts a chain of two passages from the candidate set $\mathcal {C}$ in the form of $p_h\rightarrow e \rightarrow p_t$. Each candidate chain satisfies that $p_t$ contains the answer, while $p_h$ and $p_t$ contain a shared entity $e$. We call $p_h$ the head passage and $p_t$ the tail passage. Let $\mathcal {P}_{T}/\mathcal {P}_{H}$ denote the set of all tail/head passages from $\mathcal {C}$. Our model receives rewards $r_h, r_t$ according to its selections: For the 3-hop cases, we need to select an additional intermediate passage $p_m$ between $p_h$ and $p_t$. If we reward any $p_m$ selection that appears in the middle of a chain in candidate chain set $\mathcal {C}$, the number of feasible options can be very large. Therefore, we make our model first select the head passage $p_h$ and the tail passage $p_t$ independently and then select $p_m$ conditioned on $(p_h,p_t)$. We further restrict that each path in $\mathcal {C}$ must have the head passage containing an entity from $q$. Then the selected $p_m$ is only rewarded if it appears in a chain in $\mathcal {C}$ that starts with $p_h$ and ends with $p_t$: Method ::: Cooperative Reasoner To alleviate the noise in the distant supervision signal $\mathcal {C}$, in addition to the conditional selection, we further propose a cooperative Reasoner model, also implemented with the MatchLSTM architecture (see Appendix SECREF6), to predict the linking entity from the selected passages. Intuitively, when the Ranker makes more accurate passage selections, the Reasoner will work with less noisy data and thus is easier to succeed. Specifically, the Reasoner learns to extract the linking entity from chains selected by a well-trained Ranker, and it benefits the Ranker training by providing extra rewards. Taking 2-hop as an example, we train the Ranker and Reasoner alternatively as a cooperative game: Reasoner Step: Given the first passage $p_t$ selected by the trained Ranker, the Reasoner predicts the probability of each entity $e$ appearing in $p_t$. 
The Reasoner is trained with the cross-entropy loss: Ranker Step: Given the Reasoner's top-1 predicted linking entity $e$, the reward for Ranker at the $2^{\textrm {nd}}$ step is defined as: The extension to 3-hop cases is straightforward; the only difference is that the Reasoner reads both the selected $p_h$ and $p_t$ to output two entities. The Ranker receives one extra reward if the Reasoner picks the correct linking entity from $p_h$, so does $p_t$. Experiments ::: Settings ::: Datasets We evaluate our path selection model on HotpotQA bridge type questions and on the MedHop dataset. In HotpotQA, the entities are pre-processed Wiki anchor link objects and in MedHop they are drug/protein database identifiers. For HotpotQA, two supporting passages are provided along with each question. We ignore the support annotations during training and use them to create ground truth on development set: following BIBREF8, we determine the order of passages according to whether a passage contains the answer. We discard ambiguous instances. For MedHop, there is no evidence annotated. Therefore we created a new evaluation dataset by manually annotating the correct paths for part of the development set: we first extract all candidate paths in form of passage triplets $(p_h, p_m, p_t)$, such that $p_h$ contains the query drug and $p_t$ contains the answer drug, and $p_h/p_m$ and $p_m/p_t$ are connected by shared proteins. We label a chain as positive if all the drug-protein or protein-protein interactions are described in the corresponding passages. Note that the positive paths are not unique for a question. During training we select chains based on the full passage set $\mathcal {P}$; at inference time we extract the chains from the candidate set $\mathcal {C}$ (see Section SECREF2). Experiments ::: Settings ::: Baselines and Evaluation Metric We compare our model with (1) random baseline, which randomly selects a candidate chain from the distant supervision chain set $\mathcal {C}$; and (2) distant supervised MatchLSTM, which uses the same base model as ours but scores and selects the passages independently. We use accuracy as our evaluation metric. As HotpotQA does not provide ground-truth linking entities, we only evaluate whether the supporting passages are fully recovered (yet our model still output the full chains). For MedHop we evaluate whether the whole predicted chain is correct. More details can be found in Appendix SECREF7. We use BIBREF24 as word embedding for HotpotQA, and BIBREF25 for MedHop. Experiments ::: Results ::: HotpotQA We first evaluate on the 2-hop HotpotQA task. Our best performed model first selects the tail passage $p_t$ and then the head passage $p_h$, because the number of candidates of tail is smaller ($\sim $2 per question). Table TABREF21 shows the results. First, training a ranker with distant supervision performs significantly better than the random baseline, showing that the training process itself has a certain degree of denoising ability to distinguish the more informative signals from distant supervision labels. By introducing additional inductive bias of orders, the conditional selection model further improves with a large margin. Finally, our cooperative game gives the best performance, showing that a trained Reasoner has the ability of ignoring entity links that are irrelevant to the reasoning chain. Table TABREF22 demonstrates the effect of selecting directions, together with the methods' recall on head passages and tail passages. 
The latter is evaluated on a subset of bridge-type questions in HotpotQA which has no ambiguous support annotations in passage orders; i.e., among the two human-labeled supporting passages, only one contains the answer and thus must be a tail. The results show that selecting tail first performs better. The cooperative game mainly improves the head selection. Experiments ::: Results ::: MedHop Results in table TABREF21 show that recovering chains from MedHop is a much harder task: first, the large number of distant supervision chains in $\mathcal {C}$ introduce too much noise so the Distant Supervised Ranker improves only 3%; second, the dependent model leads to no improvement because $\mathcal {C}$ is strictly ordered given our data construction. Our cooperative game manages to remain effective and gives further improvement. Conclusions In this paper we propose the problem of recovering reasoning chains in multi-hop QA from weak supervision signals. Our model adopts an cooperative game approach where a ranker and a reasoner cooperate to select the most confident chains. Experiments on the HotpotQA and MedHop benchmarks show the effectiveness of the proposed approach. Details of MatchLSTMs for Passage Scoring and Reasoner ::: MatchLSTM for Passage Scoring Given the embeddings $\mathbf {Q} = \lbrace \vec{\mathbf {q}_0}, \vec{\mathbf {q}_1}, ..., \vec{\mathbf {q}_N}\rbrace $ of the question $q$, and $\mathbf {H}_i = \lbrace \vec{\mathbf {h}_{i,0}}, \vec{\mathbf {h}_{i,1}}, ..., \vec{\mathbf {h}_{i,M_i}}\rbrace $ of each passage $p_i \in P$, we use the MatchLSTM BIBREF20 to match $\mathbf {Q}$ and $\mathbf {H}_i$ as follows: The final vector $\tilde{\mathbf {m}}_i$ represents the matching state between $q$ and $p_i$. All the $\tilde{\mathbf {m}}_i$s are then passed to a linear layer that outputs the ranking score of each passage. We apply softmax over the scores to get the probability of passage selection $P(p_i|q)$. We denote the above computation as $P(p_i|q)=\textrm {MatchLSTM}(\mathbf {H}_i, \mathbf {Q})$ for simplicity. Details of MatchLSTMs for Passage Scoring and Reasoner ::: MatchLSTM for Reasoner Given the question embedding $\mathbf {Q}^r = \lbrace \vec{\mathbf {q}^r_0}, \vec{\mathbf {q}^r_1}, ..., \vec{\mathbf {q}^r_N}\rbrace $ and the input passage embedding $\mathbf {H}^r = \lbrace \vec{\mathbf {h}^r_{0}}, \vec{\mathbf {h}^r_{1}}, ..., \vec{\mathbf {h}^r_{M}}\rbrace $ of $p$, the Reasoner predicts the probability of each entity in the passage being the linking entity of the next passage in the chain. We use a reader model similar to BIBREF3 as our Reasoner network. We first describe an attention sub-module. Given input sequence embedding $\mathbf {A} = \lbrace \vec{\mathbf {a}_0}, \vec{\mathbf {a}_1}, ..., \vec{\mathbf {a}_N}\rbrace $ and $\mathbf {B} = \lbrace \vec{\mathbf {b}_{0}}, \vec{\mathbf {b}_{1}}, ..., \vec{\mathbf {b}_{M}}\rbrace $, we define $\tilde{\mathcal {M}} = \text{Attention}(\mathbf {A}, \mathbf {B})$: where FFN denotes a feed forward layer which projects the concatenated embedding back to the original space. The Reasoner network consists of multiple attention layers, together with a bidirectional GRU encoder and skip connection. For each token $e_k, k = 0, 1,..., M$ represented by $h^r_{p,k}$ at the corresponding location, we have: where $g$ is the classification layer, softmax is applied across all entities to get the probability. We denote the computation above as $P^r(e_k| \mathbf {p}) = \textrm {MatchLSTM.Reader}(e_k, \mathbf {p})$ for simplicity. 
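A small sketch of the Reasoner's final entity-scoring step described above follows; the encoder producing the token states (attention layers plus bi-GRU with skip connections) is abstracted away, and single-token entities are assumed for simplicity (multi-token entities would require pooling).

```python
# Sketch of the Reasoner's entity scoring: token representations at entity
# positions are passed through a classification layer g, and a softmax across
# the candidate entities yields P^r(e_k | p).
import torch
import torch.nn as nn
import torch.nn.functional as F

M, dim = 50, 256
token_states = torch.randn(M, dim)            # h^r_{p,k} for the M passage tokens
entity_positions = torch.tensor([4, 17, 31])  # token indices of candidate entities

g = nn.Linear(dim, 1)                                            # classification layer
entity_logits = g(token_states[entity_positions]).squeeze(-1)    # one score per entity
p_linking = F.softmax(entity_logits, dim=-1)                     # distribution over entities
print(p_linking)                                                 # probabilities summing to 1
```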
Definition of Chain Accuracy In HotpotQA, on average we can find 6 candidate chains (2-hop) per instance, and the human-labeled true reasoning chain is unique. A predicted chain is correct if it contains exactly the supporting passages (exact match of passages). In MedHop, on average we can find 30 candidate chains (3-hop). For each candidate chain, our human annotators labeled whether it is correct or not, and the correct reasoning chain is not unique. A predicted chain is correct if it is one of the chains that humans labeled as correct. The accuracy is defined as the ratio:
Answer with content missing: (formula) The accuracy is defined as the ratio # of correct chains predicted to # of evaluation samples
c1429f7fed5a4dda11ac7d9643f97af87a83508b
c1429f7fed5a4dda11ac7d9643f97af87a83508b_0
Q: What empricial investigations do they reference? Text: Introduction Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but there are many design decisions that potentially affect the validity of such a human evaluation. This paper is a response to two recent human evaluation studies in which some neural machine translation systems reportedly performed at (or above) the level of human translators for news translation from Chinese to English BIBREF3 and English to Czech BIBREF4, BIBREF5. Both evaluations were based on current best practices in the field: they used a source-based direct assessment with non-expert annotators, using data sets and the evaluation protocol of the Conference on Machine Translation (WMT). While the results are intriguing, especially because they are based on best practices in MT evaluation, BIBREF5 warn against taking their results as evidence for human–machine parity, and caution that for well-resourced language pairs, an update of WMT evaluation style will be needed to keep up with the progress in machine translation. We concur that these findings have demonstrated the need to critically re-evaluate the design of human MT evaluation. Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis. Background We first review current methods to assess the quality of machine translation system outputs, and highlight potential issues in using these methods to compare such outputs to translations produced by professional human translators. Background ::: Human Evaluation of Machine Translation The evaluation of MT quality has been the subject of controversial discussions in research and the language services industry for decades due to its high economic importance. While automatic evaluation methods are particularly important in system development, there is consensus that a reliable evaluation should—despite high costs—be carried out by humans. Various methods have been proposed for the human evaluation of MT quality BIBREF8. What they have in common is that the MT output to be rated is paired with a translation hint: the source text or a reference translation. The MT output is then either adapted or scored with reference to the translation hint by human post-editors or raters, respectively. As part of the large-scale evaluation campaign at WMT, two primary evaluation methods have been used in recent years: relative ranking and direct assessment BIBREF9. 
In the case of relative ranking, raters are presented with outputs from two or more systems, which they are asked to evaluate relative to each other (e.g., to determine system A is better than system B). Ties (e.g., system A is as good or as bad as system B) are typically allowed. Compared to absolute scores on Likert scales, data obtained through relative ranking show better inter- and intra-annotator agreement BIBREF10. However, they do not allow conclusions to be drawn about the order of magnitude of the differences, so that it is not possible to determine how much better system A was than system B. This is one of the reasons why direct assessment has prevailed as an evaluation method more recently. In contrast to relative ranking, the raters are presented with one MT output at a time, to which they assign a score between 0 and 100. To increase homogeneity, each rater's ratings are standardised BIBREF11. Reference translations serve as the basis in the context of WMT, and evaluations are carried out by monolingual raters. To avoid reference bias, the evaluation can be based on source texts instead, which presupposes bilingual raters, but leads to more reliable results overall BIBREF12. Background ::: Assessing Human–Machine Parity BIBREF3 base their claim of achieving human–machine parity on a source-based direct assessment as described in the previous section, where they found no significant difference in ratings between the output of their MT system and a professional human translation. Similarly, BIBREF5 report that the best-performing English to Czech system submitted to WMT 2018 BIBREF4 significantly outperforms the human reference translation. However, the authors caution against interpreting their results as evidence of human–machine parity, highlighting potential limitations of the evaluation. In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations. Background ::: Assessing Human–Machine Parity ::: Choice of Raters The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations. Background ::: Assessing Human–Machine Parity ::: Linguistic Context MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e. 
g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents. Background ::: Assessing Human–Machine Parity ::: Reference Translations The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that it is necessary to ensure that human translations used to assess parity claims need to be carefully vetted for their quality. We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6). Background ::: Translations We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article: [labelwidth=1cm, leftmargin=1.25cm] The professional human translations in the dataset of BIBREF3.[1] Professional human translations that we ordered from a different translation vendor, which included a post-hoc native English check. We produced these only for the documents that were originally Chinese, as discussed in more detail in Section SECREF35. The machine translations produced by BIBREF3's BIBREF3 best system (Combo-6),[1] for which the authors found parity with H$_A$. The machine translations produced by Google's production system (Google Translate) in October 2017, as contained in BIBREF3's BIBREF3 dataset.[1] Statistical significance is denoted by * ($p\le .05$), ** ($p\le .01$), and *** ($p\le .001$) throughout this article, unless otherwise stated. Choice of Raters Both professional and amateur evaluators can be involved in human evaluation of MT quality. However, from published work in the field BIBREF19, it is fair to say that there is a tendency to “rely on students and amateur evaluators, sometimes with an undefined (or self-rated) proficiency in the languages involved, an unknown expertise with the text type" BIBREF8. 
Previous work on evaluation of MT output by professional translators against crowd workers by BIBREF20 showed that for all language pairs (involving 11 languages) evaluated, crowd workers tend to be more accepting of the MT output by giving higher fluency and adequacy scores and performing very little post-editing. The authors argued that non-expert translators lack knowledge of translation and so might not notice subtle differences that make one translation more suitable than another, and therefore, when confronted with a translation that is hard to post-edit, tend to accept the MT rather than try to improve it. Choice of Raters ::: Evaluation Protocol We test for difference in ratings of MT outputs and human translations between experts and non-experts. We consider professional translators as experts, and both crowd workers and MT researchers as non-experts. We conduct a relative ranking experiment using one professional human (H$_A$) and two machine translations (MT$_1$ and MT$_2$), considering the native Chinese part of the WMT 2017 Chinese–English test set (see Section SECREF35 for details). The 299 sentences used in the experiments stem from 41 documents, randomly selected from all the documents in the test set originally written in Chinese, and are shown in their original order. Raters are shown one sentence at a time, and see the original Chinese source alongside the three translations. The previous and next source sentences are also shown, in order to provide the annotator with local inter-sentential context. Five raters—two experts and three non-experts—participated in the assessment. The experts were professional Chinese to English translators: one native in Chinese with a fluent level of English, the other native in English with a fluent level of Chinese. The non-experts were NLP researchers native in Chinese, working in an English-speaking country. The ratings are elicited with Appraise BIBREF21. We derive an overall score for each translation (H$_A$, MT$_1$, and MT$_2$) based on the rankings. We use the TrueSkill method adapted to MT evaluation BIBREF22 following its usage at WMT15, i. e., we run 1,000 iterations of the rankings recorded with Appraise followed by clustering (significance level $\alpha =0.05$). Choice of Raters ::: Results Table TABREF17 shows the TrueSkill scores for each translation resulting from the evaluations by expert and non-expert translators. We find that translation expertise affects the judgement of MT$_1$ and H$_A$, where the rating gap is wider for the expert raters. This indicates that non-experts disregard translation nuances in the evaluation, which leads to a more tolerant judgement of MT systems and a lower inter-annotator agreement ($\kappa =0.13$ for non-experts versus $\kappa =0.254$ for experts). It is worth noticing that, regardless of their expertise, the performance of human raters may vary over time. For example, performance may improve or decrease due to learning effects or fatigue, respectively BIBREF23. It is likely that such longitudinal effects are present in our data. They should be accounted for in future work, e. g., by using trial number as an additional predictor BIBREF24. Linguistic Context Another concern is the unit of evaluation. Historically, machine translation has primarily operated on the level of sentences, and so has machine translation evaluation. 
However, it has been remarked that human raters do not necessarily understand the intended meaning of a sentence shown out-of-context BIBREF25, which limits their ability to spot some mistranslations. Also, a sentence-level evaluation will be blind to errors related to textual cohesion and coherence. While sentence-level evaluation may be good enough when evaluating MT systems of relatively low quality, we hypothesise that with additional context, raters will be able to make more nuanced quality assessments, and will also reward translations that show more textual cohesion and coherence. We believe that this aspect should be considered in evaluation, especially when making claims about human–machine parity, since human translators can and do take inter-sentential context into account BIBREF15, BIBREF16. Linguistic Context ::: Evaluation Protocol We test if the availability of document-level context affects human–machine parity claims in terms of adequacy and fluency. In a pairwise ranking experiment, we show raters (i) isolated sentences and (ii) entire documents, asking them to choose the better (with ties allowed) from two translation outputs: one produced by a professional translator, the other by a machine translation system. We do not show reference translations as one of the two options is itself a human translation. We use source sentences and documents from the WMT 2017 Chinese–English test set (see Section SECREF8): documents are full news articles, and sentences are randomly drawn from these news articles, regardless of their position. We only consider articles from the test set that are native Chinese (see Section SECREF35). In order to compare our results to those of BIBREF3, we use both their professional human (H$_A$) and machine translations (MT$_1$). Each rater evaluates both sentences and documents, but never the same text in both conditions so as to avoid repetition priming BIBREF26. The order of experimental items as well as the placement of choices (H$_A$, MT$_1$; left, right) are randomised. We use spam items for quality control BIBREF27: In a small fraction of items, we render one of the two options nonsensical by randomly shuffling the order of all translated words, except for 10 % at the beginning and end. If a rater marks a spam item as better than or equal to an actual translation, this is a strong indication that they did not read both options carefully. We recruit professional translators (see Section SECREF3) from proz.com, a well-known online market place for professional freelance translation, considering Chinese to English translators and native English revisers for the adequacy and fluency conditions, respectively. In each condition, four raters evaluate 50 documents (plus 5 spam items) and 104 sentences (plus 16 spam items). We use two non-overlapping sets of documents and two non-overlapping sets of sentences, and each is evaluated by two raters. Linguistic Context ::: Results Results are shown in Table TABREF21. We note that sentence ratings from two raters are excluded from our analysis because of unintentional textual overlap with documents, meaning we cannot fully rule out that sentence-level decisions were informed by access to the full documents they originated from. Moreover, we exclude document ratings from one rater in the fluency condition because of poor performance on spam items, and recruit an additional rater to re-rate these documents. 
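A small sketch of the spam-item construction described in the protocol above is given below; whitespace tokenisation and the fixed random seed are simplifying assumptions.

```python
# Sketch of spam-item construction: shuffle all words of a translation except for
# the first and last 10 %, so that inattentive raters can be detected.
import random


def make_spam_item(translation: str, keep_fraction: float = 0.1, seed: int = 0) -> str:
    words = translation.split()
    k = max(1, int(len(words) * keep_fraction))          # number of words kept at each end
    head, middle, tail = words[:k], words[k:len(words) - k], words[len(words) - k:]
    rng = random.Random(seed)
    rng.shuffle(middle)                                  # render the middle nonsensical
    return " ".join(head + middle + tail)


sentence = ("The organisers of the cultural festival announced that this year's "
            "event will take place in the city centre at the end of August .")
print(make_spam_item(sentence))
```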
We analyse our data using two-tailed Sign Tests, the null hypothesis being that raters do not prefer MT$_1$ over H$_A$ or vice versa, implying human–machine parity. Following WMT evaluation campaigns that used pairwise ranking BIBREF28, the number of successes $x$ is the number of ratings in favour of H$_A$, and the number of trials $n$ is the number of all ratings except for ties. Adding half of the ties to $x$ and the total number of ties to $n$ BIBREF29 does not impact the significance levels reported in this section. Adequacy raters show no statistically significant preference for MT$_1$ or H$_A$ when evaluating isolated sentences ($x=86, n=189, p=.244$). This is in accordance with BIBREF3, who found the same in a source-based direct assessment experiment with crowd workers. With the availability of document-level context, however, preference for MT$_1$ drops from 49.5 to 37.0 % and is significantly lower than preference for human translation ($x=104, n=178, p<.05$). This evidences that document-level context cues allow raters to get a signal on adequacy. Fluency raters prefer H$_A$ over MT$_1$ both on the level of sentences ($x=106, n=172, p<.01$) and documents ($x=99, n=143, p<.001$). This is somewhat surprising given that increased fluency was found to be one of the main strengths of NMT BIBREF30, as we further discuss in Section SECREF24. The availability of document-level context decreases fluency raters' preference for MT$_1$, which falls from 31.7 to 22.0 %, without increasing their preference for H$_A$ (Table TABREF21). Linguistic Context ::: Discussion Our findings emphasise the importance of linguistic context in human evaluation of MT. In terms of adequacy, raters assessing documents as a whole show a significant preference for human translation, but when assessing single sentences in random order, they show no significant preference for human translation. Document-level evaluation exposes errors to raters which are hard or impossible to spot in a sentence-level evaluation, such as coherent translation of named entities. The example in Table TABREF23 shows the first two sentences of a Chinese news article as translated by a professional human translator (H$_A$) and BIBREF3's BIBREF3 NMT system (MT$_1$). When looking at both sentences (document-level evaluation), it can be seen that MT$_1$ uses two different translations to refer to a cultural festival, “2016盂兰文化节", whereas the human translation uses only one. When assessing the second sentence out of context (sentence-level evaluation), it is hard to penalise MT$_1$ for producing 2016 Python Cultural Festival, particularly for fluency raters without access to the corresponding source text. For further examples, see Section SECREF24 and Table TABREF34. Reference Translations Yet another relevant element in human evaluation is the reference translation used. This is the focus of this section, where we cover two aspects of reference translations that can have an impact on evaluation: quality and directionality. Reference Translations ::: Quality Because the translations are created by humans, a number of factors could lead to compromises in quality: If the translator is a non-native speaker of the source language, they may make mistakes in interpreting the original message. This is particularly true if the translator does not normally work in the domain of the text, e. g., when a translator who normally works on translating electronic product manuals is asked to translate news. 
If the translator is a non-native speaker of the target language, they might not be able to generate completely fluent text. This similarly applies to domain-specific terminology. Unlike computers, human translators have limits in time, attention, and motivation, and will generally do a better job when they have sufficient time to check their work, or are particularly motivated to do a good job, such as when doing a good job is necessary to maintain their reputation as a translator. In recent years, a large number of human translation jobs are performed by post-editing MT output, which can result in MT artefacts remaining even after manual post-editing BIBREF31, BIBREF32, BIBREF33. In this section, we examine the effect of the quality of underlying translations on the conclusions that can be drawn with regards to human–machine parity. We first do an analysis on (i) how the source of the human translation affects claims of human–machine parity, and (ii) whether significant differences exist between two varieties of human translation. We follow the same protocol as in Section SECREF19, having 4 professional translators per condition, evaluate the translations for adequacy and fluency on both the sentence and document level. The results are shown in Table TABREF30. From this, we can see that the human translation H$_B$, which was aggressively edited to ensure target fluency, resulted in lower adequacy (Table TABREF30). With more fluent and less accurate translations, raters do not prefer human over machine translation in terms of adequacy (Table TABREF30), but have a stronger preference for human translation in terms of fluency (compare Tables TABREF30 and TABREF21). In a direct comparison of the two human translations (Table TABREF30), we also find that H$_A$ is considered significantly more adequate than H$_B$, while there is no significant difference in fluency. To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32. From these results, we can glean a few interesting insights. First, we find significantly larger numbers of errors of the categories of Incorrect Word and Named Entity in MT$_1$, indicating that the MT system is less effective at choosing correct translations for individual words than the human translators. 
An example of this can be found in Table TABREF33, where we see that the MT system refers to a singular “point of view" and translates 线路 (channel, route, path) into the semantically similar but inadequate lines. Interestingly, MT$_1$ has significantly more Word Order errors, one example of this being shown in Table TABREF33, with the relative placements of at the end of last year (去年年底) and stop production (停产). This result is particularly notable given previous reports that NMT systems have led to great increases in reordering accuracy compared to previous statistical MT systems BIBREF35, BIBREF36, demonstrating that the problem of generating correctly ordered output is far from solved even in very strong NMT systems. Moreover, H$_B$ had significantly more Missing Word (Semantics) errors than both H$_A$ ($p<.001$) and MT$_1$ ($p<.001$), an indication that the proofreading process resulted in drops of content in favour of fluency. An example of this is shown in Table TABREF33, where H$_B$ dropped the information that the meetings between Suning and Apple were recently (近期) held. Finally, while there was not a significant difference, likely due to the small number of examples overall, it is noticeable that MT$_1$ had a higher percentage of Collocation and Context errors, which indicate that the system has more trouble translating words that are dependent on longer-range context. Similarly, some Named Entity errors are also attributable to translation inconsistencies due to lack of longer-range context. Table TABREF34 shows an example where we see that the MT system was unable to maintain a consistently gendered or correct pronoun for the female Olympic shooter Zhang Binbin (张彬彬). Apart from showing qualitative differences between the three translations, the analysis also supports the finding of the pairwise ranking study: H$_A$ is both preferred over MT$_1$ in the pairwise ranking study, and exhibits fewer translation errors in our error classification. H$_B$ has a substantially higher number of missing words than the other two translations, which agrees with the lower perceived adequacy in the pairwise ranking. However, the analysis not only supports the findings of the pairwise ranking study, but also adds nuance to it. Even though H$_B$ has the highest number of deletions, and does worse than the other two translations in a pairwise adequacy ranking, it is similar to H$_A$, and better than MT$_1$, in terms of most other error categories. Reference Translations ::: Directionality Translation quality is also affected by the nature of the source text. In this respect, we note that from the 2,001 sentences in the WMT 2017 Chinese–English test set, half were originally written in Chinese; the remaining half were originally written in English and then manually translated into Chinese. This Chinese reference file (half original, half translated) was then manually translated into English by BIBREF3 to make up the reference for assessing human–machine parity. Therefore, 50 % of the reference comprises direct English translations from the original Chinese, while 50 % are English translations from the human-translated file from English into Chinese, i. e., backtranslations of the original English. According to BIBREF37, translated texts differ from their originals in that they are simpler, more explicit, and more normalised. For example, the synonyms used in an original text may be replaced by a single translation. 
These differences are referred to as translationese, and have been shown to affect translation quality in the field of machine translation BIBREF38, BIBREF39, BIBREF32, BIBREF33. We test whether translationese has an effect on assessing parity between translations produced by humans and machines, using relative rankings of translations in the WMT 2017 Chinese–English test set by five raters (see Section SECREF3). Our hypothesis is that the difference between human and machine translation quality is smaller when source texts are translated English (translationese) rather than original Chinese, because a translationese source text should be simpler and thus easier to translate for an MT system. We confirm Laviosa's observation that “translationese” Chinese (that started as English) exhibits less lexical variety than “natively” Chinese text and demonstrate that translationese source texts are generally easier for MT systems to score well on. Table TABREF36 shows the TrueSkill scores for translations (H$_A$, MT$_1$, and MT$_2$) of the entire test set (Both) versus only the sentences originally written in Chinese or English therein. The human translation H$_A$ outperforms the machine translation MT$_1$ significantly when the original language is Chinese, while the difference between the two is not significant when the original language is English (i. e., translationese input). We also compare the two subsets of the test set, original and translationese, using type-token ratio (TTR). Our hypothesis is that the TTR will be smaller for the translationese subset, thus its simpler nature getting reflected in a less varied use of language. While both subsets contain a similar number of sentences (1,001 and 1,000), the Chinese subset contains more tokens (26,468) than its English counterpart (22,279). We thus take a subset of the Chinese (840 sentences) containing a similar amount of words to the English data (22,271 words). We then calculate the TTR for these two subsets using bootstrap resampling. The TTR for Chinese ($M=0.1927$, $SD=0.0026$, 95 % confidence interval $[0.1925,0.1928]$) is 13 % higher than that for English ($M=0.1710$, $SD=0.0025$, 95 % confidence interval $[0.1708,0.1711]$). Our results show that using translationese (Chinese translated from English) rather than original source texts results in higher scores for MT systems in human evaluation, and that the lexical variety of translationese is smaller than that of original text. Recommendations Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Recommendations ::: (R1) Choose professional translators as raters. In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs. Recommendations ::: (R2) Evaluate documents, not sentences. 
When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4). Recommendations ::: (R3) Evaluate fluency in addition to adequacy. Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality. Recommendations ::: (R4) Do not heavily edit reference translations for fluency. In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30). Recommendations ::: (R5) Use original source texts. Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT. Our work empirically strengthens and extends the recommendations on human MT evaluation in previous work BIBREF6, BIBREF7, some of which have meanwhile been adopted by the large-scale evaluation campaign at WMT 2019 BIBREF40: the new evaluation protocol uses original source texts only (R5) and gives raters access to document-level context (R2). The findings of WMT 2019 provide further evidence in support of our recommendations. In particular, human English to Czech translation was found to be significantly better than MT BIBREF40; the comparison includes the same MT system (CUNI-Transformer-T2T-2018) which outperformed human translation according to the previous protocol BIBREF5. Results also show a larger difference between human translation and MT in document-level evaluation. We note that in contrast to WMT, the judgements in our experiments are provided by a small number of human raters: five in the experiments of Sections SECREF3 and SECREF35, four per condition (adequacy and fluency) in Section SECREF4, and one in the fine-grained error analysis presented in Section SECREF24. Moreover, the results presented in this article are based on one text domain (news) and one language direction (Chinese to English), and while a large-scale evaluation with another language pair supports our findings (see above), further experiments with more languages, domains, and raters will be required to increase their external validity. Conclusion We compared professional human Chinese to English translations to the output of a strong MT system. 
In a human evaluation following best practices, BIBREF3 found no significant difference between the two, concluding that their NMT system had reached parity with professional human translation. Our blind qualitative analysis, however, showed that the machine translation output contained significantly more incorrect words, omissions, mistranslated names, and word order errors. Our experiments show that recent findings of human–machine parity in language translation are owed to weaknesses in the design of human evaluation campaigns. We empirically tested alternatives to what is currently considered best practice in the field, and found that the choice of raters, the availability of linguistic context, and the creation of reference translations have a strong impact on perceived translation quality. As for the choice of raters, professional translators showed a significant preference for human translation, while non-expert raters did not. In terms of linguistic context, raters found human translation significantly more accurate than machine translation when evaluating full documents, but not when evaluating single sentences out of context. They also found human translation significantly more fluent than machine translation, both when evaluating full documents and single sentences. Moreover, we showed that aggressive editing of human reference translations for target language fluency can decrease adequacy to the point that they become indistinguishable from machine translation, and that raters found human translations significantly better than machine translations of original source texts, but not of source texts that were translations themselves. Our results strongly suggest that in order to reveal errors in the output of strong MT systems, the design of MT quality assessments with human raters should be revisited. To that end, we have offered a set of recommendations, supported by empirical data, which we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Our recommendations have the aim of increasing the validity of MT evaluation, but we are aware of the high cost of having MT evaluation done by professional translators, and on the level of full documents. We welcome future research into alternative evaluation protocols that can demonstrate their validity at a lower cost.
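To make the type-token ratio comparison in the Directionality subsection above concrete, the following sketch computes a bootstrapped TTR for two token lists. The exact resampling scheme and sample sizes are not specified in the text, so this resamples tokens with replacement as one plausible reading; the toy token lists, function names, and parameters are illustrative only.

```python
import random

def type_token_ratio(tokens):
    """Ratio of distinct tokens (types) to running tokens."""
    return len(set(tokens)) / len(tokens)

def bootstrap_ttr(tokens, n_resamples=1000, seed=42):
    """Mean, standard deviation, and a 95 % percentile interval of the TTR
    over bootstrap resamples (sampling tokens with replacement)."""
    rng = random.Random(seed)
    scores = sorted(
        type_token_ratio([rng.choice(tokens) for _ in tokens])
        for _ in range(n_resamples)
    )
    mean = sum(scores) / len(scores)
    sd = (sum((s - mean) ** 2 for s in scores) / (len(scores) - 1)) ** 0.5
    lo, hi = scores[int(0.025 * len(scores))], scores[int(0.975 * len(scores)) - 1]
    return mean, sd, (lo, hi)

# Toy usage with placeholder token lists; the real inputs would be the
# size-matched original-Chinese and translationese subsets of the source text.
original_tokens = "只 有 不断 创新 才能 推动 经济 持续 增长".split()
translationese_tokens = "公司 表示 公司 将 继续 支持 公司 的 发展".split()
print(bootstrap_ttr(original_tokens))
print(bootstrap_ttr(translationese_tokens))
```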
empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation
a93d4aa89ac3abbd08d725f3765c4f1bed35c889
a93d4aa89ac3abbd08d725f3765c4f1bed35c889_0
Q: What languages do they investigate for machine translation? Text: Introduction Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but there are many design decisions that potentially affect the validity of such a human evaluation. This paper is a response to two recent human evaluation studies in which some neural machine translation systems reportedly performed at (or above) the level of human translators for news translation from Chinese to English BIBREF3 and English to Czech BIBREF4, BIBREF5. Both evaluations were based on current best practices in the field: they used a source-based direct assessment with non-expert annotators, using data sets and the evaluation protocol of the Conference on Machine Translation (WMT). While the results are intriguing, especially because they are based on best practices in MT evaluation, BIBREF5 warn against taking their results as evidence for human–machine parity, and caution that for well-resourced language pairs, an update of WMT evaluation style will be needed to keep up with the progress in machine translation. We concur that these findings have demonstrated the need to critically re-evaluate the design of human MT evaluation. Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis. Background We first review current methods to assess the quality of machine translation system outputs, and highlight potential issues in using these methods to compare such outputs to translations produced by professional human translators. Background ::: Human Evaluation of Machine Translation The evaluation of MT quality has been the subject of controversial discussions in research and the language services industry for decades due to its high economic importance. While automatic evaluation methods are particularly important in system development, there is consensus that a reliable evaluation should—despite high costs—be carried out by humans. Various methods have been proposed for the human evaluation of MT quality BIBREF8. What they have in common is that the MT output to be rated is paired with a translation hint: the source text or a reference translation. The MT output is then either adapted or scored with reference to the translation hint by human post-editors or raters, respectively. As part of the large-scale evaluation campaign at WMT, two primary evaluation methods have been used in recent years: relative ranking and direct assessment BIBREF9. 
In the case of relative ranking, raters are presented with outputs from two or more systems, which they are asked to evaluate relative to each other (e.g., to determine system A is better than system B). Ties (e.g., system A is as good or as bad as system B) are typically allowed. Compared to absolute scores on Likert scales, data obtained through relative ranking show better inter- and intra-annotator agreement BIBREF10. However, they do not allow conclusions to be drawn about the order of magnitude of the differences, so that it is not possible to determine how much better system A was than system B. This is one of the reasons why direct assessment has prevailed as an evaluation method more recently. In contrast to relative ranking, the raters are presented with one MT output at a time, to which they assign a score between 0 and 100. To increase homogeneity, each rater's ratings are standardised BIBREF11. Reference translations serve as the basis in the context of WMT, and evaluations are carried out by monolingual raters. To avoid reference bias, the evaluation can be based on source texts instead, which presupposes bilingual raters, but leads to more reliable results overall BIBREF12. Background ::: Assessing Human–Machine Parity BIBREF3 base their claim of achieving human–machine parity on a source-based direct assessment as described in the previous section, where they found no significant difference in ratings between the output of their MT system and a professional human translation. Similarly, BIBREF5 report that the best-performing English to Czech system submitted to WMT 2018 BIBREF4 significantly outperforms the human reference translation. However, the authors caution against interpreting their results as evidence of human–machine parity, highlighting potential limitations of the evaluation. In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations. Background ::: Assessing Human–Machine Parity ::: Choice of Raters The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations. Background ::: Assessing Human–Machine Parity ::: Linguistic Context MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e. 
g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents. Background ::: Assessing Human–Machine Parity ::: Reference Translations The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that it is necessary to ensure that human translations used to assess parity claims need to be carefully vetted for their quality. We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6). Background ::: Translations We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article: [labelwidth=1cm, leftmargin=1.25cm] The professional human translations in the dataset of BIBREF3.[1] Professional human translations that we ordered from a different translation vendor, which included a post-hoc native English check. We produced these only for the documents that were originally Chinese, as discussed in more detail in Section SECREF35. The machine translations produced by BIBREF3's BIBREF3 best system (Combo-6),[1] for which the authors found parity with H$_A$. The machine translations produced by Google's production system (Google Translate) in October 2017, as contained in BIBREF3's BIBREF3 dataset.[1] Statistical significance is denoted by * ($p\le .05$), ** ($p\le .01$), and *** ($p\le .001$) throughout this article, unless otherwise stated. Choice of Raters Both professional and amateur evaluators can be involved in human evaluation of MT quality. However, from published work in the field BIBREF19, it is fair to say that there is a tendency to “rely on students and amateur evaluators, sometimes with an undefined (or self-rated) proficiency in the languages involved, an unknown expertise with the text type" BIBREF8. 
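As a side note on the direct assessment method described in the Background above, where each rater's 0–100 scores are standardised before comparison, the following is a minimal sketch of per-rater z-standardisation. The rater and segment identifiers are hypothetical, and the exact normalisation used at WMT may differ in detail (e.g., in how repeat judgements are handled).

```python
from statistics import mean, stdev

def standardise_per_rater(raw_scores):
    """raw_scores: dict mapping rater id -> list of (segment id, score in [0, 100]).
    Returns the same structure with each rater's scores z-normalised, so that
    raters with different scoring behaviour become comparable."""
    out = {}
    for rater, pairs in raw_scores.items():
        values = [score for _, score in pairs]
        mu, sigma = mean(values), stdev(values)
        out[rater] = [(seg, (score - mu) / sigma) for seg, score in pairs]
    return out

# Hypothetical example: rater_b is systematically harsher than rater_a, but after
# standardisation their relative preferences can be averaged per segment.
raw = {
    "rater_a": [("seg1", 85), ("seg2", 60), ("seg3", 95)],
    "rater_b": [("seg1", 55), ("seg2", 35), ("seg3", 70)],
}
print(standardise_per_rater(raw))
```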
Previous work on evaluation of MT output by professional translators against crowd workers by BIBREF20 showed that for all language pairs (involving 11 languages) evaluated, crowd workers tend to be more accepting of the MT output by giving higher fluency and adequacy scores and performing very little post-editing. The authors argued that non-expert translators lack knowledge of translation and so might not notice subtle differences that make one translation more suitable than another, and therefore, when confronted with a translation that is hard to post-edit, tend to accept the MT rather than try to improve it. Choice of Raters ::: Evaluation Protocol We test for difference in ratings of MT outputs and human translations between experts and non-experts. We consider professional translators as experts, and both crowd workers and MT researchers as non-experts. We conduct a relative ranking experiment using one professional human (H$_A$) and two machine translations (MT$_1$ and MT$_2$), considering the native Chinese part of the WMT 2017 Chinese–English test set (see Section SECREF35 for details). The 299 sentences used in the experiments stem from 41 documents, randomly selected from all the documents in the test set originally written in Chinese, and are shown in their original order. Raters are shown one sentence at a time, and see the original Chinese source alongside the three translations. The previous and next source sentences are also shown, in order to provide the annotator with local inter-sentential context. Five raters—two experts and three non-experts—participated in the assessment. The experts were professional Chinese to English translators: one native in Chinese with a fluent level of English, the other native in English with a fluent level of Chinese. The non-experts were NLP researchers native in Chinese, working in an English-speaking country. The ratings are elicited with Appraise BIBREF21. We derive an overall score for each translation (H$_A$, MT$_1$, and MT$_2$) based on the rankings. We use the TrueSkill method adapted to MT evaluation BIBREF22 following its usage at WMT15, i. e., we run 1,000 iterations of the rankings recorded with Appraise followed by clustering (significance level $\alpha =0.05$). Choice of Raters ::: Results Table TABREF17 shows the TrueSkill scores for each translation resulting from the evaluations by expert and non-expert translators. We find that translation expertise affects the judgement of MT$_1$ and H$_A$, where the rating gap is wider for the expert raters. This indicates that non-experts disregard translation nuances in the evaluation, which leads to a more tolerant judgement of MT systems and a lower inter-annotator agreement ($\kappa =0.13$ for non-experts versus $\kappa =0.254$ for experts). It is worth noticing that, regardless of their expertise, the performance of human raters may vary over time. For example, performance may improve or decrease due to learning effects or fatigue, respectively BIBREF23. It is likely that such longitudinal effects are present in our data. They should be accounted for in future work, e. g., by using trial number as an additional predictor BIBREF24. Linguistic Context Another concern is the unit of evaluation. Historically, machine translation has primarily operated on the level of sentences, and so has machine translation evaluation. 
However, it has been remarked that human raters do not necessarily understand the intended meaning of a sentence shown out-of-context BIBREF25, which limits their ability to spot some mistranslations. Also, a sentence-level evaluation will be blind to errors related to textual cohesion and coherence. While sentence-level evaluation may be good enough when evaluating MT systems of relatively low quality, we hypothesise that with additional context, raters will be able to make more nuanced quality assessments, and will also reward translations that show more textual cohesion and coherence. We believe that this aspect should be considered in evaluation, especially when making claims about human–machine parity, since human translators can and do take inter-sentential context into account BIBREF15, BIBREF16. Linguistic Context ::: Evaluation Protocol We test if the availability of document-level context affects human–machine parity claims in terms of adequacy and fluency. In a pairwise ranking experiment, we show raters (i) isolated sentences and (ii) entire documents, asking them to choose the better (with ties allowed) from two translation outputs: one produced by a professional translator, the other by a machine translation system. We do not show reference translations as one of the two options is itself a human translation. We use source sentences and documents from the WMT 2017 Chinese–English test set (see Section SECREF8): documents are full news articles, and sentences are randomly drawn from these news articles, regardless of their position. We only consider articles from the test set that are native Chinese (see Section SECREF35). In order to compare our results to those of BIBREF3, we use both their professional human (H$_A$) and machine translations (MT$_1$). Each rater evaluates both sentences and documents, but never the same text in both conditions so as to avoid repetition priming BIBREF26. The order of experimental items as well as the placement of choices (H$_A$, MT$_1$; left, right) are randomised. We use spam items for quality control BIBREF27: In a small fraction of items, we render one of the two options nonsensical by randomly shuffling the order of all translated words, except for 10 % at the beginning and end. If a rater marks a spam item as better than or equal to an actual translation, this is a strong indication that they did not read both options carefully. We recruit professional translators (see Section SECREF3) from proz.com, a well-known online market place for professional freelance translation, considering Chinese to English translators and native English revisers for the adequacy and fluency conditions, respectively. In each condition, four raters evaluate 50 documents (plus 5 spam items) and 104 sentences (plus 16 spam items). We use two non-overlapping sets of documents and two non-overlapping sets of sentences, and each is evaluated by two raters. Linguistic Context ::: Results Results are shown in Table TABREF21. We note that sentence ratings from two raters are excluded from our analysis because of unintentional textual overlap with documents, meaning we cannot fully rule out that sentence-level decisions were informed by access to the full documents they originated from. Moreover, we exclude document ratings from one rater in the fluency condition because of poor performance on spam items, and recruit an additional rater to re-rate these documents. 
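A minimal sketch of the spam-item construction used for quality control in the protocol above: all words of one translation are shuffled, except for 10 % at the beginning and end. Whitespace tokenisation and the handling of very short segments are assumptions not specified in the text.

```python
import random

def make_spam_item(translation, keep_fraction=0.10, seed=0):
    """Return a nonsensical variant of a translation by shuffling its words,
    keeping the first and last 10 % of words in place."""
    words = translation.split()
    k = max(1, int(len(words) * keep_fraction))  # assumption: keep at least one word
    head, middle, tail = words[:k], words[k:len(words) - k], words[len(words) - k:]
    rng = random.Random(seed)
    rng.shuffle(middle)
    return " ".join(head + middle + tail)

# Raters who mark such an item as better than or equal to a real translation
# most likely did not read both options carefully.
print(make_spam_item(
    "the festival attracted thousands of visitors to the city centre over the weekend"
))
```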
We analyse our data using two-tailed Sign Tests, the null hypothesis being that raters do not prefer MT$_1$ over H$_A$ or vice versa, implying human–machine parity. Following WMT evaluation campaigns that used pairwise ranking BIBREF28, the number of successes $x$ is the number of ratings in favour of H$_A$, and the number of trials $n$ is the number of all ratings except for ties. Adding half of the ties to $x$ and the total number of ties to $n$ BIBREF29 does not impact the significance levels reported in this section. Adequacy raters show no statistically significant preference for MT$_1$ or H$_A$ when evaluating isolated sentences ($x=86, n=189, p=.244$). This is in accordance with BIBREF3, who found the same in a source-based direct assessment experiment with crowd workers. With the availability of document-level context, however, preference for MT$_1$ drops from 49.5 to 37.0 % and is significantly lower than preference for human translation ($x=104, n=178, p<.05$). This evidences that document-level context cues allow raters to get a signal on adequacy. Fluency raters prefer H$_A$ over MT$_1$ both on the level of sentences ($x=106, n=172, p<.01$) and documents ($x=99, n=143, p<.001$). This is somewhat surprising given that increased fluency was found to be one of the main strengths of NMT BIBREF30, as we further discuss in Section SECREF24. The availability of document-level context decreases fluency raters' preference for MT$_1$, which falls from 31.7 to 22.0 %, without increasing their preference for H$_A$ (Table TABREF21). Linguistic Context ::: Discussion Our findings emphasise the importance of linguistic context in human evaluation of MT. In terms of adequacy, raters assessing documents as a whole show a significant preference for human translation, but when assessing single sentences in random order, they show no significant preference for human translation. Document-level evaluation exposes errors to raters which are hard or impossible to spot in a sentence-level evaluation, such as coherent translation of named entities. The example in Table TABREF23 shows the first two sentences of a Chinese news article as translated by a professional human translator (H$_A$) and BIBREF3's BIBREF3 NMT system (MT$_1$). When looking at both sentences (document-level evaluation), it can be seen that MT$_1$ uses two different translations to refer to a cultural festival, “2016盂兰文化节", whereas the human translation uses only one. When assessing the second sentence out of context (sentence-level evaluation), it is hard to penalise MT$_1$ for producing 2016 Python Cultural Festival, particularly for fluency raters without access to the corresponding source text. For further examples, see Section SECREF24 and Table TABREF34. Reference Translations Yet another relevant element in human evaluation is the reference translation used. This is the focus of this section, where we cover two aspects of reference translations that can have an impact on evaluation: quality and directionality. Reference Translations ::: Quality Because the translations are created by humans, a number of factors could lead to compromises in quality: If the translator is a non-native speaker of the source language, they may make mistakes in interpreting the original message. This is particularly true if the translator does not normally work in the domain of the text, e. g., when a translator who normally works on translating electronic product manuals is asked to translate news. 
If the translator is a non-native speaker of the target language, they might not be able to generate completely fluent text. This similarly applies to domain-specific terminology. Unlike computers, human translators have limits in time, attention, and motivation, and will generally do a better job when they have sufficient time to check their work, or are particularly motivated to do a good job, such as when doing a good job is necessary to maintain their reputation as a translator. In recent years, a large number of human translation jobs are performed by post-editing MT output, which can result in MT artefacts remaining even after manual post-editing BIBREF31, BIBREF32, BIBREF33. In this section, we examine the effect of the quality of underlying translations on the conclusions that can be drawn with regards to human–machine parity. We first do an analysis on (i) how the source of the human translation affects claims of human–machine parity, and (ii) whether significant differences exist between two varieties of human translation. We follow the same protocol as in Section SECREF19, having 4 professional translators per condition, evaluate the translations for adequacy and fluency on both the sentence and document level. The results are shown in Table TABREF30. From this, we can see that the human translation H$_B$, which was aggressively edited to ensure target fluency, resulted in lower adequacy (Table TABREF30). With more fluent and less accurate translations, raters do not prefer human over machine translation in terms of adequacy (Table TABREF30), but have a stronger preference for human translation in terms of fluency (compare Tables TABREF30 and TABREF21). In a direct comparison of the two human translations (Table TABREF30), we also find that H$_A$ is considered significantly more adequate than H$_B$, while there is no significant difference in fluency. To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32. From these results, we can glean a few interesting insights. First, we find significantly larger numbers of errors of the categories of Incorrect Word and Named Entity in MT$_1$, indicating that the MT system is less effective at choosing correct translations for individual words than the human translators. 
An example of this can be found in Table TABREF33, where we see that the MT system refers to a singular “point of view" and translates 线路 (channel, route, path) into the semantically similar but inadequate lines. Interestingly, MT$_1$ has significantly more Word Order errors, one example of this being shown in Table TABREF33, with the relative placements of at the end of last year (去年年底) and stop production (停产). This result is particularly notable given previous reports that NMT systems have led to great increases in reordering accuracy compared to previous statistical MT systems BIBREF35, BIBREF36, demonstrating that the problem of generating correctly ordered output is far from solved even in very strong NMT systems. Moreover, H$_B$ had significantly more Missing Word (Semantics) errors than both H$_A$ ($p<.001$) and MT$_1$ ($p<.001$), an indication that the proofreading process resulted in drops of content in favour of fluency. An example of this is shown in Table TABREF33, where H$_B$ dropped the information that the meetings between Suning and Apple were recently (近期) held. Finally, while there was not a significant difference, likely due to the small number of examples overall, it is noticeable that MT$_1$ had a higher percentage of Collocation and Context errors, which indicate that the system has more trouble translating words that are dependent on longer-range context. Similarly, some Named Entity errors are also attributable to translation inconsistencies due to lack of longer-range context. Table TABREF34 shows an example where we see that the MT system was unable to maintain a consistently gendered or correct pronoun for the female Olympic shooter Zhang Binbin (张彬彬). Apart from showing qualitative differences between the three translations, the analysis also supports the finding of the pairwise ranking study: H$_A$ is both preferred over MT$_1$ in the pairwise ranking study, and exhibits fewer translation errors in our error classification. H$_B$ has a substantially higher number of missing words than the other two translations, which agrees with the lower perceived adequacy in the pairwise ranking. However, the analysis not only supports the findings of the pairwise ranking study, but also adds nuance to it. Even though H$_B$ has the highest number of deletions, and does worse than the other two translations in a pairwise adequacy ranking, it is similar to H$_A$, and better than MT$_1$, in terms of most other error categories. Reference Translations ::: Directionality Translation quality is also affected by the nature of the source text. In this respect, we note that from the 2,001 sentences in the WMT 2017 Chinese–English test set, half were originally written in Chinese; the remaining half were originally written in English and then manually translated into Chinese. This Chinese reference file (half original, half translated) was then manually translated into English by BIBREF3 to make up the reference for assessing human–machine parity. Therefore, 50 % of the reference comprises direct English translations from the original Chinese, while 50 % are English translations from the human-translated file from English into Chinese, i. e., backtranslations of the original English. According to BIBREF37, translated texts differ from their originals in that they are simpler, more explicit, and more normalised. For example, the synonyms used in an original text may be replaced by a single translation. 
These differences are referred to as translationese, and have been shown to affect translation quality in the field of machine translation BIBREF38, BIBREF39, BIBREF32, BIBREF33. We test whether translationese has an effect on assessing parity between translations produced by humans and machines, using relative rankings of translations in the WMT 2017 Chinese–English test set by five raters (see Section SECREF3). Our hypothesis is that the difference between human and machine translation quality is smaller when source texts are translated English (translationese) rather than original Chinese, because a translationese source text should be simpler and thus easier to translate for an MT system. We confirm Laviosa's observation that “translationese” Chinese (that started as English) exhibits less lexical variety than “natively” Chinese text and demonstrate that translationese source texts are generally easier for MT systems to score well on. Table TABREF36 shows the TrueSkill scores for translations (H$_A$, MT$_1$, and MT$_2$) of the entire test set (Both) versus only the sentences originally written in Chinese or English therein. The human translation H$_A$ outperforms the machine translation MT$_1$ significantly when the original language is Chinese, while the difference between the two is not significant when the original language is English (i. e., translationese input). We also compare the two subsets of the test set, original and translationese, using type-token ratio (TTR). Our hypothesis is that the TTR will be smaller for the translationese subset, thus its simpler nature getting reflected in a less varied use of language. While both subsets contain a similar number of sentences (1,001 and 1,000), the Chinese subset contains more tokens (26,468) than its English counterpart (22,279). We thus take a subset of the Chinese (840 sentences) containing a similar amount of words to the English data (22,271 words). We then calculate the TTR for these two subsets using bootstrap resampling. The TTR for Chinese ($M=0.1927$, $SD=0.0026$, 95 % confidence interval $[0.1925,0.1928]$) is 13 % higher than that for English ($M=0.1710$, $SD=0.0025$, 95 % confidence interval $[0.1708,0.1711]$). Our results show that using translationese (Chinese translated from English) rather than original source texts results in higher scores for MT systems in human evaluation, and that the lexical variety of translationese is smaller than that of original text. Recommendations Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Recommendations ::: (R1) Choose professional translators as raters. In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs. Recommendations ::: (R2) Evaluate documents, not sentences. 
When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4). Recommendations ::: (R3) Evaluate fluency in addition to adequacy. Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality. Recommendations ::: (R4) Do not heavily edit reference translations for fluency. In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30). Recommendations ::: (R5) Use original source texts. Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT. Our work empirically strengthens and extends the recommendations on human MT evaluation in previous work BIBREF6, BIBREF7, some of which have meanwhile been adopted by the large-scale evaluation campaign at WMT 2019 BIBREF40: the new evaluation protocol uses original source texts only (R5) and gives raters access to document-level context (R2). The findings of WMT 2019 provide further evidence in support of our recommendations. In particular, human English to Czech translation was found to be significantly better than MT BIBREF40; the comparison includes the same MT system (CUNI-Transformer-T2T-2018) which outperformed human translation according to the previous protocol BIBREF5. Results also show a larger difference between human translation and MT in document-level evaluation. We note that in contrast to WMT, the judgements in our experiments are provided by a small number of human raters: five in the experiments of Sections SECREF3 and SECREF35, four per condition (adequacy and fluency) in Section SECREF4, and one in the fine-grained error analysis presented in Section SECREF24. Moreover, the results presented in this article are based on one text domain (news) and one language direction (Chinese to English), and while a large-scale evaluation with another language pair supports our findings (see above), further experiments with more languages, domains, and raters will be required to increase their external validity. Conclusion We compared professional human Chinese to English translations to the output of a strong MT system. 
In a human evaluation following best practices, BIBREF3 found no significant difference between the two, concluding that their NMT system had reached parity with professional human translation. Our blind qualitative analysis, however, showed that the machine translation output contained significantly more incorrect words, omissions, mistranslated names, and word order errors. Our experiments show that recent findings of human–machine parity in language translation are owed to weaknesses in the design of human evaluation campaigns. We empirically tested alternatives to what is currently considered best practice in the field, and found that the choice of raters, the availability of linguistic context, and the creation of reference translations have a strong impact on perceived translation quality. As for the choice of raters, professional translators showed a significant preference for human translation, while non-expert raters did not. In terms of linguistic context, raters found human translation significantly more accurate than machine translation when evaluating full documents, but not when evaluating single sentences out of context. They also found human translation significantly more fluent than machine translation, both when evaluating full documents and single sentences. Moreover, we showed that aggressive editing of human reference translations for target language fluency can decrease adequacy to the point that they become indistinguishable from machine translation, and that raters found human translations significantly better than machine translations of original source texts, but not of source texts that were translations themselves. Our results strongly suggest that in order to reveal errors in the output of strong MT systems, the design of MT quality assessments with human raters should be revisited. To that end, we have offered a set of recommendations, supported by empirical data, which we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Our recommendations have the aim of increasing the validity of MT evaluation, but we are aware of the high cost of having MT evaluation done by professional translators, and on the level of full documents. We welcome future research into alternative evaluation protocols that can demonstrate their validity at a lower cost.
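To illustrate the two-tailed Sign Tests reported in the Linguistic Context results above, the following sketch recomputes the sentence-level and document-level adequacy comparisons from the counts given in the text (x ratings in favour of H$_A$ out of n non-tied ratings). It relies on scipy.stats.binomtest (SciPy ≥ 1.7) and is a reconstruction, not the authors' code.

```python
from scipy.stats import binomtest

def sign_test(x, n):
    """Two-tailed sign test: x ratings in favour of H_A out of n non-tied ratings,
    under the null hypothesis of no preference (p = 0.5)."""
    return binomtest(x, n, p=0.5, alternative="two-sided").pvalue

# Sentence-level adequacy: x=86, n=189 -> no significant preference (p ≈ .24).
print(sign_test(86, 189))
# Document-level adequacy: x=104, n=178 -> significant preference for H_A (p < .05).
print(sign_test(104, 178))
```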
English, Chinese
bc473c5bd0e1a8be9b2037aa7006fd68217c3f47
bc473c5bd0e1a8be9b2037aa7006fd68217c3f47_0
Q: What recommendations do they offer? Text: Introduction Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but there are many design decisions that potentially affect the validity of such a human evaluation. This paper is a response to two recent human evaluation studies in which some neural machine translation systems reportedly performed at (or above) the level of human translators for news translation from Chinese to English BIBREF3 and English to Czech BIBREF4, BIBREF5. Both evaluations were based on current best practices in the field: they used a source-based direct assessment with non-expert annotators, using data sets and the evaluation protocol of the Conference on Machine Translation (WMT). While the results are intriguing, especially because they are based on best practices in MT evaluation, BIBREF5 warn against taking their results as evidence for human–machine parity, and caution that for well-resourced language pairs, an update of WMT evaluation style will be needed to keep up with the progress in machine translation. We concur that these findings have demonstrated the need to critically re-evaluate the design of human MT evaluation. Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis. Background We first review current methods to assess the quality of machine translation system outputs, and highlight potential issues in using these methods to compare such outputs to translations produced by professional human translators. Background ::: Human Evaluation of Machine Translation The evaluation of MT quality has been the subject of controversial discussions in research and the language services industry for decades due to its high economic importance. While automatic evaluation methods are particularly important in system development, there is consensus that a reliable evaluation should—despite high costs—be carried out by humans. Various methods have been proposed for the human evaluation of MT quality BIBREF8. What they have in common is that the MT output to be rated is paired with a translation hint: the source text or a reference translation. The MT output is then either adapted or scored with reference to the translation hint by human post-editors or raters, respectively. As part of the large-scale evaluation campaign at WMT, two primary evaluation methods have been used in recent years: relative ranking and direct assessment BIBREF9. 
In the case of relative ranking, raters are presented with outputs from two or more systems, which they are asked to evaluate relative to each other (e.g., to determine system A is better than system B). Ties (e.g., system A is as good or as bad as system B) are typically allowed. Compared to absolute scores on Likert scales, data obtained through relative ranking show better inter- and intra-annotator agreement BIBREF10. However, they do not allow conclusions to be drawn about the order of magnitude of the differences, so that it is not possible to determine how much better system A was than system B. This is one of the reasons why direct assessment has prevailed as an evaluation method more recently. In contrast to relative ranking, the raters are presented with one MT output at a time, to which they assign a score between 0 and 100. To increase homogeneity, each rater's ratings are standardised BIBREF11. Reference translations serve as the basis in the context of WMT, and evaluations are carried out by monolingual raters. To avoid reference bias, the evaluation can be based on source texts instead, which presupposes bilingual raters, but leads to more reliable results overall BIBREF12. Background ::: Assessing Human–Machine Parity BIBREF3 base their claim of achieving human–machine parity on a source-based direct assessment as described in the previous section, where they found no significant difference in ratings between the output of their MT system and a professional human translation. Similarly, BIBREF5 report that the best-performing English to Czech system submitted to WMT 2018 BIBREF4 significantly outperforms the human reference translation. However, the authors caution against interpreting their results as evidence of human–machine parity, highlighting potential limitations of the evaluation. In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations. Background ::: Assessing Human–Machine Parity ::: Choice of Raters The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations. Background ::: Assessing Human–Machine Parity ::: Linguistic Context MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e. 
g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents. Background ::: Assessing Human–Machine Parity ::: Reference Translations The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that it is necessary to ensure that human translations used to assess parity claims need to be carefully vetted for their quality. We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6). Background ::: Translations We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article: [labelwidth=1cm, leftmargin=1.25cm] The professional human translations in the dataset of BIBREF3.[1] Professional human translations that we ordered from a different translation vendor, which included a post-hoc native English check. We produced these only for the documents that were originally Chinese, as discussed in more detail in Section SECREF35. The machine translations produced by BIBREF3's BIBREF3 best system (Combo-6),[1] for which the authors found parity with H$_A$. The machine translations produced by Google's production system (Google Translate) in October 2017, as contained in BIBREF3's BIBREF3 dataset.[1] Statistical significance is denoted by * ($p\le .05$), ** ($p\le .01$), and *** ($p\le .001$) throughout this article, unless otherwise stated. Choice of Raters Both professional and amateur evaluators can be involved in human evaluation of MT quality. However, from published work in the field BIBREF19, it is fair to say that there is a tendency to “rely on students and amateur evaluators, sometimes with an undefined (or self-rated) proficiency in the languages involved, an unknown expertise with the text type" BIBREF8. 
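Relative ranking data of the kind described in the Background above is typically decomposed into pairwise comparisons before scores are aggregated. The following sketch shows one such decomposition under the assumption that each rater assigns ranks with ties allowed; the system names and the helper itself are illustrative, not part of the evaluation tooling described in the text.

```python
from itertools import combinations

def to_pairwise(ranking):
    """ranking: dict mapping system name -> rank (1 = best; equal ranks = tie).
    Returns a list of (winner, loser, drawn) tuples, one per system pair."""
    pairs = []
    for (sys_a, rank_a), (sys_b, rank_b) in combinations(ranking.items(), 2):
        if rank_a == rank_b:
            pairs.append((sys_a, sys_b, True))
        elif rank_a < rank_b:
            pairs.append((sys_a, sys_b, False))
        else:
            pairs.append((sys_b, sys_a, False))
    return pairs

# One rater ranked H_A best and judged the two MT systems equally good.
print(to_pairwise({"H_A": 1, "MT_1": 2, "MT_2": 2}))
```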
Previous work on evaluation of MT output by professional translators against crowd workers by BIBREF20 showed that for all language pairs (involving 11 languages) evaluated, crowd workers tend to be more accepting of the MT output by giving higher fluency and adequacy scores and performing very little post-editing. The authors argued that non-expert translators lack knowledge of translation and so might not notice subtle differences that make one translation more suitable than another, and therefore, when confronted with a translation that is hard to post-edit, tend to accept the MT rather than try to improve it. Choice of Raters ::: Evaluation Protocol We test for difference in ratings of MT outputs and human translations between experts and non-experts. We consider professional translators as experts, and both crowd workers and MT researchers as non-experts. We conduct a relative ranking experiment using one professional human (H$_A$) and two machine translations (MT$_1$ and MT$_2$), considering the native Chinese part of the WMT 2017 Chinese–English test set (see Section SECREF35 for details). The 299 sentences used in the experiments stem from 41 documents, randomly selected from all the documents in the test set originally written in Chinese, and are shown in their original order. Raters are shown one sentence at a time, and see the original Chinese source alongside the three translations. The previous and next source sentences are also shown, in order to provide the annotator with local inter-sentential context. Five raters—two experts and three non-experts—participated in the assessment. The experts were professional Chinese to English translators: one native in Chinese with a fluent level of English, the other native in English with a fluent level of Chinese. The non-experts were NLP researchers native in Chinese, working in an English-speaking country. The ratings are elicited with Appraise BIBREF21. We derive an overall score for each translation (H$_A$, MT$_1$, and MT$_2$) based on the rankings. We use the TrueSkill method adapted to MT evaluation BIBREF22 following its usage at WMT15, i. e., we run 1,000 iterations of the rankings recorded with Appraise followed by clustering (significance level $\alpha =0.05$). Choice of Raters ::: Results Table TABREF17 shows the TrueSkill scores for each translation resulting from the evaluations by expert and non-expert translators. We find that translation expertise affects the judgement of MT$_1$ and H$_A$, where the rating gap is wider for the expert raters. This indicates that non-experts disregard translation nuances in the evaluation, which leads to a more tolerant judgement of MT systems and a lower inter-annotator agreement ($\kappa =0.13$ for non-experts versus $\kappa =0.254$ for experts). It is worth noticing that, regardless of their expertise, the performance of human raters may vary over time. For example, performance may improve or decrease due to learning effects or fatigue, respectively BIBREF23. It is likely that such longitudinal effects are present in our data. They should be accounted for in future work, e. g., by using trial number as an additional predictor BIBREF24. Linguistic Context Another concern is the unit of evaluation. Historically, machine translation has primarily operated on the level of sentences, and so has machine translation evaluation. 
However, it has been remarked that human raters do not necessarily understand the intended meaning of a sentence shown out-of-context BIBREF25, which limits their ability to spot some mistranslations. Also, a sentence-level evaluation will be blind to errors related to textual cohesion and coherence. While sentence-level evaluation may be good enough when evaluating MT systems of relatively low quality, we hypothesise that with additional context, raters will be able to make more nuanced quality assessments, and will also reward translations that show more textual cohesion and coherence. We believe that this aspect should be considered in evaluation, especially when making claims about human–machine parity, since human translators can and do take inter-sentential context into account BIBREF15, BIBREF16. Linguistic Context ::: Evaluation Protocol We test if the availability of document-level context affects human–machine parity claims in terms of adequacy and fluency. In a pairwise ranking experiment, we show raters (i) isolated sentences and (ii) entire documents, asking them to choose the better (with ties allowed) from two translation outputs: one produced by a professional translator, the other by a machine translation system. We do not show reference translations as one of the two options is itself a human translation. We use source sentences and documents from the WMT 2017 Chinese–English test set (see Section SECREF8): documents are full news articles, and sentences are randomly drawn from these news articles, regardless of their position. We only consider articles from the test set that are native Chinese (see Section SECREF35). In order to compare our results to those of BIBREF3, we use both their professional human (H$_A$) and machine translations (MT$_1$). Each rater evaluates both sentences and documents, but never the same text in both conditions so as to avoid repetition priming BIBREF26. The order of experimental items as well as the placement of choices (H$_A$, MT$_1$; left, right) are randomised. We use spam items for quality control BIBREF27: In a small fraction of items, we render one of the two options nonsensical by randomly shuffling the order of all translated words, except for 10 % at the beginning and end. If a rater marks a spam item as better than or equal to an actual translation, this is a strong indication that they did not read both options carefully. We recruit professional translators (see Section SECREF3) from proz.com, a well-known online market place for professional freelance translation, considering Chinese to English translators and native English revisers for the adequacy and fluency conditions, respectively. In each condition, four raters evaluate 50 documents (plus 5 spam items) and 104 sentences (plus 16 spam items). We use two non-overlapping sets of documents and two non-overlapping sets of sentences, and each is evaluated by two raters. Linguistic Context ::: Results Results are shown in Table TABREF21. We note that sentence ratings from two raters are excluded from our analysis because of unintentional textual overlap with documents, meaning we cannot fully rule out that sentence-level decisions were informed by access to the full documents they originated from. Moreover, we exclude document ratings from one rater in the fluency condition because of poor performance on spam items, and recruit an additional rater to re-rate these documents. 
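Returning to the Choice of Raters experiment above, whose overall scores are derived with TrueSkill, the following sketch aggregates pairwise judgements with the third-party trueskill package. The WMT adaptation additionally runs 1,000 iterations over the recorded rankings and clusters systems whose rating ranges overlap, which is omitted here; the judgement list and draw probability are illustrative assumptions.

```python
# pip install trueskill
import trueskill

def trueskill_means(judgements, draw_probability=0.3):
    """judgements: list of (winner, loser, drawn) tuples over system names.
    Returns a dict mapping each system to the mean (mu) of its TrueSkill rating."""
    env = trueskill.TrueSkill(draw_probability=draw_probability)
    ratings = {}
    for winner, loser, drawn in judgements:
        r_w = ratings.setdefault(winner, env.create_rating())
        r_l = ratings.setdefault(loser, env.create_rating())
        ratings[winner], ratings[loser] = env.rate_1vs1(r_w, r_l, drawn=drawn)
    return {system: rating.mu for system, rating in ratings.items()}

# Toy judgement list; real input would be the pairwise comparisons recorded with Appraise.
judgements = [
    ("H_A", "MT_1", False), ("H_A", "MT_2", False),
    ("MT_1", "MT_2", False), ("MT_1", "MT_2", True),
]
print(trueskill_means(judgements))
```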
We analyse our data using two-tailed Sign Tests, the null hypothesis being that raters do not prefer MT$_1$ over H$_A$ or vice versa, implying human–machine parity. Following WMT evaluation campaigns that used pairwise ranking BIBREF28, the number of successes $x$ is the number of ratings in favour of H$_A$, and the number of trials $n$ is the number of all ratings except for ties. Adding half of the ties to $x$ and the total number of ties to $n$ BIBREF29 does not impact the significance levels reported in this section. Adequacy raters show no statistically significant preference for MT$_1$ or H$_A$ when evaluating isolated sentences ($x=86, n=189, p=.244$). This is in accordance with BIBREF3, who found the same in a source-based direct assessment experiment with crowd workers. With the availability of document-level context, however, preference for MT$_1$ drops from 49.5 to 37.0 % and is significantly lower than preference for human translation ($x=104, n=178, p<.05$). This evidences that document-level context cues allow raters to get a signal on adequacy. Fluency raters prefer H$_A$ over MT$_1$ both on the level of sentences ($x=106, n=172, p<.01$) and documents ($x=99, n=143, p<.001$). This is somewhat surprising given that increased fluency was found to be one of the main strengths of NMT BIBREF30, as we further discuss in Section SECREF24. The availability of document-level context decreases fluency raters' preference for MT$_1$, which falls from 31.7 to 22.0 %, without increasing their preference for H$_A$ (Table TABREF21). Linguistic Context ::: Discussion Our findings emphasise the importance of linguistic context in human evaluation of MT. In terms of adequacy, raters assessing documents as a whole show a significant preference for human translation, but when assessing single sentences in random order, they show no significant preference for human translation. Document-level evaluation exposes errors to raters which are hard or impossible to spot in a sentence-level evaluation, such as coherent translation of named entities. The example in Table TABREF23 shows the first two sentences of a Chinese news article as translated by a professional human translator (H$_A$) and BIBREF3's BIBREF3 NMT system (MT$_1$). When looking at both sentences (document-level evaluation), it can be seen that MT$_1$ uses two different translations to refer to a cultural festival, “2016盂兰文化节", whereas the human translation uses only one. When assessing the second sentence out of context (sentence-level evaluation), it is hard to penalise MT$_1$ for producing 2016 Python Cultural Festival, particularly for fluency raters without access to the corresponding source text. For further examples, see Section SECREF24 and Table TABREF34. Reference Translations Yet another relevant element in human evaluation is the reference translation used. This is the focus of this section, where we cover two aspects of reference translations that can have an impact on evaluation: quality and directionality. Reference Translations ::: Quality Because the translations are created by humans, a number of factors could lead to compromises in quality: If the translator is a non-native speaker of the source language, they may make mistakes in interpreting the original message. This is particularly true if the translator does not normally work in the domain of the text, e. g., when a translator who normally works on translating electronic product manuals is asked to translate news. 
If the translator is a non-native speaker of the target language, they might not be able to generate completely fluent text. This similarly applies to domain-specific terminology. Unlike computers, human translators have limits in time, attention, and motivation, and will generally do a better job when they have sufficient time to check their work, or are particularly motivated to do a good job, such as when doing a good job is necessary to maintain their reputation as a translator. In recent years, a large number of human translation jobs are performed by post-editing MT output, which can result in MT artefacts remaining even after manual post-editing BIBREF31, BIBREF32, BIBREF33. In this section, we examine the effect of the quality of the underlying translations on the conclusions that can be drawn with regard to human–machine parity. We first analyse (i) how the source of the human translation affects claims of human–machine parity, and (ii) whether significant differences exist between two varieties of human translation. We follow the same protocol as in Section SECREF19, with four professional translators per condition evaluating the translations for adequacy and fluency on both the sentence and document level. The results are shown in Table TABREF30. From this, we can see that the human translation H$_B$, which was aggressively edited to ensure target fluency, resulted in lower adequacy (Table TABREF30). With more fluent and less accurate translations, raters do not prefer human over machine translation in terms of adequacy (Table TABREF30), but have a stronger preference for human translation in terms of fluency (compare Tables TABREF30 and TABREF21). In a direct comparison of the two human translations (Table TABREF30), we also find that H$_A$ is considered significantly more adequate than H$_B$, while there is no significant difference in fluency. To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we categorise 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., poor register choice or coreference errors) and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors, a bilingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct given that context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32. From these results, we can glean a few interesting insights. First, we find significantly larger numbers of Incorrect Word and Named Entity errors in MT$_1$, indicating that the MT system is less effective than the human translators at choosing correct translations for individual words.
An example of this can be found in Table TABREF33, where we see that the MT system refers to a singular “point of view" and translates 线路 (channel, route, path) into the semantically similar but inadequate lines. Interestingly, MT$_1$ has significantly more Word Order errors, one example of this being shown in Table TABREF33, with the relative placements of at the end of last year (去年年底) and stop production (停产). This result is particularly notable given previous reports that NMT systems have led to great increases in reordering accuracy compared to previous statistical MT systems BIBREF35, BIBREF36, demonstrating that the problem of generating correctly ordered output is far from solved even in very strong NMT systems. Moreover, H$_B$ had significantly more Missing Word (Semantics) errors than both H$_A$ ($p<.001$) and MT$_1$ ($p<.001$), an indication that the proofreading process resulted in drops of content in favour of fluency. An example of this is shown in Table TABREF33, where H$_B$ dropped the information that the meetings between Suning and Apple were recently (近期) held. Finally, while there was not a significant difference, likely due to the small number of examples overall, it is noticeable that MT$_1$ had a higher percentage of Collocation and Context errors, which indicate that the system has more trouble translating words that are dependent on longer-range context. Similarly, some Named Entity errors are also attributable to translation inconsistencies due to lack of longer-range context. Table TABREF34 shows an example where we see that the MT system was unable to maintain a consistently gendered or correct pronoun for the female Olympic shooter Zhang Binbin (张彬彬). Apart from showing qualitative differences between the three translations, the analysis also supports the finding of the pairwise ranking study: H$_A$ is both preferred over MT$_1$ in the pairwise ranking study, and exhibits fewer translation errors in our error classification. H$_B$ has a substantially higher number of missing words than the other two translations, which agrees with the lower perceived adequacy in the pairwise ranking. However, the analysis not only supports the findings of the pairwise ranking study, but also adds nuance to it. Even though H$_B$ has the highest number of deletions, and does worse than the other two translations in a pairwise adequacy ranking, it is similar to H$_A$, and better than MT$_1$, in terms of most other error categories. Reference Translations ::: Directionality Translation quality is also affected by the nature of the source text. In this respect, we note that from the 2,001 sentences in the WMT 2017 Chinese–English test set, half were originally written in Chinese; the remaining half were originally written in English and then manually translated into Chinese. This Chinese reference file (half original, half translated) was then manually translated into English by BIBREF3 to make up the reference for assessing human–machine parity. Therefore, 50 % of the reference comprises direct English translations from the original Chinese, while 50 % are English translations from the human-translated file from English into Chinese, i. e., backtranslations of the original English. According to BIBREF37, translated texts differ from their originals in that they are simpler, more explicit, and more normalised. For example, the synonyms used in an original text may be replaced by a single translation. 
These differences are referred to as translationese, and have been shown to affect translation quality in the field of machine translation BIBREF38, BIBREF39, BIBREF32, BIBREF33. We test whether translationese has an effect on assessing parity between translations produced by humans and machines, using relative rankings of translations in the WMT 2017 Chinese–English test set by five raters (see Section SECREF3). Our hypothesis is that the difference between human and machine translation quality is smaller when source texts are translated English (translationese) rather than original Chinese, because a translationese source text should be simpler and thus easier to translate for an MT system. We confirm Laviosa's observation that “translationese” Chinese (that started as English) exhibits less lexical variety than “natively” Chinese text and demonstrate that translationese source texts are generally easier for MT systems to score well on. Table TABREF36 shows the TrueSkill scores for translations (H$_A$, MT$_1$, and MT$_2$) of the entire test set (Both) versus only the sentences originally written in Chinese or English therein. The human translation H$_A$ outperforms the machine translation MT$_1$ significantly when the original language is Chinese, while the difference between the two is not significant when the original language is English (i. e., translationese input). We also compare the two subsets of the test set, original and translationese, using type-token ratio (TTR). Our hypothesis is that the TTR will be smaller for the translationese subset, thus its simpler nature getting reflected in a less varied use of language. While both subsets contain a similar number of sentences (1,001 and 1,000), the Chinese subset contains more tokens (26,468) than its English counterpart (22,279). We thus take a subset of the Chinese (840 sentences) containing a similar amount of words to the English data (22,271 words). We then calculate the TTR for these two subsets using bootstrap resampling. The TTR for Chinese ($M=0.1927$, $SD=0.0026$, 95 % confidence interval $[0.1925,0.1928]$) is 13 % higher than that for English ($M=0.1710$, $SD=0.0025$, 95 % confidence interval $[0.1708,0.1711]$). Our results show that using translationese (Chinese translated from English) rather than original source texts results in higher scores for MT systems in human evaluation, and that the lexical variety of translationese is smaller than that of original text. Recommendations Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Recommendations ::: (R1) Choose professional translators as raters. In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs. Recommendations ::: (R2) Evaluate documents, not sentences. 
When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4). Recommendations ::: (R3) Evaluate fluency in addition to adequacy. Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality. Recommendations ::: (R4) Do not heavily edit reference translations for fluency. In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30). Recommendations ::: (R5) Use original source texts. Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT. Our work empirically strengthens and extends the recommendations on human MT evaluation in previous work BIBREF6, BIBREF7, some of which have meanwhile been adopted by the large-scale evaluation campaign at WMT 2019 BIBREF40: the new evaluation protocol uses original source texts only (R5) and gives raters access to document-level context (R2). The findings of WMT 2019 provide further evidence in support of our recommendations. In particular, human English to Czech translation was found to be significantly better than MT BIBREF40; the comparison includes the same MT system (CUNI-Transformer-T2T-2018) which outperformed human translation according to the previous protocol BIBREF5. Results also show a larger difference between human translation and MT in document-level evaluation. We note that in contrast to WMT, the judgements in our experiments are provided by a small number of human raters: five in the experiments of Sections SECREF3 and SECREF35, four per condition (adequacy and fluency) in Section SECREF4, and one in the fine-grained error analysis presented in Section SECREF24. Moreover, the results presented in this article are based on one text domain (news) and one language direction (Chinese to English), and while a large-scale evaluation with another language pair supports our findings (see above), further experiments with more languages, domains, and raters will be required to increase their external validity. Conclusion We compared professional human Chinese to English translations to the output of a strong MT system. 
In a human evaluation following best practices, BIBREF3 found no significant difference between the two, concluding that their NMT system had reached parity with professional human translation. Our blind qualitative analysis, however, showed that the machine translation output contained significantly more incorrect words, omissions, mistranslated names, and word order errors. Our experiments show that recent findings of human–machine parity in language translation are owed to weaknesses in the design of human evaluation campaigns. We empirically tested alternatives to what is currently considered best practice in the field, and found that the choice of raters, the availability of linguistic context, and the creation of reference translations have a strong impact on perceived translation quality. As for the choice of raters, professional translators showed a significant preference for human translation, while non-expert raters did not. In terms of linguistic context, raters found human translation significantly more accurate than machine translation when evaluating full documents, but not when evaluating single sentences out of context. They also found human translation significantly more fluent than machine translation, both when evaluating full documents and single sentences. Moreover, we showed that aggressive editing of human reference translations for target language fluency can decrease adequacy to the point that they become indistinguishable from machine translation, and that raters found human translations significantly better than machine translations of original source texts, but not of source texts that were translations themselves. Our results strongly suggest that in order to reveal errors in the output of strong MT systems, the design of MT quality assessments with human raters should be revisited. To that end, we have offered a set of recommendations, supported by empirical data, which we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Our recommendations have the aim of increasing the validity of MT evaluation, but we are aware of the high cost of having MT evaluation done by professional translators, and on the level of full documents. We welcome future research into alternative evaluation protocols that can demonstrate their validity at a lower cost.
Choose professional translators as raters, Evaluate documents, not sentences, Evaluate fluency in addition to adequacy, Do not heavily edit reference translations for fluency, Use original source texts
cc5d8e12f6aecf6a5f305e2f8b3a0c67f49801a9
cc5d8e12f6aecf6a5f305e2f8b3a0c67f49801a9_0
Q: What percentage fewer errors did professional translations make? Text: Introduction Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but there are many design decisions that potentially affect the validity of such a human evaluation. This paper is a response to two recent human evaluation studies in which some neural machine translation systems reportedly performed at (or above) the level of human translators for news translation from Chinese to English BIBREF3 and English to Czech BIBREF4, BIBREF5. Both evaluations were based on current best practices in the field: they used a source-based direct assessment with non-expert annotators, using data sets and the evaluation protocol of the Conference on Machine Translation (WMT). While the results are intriguing, especially because they are based on best practices in MT evaluation, BIBREF5 warn against taking their results as evidence for human–machine parity, and caution that for well-resourced language pairs, an update of WMT evaluation style will be needed to keep up with the progress in machine translation. We concur that these findings have demonstrated the need to critically re-evaluate the design of human MT evaluation. Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis. Background We first review current methods to assess the quality of machine translation system outputs, and highlight potential issues in using these methods to compare such outputs to translations produced by professional human translators. Background ::: Human Evaluation of Machine Translation The evaluation of MT quality has been the subject of controversial discussions in research and the language services industry for decades due to its high economic importance. While automatic evaluation methods are particularly important in system development, there is consensus that a reliable evaluation should—despite high costs—be carried out by humans. Various methods have been proposed for the human evaluation of MT quality BIBREF8. What they have in common is that the MT output to be rated is paired with a translation hint: the source text or a reference translation. The MT output is then either adapted or scored with reference to the translation hint by human post-editors or raters, respectively. As part of the large-scale evaluation campaign at WMT, two primary evaluation methods have been used in recent years: relative ranking and direct assessment BIBREF9. 
In the case of relative ranking, raters are presented with outputs from two or more systems, which they are asked to evaluate relative to each other (e.g., to determine system A is better than system B). Ties (e.g., system A is as good or as bad as system B) are typically allowed. Compared to absolute scores on Likert scales, data obtained through relative ranking show better inter- and intra-annotator agreement BIBREF10. However, they do not allow conclusions to be drawn about the order of magnitude of the differences, so that it is not possible to determine how much better system A was than system B. This is one of the reasons why direct assessment has prevailed as an evaluation method more recently. In contrast to relative ranking, the raters are presented with one MT output at a time, to which they assign a score between 0 and 100. To increase homogeneity, each rater's ratings are standardised BIBREF11. Reference translations serve as the basis in the context of WMT, and evaluations are carried out by monolingual raters. To avoid reference bias, the evaluation can be based on source texts instead, which presupposes bilingual raters, but leads to more reliable results overall BIBREF12. Background ::: Assessing Human–Machine Parity BIBREF3 base their claim of achieving human–machine parity on a source-based direct assessment as described in the previous section, where they found no significant difference in ratings between the output of their MT system and a professional human translation. Similarly, BIBREF5 report that the best-performing English to Czech system submitted to WMT 2018 BIBREF4 significantly outperforms the human reference translation. However, the authors caution against interpreting their results as evidence of human–machine parity, highlighting potential limitations of the evaluation. In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations. Background ::: Assessing Human–Machine Parity ::: Choice of Raters The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations. Background ::: Assessing Human–Machine Parity ::: Linguistic Context MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e. 
g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents. Background ::: Assessing Human–Machine Parity ::: Reference Translations The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would mean that human translations used to assess parity claims need to be carefully vetted for their quality. We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6). Background ::: Translations We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article: H$_A$, the professional human translations in the dataset of BIBREF3; H$_B$, professional human translations that we ordered from a different translation vendor, which included a post-hoc native English check, and which we produced only for the documents that were originally Chinese, as discussed in more detail in Section SECREF35; MT$_1$, the machine translations produced by BIBREF3's best system (Combo-6), for which the authors found parity with H$_A$; and MT$_2$, the machine translations produced by Google's production system (Google Translate) in October 2017, as contained in BIBREF3's dataset. Statistical significance is denoted by * ($p\le .05$), ** ($p\le .01$), and *** ($p\le .001$) throughout this article, unless otherwise stated. Choice of Raters Both professional and amateur evaluators can be involved in human evaluation of MT quality. However, from published work in the field BIBREF19, it is fair to say that there is a tendency to “rely on students and amateur evaluators, sometimes with an undefined (or self-rated) proficiency in the languages involved, an unknown expertise with the text type" BIBREF8.
Previous work on evaluation of MT output by professional translators against crowd workers by BIBREF20 showed that for all language pairs (involving 11 languages) evaluated, crowd workers tend to be more accepting of the MT output by giving higher fluency and adequacy scores and performing very little post-editing. The authors argued that non-expert translators lack knowledge of translation and so might not notice subtle differences that make one translation more suitable than another, and therefore, when confronted with a translation that is hard to post-edit, tend to accept the MT rather than try to improve it. Choice of Raters ::: Evaluation Protocol We test for difference in ratings of MT outputs and human translations between experts and non-experts. We consider professional translators as experts, and both crowd workers and MT researchers as non-experts. We conduct a relative ranking experiment using one professional human (H$_A$) and two machine translations (MT$_1$ and MT$_2$), considering the native Chinese part of the WMT 2017 Chinese–English test set (see Section SECREF35 for details). The 299 sentences used in the experiments stem from 41 documents, randomly selected from all the documents in the test set originally written in Chinese, and are shown in their original order. Raters are shown one sentence at a time, and see the original Chinese source alongside the three translations. The previous and next source sentences are also shown, in order to provide the annotator with local inter-sentential context. Five raters—two experts and three non-experts—participated in the assessment. The experts were professional Chinese to English translators: one native in Chinese with a fluent level of English, the other native in English with a fluent level of Chinese. The non-experts were NLP researchers native in Chinese, working in an English-speaking country. The ratings are elicited with Appraise BIBREF21. We derive an overall score for each translation (H$_A$, MT$_1$, and MT$_2$) based on the rankings. We use the TrueSkill method adapted to MT evaluation BIBREF22 following its usage at WMT15, i. e., we run 1,000 iterations of the rankings recorded with Appraise followed by clustering (significance level $\alpha =0.05$). Choice of Raters ::: Results Table TABREF17 shows the TrueSkill scores for each translation resulting from the evaluations by expert and non-expert translators. We find that translation expertise affects the judgement of MT$_1$ and H$_A$, where the rating gap is wider for the expert raters. This indicates that non-experts disregard translation nuances in the evaluation, which leads to a more tolerant judgement of MT systems and a lower inter-annotator agreement ($\kappa =0.13$ for non-experts versus $\kappa =0.254$ for experts). It is worth noticing that, regardless of their expertise, the performance of human raters may vary over time. For example, performance may improve or decrease due to learning effects or fatigue, respectively BIBREF23. It is likely that such longitudinal effects are present in our data. They should be accounted for in future work, e. g., by using trial number as an additional predictor BIBREF24. Linguistic Context Another concern is the unit of evaluation. Historically, machine translation has primarily operated on the level of sentences, and so has machine translation evaluation. 
However, it has been remarked that human raters do not necessarily understand the intended meaning of a sentence shown out-of-context BIBREF25, which limits their ability to spot some mistranslations. Also, a sentence-level evaluation will be blind to errors related to textual cohesion and coherence. While sentence-level evaluation may be good enough when evaluating MT systems of relatively low quality, we hypothesise that with additional context, raters will be able to make more nuanced quality assessments, and will also reward translations that show more textual cohesion and coherence. We believe that this aspect should be considered in evaluation, especially when making claims about human–machine parity, since human translators can and do take inter-sentential context into account BIBREF15, BIBREF16. Linguistic Context ::: Evaluation Protocol We test if the availability of document-level context affects human–machine parity claims in terms of adequacy and fluency. In a pairwise ranking experiment, we show raters (i) isolated sentences and (ii) entire documents, asking them to choose the better (with ties allowed) from two translation outputs: one produced by a professional translator, the other by a machine translation system. We do not show reference translations as one of the two options is itself a human translation. We use source sentences and documents from the WMT 2017 Chinese–English test set (see Section SECREF8): documents are full news articles, and sentences are randomly drawn from these news articles, regardless of their position. We only consider articles from the test set that are native Chinese (see Section SECREF35). In order to compare our results to those of BIBREF3, we use both their professional human (H$_A$) and machine translations (MT$_1$). Each rater evaluates both sentences and documents, but never the same text in both conditions so as to avoid repetition priming BIBREF26. The order of experimental items as well as the placement of choices (H$_A$, MT$_1$; left, right) are randomised. We use spam items for quality control BIBREF27: In a small fraction of items, we render one of the two options nonsensical by randomly shuffling the order of all translated words, except for 10 % at the beginning and end. If a rater marks a spam item as better than or equal to an actual translation, this is a strong indication that they did not read both options carefully. We recruit professional translators (see Section SECREF3) from proz.com, a well-known online market place for professional freelance translation, considering Chinese to English translators and native English revisers for the adequacy and fluency conditions, respectively. In each condition, four raters evaluate 50 documents (plus 5 spam items) and 104 sentences (plus 16 spam items). We use two non-overlapping sets of documents and two non-overlapping sets of sentences, and each is evaluated by two raters. Linguistic Context ::: Results Results are shown in Table TABREF21. We note that sentence ratings from two raters are excluded from our analysis because of unintentional textual overlap with documents, meaning we cannot fully rule out that sentence-level decisions were informed by access to the full documents they originated from. Moreover, we exclude document ratings from one rater in the fluency condition because of poor performance on spam items, and recruit an additional rater to re-rate these documents. 
We analyse our data using two-tailed Sign Tests, the null hypothesis being that raters do not prefer MT$_1$ over H$_A$ or vice versa, implying human–machine parity. Following WMT evaluation campaigns that used pairwise ranking BIBREF28, the number of successes $x$ is the number of ratings in favour of H$_A$, and the number of trials $n$ is the number of all ratings except for ties. Adding half of the ties to $x$ and the total number of ties to $n$ BIBREF29 does not impact the significance levels reported in this section. Adequacy raters show no statistically significant preference for MT$_1$ or H$_A$ when evaluating isolated sentences ($x=86, n=189, p=.244$). This is in accordance with BIBREF3, who found the same in a source-based direct assessment experiment with crowd workers. With the availability of document-level context, however, preference for MT$_1$ drops from 49.5 to 37.0 % and is significantly lower than preference for human translation ($x=104, n=178, p<.05$). This evidences that document-level context cues allow raters to get a signal on adequacy. Fluency raters prefer H$_A$ over MT$_1$ both on the level of sentences ($x=106, n=172, p<.01$) and documents ($x=99, n=143, p<.001$). This is somewhat surprising given that increased fluency was found to be one of the main strengths of NMT BIBREF30, as we further discuss in Section SECREF24. The availability of document-level context decreases fluency raters' preference for MT$_1$, which falls from 31.7 to 22.0 %, without increasing their preference for H$_A$ (Table TABREF21). Linguistic Context ::: Discussion Our findings emphasise the importance of linguistic context in human evaluation of MT. In terms of adequacy, raters assessing documents as a whole show a significant preference for human translation, but when assessing single sentences in random order, they show no significant preference for human translation. Document-level evaluation exposes errors to raters which are hard or impossible to spot in a sentence-level evaluation, such as coherent translation of named entities. The example in Table TABREF23 shows the first two sentences of a Chinese news article as translated by a professional human translator (H$_A$) and BIBREF3's BIBREF3 NMT system (MT$_1$). When looking at both sentences (document-level evaluation), it can be seen that MT$_1$ uses two different translations to refer to a cultural festival, “2016盂兰文化节", whereas the human translation uses only one. When assessing the second sentence out of context (sentence-level evaluation), it is hard to penalise MT$_1$ for producing 2016 Python Cultural Festival, particularly for fluency raters without access to the corresponding source text. For further examples, see Section SECREF24 and Table TABREF34. Reference Translations Yet another relevant element in human evaluation is the reference translation used. This is the focus of this section, where we cover two aspects of reference translations that can have an impact on evaluation: quality and directionality. Reference Translations ::: Quality Because the translations are created by humans, a number of factors could lead to compromises in quality: If the translator is a non-native speaker of the source language, they may make mistakes in interpreting the original message. This is particularly true if the translator does not normally work in the domain of the text, e. g., when a translator who normally works on translating electronic product manuals is asked to translate news. 
If the translator is a non-native speaker of the target language, they might not be able to generate completely fluent text. This similarly applies to domain-specific terminology. Unlike computers, human translators have limits in time, attention, and motivation, and will generally do a better job when they have sufficient time to check their work, or are particularly motivated to do a good job, such as when doing a good job is necessary to maintain their reputation as a translator. In recent years, a large number of human translation jobs are performed by post-editing MT output, which can result in MT artefacts remaining even after manual post-editing BIBREF31, BIBREF32, BIBREF33. In this section, we examine the effect of the quality of underlying translations on the conclusions that can be drawn with regards to human–machine parity. We first do an analysis on (i) how the source of the human translation affects claims of human–machine parity, and (ii) whether significant differences exist between two varieties of human translation. We follow the same protocol as in Section SECREF19, having 4 professional translators per condition, evaluate the translations for adequacy and fluency on both the sentence and document level. The results are shown in Table TABREF30. From this, we can see that the human translation H$_B$, which was aggressively edited to ensure target fluency, resulted in lower adequacy (Table TABREF30). With more fluent and less accurate translations, raters do not prefer human over machine translation in terms of adequacy (Table TABREF30), but have a stronger preference for human translation in terms of fluency (compare Tables TABREF30 and TABREF21). In a direct comparison of the two human translations (Table TABREF30), we also find that H$_A$ is considered significantly more adequate than H$_B$, while there is no significant difference in fluency. To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32. From these results, we can glean a few interesting insights. First, we find significantly larger numbers of errors of the categories of Incorrect Word and Named Entity in MT$_1$, indicating that the MT system is less effective at choosing correct translations for individual words than the human translators. 
An example of this can be found in Table TABREF33, where we see that the MT system refers to a singular “point of view" and translates 线路 (channel, route, path) into the semantically similar but inadequate lines. Interestingly, MT$_1$ has significantly more Word Order errors, one example of this being shown in Table TABREF33, with the relative placements of at the end of last year (去年年底) and stop production (停产). This result is particularly notable given previous reports that NMT systems have led to great increases in reordering accuracy compared to previous statistical MT systems BIBREF35, BIBREF36, demonstrating that the problem of generating correctly ordered output is far from solved even in very strong NMT systems. Moreover, H$_B$ had significantly more Missing Word (Semantics) errors than both H$_A$ ($p<.001$) and MT$_1$ ($p<.001$), an indication that the proofreading process resulted in drops of content in favour of fluency. An example of this is shown in Table TABREF33, where H$_B$ dropped the information that the meetings between Suning and Apple were recently (近期) held. Finally, while there was not a significant difference, likely due to the small number of examples overall, it is noticeable that MT$_1$ had a higher percentage of Collocation and Context errors, which indicate that the system has more trouble translating words that are dependent on longer-range context. Similarly, some Named Entity errors are also attributable to translation inconsistencies due to lack of longer-range context. Table TABREF34 shows an example where we see that the MT system was unable to maintain a consistently gendered or correct pronoun for the female Olympic shooter Zhang Binbin (张彬彬). Apart from showing qualitative differences between the three translations, the analysis also supports the finding of the pairwise ranking study: H$_A$ is both preferred over MT$_1$ in the pairwise ranking study, and exhibits fewer translation errors in our error classification. H$_B$ has a substantially higher number of missing words than the other two translations, which agrees with the lower perceived adequacy in the pairwise ranking. However, the analysis not only supports the findings of the pairwise ranking study, but also adds nuance to it. Even though H$_B$ has the highest number of deletions, and does worse than the other two translations in a pairwise adequacy ranking, it is similar to H$_A$, and better than MT$_1$, in terms of most other error categories. Reference Translations ::: Directionality Translation quality is also affected by the nature of the source text. In this respect, we note that from the 2,001 sentences in the WMT 2017 Chinese–English test set, half were originally written in Chinese; the remaining half were originally written in English and then manually translated into Chinese. This Chinese reference file (half original, half translated) was then manually translated into English by BIBREF3 to make up the reference for assessing human–machine parity. Therefore, 50 % of the reference comprises direct English translations from the original Chinese, while 50 % are English translations from the human-translated file from English into Chinese, i. e., backtranslations of the original English. According to BIBREF37, translated texts differ from their originals in that they are simpler, more explicit, and more normalised. For example, the synonyms used in an original text may be replaced by a single translation. 
These differences are referred to as translationese, and have been shown to affect translation quality in the field of machine translation BIBREF38, BIBREF39, BIBREF32, BIBREF33. We test whether translationese has an effect on assessing parity between translations produced by humans and machines, using relative rankings of translations in the WMT 2017 Chinese–English test set by five raters (see Section SECREF3). Our hypothesis is that the difference between human and machine translation quality is smaller when source texts are translated English (translationese) rather than original Chinese, because a translationese source text should be simpler and thus easier to translate for an MT system. We confirm Laviosa's observation that “translationese” Chinese (that started as English) exhibits less lexical variety than “natively” Chinese text and demonstrate that translationese source texts are generally easier for MT systems to score well on. Table TABREF36 shows the TrueSkill scores for translations (H$_A$, MT$_1$, and MT$_2$) of the entire test set (Both) versus only the sentences originally written in Chinese or English therein. The human translation H$_A$ outperforms the machine translation MT$_1$ significantly when the original language is Chinese, while the difference between the two is not significant when the original language is English (i. e., translationese input). We also compare the two subsets of the test set, original and translationese, using type-token ratio (TTR). Our hypothesis is that the TTR will be smaller for the translationese subset, thus its simpler nature getting reflected in a less varied use of language. While both subsets contain a similar number of sentences (1,001 and 1,000), the Chinese subset contains more tokens (26,468) than its English counterpart (22,279). We thus take a subset of the Chinese (840 sentences) containing a similar amount of words to the English data (22,271 words). We then calculate the TTR for these two subsets using bootstrap resampling. The TTR for Chinese ($M=0.1927$, $SD=0.0026$, 95 % confidence interval $[0.1925,0.1928]$) is 13 % higher than that for English ($M=0.1710$, $SD=0.0025$, 95 % confidence interval $[0.1708,0.1711]$). Our results show that using translationese (Chinese translated from English) rather than original source texts results in higher scores for MT systems in human evaluation, and that the lexical variety of translationese is smaller than that of original text. Recommendations Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Recommendations ::: (R1) Choose professional translators as raters. In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs. Recommendations ::: (R2) Evaluate documents, not sentences. 
When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4). Recommendations ::: (R3) Evaluate fluency in addition to adequacy. Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality. Recommendations ::: (R4) Do not heavily edit reference translations for fluency. In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30). Recommendations ::: (R5) Use original source texts. Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT. Our work empirically strengthens and extends the recommendations on human MT evaluation in previous work BIBREF6, BIBREF7, some of which have meanwhile been adopted by the large-scale evaluation campaign at WMT 2019 BIBREF40: the new evaluation protocol uses original source texts only (R5) and gives raters access to document-level context (R2). The findings of WMT 2019 provide further evidence in support of our recommendations. In particular, human English to Czech translation was found to be significantly better than MT BIBREF40; the comparison includes the same MT system (CUNI-Transformer-T2T-2018) which outperformed human translation according to the previous protocol BIBREF5. Results also show a larger difference between human translation and MT in document-level evaluation. We note that in contrast to WMT, the judgements in our experiments are provided by a small number of human raters: five in the experiments of Sections SECREF3 and SECREF35, four per condition (adequacy and fluency) in Section SECREF4, and one in the fine-grained error analysis presented in Section SECREF24. Moreover, the results presented in this article are based on one text domain (news) and one language direction (Chinese to English), and while a large-scale evaluation with another language pair supports our findings (see above), further experiments with more languages, domains, and raters will be required to increase their external validity. Conclusion We compared professional human Chinese to English translations to the output of a strong MT system. 
In a human evaluation following best practices, BIBREF3 found no significant difference between the two, concluding that their NMT system had reached parity with professional human translation. Our blind qualitative analysis, however, showed that the machine translation output contained significantly more incorrect words, omissions, mistranslated names, and word order errors. Our experiments show that recent findings of human–machine parity in language translation are owed to weaknesses in the design of human evaluation campaigns. We empirically tested alternatives to what is currently considered best practice in the field, and found that the choice of raters, the availability of linguistic context, and the creation of reference translations have a strong impact on perceived translation quality. As for the choice of raters, professional translators showed a significant preference for human translation, while non-expert raters did not. In terms of linguistic context, raters found human translation significantly more accurate than machine translation when evaluating full documents, but not when evaluating single sentences out of context. They also found human translation significantly more fluent than machine translation, both when evaluating full documents and single sentences. Moreover, we showed that aggressive editing of human reference translations for target language fluency can decrease adequacy to the point that they become indistinguishable from machine translation, and that raters found human translations significantly better than machine translations of original source texts, but not of source texts that were translations themselves. Our results strongly suggest that in order to reveal errors in the output of strong MT systems, the design of MT quality assessments with human raters should be revisited. To that end, we have offered a set of recommendations, supported by empirical data, which we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Our recommendations have the aim of increasing the validity of MT evaluation, but we are aware of the high cost of having MT evaluation done by professional translators, and on the level of full documents. We welcome future research into alternative evaluation protocols that can demonstrate their validity at a lower cost.
36%
9299fe72f19c1974564ea60278e03a423eb335dc
9299fe72f19c1974564ea60278e03a423eb335dc_0
Q: What was the weakness in Hassan et al's evaluation design? Text: Introduction Machine translation (MT) has made astounding progress in recent years thanks to improvements in neural modelling BIBREF0, BIBREF1, BIBREF2, and the resulting increase in translation quality is creating new challenges for MT evaluation. Human evaluation remains the gold standard, but there are many design decisions that potentially affect the validity of such a human evaluation. This paper is a response to two recent human evaluation studies in which some neural machine translation systems reportedly performed at (or above) the level of human translators for news translation from Chinese to English BIBREF3 and English to Czech BIBREF4, BIBREF5. Both evaluations were based on current best practices in the field: they used a source-based direct assessment with non-expert annotators, using data sets and the evaluation protocol of the Conference on Machine Translation (WMT). While the results are intriguing, especially because they are based on best practices in MT evaluation, BIBREF5 warn against taking their results as evidence for human–machine parity, and caution that for well-resourced language pairs, an update of WMT evaluation style will be needed to keep up with the progress in machine translation. We concur that these findings have demonstrated the need to critically re-evaluate the design of human MT evaluation. Our paper investigates three aspects of human MT evaluation, with a special focus on assessing human–machine parity: the choice of raters, the use of linguistic context, and the creation of reference translations. We focus on the data shared by BIBREF3, and empirically test to what extent changes in the evaluation design affect the outcome of the human evaluation. We find that for all three aspects, human translations are judged more favourably, and significantly better than MT, when we make changes that we believe strengthen the evaluation design. Based on our empirical findings, we formulate a set of recommendations for human MT evaluation in general, and assessing human–machine parity in particular. All of our data are made publicly available for external validation and further analysis. Background We first review current methods to assess the quality of machine translation system outputs, and highlight potential issues in using these methods to compare such outputs to translations produced by professional human translators. Background ::: Human Evaluation of Machine Translation The evaluation of MT quality has been the subject of controversial discussions in research and the language services industry for decades due to its high economic importance. While automatic evaluation methods are particularly important in system development, there is consensus that a reliable evaluation should—despite high costs—be carried out by humans. Various methods have been proposed for the human evaluation of MT quality BIBREF8. What they have in common is that the MT output to be rated is paired with a translation hint: the source text or a reference translation. The MT output is then either adapted or scored with reference to the translation hint by human post-editors or raters, respectively. As part of the large-scale evaluation campaign at WMT, two primary evaluation methods have been used in recent years: relative ranking and direct assessment BIBREF9. 
In the case of relative ranking, raters are presented with outputs from two or more systems, which they are asked to evaluate relative to each other (e.g., to determine system A is better than system B). Ties (e.g., system A is as good or as bad as system B) are typically allowed. Compared to absolute scores on Likert scales, data obtained through relative ranking show better inter- and intra-annotator agreement BIBREF10. However, they do not allow conclusions to be drawn about the order of magnitude of the differences, so that it is not possible to determine how much better system A was than system B. This is one of the reasons why direct assessment has prevailed as an evaluation method more recently. In contrast to relative ranking, the raters are presented with one MT output at a time, to which they assign a score between 0 and 100. To increase homogeneity, each rater's ratings are standardised BIBREF11. Reference translations serve as the basis in the context of WMT, and evaluations are carried out by monolingual raters. To avoid reference bias, the evaluation can be based on source texts instead, which presupposes bilingual raters, but leads to more reliable results overall BIBREF12. Background ::: Assessing Human–Machine Parity BIBREF3 base their claim of achieving human–machine parity on a source-based direct assessment as described in the previous section, where they found no significant difference in ratings between the output of their MT system and a professional human translation. Similarly, BIBREF5 report that the best-performing English to Czech system submitted to WMT 2018 BIBREF4 significantly outperforms the human reference translation. However, the authors caution against interpreting their results as evidence of human–machine parity, highlighting potential limitations of the evaluation. In this study, we address three aspects that we consider to be particularly relevant for human evaluation of MT, with a special focus on testing human–machine parity: the choice of raters, the use of linguistic context, and the construction of reference translations. Background ::: Assessing Human–Machine Parity ::: Choice of Raters The human evaluation of MT output in research scenarios is typically conducted by crowd workers in order to minimise costs. BIBREF13 shows that aggregated assessments of bilingual crowd workers are very similar to those of MT developers, and BIBREF14, based on experiments with data from WMT 2012, similarly conclude that with proper quality control, MT systems can be evaluated by crowd workers. BIBREF3 also use bilingual crowd workers, but the studies supporting the use of crowdsourcing for MT evaluation were performed with older MT systems, and their findings may not carry over to the evaluation of contemporary higher-quality neural machine translation (NMT) systems. In addition, the MT developers to which crowd workers were compared are usually not professional translators. We hypothesise that expert translators will provide more nuanced ratings than non-experts, and that their ratings will show a higher difference between MT outputs and human translations. Background ::: Assessing Human–Machine Parity ::: Linguistic Context MT has been evaluated almost exclusively at the sentence level, owing to the fact that most MT systems do not yet take context across sentence boundaries into account. However, when machine translations are compared to those of professional translators, the omission of linguistic context—e. 
g., by random ordering of the sentences to be evaluated—does not do justice to humans who, in contrast to most MT systems, can and do take inter-sentential context into account BIBREF15, BIBREF16. We hypothesise that an evaluation of sentences in isolation, as applied by BIBREF3, precludes raters from detecting translation errors that become apparent only when inter-sentential context is available, and that they will judge MT quality less favourably when evaluating full documents. Background ::: Assessing Human–Machine Parity ::: Reference Translations The human reference translations with which machine translations are compared within the scope of a human–machine parity assessment play an important role. BIBREF3 used all source texts of the WMT 2017 Chinese–English test set for their experiments, of which only half were originally written in Chinese; the other half were translated from English into Chinese. Since translated texts are usually simpler than their original counterparts BIBREF17, they should be easier to translate for MT systems. Moreover, different human translations of the same source text sometimes show considerable differences in quality, and a comparison with an MT system only makes sense if the human reference translations are of high quality. BIBREF3, for example, had the WMT source texts re-translated as they were not convinced of the quality of the human translations in the test set. At WMT 2018, the organisers themselves noted that the manual evaluation included several reports of ill-formed reference translations BIBREF5. We hypothesise that the quality of the human translations has a significant effect on findings of human–machine parity, which would indicate that it is necessary to ensure that human translations used to assess parity claims need to be carefully vetted for their quality. We empirically test and discuss the impact of these factors on human evaluation of MT in Sections SECREF3–SECREF5. Based on our findings, we then distil a set of recommendations for human evaluation of strong MT systems, with a focus on assessing human–machine parity (Section SECREF6). Background ::: Translations We use English translations of the Chinese source texts in the WMT 2017 English–Chinese test set BIBREF18 for all experiments presented in this article: [labelwidth=1cm, leftmargin=1.25cm] The professional human translations in the dataset of BIBREF3.[1] Professional human translations that we ordered from a different translation vendor, which included a post-hoc native English check. We produced these only for the documents that were originally Chinese, as discussed in more detail in Section SECREF35. The machine translations produced by BIBREF3's BIBREF3 best system (Combo-6),[1] for which the authors found parity with H$_A$. The machine translations produced by Google's production system (Google Translate) in October 2017, as contained in BIBREF3's BIBREF3 dataset.[1] Statistical significance is denoted by * ($p\le .05$), ** ($p\le .01$), and *** ($p\le .001$) throughout this article, unless otherwise stated. Choice of Raters Both professional and amateur evaluators can be involved in human evaluation of MT quality. However, from published work in the field BIBREF19, it is fair to say that there is a tendency to “rely on students and amateur evaluators, sometimes with an undefined (or self-rated) proficiency in the languages involved, an unknown expertise with the text type" BIBREF8. 
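As a rough illustration of the direct assessment protocol described above, the sketch below standardises each rater's raw 0–100 scores with a per-rater z-score, which is one common way to make ratings from different raters comparable; the exact standardisation used at WMT may differ in detail, and the rater names and scores here are purely illustrative.

```python
import numpy as np

def standardise_per_rater(scores_by_rater):
    """Convert each rater's raw 0-100 direct-assessment scores into z-scores
    so that ratings from raters with different personal scales become comparable."""
    standardised = {}
    for rater, scores in scores_by_rater.items():
        scores = np.asarray(scores, dtype=float)
        std = scores.std() or 1.0  # guard against a rater who gives constant scores
        standardised[rater] = (scores - scores.mean()) / std
    return standardised

# Illustrative data: two raters scoring the same three MT outputs.
print(standardise_per_rater({"rater_1": [70, 80, 90], "rater_2": [40, 55, 85]}))
```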
Previous work on evaluation of MT output by professional translators against crowd workers by BIBREF20 showed that for all language pairs (involving 11 languages) evaluated, crowd workers tend to be more accepting of the MT output by giving higher fluency and adequacy scores and performing very little post-editing. The authors argued that non-expert translators lack knowledge of translation and so might not notice subtle differences that make one translation more suitable than another, and therefore, when confronted with a translation that is hard to post-edit, tend to accept the MT rather than try to improve it. Choice of Raters ::: Evaluation Protocol We test for difference in ratings of MT outputs and human translations between experts and non-experts. We consider professional translators as experts, and both crowd workers and MT researchers as non-experts. We conduct a relative ranking experiment using one professional human (H$_A$) and two machine translations (MT$_1$ and MT$_2$), considering the native Chinese part of the WMT 2017 Chinese–English test set (see Section SECREF35 for details). The 299 sentences used in the experiments stem from 41 documents, randomly selected from all the documents in the test set originally written in Chinese, and are shown in their original order. Raters are shown one sentence at a time, and see the original Chinese source alongside the three translations. The previous and next source sentences are also shown, in order to provide the annotator with local inter-sentential context. Five raters—two experts and three non-experts—participated in the assessment. The experts were professional Chinese to English translators: one native in Chinese with a fluent level of English, the other native in English with a fluent level of Chinese. The non-experts were NLP researchers native in Chinese, working in an English-speaking country. The ratings are elicited with Appraise BIBREF21. We derive an overall score for each translation (H$_A$, MT$_1$, and MT$_2$) based on the rankings. We use the TrueSkill method adapted to MT evaluation BIBREF22 following its usage at WMT15, i. e., we run 1,000 iterations of the rankings recorded with Appraise followed by clustering (significance level $\alpha =0.05$). Choice of Raters ::: Results Table TABREF17 shows the TrueSkill scores for each translation resulting from the evaluations by expert and non-expert translators. We find that translation expertise affects the judgement of MT$_1$ and H$_A$, where the rating gap is wider for the expert raters. This indicates that non-experts disregard translation nuances in the evaluation, which leads to a more tolerant judgement of MT systems and a lower inter-annotator agreement ($\kappa =0.13$ for non-experts versus $\kappa =0.254$ for experts). It is worth noticing that, regardless of their expertise, the performance of human raters may vary over time. For example, performance may improve or decrease due to learning effects or fatigue, respectively BIBREF23. It is likely that such longitudinal effects are present in our data. They should be accounted for in future work, e. g., by using trial number as an additional predictor BIBREF24. Linguistic Context Another concern is the unit of evaluation. Historically, machine translation has primarily operated on the level of sentences, and so has machine translation evaluation. 
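The evaluation protocol above converts pairwise rankings into overall system scores with TrueSkill. The snippet below is only a minimal sketch using the open-source trueskill package with plain 1-vs-1 updates; the WMT-adapted variant with 1,000 iterations and clustering (BIBREF22) that the study actually uses is more involved, and the pairwise outcomes listed here are made up for illustration.

```python
import trueskill  # pip install trueskill

# One rating per system; the study ranks H_A, MT_1 and MT_2.
ratings = {name: trueskill.Rating() for name in ("HA", "MT1", "MT2")}

# Hypothetical (winner, loser, drawn) outcomes derived from the Appraise rankings.
pairwise_outcomes = [("HA", "MT1", False), ("HA", "MT2", False), ("MT1", "MT2", True)]

for winner, loser, drawn in pairwise_outcomes:
    ratings[winner], ratings[loser] = trueskill.rate_1vs1(
        ratings[winner], ratings[loser], drawn=drawn)

for name, r in sorted(ratings.items(), key=lambda kv: kv[1].mu, reverse=True):
    print(f"{name}: mu={r.mu:.2f}, sigma={r.sigma:.2f}")
```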
However, it has been remarked that human raters do not necessarily understand the intended meaning of a sentence shown out-of-context BIBREF25, which limits their ability to spot some mistranslations. Also, a sentence-level evaluation will be blind to errors related to textual cohesion and coherence. While sentence-level evaluation may be good enough when evaluating MT systems of relatively low quality, we hypothesise that with additional context, raters will be able to make more nuanced quality assessments, and will also reward translations that show more textual cohesion and coherence. We believe that this aspect should be considered in evaluation, especially when making claims about human–machine parity, since human translators can and do take inter-sentential context into account BIBREF15, BIBREF16. Linguistic Context ::: Evaluation Protocol We test if the availability of document-level context affects human–machine parity claims in terms of adequacy and fluency. In a pairwise ranking experiment, we show raters (i) isolated sentences and (ii) entire documents, asking them to choose the better (with ties allowed) from two translation outputs: one produced by a professional translator, the other by a machine translation system. We do not show reference translations as one of the two options is itself a human translation. We use source sentences and documents from the WMT 2017 Chinese–English test set (see Section SECREF8): documents are full news articles, and sentences are randomly drawn from these news articles, regardless of their position. We only consider articles from the test set that are native Chinese (see Section SECREF35). In order to compare our results to those of BIBREF3, we use both their professional human (H$_A$) and machine translations (MT$_1$). Each rater evaluates both sentences and documents, but never the same text in both conditions so as to avoid repetition priming BIBREF26. The order of experimental items as well as the placement of choices (H$_A$, MT$_1$; left, right) are randomised. We use spam items for quality control BIBREF27: In a small fraction of items, we render one of the two options nonsensical by randomly shuffling the order of all translated words, except for 10 % at the beginning and end. If a rater marks a spam item as better than or equal to an actual translation, this is a strong indication that they did not read both options carefully. We recruit professional translators (see Section SECREF3) from proz.com, a well-known online market place for professional freelance translation, considering Chinese to English translators and native English revisers for the adequacy and fluency conditions, respectively. In each condition, four raters evaluate 50 documents (plus 5 spam items) and 104 sentences (plus 16 spam items). We use two non-overlapping sets of documents and two non-overlapping sets of sentences, and each is evaluated by two raters. Linguistic Context ::: Results Results are shown in Table TABREF21. We note that sentence ratings from two raters are excluded from our analysis because of unintentional textual overlap with documents, meaning we cannot fully rule out that sentence-level decisions were informed by access to the full documents they originated from. Moreover, we exclude document ratings from one rater in the fluency condition because of poor performance on spam items, and recruit an additional rater to re-rate these documents. 
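The quality-control spam items described above are created by shuffling the middle of a translation while keeping roughly 10 % of the words at each end intact. A minimal sketch of that procedure follows; the whitespace tokenisation and the rounding of the 10 % boundary are assumptions, not the authors' exact implementation.

```python
import random

def make_spam_item(translation: str, keep_fraction: float = 0.10, seed: int = 0) -> str:
    """Shuffle the middle words of a translation, keeping ~10% of the words
    at the beginning and at the end in place, to create a nonsensical spam item."""
    words = translation.split()
    k = max(1, int(len(words) * keep_fraction))
    if len(words) <= 2 * k:          # too short to shuffle meaningfully
        return translation
    head, middle, tail = words[:k], words[k:-k], words[-k:]
    random.Random(seed).shuffle(middle)
    return " ".join(head + middle + tail)

print(make_spam_item("the city council approved the new budget after a long debate on tuesday"))
```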
We analyse our data using two-tailed Sign Tests, the null hypothesis being that raters do not prefer MT$_1$ over H$_A$ or vice versa, implying human–machine parity. Following WMT evaluation campaigns that used pairwise ranking BIBREF28, the number of successes $x$ is the number of ratings in favour of H$_A$, and the number of trials $n$ is the number of all ratings except for ties. Adding half of the ties to $x$ and the total number of ties to $n$ BIBREF29 does not impact the significance levels reported in this section. Adequacy raters show no statistically significant preference for MT$_1$ or H$_A$ when evaluating isolated sentences ($x=86, n=189, p=.244$). This is in accordance with BIBREF3, who found the same in a source-based direct assessment experiment with crowd workers. With the availability of document-level context, however, preference for MT$_1$ drops from 49.5 to 37.0 % and is significantly lower than preference for human translation ($x=104, n=178, p<.05$). This evidences that document-level context cues allow raters to get a signal on adequacy. Fluency raters prefer H$_A$ over MT$_1$ both on the level of sentences ($x=106, n=172, p<.01$) and documents ($x=99, n=143, p<.001$). This is somewhat surprising given that increased fluency was found to be one of the main strengths of NMT BIBREF30, as we further discuss in Section SECREF24. The availability of document-level context decreases fluency raters' preference for MT$_1$, which falls from 31.7 to 22.0 %, without increasing their preference for H$_A$ (Table TABREF21). Linguistic Context ::: Discussion Our findings emphasise the importance of linguistic context in human evaluation of MT. In terms of adequacy, raters assessing documents as a whole show a significant preference for human translation, but when assessing single sentences in random order, they show no significant preference for human translation. Document-level evaluation exposes errors to raters which are hard or impossible to spot in a sentence-level evaluation, such as coherent translation of named entities. The example in Table TABREF23 shows the first two sentences of a Chinese news article as translated by a professional human translator (H$_A$) and BIBREF3's BIBREF3 NMT system (MT$_1$). When looking at both sentences (document-level evaluation), it can be seen that MT$_1$ uses two different translations to refer to a cultural festival, “2016盂兰文化节", whereas the human translation uses only one. When assessing the second sentence out of context (sentence-level evaluation), it is hard to penalise MT$_1$ for producing 2016 Python Cultural Festival, particularly for fluency raters without access to the corresponding source text. For further examples, see Section SECREF24 and Table TABREF34. Reference Translations Yet another relevant element in human evaluation is the reference translation used. This is the focus of this section, where we cover two aspects of reference translations that can have an impact on evaluation: quality and directionality. Reference Translations ::: Quality Because the translations are created by humans, a number of factors could lead to compromises in quality: If the translator is a non-native speaker of the source language, they may make mistakes in interpreting the original message. This is particularly true if the translator does not normally work in the domain of the text, e. g., when a translator who normally works on translating electronic product manuals is asked to translate news. 
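The two-tailed Sign Test used in the analysis above reduces to an exact binomial test over the non-tied ratings. For the sentence-level adequacy figures reported there ($x=86$, $n=189$), a short sketch with SciPy (binomtest requires SciPy ≥ 1.7) yields a p-value close to the reported $p=.244$:

```python
from scipy.stats import binomtest  # scipy >= 1.7

# Adequacy, isolated sentences: 86 of 189 non-tied ratings favoured H_A.
result = binomtest(86, n=189, p=0.5, alternative="two-sided")
print(f"p = {result.pvalue:.3f}")   # roughly 0.24, i.e. no significant preference
```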
If the translator is a non-native speaker of the target language, they might not be able to generate completely fluent text. This similarly applies to domain-specific terminology. Unlike computers, human translators have limits in time, attention, and motivation, and will generally do a better job when they have sufficient time to check their work, or are particularly motivated to do a good job, such as when doing a good job is necessary to maintain their reputation as a translator. In recent years, a large number of human translation jobs are performed by post-editing MT output, which can result in MT artefacts remaining even after manual post-editing BIBREF31, BIBREF32, BIBREF33. In this section, we examine the effect of the quality of underlying translations on the conclusions that can be drawn with regards to human–machine parity. We first do an analysis on (i) how the source of the human translation affects claims of human–machine parity, and (ii) whether significant differences exist between two varieties of human translation. We follow the same protocol as in Section SECREF19, having 4 professional translators per condition, evaluate the translations for adequacy and fluency on both the sentence and document level. The results are shown in Table TABREF30. From this, we can see that the human translation H$_B$, which was aggressively edited to ensure target fluency, resulted in lower adequacy (Table TABREF30). With more fluent and less accurate translations, raters do not prefer human over machine translation in terms of adequacy (Table TABREF30), but have a stronger preference for human translation in terms of fluency (compare Tables TABREF30 and TABREF21). In a direct comparison of the two human translations (Table TABREF30), we also find that H$_A$ is considered significantly more adequate than H$_B$, while there is no significant difference in fluency. To achieve a finer-grained understanding of what errors the evaluated translations exhibit, we perform a categorisation of 150 randomly sampled sentences based on the classification used by BIBREF3. We expand the classification with a Context category, which we use to mark errors that are only apparent in larger context (e. g., regarding poor register choice, or coreference errors), and which do not clearly fit into one of the other categories. BIBREF3 perform this classification only for the machine-translated outputs, and thus the natural question of whether the mistakes that humans and computers make are qualitatively different is left unanswered. Our analysis was performed by one of the co-authors who is a bi-lingual native Chinese/English speaker. Sentences were shown in the context of the document, to make it easier to determine whether the translations were correct based on the context. The analysis was performed on one machine translation (MT$_1$) and two human translation outputs (H$_A$, H$_B$), using the same 150 sentences, but blinding their origin by randomising the order in which the documents were presented. We show the results of this analysis in Table TABREF32. From these results, we can glean a few interesting insights. First, we find significantly larger numbers of errors of the categories of Incorrect Word and Named Entity in MT$_1$, indicating that the MT system is less effective at choosing correct translations for individual words than the human translators. 
An example of this can be found in Table TABREF33, where we see that the MT system refers to a singular “point of view" and translates 线路 (channel, route, path) into the semantically similar but inadequate lines. Interestingly, MT$_1$ has significantly more Word Order errors, one example of this being shown in Table TABREF33, with the relative placements of at the end of last year (去年年底) and stop production (停产). This result is particularly notable given previous reports that NMT systems have led to great increases in reordering accuracy compared to previous statistical MT systems BIBREF35, BIBREF36, demonstrating that the problem of generating correctly ordered output is far from solved even in very strong NMT systems. Moreover, H$_B$ had significantly more Missing Word (Semantics) errors than both H$_A$ ($p<.001$) and MT$_1$ ($p<.001$), an indication that the proofreading process resulted in drops of content in favour of fluency. An example of this is shown in Table TABREF33, where H$_B$ dropped the information that the meetings between Suning and Apple were recently (近期) held. Finally, while there was not a significant difference, likely due to the small number of examples overall, it is noticeable that MT$_1$ had a higher percentage of Collocation and Context errors, which indicate that the system has more trouble translating words that are dependent on longer-range context. Similarly, some Named Entity errors are also attributable to translation inconsistencies due to lack of longer-range context. Table TABREF34 shows an example where we see that the MT system was unable to maintain a consistently gendered or correct pronoun for the female Olympic shooter Zhang Binbin (张彬彬). Apart from showing qualitative differences between the three translations, the analysis also supports the finding of the pairwise ranking study: H$_A$ is both preferred over MT$_1$ in the pairwise ranking study, and exhibits fewer translation errors in our error classification. H$_B$ has a substantially higher number of missing words than the other two translations, which agrees with the lower perceived adequacy in the pairwise ranking. However, the analysis not only supports the findings of the pairwise ranking study, but also adds nuance to it. Even though H$_B$ has the highest number of deletions, and does worse than the other two translations in a pairwise adequacy ranking, it is similar to H$_A$, and better than MT$_1$, in terms of most other error categories. Reference Translations ::: Directionality Translation quality is also affected by the nature of the source text. In this respect, we note that from the 2,001 sentences in the WMT 2017 Chinese–English test set, half were originally written in Chinese; the remaining half were originally written in English and then manually translated into Chinese. This Chinese reference file (half original, half translated) was then manually translated into English by BIBREF3 to make up the reference for assessing human–machine parity. Therefore, 50 % of the reference comprises direct English translations from the original Chinese, while 50 % are English translations from the human-translated file from English into Chinese, i. e., backtranslations of the original English. According to BIBREF37, translated texts differ from their originals in that they are simpler, more explicit, and more normalised. For example, the synonyms used in an original text may be replaced by a single translation. 
These differences are referred to as translationese, and have been shown to affect translation quality in the field of machine translation BIBREF38, BIBREF39, BIBREF32, BIBREF33. We test whether translationese has an effect on assessing parity between translations produced by humans and machines, using relative rankings of translations in the WMT 2017 Chinese–English test set by five raters (see Section SECREF3). Our hypothesis is that the difference between human and machine translation quality is smaller when source texts are translated English (translationese) rather than original Chinese, because a translationese source text should be simpler and thus easier to translate for an MT system. We confirm Laviosa's observation that “translationese” Chinese (that started as English) exhibits less lexical variety than “natively” Chinese text and demonstrate that translationese source texts are generally easier for MT systems to score well on. Table TABREF36 shows the TrueSkill scores for translations (H$_A$, MT$_1$, and MT$_2$) of the entire test set (Both) versus only the sentences originally written in Chinese or English therein. The human translation H$_A$ outperforms the machine translation MT$_1$ significantly when the original language is Chinese, while the difference between the two is not significant when the original language is English (i. e., translationese input). We also compare the two subsets of the test set, original and translationese, using type-token ratio (TTR). Our hypothesis is that the TTR will be smaller for the translationese subset, thus its simpler nature getting reflected in a less varied use of language. While both subsets contain a similar number of sentences (1,001 and 1,000), the Chinese subset contains more tokens (26,468) than its English counterpart (22,279). We thus take a subset of the Chinese (840 sentences) containing a similar amount of words to the English data (22,271 words). We then calculate the TTR for these two subsets using bootstrap resampling. The TTR for Chinese ($M=0.1927$, $SD=0.0026$, 95 % confidence interval $[0.1925,0.1928]$) is 13 % higher than that for English ($M=0.1710$, $SD=0.0025$, 95 % confidence interval $[0.1708,0.1711]$). Our results show that using translationese (Chinese translated from English) rather than original source texts results in higher scores for MT systems in human evaluation, and that the lexical variety of translationese is smaller than that of original text. Recommendations Our experiments in Sections SECREF3–SECREF5 show that machine translation quality has not yet reached the level of professional human translation, and that human evaluation methods which are currently considered best practice fail to reveal errors in the output of strong NMT systems. In this section, we recommend a set of evaluation design changes that we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Recommendations ::: (R1) Choose professional translators as raters. In our blind experiment (Section SECREF3), non-experts assess parity between human and machine translation where professional translators do not, indicating that the former neglect more subtle differences between different translation outputs. Recommendations ::: (R2) Evaluate documents, not sentences. 
When evaluating sentences in random order, professional translators judge machine translation more favourably as they cannot identify errors related to textual coherence and cohesion, such as different translations of the same product name. Our experiments show that using whole documents (i. e., full news articles) as unit of evaluation increases the rating gap between human and machine translation (Section SECREF4). Recommendations ::: (R3) Evaluate fluency in addition to adequacy. Raters who judge target language fluency without access to the source texts show a stronger preference for human translation than raters with access to the source texts (Sections SECREF4 and SECREF24). In all of our experiments, raters prefer human translation in terms of fluency while, just as in BIBREF3's BIBREF3 evaluation, they find no significant difference between human and machine translation in sentence-level adequacy (Tables TABREF21 and TABREF30). Our error analysis in Table TABREF34 also indicates that MT still lags behind human translation in fluency, specifically in grammaticality. Recommendations ::: (R4) Do not heavily edit reference translations for fluency. In professional translation workflows, texts are typically revised with a focus on target language fluency after an initial translation step. As shown in our experiment in Section SECREF24, aggressive revision can make translations more fluent but less accurate, to the degree that they become indistinguishable from MT in terms of accuracy (Table TABREF30). Recommendations ::: (R5) Use original source texts. Raters show a significant preference for human over machine translations of texts that were originally written in the source language, but not for source texts that are translations themselves (Section SECREF35). Our results are further evidence that translated texts tend to be simpler than original texts, and in turn easier to translate with MT. Our work empirically strengthens and extends the recommendations on human MT evaluation in previous work BIBREF6, BIBREF7, some of which have meanwhile been adopted by the large-scale evaluation campaign at WMT 2019 BIBREF40: the new evaluation protocol uses original source texts only (R5) and gives raters access to document-level context (R2). The findings of WMT 2019 provide further evidence in support of our recommendations. In particular, human English to Czech translation was found to be significantly better than MT BIBREF40; the comparison includes the same MT system (CUNI-Transformer-T2T-2018) which outperformed human translation according to the previous protocol BIBREF5. Results also show a larger difference between human translation and MT in document-level evaluation. We note that in contrast to WMT, the judgements in our experiments are provided by a small number of human raters: five in the experiments of Sections SECREF3 and SECREF35, four per condition (adequacy and fluency) in Section SECREF4, and one in the fine-grained error analysis presented in Section SECREF24. Moreover, the results presented in this article are based on one text domain (news) and one language direction (Chinese to English), and while a large-scale evaluation with another language pair supports our findings (see above), further experiments with more languages, domains, and raters will be required to increase their external validity. Conclusion We compared professional human Chinese to English translations to the output of a strong MT system. 
In a human evaluation following best practices, BIBREF3 found no significant difference between the two, concluding that their NMT system had reached parity with professional human translation. Our blind qualitative analysis, however, showed that the machine translation output contained significantly more incorrect words, omissions, mistranslated names, and word order errors. Our experiments show that recent findings of human–machine parity in language translation are owed to weaknesses in the design of human evaluation campaigns. We empirically tested alternatives to what is currently considered best practice in the field, and found that the choice of raters, the availability of linguistic context, and the creation of reference translations have a strong impact on perceived translation quality. As for the choice of raters, professional translators showed a significant preference for human translation, while non-expert raters did not. In terms of linguistic context, raters found human translation significantly more accurate than machine translation when evaluating full documents, but not when evaluating single sentences out of context. They also found human translation significantly more fluent than machine translation, both when evaluating full documents and single sentences. Moreover, we showed that aggressive editing of human reference translations for target language fluency can decrease adequacy to the point that they become indistinguishable from machine translation, and that raters found human translations significantly better than machine translations of original source texts, but not of source texts that were translations themselves. Our results strongly suggest that in order to reveal errors in the output of strong MT systems, the design of MT quality assessments with human raters should be revisited. To that end, we have offered a set of recommendations, supported by empirical data, which we believe are needed for assessing human–machine parity, and will strengthen the human evaluation of MT in general. Our recommendations have the aim of increasing the validity of MT evaluation, but we are aware of the high cost of having MT evaluation done by professional translators, and on the level of full documents. We welcome future research into alternative evaluation protocols that can demonstrate their validity at a lower cost.
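As a supplement to the directionality analysis above, the lexical-variety comparison rests on a bootstrap-resampled type-token ratio. The sketch below illustrates one way to compute it; resampling individual tokens with replacement is an assumption, since the passage does not state the exact resampling unit.

```python
import random

def type_token_ratio(tokens):
    return len(set(tokens)) / len(tokens)

def bootstrap_ttr(tokens, n_boot=1000, seed=0):
    """Mean and standard deviation of the type-token ratio over bootstrap resamples."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(tokens) for _ in range(len(tokens))]
        estimates.append(type_token_ratio(resample))
    mean = sum(estimates) / n_boot
    sd = (sum((e - mean) ** 2 for e in estimates) / (n_boot - 1)) ** 0.5
    return mean, sd
```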
MT developers to whom crowd workers were compared are usually not professional translators; evaluation of sentences in isolation prevents raters from detecting translation errors; the test set included source texts that were not originally written in Chinese
2ed02be0c183fca7031ccb8be3fd7bc109f3694b
2ed02be0c183fca7031ccb8be3fd7bc109f3694b_0
Q: By how much they improve over the previous state-of-the-art? Text: Introduction Traditional approaches to abstractive summarization have relied on interpretable structured representations such as graph based sentence centrality BIBREF0, AMR parses BIBREF1, discourse based compression and anaphora constraints BIBREF2. On the other hand, state of the art neural approaches to single document summarization encode the document as a sequence of tokens and compose them into a document representation BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Albeit being effective, these systems learn to rely significantly on layout bias associated with the source document BIBREF8 and do not lend themselves easily to interpretation via intermediate structures. Recent work provides evidence that structured representation of text leads to better document representations BIBREF9, BIBREF10. However, structured representations are under-explored in the neural summarization literature. Motivated by this, we propose a structure-aware end-to-end model (§SECREF2) for summarization. Our proposed model, StructSum, augments the existing pointer-generator network BIBREF3 with two novel components: (1) a latent-structure attention module that adapts structured representations BIBREF11, BIBREF12 for the summarization task, and (2) an explicit-structure attention module, that incorporates a coreference graph. The components together model sentence level dependencies in a document generating rich structured representations. The motivation of this work is to provide a framework to induce rich interpretable latent structures and inject external document structures that can be introduced into any document encoder model. Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. Building on this motivation, our latent structure attention module builds upon BIBREF12 to model the dependencies between sentences in a document. It uses a variant of Kirchhoff’s matrix-tree theorem BIBREF14 to model such dependencies as non-projective tree structures(§SECREF3). The explicit attention module is linguistically-motivated and aims to incorporate sentence-level structures from externally annotated document structures. We incorporate a coreference based sentence dependency graph, which is then combined with the output of the latent structure attention module to produce a hybrid structure-aware sentence representation (§SECREF5). We evaluate our model on the CNN/DM dataset BIBREF15 and show in §SECREF4 that it outperforms strong baselines by up to 1.1 ROUGE-L. We find that the latent and explicit structures are complementary, both contributing to the final performance improvement. Our modules are also independent of the underlying encoder-decoder architectures, rendering them flexible to be incorporated into any advanced models. Our analysis quantitatively compares our generated summaries with the baselines and reference documents (§SECREF5). It reveals that structure-aware summarization reduces the bias of copying large sequences from the source inherently making the summaries more abstractive by generating $\sim $15% more novel n-grams compared to a competitive baseline. We also show qualitative examples of the learned interpretable sentence dependency structures, motivating further research for structure-aware modeling. 
StructSum Model Consider a source document $\mathbf {x}$ consisting of $n$ sentences $\lbrace \mathbf {s}\rbrace $ where each sentence $\mathbf {s}_i$ is composed of a sequence of words. Document summarization aims to map the source document to a target summary of $m$ words $\lbrace y\rbrace $. A typical neural abstractive summarization system is an attentional sequence-to-sequence model that encodes the input sequence $\mathbf {x}$ as a continuous sequence of tokens $\lbrace w\rbrace $ using a BiLSTM. The encoder produces a set of hidden representations $\lbrace \mathbf {h}\rbrace $. An LSTM decoder maps the previously generated token $y_{t-1}$ to a hidden state and computes a soft attention probability distribution $p(\mathbf {a}_t \mid \mathbf {x}, \mathbf {y}_{1:t-1})$ over encoder hidden states. A distribution $p$ over the vocabulary is computed at every timestep $t$ and the network is trained using negative log likelihood loss : $\text{loss}_t = - \mathrm {log}\:p(y_t) $. The pointer-generator network BIBREF3 augments the standard encoder-decoder architecture by linearly interpolating a pointer based copy mechanism. StructSum uses the pointer-generator network as the base model. Our encoder is a structured hierarchical encoder BIBREF16, which computes hidden representations of the sequence both at the token and sentence level. The model then uses the explicit-structure and implicit-structure attention modules to augment the sentence representations with rich sentence dependency information, leveraging both learned latent structure and additional external structure from other NLP modules. The attended vectors are then passed to the decoder, which produces the output sequence for abstractive summarization. In the rest of this section, we describe our model architecture, shown in Figure FIGREF2, in detail. StructSum Model ::: Encoder Our hierarchical encoder consists of a BiLSTM encoder over words, followed by sentence level BiLSTM encoder. The word encoder takes a sequence of words in a sentence $\mathbf {s}_i = \lbrace w\rbrace $ as input and produces contextual hidden representation for each word $\mathbf {h}_{w_{ik}}$, where $w_{ik}$ is the $i^{th}$ word of the $k^{th}$ sentence, $k=1:q$ and $q$ is the number of words in the sentence $\mathbf {s}_i$. The word hidden representations are max-pooled at the sentence level and the result is passed to a BiLSTM sentence-encoder which produces new hidden sentence representations for each sentence $\mathbf {h}_{\mathbf {s}_i}$. The sentence hidden representations are then passed as inputs to latent and explicit structure attention modules. StructSum Model ::: Latent Structure (LS) Attention We model the latent structure of a source document as a non-projective dependency tree and force a pair-wise attention module to automatically induce this tree. We denote the marginal probability of a dependency edge as $a_{ij} = p(z_{ij}=1)$ where $z_{ij}$ is the latent variable representing the edge from sentence $i$ to sentence $j$. We parameterize with a neural network the unnormalized pair-wise scores between sentences and use the Kirchoff's matrix tree theorem BIBREF14 to compute the marginal probability of a dependency edge between any two sentences. We decompose the representation of sentence $\mathbf {s}_i$ into a semantic vector $\mathbf {g}_{\mathbf {s}_i}$ and structure vector $\mathbf {d}_{\mathbf {s}_i}$ as $\mathbf {h}_{\mathbf {s}_i} = [\mathbf {g}_{\mathbf {s}_i}; \mathbf {d}_{\mathbf {s}_i}]$. 
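Before continuing with the structure scores, here is a rough PyTorch sketch of the hierarchical encoder described above (word-level BiLSTM, max-pooling over words, then a sentence-level BiLSTM). It processes a single document; the hidden sizes are illustrative and padding/masking is omitted for brevity, so this is a sketch rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Word-level BiLSTM -> max-pool over words -> sentence-level BiLSTM."""

    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.sent_lstm = nn.LSTM(2 * hidden_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, doc):
        # doc: (num_sentences, max_words) tensor of word ids for one document.
        word_h, _ = self.word_lstm(self.embed(doc))        # (n_sent, max_words, 2h)
        sent_in = word_h.max(dim=1).values                 # max-pool over words
        sent_h, _ = self.sent_lstm(sent_in.unsqueeze(0))   # (1, n_sent, 2h)
        return word_h, sent_h.squeeze(0)                   # token and sentence states
```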
Using the structure vectors $\mathbf {d}_{\mathbf {s}_i}, \mathbf {d}_{\mathbf {s}_j}$, we compute a score $f_{ij}$ between sentence pairs $(i,j)$ (where sentence $i$ is the parent node of sentence $j$) and a score for sentence $\mathbf {s}_i$ being the root node $r_i$: where $F_p, F_c$ and $F_r$ are linear-projection functions to build representations for the parent, child and root node respectively and $W_a$ is the weight for bilinear transformation. Here, $f_{ij}$ is the edge weight between nodes $(i,j)$ in a weighted adjacency graph $\mathbf {F}$ and is computed for all pairs of sentences. Using $f_{ij}$ and $r_i$, we compute normalized attention scores $a_{ij}$ and $a_{i}^r $ using a variant of Kirchhoff’s matrix-tree theorem BIBREF12, BIBREF14 where $a_{ij}$ is the marginal probability of a dependency edge between sentences $(i,j)$ and $a_{i}^r $ is the probability of sentence $i$ being the root. Using these probabilistic attention weights and the semantic vectors $\lbrace \mathbf {g}_{\mathbf {s}}\rbrace $, we compute the attended sentence representations as: where $\mathbf {p}_{\mathbf {s}_i}$ is the context vector gathered from possible parents of sentence $i$, $\mathbf {c}_{\mathbf {s}_i}$ is the context vector gathered from possible children, and $\mathbf {g}_{root}$ is a special embedding for the root node. Here, the updated sentence representation $\textit {l}_{\mathbf {s}_i}$ incorporates the implicit structural information. StructSum Model ::: Explicit Structure (ES) Attention BIBREF2 showed that modeling coreference knowledge through anaphora constraints led to improved clarity or grammaticality in summaries. Taking inspiration from this, we choose coreference links across sentences as our explicit structure. First, we use an off-the-shelf coreference parser to identify coreferring mentions. We then build a coreference based sentence graph by adding a link between sentences $(\mathbf {s}_i, \mathbf {s}_j)$, if they have any coreferring mentions between them. This representation is then converted into a weighted graph by incorporating a weight on the edge between two sentences that is proportional to the number of unique coreferring mentions between them. We normalize these edge weights for every sentence, effectively building a weighted adjacency matrix $\mathbf {K}$ where $k_{ij}$ is given by: where $m_i$ denotes the set of unique mentions in sentence $\mathbf {s}_i$, ($m_i$ $\bigcap $ $m_j$) denotes the set of co-referring mentions between the two sentences and $z$ is a latent variable representing a link in the coreference sentence graph. $\epsilon = 5e-4$ is a smoothing hyperparameter. StructSum Model ::: Explicit Structure (ES) Attention ::: Incorporating explicit structure Given contextual sentence representations $\lbrace \mathbf {h}_{\mathbf {s}}\rbrace $ and our explicit coreference based weighted adjacency matrix $\mathbf {K}$, we learn an explicit-structure aware representation as follows: where $F_u$ and $F_e$ are linear projections and $\mathbf {e}_{\mathbf {s}_i}$ is an updated sentence representation which incorporates explicit structural information. Finally, to combine the two structural representations, we concatenate the latent and explicit sentence vectors as: $\mathbf {h}_{\mathbf {s}_i} = [\mathbf {l}_{\mathbf {s}_i};\mathbf {e}_{\mathbf {s}_i}]$ to form encoder sentence representations of the source document. 
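The latent-structure attention above turns the pairwise scores $f_{ij}$ and root scores $r_i$ into edge marginals with a variant of Kirchhoff's matrix-tree theorem. The following single-document sketch follows the formulation used for structured attention (BIBREF12, BIBREF14); the actual StructSum implementation will differ in batching, masking and numerical stabilisation.

```python
import torch

def matrix_tree_marginals(f, r):
    """Edge and root marginals of a non-projective dependency tree over n sentences.

    f: (n, n) unnormalised scores, f[i, j] scores an edge from parent i to child j.
    r: (n,)  unnormalised root scores.
    Returns a with a[i, j] = P(edge i -> j) and a_root with a_root[i] = P(i is root).
    """
    n = f.size(0)
    A = torch.exp(f) * (1.0 - torch.eye(n, dtype=f.dtype))   # no self-loops
    root = torch.exp(r)

    # Laplacian, with the first row replaced by the root scores.
    L = torch.diag(A.sum(dim=0)) - A
    L_hat = L.clone()
    L_hat[0, :] = root
    L_inv = torch.inverse(L_hat)

    d0 = torch.zeros(n, dtype=f.dtype)
    d0[0] = 1.0                                              # indicator of the first node
    a = A * ((1.0 - d0).unsqueeze(0) * torch.diag(L_inv).unsqueeze(0)
             - (1.0 - d0).unsqueeze(1) * L_inv.t())
    a_root = root * L_inv[:, 0]
    return a, a_root
```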
To provide every token representation with context of the entire document, we keep the same formulation as pointer-generator networks, where each token $w_{ij}$ is mapped to its hidden representation $\mathbf {h}_{w_{ij}}$ using a BiLSTM. The token representation is concatenated with their corresponding structure-aware sentence representation: $\mathbf {h}_{w_{ij}} = [\mathbf {h}_{w_{ij}};\mathbf {h}_{\mathbf {s}_i}]$ where $\mathbf {s}_i$ is the sentence to which the word $w_{ij}$ belongs. The resulting structure-aware token representations can be used to directly replace previous token representations as input to the decoder. Experiments ::: Dataset: We evaluate our approach on the CNN/Daily Mail corpus BIBREF15, BIBREF17 and use the same preprocessing steps as shown in BIBREF3. The CNN/DM summaries have an average of 66 tokens ($\sigma = 26$) and 4.9 sentences. Differing from BIBREF3, we truncate source documents to 700 tokens instead of 400 in training and validation sets to model longer documents with more sentences. Experiments ::: Baselines: We choose the following baselines based on their relatedness to the task and wide applicability: BIBREF3 : We re-implement the base pointer-generator model and the additional coverage mechanism. This forms the base model of our implementation and hence our addition of modeling document structure can be directly compared to it. BIBREF6 : This is a graph-based attention model that is closest in spirit to the method we present in this work. They use a graph attention module to learn attention between sentences, but cannot be easily used to induce interpretable document structures, since their attention scores are not constrained to learn structure. In addition to learning latent and interpretable structured attention between sentences, StructSum also introduces an explicit structure component to inject external document structure. BIBREF7 : We compare with the DiffMask experiment with this work. This work introduces a separate content selector which tags words and phrases to be copied. The DiffMask variant is an end-to-end variant like ours and hence is included in our baselines. Our baselines exclude Reinforcement Learning (RL) based systems as they aren't directly comparable, but our approach can be easily introduced in any encoder-decoder based RL system. Since we do not incorporate any pretraining, we do not compare with recent contextual representation based models BIBREF18. Experiments ::: Hyperparameters: Our encoder uses 256 hidden states for both directions in the one-layer LSTM, and 512 for the single-layer decoder. We use the adagrad optimizer BIBREF19 with a learning rate of 0.15 and an initial accumulator value of 0.1. We do not use dropout and use gradient-clipping with a maximum norm of 2. We selected the best model using early stopping based on the ROUGE score on the validation dataset as our criteria. We also used the coverage penalty during inference as shown in BIBREF7. For decoding, we use beam-search with a beam width of 3. We did not observe significant improvements with higher beam widths. Results Table TABREF8 shows the results of our work on the CNN/DM dataset. We use the standard ROUGE-1,2 and L BIBREF20 F1 metric to evaluate all our summarization output. We first observe that introducing the capability to learn latent structures already improves our performance on ROUGE-L. It suggests that modeling dependencies between sentences helps the model compose better long sequences w.r.t reference compared to baselines. 
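The results above are reported with ROUGE-1, ROUGE-2 and ROUGE-L F1. The paper does not state which ROUGE implementation was used; the snippet below shows one widely used option, the rouge_score package, on a toy reference/candidate pair that is invented here for illustration.

```python
from rouge_score import rouge_scorer  # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "police arrest suspect after downtown robbery ."
candidate = "police arrested a suspect following the downtown robbery ."
scores = scorer.score(reference, candidate)
for name, s in scores.items():
    print(f"{name}: F1 = {s.fmeasure:.3f}")
```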
We do not see a significant improvement in ROUGE-1 and ROUGE-2, hinting that we retrieve similar content words as the baseline but compose them into better contiguous sequences. We observe similar results when using explicit structures only with the ES attention module. This shows that adding inductive bias in the form of coreference based sentence graphs helps compose long sequences. Our results here are close to the model that uses just LS attention. This demonstrates that LS attention induces good latent dependencies that make up for pure external coreference knowledge. Finally, our combined model which uses both Latent and Explicit structure performs the best with a strong improvement of 1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1. It shows that the latent and explicit information are complementary and a model can jointly leverage them to produce better summaries. Modeling structure and adding inductive biases also helps a model to converge faster where the combined LS+ES Attention model took 126K iterations for training in comparison to 230K iterations required to train the plain pointer-generator network and an additional 3K iterations for the coverage loss BIBREF3. Analysis We present below analysis on the quality of summarization as compared to our base model, the pointer-generator network with coverage BIBREF3 and the reference. Analysis ::: Analysis of Copying Despite being an abstractive model, the pointer-generator model tends to copy very long sequences of words including whole sentences from the source document (also observed by BIBREF7). Table TABREF15 shows a comparison of the Average Length (Copy Len) of contiguous copied sequences greater than length 3. We observe that the pointer-generator baseline on average copies 16.61 continuous tokens from the source which shows the extractive nature of the model. This indicates that pointer networks, aimed at combining advantages from abstractive and extractive methods by allowing to copy content from the input document, tend to skew towards copying, particularly in this dataset. A consequence of this is that the model fails to interrupt copying at desirable sequence length. In contrast, modeling document structure through StructSum reduces the length of copied sequences to 9.13 words on average reducing the bias of copying sentences in entirety. This average is closer to the reference (5.07 words) in comparison, without sacrificing task performance. StructSum learns to stop when needed, only copying enough content to generate a coherent summary. Analysis ::: Content Selection and Abstraction A direct outcome of copying shorter sequences is being able to cover more content from the source document within given length constraints. We observe that this leads to better summarization performance. In our analysis, we compute coverage by computing the number of source sentences from which sequences greater than length 3 are copied in the summary. Table TABREF15 shows a comparison of the coverage of source sentences in the summary content. We see that while the baseline pointer-generator model only copies from 12.1% of the source sentences, we copy content from 24.0% of the source sentences. Additionally, the average length of the summaries produced by StructSum remains mostly unchanged at 66 words on average compared to 61 of the baseline model. 
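The copying analysis above averages the length of contiguous token sequences (longer than three tokens) copied verbatim from the source. A simplified reconstruction of that measurement is sketched below; the greedy matching and whitespace tokenisation are assumptions rather than the paper's exact procedure.

```python
def copied_spans(source_tokens, summary_tokens, min_len=4):
    """Greedily find summary spans of length > 3 that occur verbatim in the source."""
    source_str = " " + " ".join(source_tokens) + " "
    spans, i = [], 0
    while i < len(summary_tokens):
        best = 0
        j = i + min_len
        # Extend the match starting at position i as far as possible.
        while j <= len(summary_tokens) and \
                " " + " ".join(summary_tokens[i:j]) + " " in source_str:
            best = j - i
            j += 1
        if best:
            spans.append(summary_tokens[i:i + best])
            i += best
        else:
            i += 1
    return spans
```

The average copy length reported in the analysis would then be the mean of `len(span)` over all detected spans in the corpus, and coverage the fraction of source sentences contributing at least one span.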
This indicates that StructSum produces summaries that draw from a wider selection of sentences from the original article compared to the baseline models. BIBREF21 show that copying more diverse content in isolation does not necessarily lead to better summaries for extractive summarization. Our analysis suggests that this observation might not extend to abstractive summarization methods. The proportion of novel n-grams generated has been used in the literature to measure the degree of abstraction of summarization models BIBREF3. Figure FIGREF17 compares the percentage of novel n-grams in StructSum as compared to the baseline model. Our model produces novel trigrams 21.0% of the time and copies whole sentences only 21.7% of the time. In comparison, the pointer-generator network has only 6.1% novel trigrams and copies entire sentences 51.7% of the time. This shows that StructSum on average generates 14.7% more novel n-grams in comparison to the pointer-generator baseline. Analysis ::: Layout Bias Neural abstractive summarization methods applied to news articles are typically biased towards selecting and generating summaries based on the first few sentences of the articles. This stems from the structure of news articles, which present the salient information of the article in the first few sentences and expand in the subsequent ones. As a result, the LEAD 3 baseline, which selects the top three sentences of an article, is widely used in the literature as a strong baseline to evaluate summarization models applied to the news domain BIBREF22. BIBREF8 observed that the current summarization models learn to exploit the layout biases of current datasets and offer limited diversity in their outputs. To analyze whether StructSum also holds the same layout biases, we compute a distribution of source sentence indices that are used for copying content (copied sequences of length 3 or more are considered). Figure FIGREF19 shows the comparison of coverage of sentences. The coverage of sentences in the reference summaries shows a high proportion of the top 5 sentences of any article being copied to the summary. Additionally, the reference summaries have a smoother tail end distribution with relevant sentences in all positions being copied. It shows that a smooth distribution over all sentences is a desirable feature. We notice that the sequence-to-sequence and pointer-generator framework (with and without coverage enabled) have a stronger bias towards the beginning of the article with a high concentration of copied sentences within the top 5 sentences of the article. In contrast, StructSum improves coverage slightly having a lower concentration of top 5 sentences and copies more tail end sentences than the baselines. However, although the modeling of structure does help, our model has a reasonable gap compared to the reference distribution. We see this as an area of improvement and a direction for future work. Analysis ::: Document Structures Similar to BIBREF12, we also look at the quality of the intermediate structures learned by the model. We use the Chu-Liu-Edmonds algorithm BIBREF23, BIBREF24 to extract the maximum spanning tree from the attention score matrix as our sentence structure. Table TABREF20 shows the frequency of various tree depths. We find that the average tree depth is 2.9 and the average proportion of leaf nodes is 88%, consistent with results from tree induction in document classification BIBREF25. 
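For the document-structure statistics above (average tree depth and proportion of leaf nodes), the sketch below computes both quantities from a parent array, which in the paper would come from running Chu-Liu-Edmonds on the latent attention scores; counting the root level as depth 1 is an assumption about the convention used.

```python
def tree_stats(parents, root):
    """Depth and leaf proportion of a sentence dependency tree.

    parents[i] is the parent index of sentence i (the root maps to itself)."""
    n = len(parents)
    children = {i: [] for i in range(n)}
    for i, p in enumerate(parents):
        if i != root:
            children[p].append(i)

    def depth(node):
        if not children[node]:
            return 1
        return 1 + max(depth(c) for c in children[node])

    leaves = sum(1 for i in range(n) if not children[i])
    return depth(root), leaves / n

# Example: a flat tree rooted at sentence 0 has depth 2 and 5/6 leaf nodes.
print(tree_stats([0, 0, 0, 0, 0, 0], root=0))
```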
Further, we compare latent trees extracted from StructSum with undirected graphs based on coreference and NER. These are constructed similarly to our explicit coreference based sentence graphs in §SECREF5 by linking sentences with overlapping coreference mentions or named entities. We measure the similarity between the learned latent trees and the explicit graphs through precision and recall over edges. The results are shown in Table TABREF22. We observe that our latent graphs have low recall with the linguistic graphs showing that our latent graphs do not capture the coreference or named entity overlaps explicitly, suggesting that the latent and explicit structures capture complementary information. Figure FIGREF24 shows qualitative examples of our induced structures along with generated summaries from the StructSum model. The first example shows a tree with sentence 3 chosen as root, which was the key sentence mentioned in the reference. We notice that in both examples, the sentences in the lower level of the dependency tree contribute less to the generated summary. Along the same lines, in the examples source sentences used to generate summaries tend to be closer to the root node. In the first summary, all sentences from which content was drawn are either the root node or within depth 1 of the root node. Similarly, in the second example, 4 out of 5 source sentences were at depth=1 in the tree. In the two examples, generated summaries diverged from the reference by omitting certain sentences used in the reference. These sentences appear in the lower section of the tree giving us some insights on which sentences were preferred for the summary generation. Further, in example 1, we notice that the latent structures cluster sentences based on the main topic of the document. Sentences 1,2,3 differ from sentences 5,6,7 on the topic being discussed and our model has clustered the two sets separately. Related Work Prior to neural models for summarization, document structure played a critical role in generating relevant, diverse and coherent summaries. BIBREF26 formulated document summarization using linguistic features to construct a semantic graph of the document and building a subgraph for the summary. BIBREF27 leverage language-independent syntactic graphs of the source document to do unsupervised document summarization. BIBREF1 parse the source text into a set of AMR graphs, transform the graphs to summary graphs and then generate text from the summary graph. While such systems generate grammatical summaries and preserve linguistic quality BIBREF2, they are often computationally demanding and do not generalize well BIBREF21. Data-driven neural models for summarization fall into extractive BIBREF13, BIBREF28 or abstractive BIBREF29, BIBREF3, BIBREF7, BIBREF30. BIBREF3 proposed a pointer-generator framework that learns to either generate novel in-vocabulary words or copy words from the source. This model has been the foundation for a lot of follow up work on abstractive summarization BIBREF7, BIBREF31, BIBREF32. Our model extends the pointer-generator model by incorporating latent structure and explicit structure knowledge, making our extension applicable to any of the followup work. BIBREF6 present a graph-based attention system to improve the saliency of summaries. While this model learns attention between sentences, it does not induce interpretable intermediate structures. A lot of recent work looks into incorporating structure into neural models. 
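The comparison above measures precision and recall over edges between the induced latent trees and the coreference/NER graphs. A small sketch of that measurement follows, treating both structures as undirected edge sets (an assumption, since the passage does not state how edge direction is handled); the example edges are made up.

```python
def edge_precision_recall(latent_edges, explicit_edges):
    """Precision/recall of induced tree edges against an explicit sentence graph."""
    latent = {frozenset(e) for e in latent_edges}
    explicit = {frozenset(e) for e in explicit_edges}
    overlap = latent & explicit
    precision = len(overlap) / len(latent) if latent else 0.0
    recall = len(overlap) / len(explicit) if explicit else 0.0
    return precision, recall

# e.g. latent tree edges vs. a coreference graph over five sentences:
print(edge_precision_recall([(0, 1), (0, 2), (2, 3)], [(0, 1), (1, 3), (2, 3), (3, 4)]))
```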
BIBREF32 infuse source side syntactic structure into the copy mechanism of the pointer-generator model. They identify explicit word-level syntactic features based on dependency parses and parts of speech tags and augment the decoder copy mechanism to attend to them. In contrast, we model sentence level dependency structures in the form of latent or induced structures and explicit coreference based structures. We do not identify any heuristic or salient features other than linking dependent sentences. BIBREF33 propose structural compression and coverage regularizers to provide an objective to neural models to generate concise and informative content. Here, they incorporate structural bias about the target summaries, whereas we choose to model the structure of the source document to produce rich document representations. BIBREF34 induce latent document structure for aspect based summarization. BIBREF35 present a long document summarization model for scientific papers, which attends to discourse sections in a document, while BIBREF36 propose an unsupervised model for review summarization which learns a latent discourse structure and uses it to summarize a review. BIBREF37 use discourse structures to improve coherence in blog summarization. These are all complementary directions to our work. To our knowledge, we are the first to simultaneously incorporate latent and explicit document structure in a single framework for document summarization. Conclusion and Future Work To summarize, our contributions are three-fold. We propose a framework for incorporating latent and explicit document structure in neural abstractive summarization. We introduce a novel explicit-attention module which can incorporate external linguistic structures, and we show one such application where we use coreference to enhance summarization. We show quantitative improvements on the ROUGE metric over strong summarization baselines and demonstrate improvements in abstraction and coverage through extensive qualitative analysis. StructSum demonstrates performance gains and higher-quality output summaries; a potential future direction is to study the role of latent structures in the interpretability of such models. Another possible direction is to investigate whether structured representations allow better generalization for transfer learning and summarization in other domains with limited data.
1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1
be73a88d5b695200e2ead4c2c24e2a977692970e
be73a88d5b695200e2ead4c2c24e2a977692970e_0
Q: Is there any evidence that encoders with latent structures work well on other tasks? Text: Introduction Traditional approaches to abstractive summarization have relied on interpretable structured representations such as graph based sentence centrality BIBREF0, AMR parses BIBREF1, discourse based compression and anaphora constraints BIBREF2. On the other hand, state of the art neural approaches to single document summarization encode the document as a sequence of tokens and compose them into a document representation BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. Albeit being effective, these systems learn to rely significantly on layout bias associated with the source document BIBREF8 and do not lend themselves easily to interpretation via intermediate structures. Recent work provides evidence that structured representation of text leads to better document representations BIBREF9, BIBREF10. However, structured representations are under-explored in the neural summarization literature. Motivated by this, we propose a structure-aware end-to-end model (§SECREF2) for summarization. Our proposed model, StructSum, augments the existing pointer-generator network BIBREF3 with two novel components: (1) a latent-structure attention module that adapts structured representations BIBREF11, BIBREF12 for the summarization task, and (2) an explicit-structure attention module, that incorporates a coreference graph. The components together model sentence level dependencies in a document generating rich structured representations. The motivation of this work is to provide a framework to induce rich interpretable latent structures and inject external document structures that can be introduced into any document encoder model. Encoders with induced latent structures have been shown to benefit several tasks including document classification, natural language inference BIBREF12, BIBREF13, and machine translation BIBREF11. Building on this motivation, our latent structure attention module builds upon BIBREF12 to model the dependencies between sentences in a document. It uses a variant of Kirchhoff’s matrix-tree theorem BIBREF14 to model such dependencies as non-projective tree structures(§SECREF3). The explicit attention module is linguistically-motivated and aims to incorporate sentence-level structures from externally annotated document structures. We incorporate a coreference based sentence dependency graph, which is then combined with the output of the latent structure attention module to produce a hybrid structure-aware sentence representation (§SECREF5). We evaluate our model on the CNN/DM dataset BIBREF15 and show in §SECREF4 that it outperforms strong baselines by up to 1.1 ROUGE-L. We find that the latent and explicit structures are complementary, both contributing to the final performance improvement. Our modules are also independent of the underlying encoder-decoder architectures, rendering them flexible to be incorporated into any advanced models. Our analysis quantitatively compares our generated summaries with the baselines and reference documents (§SECREF5). It reveals that structure-aware summarization reduces the bias of copying large sequences from the source inherently making the summaries more abstractive by generating $\sim $15% more novel n-grams compared to a competitive baseline. We also show qualitative examples of the learned interpretable sentence dependency structures, motivating further research for structure-aware modeling. 
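As background for the model description that follows, the copy mechanism of the underlying pointer-generator network interpolates a generation distribution over the vocabulary with a copy distribution derived from the source attention. The sketch below is schematic rather than the authors' code: it omits the extended-vocabulary handling of out-of-vocabulary source words, and all names are illustrative.

```python
# Schematic pointer-generator mixture: p(w) = p_gen * P_vocab(w) + (1 - p_gen) * copy(w)
import torch

def pointer_generator_dist(p_vocab, attn, src_ids, p_gen, vocab_size):
    """p_vocab: (V,) generation distribution; attn: (S,) source attention;
    src_ids: (S,) vocabulary ids of source tokens; p_gen: scalar in [0, 1]."""
    copy_dist = torch.zeros(vocab_size)
    copy_dist.scatter_add_(0, src_ids, attn)          # sum attention mass per source word
    return p_gen * p_vocab + (1.0 - p_gen) * copy_dist

V, S = 10, 4
p_vocab = torch.softmax(torch.randn(V), dim=0)
attn = torch.softmax(torch.randn(S), dim=0)
src_ids = torch.tensor([2, 5, 5, 7])
print(pointer_generator_dist(p_vocab, attn, src_ids, 0.8, V).sum())  # a valid distribution, sums to ~1
```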
StructSum Model Consider a source document $\mathbf {x}$ consisting of $n$ sentences $\lbrace \mathbf {s}\rbrace $ where each sentence $\mathbf {s}_i$ is composed of a sequence of words. Document summarization aims to map the source document to a target summary of $m$ words $\lbrace y\rbrace $. A typical neural abstractive summarization system is an attentional sequence-to-sequence model that encodes the input sequence $\mathbf {x}$ as a continuous sequence of tokens $\lbrace w\rbrace $ using a BiLSTM. The encoder produces a set of hidden representations $\lbrace \mathbf {h}\rbrace $. An LSTM decoder maps the previously generated token $y_{t-1}$ to a hidden state and computes a soft attention probability distribution $p(\mathbf {a}_t \mid \mathbf {x}, \mathbf {y}_{1:t-1})$ over encoder hidden states. A distribution $p$ over the vocabulary is computed at every timestep $t$ and the network is trained using negative log likelihood loss : $\text{loss}_t = - \mathrm {log}\:p(y_t) $. The pointer-generator network BIBREF3 augments the standard encoder-decoder architecture by linearly interpolating a pointer based copy mechanism. StructSum uses the pointer-generator network as the base model. Our encoder is a structured hierarchical encoder BIBREF16, which computes hidden representations of the sequence both at the token and sentence level. The model then uses the explicit-structure and implicit-structure attention modules to augment the sentence representations with rich sentence dependency information, leveraging both learned latent structure and additional external structure from other NLP modules. The attended vectors are then passed to the decoder, which produces the output sequence for abstractive summarization. In the rest of this section, we describe our model architecture, shown in Figure FIGREF2, in detail. StructSum Model ::: Encoder Our hierarchical encoder consists of a BiLSTM encoder over words, followed by sentence level BiLSTM encoder. The word encoder takes a sequence of words in a sentence $\mathbf {s}_i = \lbrace w\rbrace $ as input and produces contextual hidden representation for each word $\mathbf {h}_{w_{ik}}$, where $w_{ik}$ is the $i^{th}$ word of the $k^{th}$ sentence, $k=1:q$ and $q$ is the number of words in the sentence $\mathbf {s}_i$. The word hidden representations are max-pooled at the sentence level and the result is passed to a BiLSTM sentence-encoder which produces new hidden sentence representations for each sentence $\mathbf {h}_{\mathbf {s}_i}$. The sentence hidden representations are then passed as inputs to latent and explicit structure attention modules. StructSum Model ::: Latent Structure (LS) Attention We model the latent structure of a source document as a non-projective dependency tree and force a pair-wise attention module to automatically induce this tree. We denote the marginal probability of a dependency edge as $a_{ij} = p(z_{ij}=1)$ where $z_{ij}$ is the latent variable representing the edge from sentence $i$ to sentence $j$. We parameterize with a neural network the unnormalized pair-wise scores between sentences and use the Kirchoff's matrix tree theorem BIBREF14 to compute the marginal probability of a dependency edge between any two sentences. We decompose the representation of sentence $\mathbf {s}_i$ into a semantic vector $\mathbf {g}_{\mathbf {s}_i}$ and structure vector $\mathbf {d}_{\mathbf {s}_i}$ as $\mathbf {h}_{\mathbf {s}_i} = [\mathbf {g}_{\mathbf {s}_i}; \mathbf {d}_{\mathbf {s}_i}]$. 
Using the structure vectors $\mathbf {d}_{\mathbf {s}_i}, \mathbf {d}_{\mathbf {s}_j}$, we compute a score $f_{ij}$ between sentence pairs $(i,j)$ (where sentence $i$ is the parent node of sentence $j$) and a score for sentence $\mathbf {s}_i$ being the root node $r_i$: where $F_p, F_c$ and $F_r$ are linear-projection functions to build representations for the parent, child and root node respectively and $W_a$ is the weight for bilinear transformation. Here, $f_{ij}$ is the edge weight between nodes $(i,j)$ in a weighted adjacency graph $\mathbf {F}$ and is computed for all pairs of sentences. Using $f_{ij}$ and $r_i$, we compute normalized attention scores $a_{ij}$ and $a_{i}^r $ using a variant of Kirchhoff’s matrix-tree theorem BIBREF12, BIBREF14 where $a_{ij}$ is the marginal probability of a dependency edge between sentences $(i,j)$ and $a_{i}^r $ is the probability of sentence $i$ being the root. Using these probabilistic attention weights and the semantic vectors $\lbrace \mathbf {g}_{\mathbf {s}}\rbrace $, we compute the attended sentence representations as: where $\mathbf {p}_{\mathbf {s}_i}$ is the context vector gathered from possible parents of sentence $i$, $\mathbf {c}_{\mathbf {s}_i}$ is the context vector gathered from possible children, and $\mathbf {g}_{root}$ is a special embedding for the root node. Here, the updated sentence representation $\textit {l}_{\mathbf {s}_i}$ incorporates the implicit structural information. StructSum Model ::: Explicit Structure (ES) Attention BIBREF2 showed that modeling coreference knowledge through anaphora constraints led to improved clarity or grammaticality in summaries. Taking inspiration from this, we choose coreference links across sentences as our explicit structure. First, we use an off-the-shelf coreference parser to identify coreferring mentions. We then build a coreference based sentence graph by adding a link between sentences $(\mathbf {s}_i, \mathbf {s}_j)$, if they have any coreferring mentions between them. This representation is then converted into a weighted graph by incorporating a weight on the edge between two sentences that is proportional to the number of unique coreferring mentions between them. We normalize these edge weights for every sentence, effectively building a weighted adjacency matrix $\mathbf {K}$ where $k_{ij}$ is given by: where $m_i$ denotes the set of unique mentions in sentence $\mathbf {s}_i$, ($m_i$ $\bigcap $ $m_j$) denotes the set of co-referring mentions between the two sentences and $z$ is a latent variable representing a link in the coreference sentence graph. $\epsilon = 5e-4$ is a smoothing hyperparameter. StructSum Model ::: Explicit Structure (ES) Attention ::: Incorporating explicit structure Given contextual sentence representations $\lbrace \mathbf {h}_{\mathbf {s}}\rbrace $ and our explicit coreference based weighted adjacency matrix $\mathbf {K}$, we learn an explicit-structure aware representation as follows: where $F_u$ and $F_e$ are linear projections and $\mathbf {e}_{\mathbf {s}_i}$ is an updated sentence representation which incorporates explicit structural information. Finally, to combine the two structural representations, we concatenate the latent and explicit sentence vectors as: $\mathbf {h}_{\mathbf {s}_i} = [\mathbf {l}_{\mathbf {s}_i};\mathbf {e}_{\mathbf {s}_i}]$ to form encoder sentence representations of the source document. 
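For concreteness, a minimal PyTorch sketch of one published variant of this matrix-tree computation (in the style of Koo et al.'s and BIBREF12's formulations) is given below. It is not the authors' code: numerical stabilization is omitted, the 0-indexed first-row convention is one common choice, and the exact parameterization used in StructSum may differ.

```python
# Sketch: matrix-tree marginals over non-projective dependency trees.
import torch

def matrix_tree_marginals(f, r):
    """f[i, j]: unnormalized score for an edge from sentence i (parent) to j (child); r[i]: root score."""
    n = f.size(0)
    A = torch.exp(f) * (1.0 - torch.eye(n, dtype=f.dtype, device=f.device))   # zero out self-loops
    root = torch.exp(r)
    L = torch.diag(A.sum(dim=0)) - A              # Laplacian: column sums on the diagonal
    L_bar = L.clone()
    L_bar[0, :] = root                            # first row replaced by root potentials
    L_inv = torch.inverse(L_bar)

    not_first = torch.ones(n, dtype=f.dtype, device=f.device)
    not_first[0] = 0.0                            # 0-indexed version of (1 - delta(., 1))
    diag = torch.diagonal(L_inv)
    # edge marginals a[i, j] and root marginals a_root[i]
    a = A * (not_first.unsqueeze(0) * diag.unsqueeze(0) - not_first.unsqueeze(1) * L_inv.t())
    a_root = root * L_inv[:, 0]
    return a, a_root

a, a_root = matrix_tree_marginals(torch.randn(5, 5), torch.randn(5))
print(a.sum(dim=0) + a_root)   # sanity check: each child's parent marginals plus root marginal sum to ~1
```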
To provide every token representation with context of the entire document, we keep the same formulation as pointer-generator networks, where each token $w_{ij}$ is mapped to its hidden representation $\mathbf {h}_{w_{ij}}$ using a BiLSTM. The token representation is concatenated with their corresponding structure-aware sentence representation: $\mathbf {h}_{w_{ij}} = [\mathbf {h}_{w_{ij}};\mathbf {h}_{\mathbf {s}_i}]$ where $\mathbf {s}_i$ is the sentence to which the word $w_{ij}$ belongs. The resulting structure-aware token representations can be used to directly replace previous token representations as input to the decoder. Experiments ::: Dataset: We evaluate our approach on the CNN/Daily Mail corpus BIBREF15, BIBREF17 and use the same preprocessing steps as shown in BIBREF3. The CNN/DM summaries have an average of 66 tokens ($\sigma = 26$) and 4.9 sentences. Differing from BIBREF3, we truncate source documents to 700 tokens instead of 400 in training and validation sets to model longer documents with more sentences. Experiments ::: Baselines: We choose the following baselines based on their relatedness to the task and wide applicability: BIBREF3 : We re-implement the base pointer-generator model and the additional coverage mechanism. This forms the base model of our implementation and hence our addition of modeling document structure can be directly compared to it. BIBREF6 : This is a graph-based attention model that is closest in spirit to the method we present in this work. They use a graph attention module to learn attention between sentences, but cannot be easily used to induce interpretable document structures, since their attention scores are not constrained to learn structure. In addition to learning latent and interpretable structured attention between sentences, StructSum also introduces an explicit structure component to inject external document structure. BIBREF7 : We compare with the DiffMask experiment with this work. This work introduces a separate content selector which tags words and phrases to be copied. The DiffMask variant is an end-to-end variant like ours and hence is included in our baselines. Our baselines exclude Reinforcement Learning (RL) based systems as they aren't directly comparable, but our approach can be easily introduced in any encoder-decoder based RL system. Since we do not incorporate any pretraining, we do not compare with recent contextual representation based models BIBREF18. Experiments ::: Hyperparameters: Our encoder uses 256 hidden states for both directions in the one-layer LSTM, and 512 for the single-layer decoder. We use the adagrad optimizer BIBREF19 with a learning rate of 0.15 and an initial accumulator value of 0.1. We do not use dropout and use gradient-clipping with a maximum norm of 2. We selected the best model using early stopping based on the ROUGE score on the validation dataset as our criteria. We also used the coverage penalty during inference as shown in BIBREF7. For decoding, we use beam-search with a beam width of 3. We did not observe significant improvements with higher beam widths. Results Table TABREF8 shows the results of our work on the CNN/DM dataset. We use the standard ROUGE-1,2 and L BIBREF20 F1 metric to evaluate all our summarization output. We first observe that introducing the capability to learn latent structures already improves our performance on ROUGE-L. It suggests that modeling dependencies between sentences helps the model compose better long sequences w.r.t reference compared to baselines. 
We do not see a significant improvement in ROUGE-1 and ROUGE-2, hinting that we retrieve similar content words as the baseline but compose them into better contiguous sequences. We observe similar results when using explicit structures only with the ES attention module. This shows that adding inductive bias in the form of coreference based sentence graphs helps compose long sequences. Our results here are close to the model that uses just LS attention. This demonstrates that LS attention induces good latent dependencies that make up for pure external coreference knowledge. Finally, our combined model which uses both Latent and Explicit structure performs the best with a strong improvement of 1.08 points in ROUGE-L over our base pointer-generator model and 0.6 points in ROUGE-1. It shows that the latent and explicit information are complementary and a model can jointly leverage them to produce better summaries. Modeling structure and adding inductive biases also helps a model to converge faster where the combined LS+ES Attention model took 126K iterations for training in comparison to 230K iterations required to train the plain pointer-generator network and an additional 3K iterations for the coverage loss BIBREF3. Analysis We present below analysis on the quality of summarization as compared to our base model, the pointer-generator network with coverage BIBREF3 and the reference. Analysis ::: Analysis of Copying Despite being an abstractive model, the pointer-generator model tends to copy very long sequences of words including whole sentences from the source document (also observed by BIBREF7). Table TABREF15 shows a comparison of the Average Length (Copy Len) of contiguous copied sequences greater than length 3. We observe that the pointer-generator baseline on average copies 16.61 continuous tokens from the source which shows the extractive nature of the model. This indicates that pointer networks, aimed at combining advantages from abstractive and extractive methods by allowing to copy content from the input document, tend to skew towards copying, particularly in this dataset. A consequence of this is that the model fails to interrupt copying at desirable sequence length. In contrast, modeling document structure through StructSum reduces the length of copied sequences to 9.13 words on average reducing the bias of copying sentences in entirety. This average is closer to the reference (5.07 words) in comparison, without sacrificing task performance. StructSum learns to stop when needed, only copying enough content to generate a coherent summary. Analysis ::: Content Selection and Abstraction A direct outcome of copying shorter sequences is being able to cover more content from the source document within given length constraints. We observe that this leads to better summarization performance. In our analysis, we compute coverage by computing the number of source sentences from which sequences greater than length 3 are copied in the summary. Table TABREF15 shows a comparison of the coverage of source sentences in the summary content. We see that while the baseline pointer-generator model only copies from 12.1% of the source sentences, we copy content from 24.0% of the source sentences. Additionally, the average length of the summaries produced by StructSum remains mostly unchanged at 66 words on average compared to 61 of the baseline model. 
This indicates that StructSum produces summaries that draw from a wider selection of sentences from the original article compared to the baseline models. BIBREF21 show that copying more diverse content in isolation does not necessarily lead to better summaries for extractive summarization. Our analysis suggests that this observation might not extend to abstractive summarization methods. The proportion of novel n-grams generated has been used in the literature to measure the degree of abstraction of summarization models BIBREF3. Figure FIGREF17 compares the percentage of novel n-grams in StructSum as compared to the baseline model. Our model produces novel trigrams 21.0% of the time and copies whole sentences only 21.7% of the time. In comparison, the pointer-generator network has only 6.1% novel trigrams and copies entire sentences 51.7% of the time. This shows that StructSum on average generates 14.7% more novel n-grams in comparison to the pointer-generator baseline. Analysis ::: Layout Bias Neural abstractive summarization methods applied to news articles are typically biased towards selecting and generating summaries based on the first few sentences of the articles. This stems from the structure of news articles, which present the salient information of the article in the first few sentences and expand in the subsequent ones. As a result, the LEAD 3 baseline, which selects the top three sentences of an article, is widely used in the literature as a strong baseline to evaluate summarization models applied to the news domain BIBREF22. BIBREF8 observed that the current summarization models learn to exploit the layout biases of current datasets and offer limited diversity in their outputs. To analyze whether StructSum also holds the same layout biases, we compute a distribution of source sentence indices that are used for copying content (copied sequences of length 3 or more are considered). Figure FIGREF19 shows the comparison of coverage of sentences. The coverage of sentences in the reference summaries shows a high proportion of the top 5 sentences of any article being copied to the summary. Additionally, the reference summaries have a smoother tail end distribution with relevant sentences in all positions being copied. It shows that a smooth distribution over all sentences is a desirable feature. We notice that the sequence-to-sequence and pointer-generator framework (with and without coverage enabled) have a stronger bias towards the beginning of the article with a high concentration of copied sentences within the top 5 sentences of the article. In contrast, StructSum improves coverage slightly having a lower concentration of top 5 sentences and copies more tail end sentences than the baselines. However, although the modeling of structure does help, our model has a reasonable gap compared to the reference distribution. We see this as an area of improvement and a direction for future work. Analysis ::: Document Structures Similar to BIBREF12, we also look at the quality of the intermediate structures learned by the model. We use the Chu-Liu-Edmonds algorithm BIBREF23, BIBREF24 to extract the maximum spanning tree from the attention score matrix as our sentence structure. Table TABREF20 shows the frequency of various tree depths. We find that the average tree depth is 2.9 and the average proportion of leaf nodes is 88%, consistent with results from tree induction in document classification BIBREF25. 
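For reference, the novel n-gram proportion used earlier in this analysis as an abstractiveness measure is typically computed as in the minimal sketch below (not the authors' code); whitespace tokenization is assumed and the paper's exact counting conventions may differ.

```python
# Sketch: fraction of summary n-grams that do not appear in the source article.
def source_ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_ratio(source, summary, n=3):
    src, summ = source.lower().split(), summary.lower().split()
    summ_ngrams = [tuple(summ[i:i + n]) for i in range(len(summ) - n + 1)]
    if not summ_ngrams:
        return 0.0
    src_set = source_ngrams(src, n)
    novel = sum(1 for g in summ_ngrams if g not in src_set)
    return novel / len(summ_ngrams)

# A trigram copied verbatim is not novel; paraphrased spans are.
print(novel_ngram_ratio("the cat sat on the mat", "the cat sat quietly on a mat"))
```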
Further, we compare latent trees extracted from StructSum with undirected graphs based on coreference and NER. These are constructed similarly to our explicit coreference based sentence graphs in §SECREF5 by linking sentences with overlapping coreference mentions or named entities. We measure the similarity between the learned latent trees and the explicit graphs through precision and recall over edges. The results are shown in Table TABREF22. We observe that our latent graphs have low recall with the linguistic graphs showing that our latent graphs do not capture the coreference or named entity overlaps explicitly, suggesting that the latent and explicit structures capture complementary information. Figure FIGREF24 shows qualitative examples of our induced structures along with generated summaries from the StructSum model. The first example shows a tree with sentence 3 chosen as root, which was the key sentence mentioned in the reference. We notice that in both examples, the sentences in the lower level of the dependency tree contribute less to the generated summary. Along the same lines, in the examples source sentences used to generate summaries tend to be closer to the root node. In the first summary, all sentences from which content was drawn are either the root node or within depth 1 of the root node. Similarly, in the second example, 4 out of 5 source sentences were at depth=1 in the tree. In the two examples, generated summaries diverged from the reference by omitting certain sentences used in the reference. These sentences appear in the lower section of the tree giving us some insights on which sentences were preferred for the summary generation. Further, in example 1, we notice that the latent structures cluster sentences based on the main topic of the document. Sentences 1,2,3 differ from sentences 5,6,7 on the topic being discussed and our model has clustered the two sets separately. Related Work Prior to neural models for summarization, document structure played a critical role in generating relevant, diverse and coherent summaries. BIBREF26 formulated document summarization using linguistic features to construct a semantic graph of the document and building a subgraph for the summary. BIBREF27 leverage language-independent syntactic graphs of the source document to do unsupervised document summarization. BIBREF1 parse the source text into a set of AMR graphs, transform the graphs to summary graphs and then generate text from the summary graph. While such systems generate grammatical summaries and preserve linguistic quality BIBREF2, they are often computationally demanding and do not generalize well BIBREF21. Data-driven neural models for summarization fall into extractive BIBREF13, BIBREF28 or abstractive BIBREF29, BIBREF3, BIBREF7, BIBREF30. BIBREF3 proposed a pointer-generator framework that learns to either generate novel in-vocabulary words or copy words from the source. This model has been the foundation for a lot of follow up work on abstractive summarization BIBREF7, BIBREF31, BIBREF32. Our model extends the pointer-generator model by incorporating latent structure and explicit structure knowledge, making our extension applicable to any of the followup work. BIBREF6 present a graph-based attention system to improve the saliency of summaries. While this model learns attention between sentences, it does not induce interpretable intermediate structures. A lot of recent work looks into incorporating structure into neural models. 
BIBREF32 infuse source side syntactic structure into the copy mechanism of the pointer-generator model. They identify explicit word-level syntactic features based on dependency parses and parts of speech tags and augment the decoder copy mechanism to attend to them. In contrast, we model sentence level dependency structures in the form of latent or induced structures and explicit coreference based structures. We do not identify any heuristic or salient features other than linking dependent sentences. BIBREF33 propose structural compression and coverage regularizers to provide an objective to neural models to generate concise and informative content. Here, they incorporate structural bias about the target summaries, whereas we choose to model the structure of the source document to produce rich document representations. BIBREF34 induce latent document structure for aspect based summarization. BIBREF35 present a long document summarization model for scientific papers, which attends to discourse sections in a document, while BIBREF36 propose an unsupervised model for review summarization which learns a latent discourse structure and uses it to summarize a review. BIBREF37 use discourse structures to improve coherence in blog summarization. These are all complementary directions to our work. To our knowledge, we are the first to simultaneously incorporate latent and explicit document structure in a single framework for document summarization. Conclusion and Future Work To summarize, our contributions are three-fold. We propose a framework for incorporating latent and explicit document structure in neural abstractive summarization. We introduce a novel explicit-attention module which can incorporate external linguistic structures, and we show one such application where we use coreference to enhance summarization. We show quantitative improvements on the ROUGE metric over strong summarization baselines and demonstrate improvements in abstraction and coverage through extensive qualitative analysis. StructSum demonstrates performance gains and higher-quality output summaries; a potential future direction is to study the role of latent structures in the interpretability of such models. Another possible direction is to investigate whether structured representations allow better generalization for transfer learning and summarization in other domains with limited data.
Yes
0e45aae0e97a6895543e88705e153f084ce9c136
0e45aae0e97a6895543e88705e153f084ce9c136_0
Q: Do they report results only on English? Text: Introduction Transformer based pre-trained language models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have been shown to perform remarkably well on a range of tasks, including entity-related tasks like coreference resolution BIBREF5 and named entity recognition BIBREF0. This performance has been generally attributed to the robust transfer of lexical semantics to downstream tasks. However, these models are still better at capturing syntax than they are at more entity-focused aspects like coreference BIBREF6, BIBREF7; moreover, existing state-of-the-art architectures for such tasks often perform well looking at only local entity mentions BIBREF8, BIBREF9, BIBREF10 rather than forming truly global entity representations BIBREF11, BIBREF12. Thus, performance on these tasks does not form sufficient evidence that these representations strongly capture entity semantics. Better understanding the models' capabilities requires testing them in domains involving complex entity interactions over longer texts. One such domain is that of procedural language, which is strongly focused on tracking the entities involved and their interactions BIBREF13, BIBREF14, BIBREF15. This paper investigates the question of how transformer-based models form entity representations and what these representations capture. We expect that after fine-tuning on a target task, a transformer's output representations should somehow capture relevant entity properties, in the sense that these properties can be extracted by shallow classification either from entity tokens or from marker tokens. However, we observe that such “post-conditioning” approaches don't perform significantly better than rule-based baselines on the tasks we study. We address this by proposing entity-centric ways of structuring input to the transformer networks, using the entity to guide the intrinsic self-attention and form entity-centric representations for all the tokens. We find that our proposed methods lead to a significant improvement in performance over baselines. Although our entity-specific application of transformers is more effective at the entity tracking tasks we study, we perform additional analysis and find that these tasks still do not encourage transformers to form truly deep entity representations. Our performance gain is largely from better understanding of verb semantics in terms of associating process actions with entity the paragraph is conditioned on. The model also does not specialize in “tracking” composed entities per se, again using surface clues like verbs to identify the components involved in a new composition. We evaluate our models on two datasets specifically designed to invoke procedural understanding: (i) Recipes BIBREF16, and (ii) ProPara BIBREF14. For the Recipes dataset, we classify whether an ingredient was affected in a certain step, which requires understanding when ingredients are combined or the focus of the recipe shifts away from them. The ProPara dataset involves answering a more complex set of questions about physical state changes of components in scientific processes. To handle this more structured setting, our transformer produces potentials consumed by a conditional random field which predicts entity states over time. Using a unidirectional GPT-based architecture, we achieve state-of-the-art results on both the datasets; nevertheless, analysis shows that our approach still falls short of capturing the full space of entity interactions. 
Background: Process Understanding Procedural text is a domain of text involved with understanding some kind of process, such as a phenomenon arising in nature or a set of instructions to perform a task. Entity tracking is a core component of understanding such texts. BIBREF14 introduced the ProPara dataset to probe understanding of scientific processes. The goal is to track the sequence of physical state changes (creation, destruction, and movement) entities undergo over long sequences of process steps. Past work involves both modeling entities across time BIBREF17 and capturing structural constraints inherent in the processes BIBREF18, BIBREF19. Figure FIGREF2b shows an example of the dataset posed as a structured prediction task, as in BIBREF19. For such a domain, it is crucial to capture implicit event occurrences beyond explicit entity mentions. For example, in “fuel goes into the generator. The generator converts mechanical energy into electrical energy”, the fuel is implicitly destroyed in the process. BIBREF15 introduced the task of detecting state changes in recipes in the Recipes dataset and proposed an entity-centric memory network neural architecture for simulating action dynamics. Figure FIGREF2a shows an example from the Recipes dataset with a grid showing ingredient presence. We focus specifically on this core problem of ingredient detection; while only one of the sub-tasks associated with their dataset, it reflects some complex semantics involving understanding the current state of the recipe. Tracking of ingredients in the cooking domain is challenging owing to the compositional nature of recipes whereby ingredients mix together and are aliased as intermediate compositions. We pose both of these procedural understanding tasks as classification problems, predicting the state of the entity at each timestep from a set of pre-defined classes. In Figure FIGREF2, these classes correspond either to ingredient presence (1) or absence (0), or to the sequence of state changes create (C), move (M), destroy (D), exists (E), and none (O). State-of-the-art approaches on these tasks are inherently entity-centric. Separately, it has been shown that entity-centric language modeling in a continuous framework can lead to better performance for LM related tasks BIBREF20, BIBREF21. Moreover, external data has been shown to be useful for modeling process understanding tasks in prior work BIBREF18, BIBREF15, suggesting that pre-trained models may be effective. With such tasks in place, a strong model will ideally learn to form robust entity-centric representations at each time step instead of solely relying on extracting information from the local entity mentions. This expectation is primarily due to the evolving nature of the process domain where entities undergo complex interactions, form intermediate compositions, and are often accompanied by implicit state changes. We now investigate to what extent this is true in a standard application of transformer models to this problem. Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models The most natural way to use the pre-trained transformer architectures for the entity tracking tasks is to simply encode the text sequence and then attempt to “read off” entity states from the contextual transformer representation. We call this approach post-conditioning: the transformer runs with no knowledge of which entity or entities we are going to make predictions on; we condition on the target entity only after the transformer stage. 
Figure FIGREF4 depicts this model. Formally, for a labelled pair $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$, we encode the tokenized sequence of steps up to the current timestep (the sentences are separated by using a special [SEP] token), independent of the entity. We denote by $X=[h_{1}, h_{2},\dots , h_{m}]$ the contextualized hidden representation of the $m$ input tokens from the last layer, and by $\textstyle g_{e}\!=\!\!\!\sum \limits _{\text{ent toks}}\!emb(e_i)$ the entity representation for post conditioning. We now use one of the following two ways to make an entity-specific prediction: Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models ::: Task Specific Input Token We append a $\texttt {[CLS]}$ token to the input sequence and use the output representation of the $\texttt {[CLS]}$ token denoted by $h_{ \texttt {[CLS]}}$ concatenated with the learned BPE embeddings of the entity as the representation $c_{e,t}$ for our entity tracking system. We then use a linear layer over it to get class probabilities: The aim of the [CLS] token is to encode information related to general entity related semantics participating in the recipe (sentence priors). We then use a single linear layer to learn sentence priors and entity priors independently, without strong interaction. We call this model GPT$_{indep}$. Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models ::: Entity Based Attention Second, we explore a more fine-grained way of using the GPT model outputs. Specifically, we use bilinear attention between $g_e$ and the transformer output for the process tokens $X$ to get a contextual representation $c_{e,t}$ for a given entity. Finally, using a feed-forward network followed by softmax layer gives us the class probabilities: The bilinear attention over the contextual representations of the process tokens allows the model to fetch token content relevant to that particular entity. We call this model GPT$_{attn}$. Studying Basic Transformer Representations for Entity Tracking ::: Results and Observations We evaluate the discussed post-conditioning models on the ingredient detection task of the Recipes dataset. To benchmark the performance, we compare to three rule-based baselines. This includes (i) Majority Class, (ii) Exact Match of an ingredient $e$ in recipe step $s_t$, and (iii) First Occurrence, where we predict the ingredient to be present in all steps following the first exact match. These latter two baselines capture natural modes of reasoning about the dataset: an ingredient is used when it is directly mentioned, or it is used in every step after it is mentioned, reflecting the assumption that a recipe is about incrementally adding ingredients to an ever-growing mixture. We also construct a LSTM baseline to evaluate the performance of ELMo embeddings (ELMo$_{token}$ and ELMo$_{sent}$) BIBREF22 compared to GPT. Table TABREF10 compares the performance of the discussed models against the baselines, evaluating per-step entity prediction performance. Using the ground truth about ingredient's state, we also report the uncombined (UR) and combined (CR) recalls, which are per-timestep ingredient recall distinguished by whether the ingredient is explicitly mentioned (uncombined) or part of a mixture (combined). Note that Exact Match and First Occ baselines represent high-precision and high-recall regimes for this task, respectively. 
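For concreteness, the two rule-based baselines just described can be sketched as follows; this is an illustrative implementation assuming simple lowercased substring matching, not the authors' code.

```python
# Sketch of the rule-based baselines for per-step ingredient presence:
# Exact Match marks an ingredient present only in steps that mention it;
# First Occurrence marks it present in every step from its first mention on.
def exact_match(ingredient, steps):
    return [int(ingredient.lower() in step.lower()) for step in steps]

def first_occurrence(ingredient, steps):
    labels, seen = [], False
    for step in steps:
        seen = seen or ingredient.lower() in step.lower()
        labels.append(int(seen))
    return labels

steps = ["Beat the eggs.", "Add flour and sugar.", "Pour the batter into a pan."]
print(exact_match("eggs", steps))       # [1, 0, 0]
print(first_occurrence("eggs", steps))  # [1, 1, 1]
```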
As observed from the results, the post-conditioning frameworks underperform compared to the First Occ baseline. While the CR values appear to be high, which would suggest that the model is capturing the addition of ingredients to the mixture, we note that this value is also lower than the corresponding value for First Occ. This result suggests that the model may be approximating the behavior of this baseline, but doing so poorly. The unconditional self-attention mechanism of the transformers does not seem sufficient to capture the entity details at each time step beyond simple presence or absence. Moreover, we see that GPT$_{indep}$ performs somewhat comparably to GPT$_{attn}$, suggesting that consuming the transformer's output with simple attention is not able to really extract the right entity representation. For ProPara, we observe similar performance trends where the post-conditioning model performed below par with the state-of-the-art architectures. Entity-Conditioned Models The post-conditioning framework assumes that the transformer network can form strong representations containing entity information accessible in a shallow way based on the target entity. We now propose a model architecture which more strongly conditions on the entity as a part of the intrinsic self-attention mechanism of the transformers. Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. The input sequence begins with a [START] token, then the entity under consideration, then a [SEP] token. After each sentence, a [CLS] token is used to anchor the prediction for that sentence. In this model, the transformer can always observe the entity it should be primarily “attending to” from the standpoint of building representations. We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. These variants are naturally more computationally-intensive than post-conditioned models, as we need to rerun the transformer for each distinct entity we want to make a prediction for. Entity-Conditioned Models ::: Sentence Level vs. Document Level As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model). In a sentence level model, we formulate each pair of entity $e$ and process step $t$ as a separate instance for our classification task. Thus, for a process with $T$ steps and $m$ entities we get $T \times m$ input sequences for fine tuning our classification task. Entity-Conditioned Models ::: Training Details In most experiments, we initialize the network with the weights of the standard pre-trained GPT model, then subsequently do either domain specific LM fine-tuning and supervised task specific fine-tuning. Entity-Conditioned Models ::: Training Details ::: Domain Specific LM fine-tuning For some procedural domains, we have access to additional unlabeled data. To adapt the LM to capture domain intricacies, we fine-tune the transformer network on this unlabeled corpus. 
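To make the entity-first, document-level input layout described above concrete, a minimal sketch is given below (not the authors' code); the special-token strings and the tokenizer are placeholders, and the per-step predictions are assumed to be read off at the [CLS] positions.

```python
# Sketch: build [START] entity [SEP] s_1 [CLS] s_2 [CLS] ... and record the
# positions whose hidden states anchor each step's prediction.
def entity_first_input(entity, steps, tokenize=str.split):
    tokens, cls_positions = ["[START]"], []
    tokens += tokenize(entity) + ["[SEP]"]
    for step in steps:
        tokens += tokenize(step)
        cls_positions.append(len(tokens))   # index where this step's [CLS] anchor will sit
        tokens.append("[CLS]")
    return tokens, cls_positions

toks, anchors = entity_first_input("eggs", ["beat the eggs", "add flour", "bake"])
print(toks)
print(anchors)   # hidden states at these positions feed the per-step classifier
```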
Entity-Conditioned Models ::: Training Details ::: Supervised Task Fine-Tuning After the domain specific LM fine-tuning, we fine-tune our network parameters for the end task of entity tracking. For this fine-tuning, we have a labelled dataset which we denote by $\mathcal {C}$, the set of labelled pairs $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$ for a given process. The input is converted according to our chosen entity conditioning procedure, then fed through the pre-trained network. In addition, we observed that adding the language model loss during task specific fine-tuning leads to better performance as well, possibly because it adapts the LM to our task-specific input formulation. Thus, our final fine-tuning objective combines the task-specific classification loss with this auxiliary language modeling loss. Entity-Conditioned Models ::: Experiments: Ingredient Detection We first evaluate the proposed entity conditioned self-attention model on the Recipes dataset to compare the performance with the post-conditioning variants. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Systems to Compare We use the pre-trained GPT architecture in the proposed entity conditioned framework with all its variants. BERT mainly differs in that it is bidirectional, though we also use the pre-trained [CLS] and [SEP] tokens instead of introducing new tokens in the input vocabulary and training them from scratch during fine-tuning. Owing to the lengths of the processes, all our experiments are performed on BERT$_{BASE}$. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Systems to Compare ::: Neural Process Networks The most significant prior work on this dataset is the work of BIBREF15. However, their data condition differs significantly from ours: they train on a large noisy training set and do not use any of the high-quality labeled data, instead treating it as dev and test data. Consequently, their model achieves low performance, roughly 56 $F_1$, while ours achieves $82.5$ $F_1$ (though these are not the exact same test set). Moreover, theirs underperforms the first occurrence baseline, which calls into question the value of that training data. Therefore, we do not compare to this model directly. We use the small set of human-annotated data for our probing task. Our train/dev/test split consists of $600/100/175$ recipes, respectively. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Results Table TABREF20 compares the overall performances of our proposed models. Our best ET$_{GPT}$ model achieves an $F_1$ score of $82.50$. Comparing to the baselines (Majority through First) and post-conditioned models, we see that the early entity conditioning is critical to achieve high performance. Although the First model still achieves the highest CR, due to operating in a high-recall regime, we see that the ET$_{GPT}$ models all significantly outperform the post-conditioning models on this metric, indicating better modeling of these compositions. Both recall and precision are substantially increased compared to these baseline models. Interestingly, the ELMo-based model under-performs the first-occurrence baseline, indicating that the LSTM model is not learning much in terms of recognizing complex entity semantics grounded in long term contexts. Comparing the four variants of structuring input in proposed architectures as discussed in Section SECREF4, we observe that the document-level, entity-first model is the best performing variant. 
Given the left-to-right unidirectional transformer architecture, this model notably forms target-specific representations for all process tokens, compared to using the transformer self-attention only to extract entity specific information at the end of the process. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations We perform ablations to evaluate the model's dependency on the context and on the target ingredient. Table TABREF23 shows the results for these ablations. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations ::: Ingredient Specificity In the “no ingredient” baseline (w/o ing.), the model is not provided with the specific ingredient information. Table TABREF23 shows that while not being a strong baseline, the model achieves decent overall accuracy with the drop in UR being higher compared to CR. This indicates that there are some generic indicators (mixture) that it can pick up to try to guess at overall ingredient presence or absence. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations ::: Context Importance We compare with a “no context” model (w/o context) which ignores the previous context and only uses the current recipe step in determining the ingredient's presence. Table TABREF23 shows that such a model is able to perform surprisingly well, nearly as well as the first occurrence baseline. This is because the model can often recognize words like verbs (for example, add) or nouns (for example, mixture) that indicate many ingredients are being used, and can do well without really tracking any specific entity as desired for the task. Entity-Conditioned Models ::: State Change Detection (ProPara) We now focus on a structured task to evaluate the performance of the entity tracking architecture in capturing the structural information in the continuous self-attention framework. For this, we use the ProPara dataset and evaluate our proposed model on the comprehension task. Figure FIGREF2b shows an example of a short instance from the ProPara dataset. The task of identifying state change follows a structure satisfying the existence cycle; for example, an entity cannot be created after destruction. Our prior work BIBREF19 proposed a structured model for the task that achieved state-of-the-art performance. We adapt our proposed entity tracking transformer models to this structured prediction framework, capturing creation, movement, existence (distinct from movement or creation), destruction, and non-existence. We use the standard evaluation scheme of the ProPara dataset, which is framed as answering the following categories of questions: (Cat-1) Is e created (destroyed, moved) in the process?, (Cat-2) When (step #) is e created (destroyed, moved)?, (Cat-3) Where is e created/destroyed/moved (from/to)? Entity-Conditioned Models ::: State Change Detection (ProPara) ::: Systems to Compare We compare our proposed models to the previous work on the ProPara dataset. This includes the entity specific MRC models, EntNet BIBREF23, QRN BIBREF24, and KG-MRC BIBREF17. Also, BIBREF14 proposed two task specific models, ProLocal and ProGlobal, as baselines for the dataset. Finally, we compare against our past neural CRF entity tracking model (NCET) BIBREF19 which uses ELMo embeddings in a neural CRF architecture. For the proposed GPT architecture, we use the task specific [CLS] token to generate tag potentials instead of class probabilities as we did previously. 
For BERT, we perform a similar modification as described in the previous task to utilize the pre-trained [CLS] token to generate tag potentials. Finally, we perform a Viterbi decoding at inference time to infer the most likely valid tag sequence. Entity-Conditioned Models ::: State Change Detection (ProPara) ::: Results Table TABREF28 compares the performance of the proposed entity tracking models on the sentence level task. Since we are considering the classification aspect of the task, we compare our model performance for Cat-1 and Cat-2. As shown, the structured document level, entity first ET$_{GPT}$ and ET$_{BERT}$ models achieve state-of-the-art results. We observe that the major source of performance gain is attributed to the improvement in identifying the exact step(s) for the state changes (Cat-2). This shows that the models are able to better track the entities by identifying the exact step of state change (Cat-2) accurately rather than just detecting the presence of such state changes (Cat-1). This task is more highly structured and in some ways more non-local than ingredient prediction; the high performance here shows that the ET$_{GPT}$ model is able to capture document level structural information effectively. Further, the structural constraints from the CRF also aid in making better predictions. For example, in the process “higher pressure causes the sediment to heat up. the heat causes chemical processes. the material becomes a liquid. is known as oil.”, the material is a by-product of the chemical process but there's no direct mention of it. However, the material ceases to exist in the next step, and because the model is able to predict this correctly, maintaining consistency results in the model finally predicting the entire state change correctly as well. Challenging Task Phenomena Based on the results in the previous section, our models clearly achieve strong performance compared to past approaches. We now revisit the challenging cases discussed in Section SECREF2 to see if our entity tracking approaches are modeling sophisticated entity phenomena as advertised. For both datasets and associated tasks, we isolate the specific set of challenging cases grounded in tracking (i) intermediate compositions formed by combining entities, which are then no longer explicitly mentioned, and (ii) implicit events which change entities' states without explicit mention of the effects. Challenging Task Phenomena ::: Ingredient Detection For Recipes, we mainly want to investigate cases where ingredients re-enter the recipe not in raw form but combined with other ingredients, and hence without explicit mention. For example, eggs in step 4 of Figure FIGREF2a exemplify this case. The performance in such cases is indicative of how strongly the model can track compositional entities. We also examine the performance for cases where the ingredient is referred to by some other name. Challenging Task Phenomena ::: Ingredient Detection ::: Intermediate Compositions Formally, we pick the set of examples where the ground truth is a transition from $0 \rightarrow 1$ (not present to present) and the 1 is a “combined” case. Table TABREF31 shows the model's performance on this subset of cases, of which there are 1049 in the test set. The model achieves an accuracy of 51.1% on these bigrams, which is relatively low given the overall model performance. In the error cases, the model defaults to the $1\rightarrow 1$ pattern indicative of the First Occ baseline. 
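Returning briefly to the decoding step mentioned above for ProPara, the constrained Viterbi search over per-step tag potentials can be sketched as follows; the tag inventory, the transition constraints, and the CRF parameterization here are illustrative simplifications rather than the paper's exact formulation.

```python
# Sketch: Viterbi decoding over per-step tag potentials with a transition mask
# that forbids invalid sequences (e.g., a destroyed entity being acted on again).
import numpy as np

TAGS = ["O", "C", "E", "M", "D"]        # none, create, exist, move, destroy (illustrative)
NEG = -1e9
trans = np.zeros((len(TAGS), len(TAGS)))
d = TAGS.index("D")
for t in ("C", "E", "M", "D"):          # after destruction, only non-existence is allowed
    trans[d, TAGS.index(t)] = NEG

def viterbi(potentials, trans):
    """potentials: (T, K) per-step tag scores; trans: (K, K) transition scores."""
    T, K = potentials.shape
    score = potentials[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + trans + potentials[t][None, :]   # (K_prev, K_cur)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [TAGS[k] for k in reversed(path)]

print(viterbi(np.random.randn(6, len(TAGS)), trans))
```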
Challenging Task Phenomena ::: Ingredient Detection ::: Hypernymy and Synonymy We observe the model is able to capture ingredients based on their hypernyms (nuts $\rightarrow $ pecans, salad $\rightarrow $ lettuce) and rough synonymy (bourbon $\rightarrow $ scotch). This performance can be partially attributed to the language model pre-training. We can isolate these cases by filtering for uncombined ingredients when there is no matching ingredient token in the step. Out of 552 such cases in the test set, the model predicts 375 correctly, giving a recall of $67.9$. This is lower than overall UR; if pre-training behaves as advertised, we expect little degradation in this case, but instead we see performance significantly below the average on uncombined ingredients. Challenging Task Phenomena ::: Ingredient Detection ::: Impact of external data One question we can ask of the model's capabilities is to what extent they arise from domain knowledge in the large pre-trained data. We train transformer models from scratch and additionally investigate using the large corpus of unlabeled recipes for our LM pre-training. As can be seen in Table TABREF35, the incorporation of external data leads to major improvements in the overall performance. This gain is largely due to the increase in combined recall. One possible reason could be that external data leads to better understanding of verb semantics and in turn the specific ingredients forming part of the intermediate compositions. Figure FIGREF37 shows that verbs are a critical clue the model relies on to make predictions. Performing LM fine-tuning on top of GPT also gives gains. Challenging Task Phenomena ::: State Change Detection For ProPara, Table TABREF28 shows that the model does not significantly outperform the SOTA models in state change detection (Cat-1). However, for those correctly detected events, the transformer model outperforms the previous models for detecting the exact step of state change (Cat-2), primarily based on verb semantics. We do a finer-grained study in Table TABREF36 by breaking down the performance for the three state changes: creation (C), movement (M), and destruction (D), separately. Across the three state changes, the model suffers a loss of performance in the movement cases. This is owing to the fact that the movement cases require deeper compositional and implicit event tracking. Also, a majority of errors leading to false negatives are due to the formation of new sub-entities which are then mentioned with other names. For example, when talking about weak acid in “the water becomes a weak acid. the water dissolves limestone”, the weak acid is also considered to move to the limestone. Analysis The model's performance on these challenging task cases suggests that even though it outperforms baselines, it may not be capturing deep reasoning about entities. To understand what the model actually does, we analyze its behavior with respect to the input to see what cues it is picking up on. Analysis ::: Gradient based Analysis One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. 
It further shows that the model learns to extract shallow clues of identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics. In an ideal scenario, we would want the model to track constituent entities by translating the “focus” to track their newly formed compositions with other entities, often aliased by other names like mixture, blend, paste etc. However, the low performance on such cases shown in Section SECREF5 gives further evidence that the model is not doing this. Analysis ::: Input Ablations We can study which inputs are important more directly by explicitly removing specific certain words from the input process paragraph and evaluating the performance of the resulting input under the current model setup. We mainly did experiments to examine the importance of: (i) verbs, and (ii) other ingredients. Table TABREF40 presents these ablation studies. We only observe a minor performance drop from $84.59$ to $82.71$ (accuracy) when other ingredients are removed entirely. Removing verbs dropped the performance to $79.08$ and further omitting both leads to $77.79$. This shows the model’s dependence on verb semantics over tracking the other ingredients. Conclusion In this paper, we examined the capabilities of transformer networks for capturing entity state semantics. First, we show that the conventional framework of using the transformer networks is not rich enough to capture entity semantics in these cases. We then propose entity-centric ways to formulate richer transformer encoding of the process paragraph, guiding the self-attention in a target entity oriented way. This approach leads to significant performance improvements, but examining model performance more deeply, we conclude that these models still do not model the intermediate compositional entities and perform well by largely relying on surface entity mentions and verb semantics. Acknowledgments This work was partially supported by NSF Grant IIS-1814522 and an equipment grant from NVIDIA. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research. Results presented in this paper were obtained using the Chameleon testbed supported by the National Science Foundation. Thanks as well to the anonymous reviewers for their helpful comments.
Unanswerable
c515269b37cc186f6f82ab9ada5d9ca176335ded
c515269b37cc186f6f82ab9ada5d9ca176335ded_0
Q: What evidence do they present that the model attends to shallow context clues? Text: Introduction Transformer based pre-trained language models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have been shown to perform remarkably well on a range of tasks, including entity-related tasks like coreference resolution BIBREF5 and named entity recognition BIBREF0. This performance has been generally attributed to the robust transfer of lexical semantics to downstream tasks. However, these models are still better at capturing syntax than they are at more entity-focused aspects like coreference BIBREF6, BIBREF7; moreover, existing state-of-the-art architectures for such tasks often perform well looking at only local entity mentions BIBREF8, BIBREF9, BIBREF10 rather than forming truly global entity representations BIBREF11, BIBREF12. Thus, performance on these tasks does not form sufficient evidence that these representations strongly capture entity semantics. Better understanding the models' capabilities requires testing them in domains involving complex entity interactions over longer texts. One such domain is that of procedural language, which is strongly focused on tracking the entities involved and their interactions BIBREF13, BIBREF14, BIBREF15. This paper investigates the question of how transformer-based models form entity representations and what these representations capture. We expect that after fine-tuning on a target task, a transformer's output representations should somehow capture relevant entity properties, in the sense that these properties can be extracted by shallow classification either from entity tokens or from marker tokens. However, we observe that such “post-conditioning” approaches don't perform significantly better than rule-based baselines on the tasks we study. We address this by proposing entity-centric ways of structuring input to the transformer networks, using the entity to guide the intrinsic self-attention and form entity-centric representations for all the tokens. We find that our proposed methods lead to a significant improvement in performance over baselines. Although our entity-specific application of transformers is more effective at the entity tracking tasks we study, we perform additional analysis and find that these tasks still do not encourage transformers to form truly deep entity representations. Our performance gain is largely from better understanding of verb semantics in terms of associating process actions with entity the paragraph is conditioned on. The model also does not specialize in “tracking” composed entities per se, again using surface clues like verbs to identify the components involved in a new composition. We evaluate our models on two datasets specifically designed to invoke procedural understanding: (i) Recipes BIBREF16, and (ii) ProPara BIBREF14. For the Recipes dataset, we classify whether an ingredient was affected in a certain step, which requires understanding when ingredients are combined or the focus of the recipe shifts away from them. The ProPara dataset involves answering a more complex set of questions about physical state changes of components in scientific processes. To handle this more structured setting, our transformer produces potentials consumed by a conditional random field which predicts entity states over time. 
Using a unidirectional GPT-based architecture, we achieve state-of-the-art results on both the datasets; nevertheless, analysis shows that our approach still falls short of capturing the full space of entity interactions. Background: Process Understanding Procedural text is a domain of text involved with understanding some kind of process, such as a phenomenon arising in nature or a set of instructions to perform a task. Entity tracking is a core component of understanding such texts. BIBREF14 introduced the ProPara dataset to probe understanding of scientific processes. The goal is to track the sequence of physical state changes (creation, destruction, and movement) entites undergo over long sequences of process steps. Past work involves both modeling entities across time BIBREF17 and capturing structural constraints inherent in the processes BIBREF18, BIBREF19 Figure FIGREF2b shows an example of the dataset posed as a structured prediction task, as in BIBREF19. For such a domain, it is crucial to capture implicit event occurrences beyond explicit entity mentions. For example, in fuel goes into the generator. The generator converts mechanical energy into electrical energy”, the fuel is implicitly destroyed in the process. BIBREF15 introduced the task of detecting state changes in recipes in the Recipes dataset and proposed an entity-centric memory network neural architecture for simulating action dynamics. Figure FIGREF2a shows an example from the Recipes dataset with a grid showing ingredient presence. We focus specifically on this core problem of ingredient detection; while only one of the sub-tasks associated with their dataset, it reflects some complex semantics involving understanding the current state of the recipe. Tracking of ingredients in the cooking domain is challenging owing to the compositional nature of recipes whereby ingredients mix together and are aliased as intermediate compositions. We pose both of these procedural understanding tasks as classification problems, predicting the state of the entity at each timestep from a set of pre-defined classes. In Figure FIGREF2, these classes correspond to either the presence (1) or absence (0) or the sequence of state changes create (C), move (M), destroy (D), exists (E), and none (O). State-of-the-art approaches on these tasks are inherently entity-centric. Separately, it has been shown that entity-centric language modeling in a continuous framework can lead to better performance for LM related tasks BIBREF20, BIBREF21. Moreover, external data has shown to be useful for modeling process understanding tasks in prior work BIBREF18, BIBREF15, suggesting that pre-trained models may be effective. With such tasks in place, a strong model will ideally learn to form robust entity-centric representation at each time step instead of solely relying on extracting information from the local entity mentions. This expectation is primarily due to the evolving nature of the process domain where entities undergo complex interactions, form intermediate compositions, and are often accompanied by implicit state changes. We now investigate to what extent this is true in a standard application of transformer models to this problem. Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models The most natural way to use the pre-trained transformer architectures for the entity tracking tasks is to simply encode the text sequence and then attempt to “read off” entity states from the contextual transformer representation. 
We call this approach post-conditioning: the transformer runs with no knowledge of which entity or entities we are going to make predictions on, but we only condition on the target entity after the transformer stage. Figure FIGREF4 depicts this model. Formally, for a labelled pair $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$, we encode the tokenized sequence of steps up to the current timestep (the sentences are separated by using a special [SEP] token), independent of the entity. We denote by $X=[h_{1}, h_{2},\dots , h_{m}]$ the contextualized hidden representation of the $m$ input tokens from the last layer, and by $\textstyle g_{e}\!=\!\!\!\sum \limits _{\text{ent toks}}\!emb(e_i)$ the entity representation for post conditioning. We now use one of the following two ways to make an entity-specific prediction: Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models ::: Task Specific Input Token We append a $\texttt {[CLS]}$ token to the input sequence and use the output representation of the $\texttt {[CLS]}$ token denoted by $h_{ \texttt {[CLS]}}$ concatenated with the learned BPE embeddings of the entity as the representation $c_{e,t}$ for our entity tracking system. We then use a linear layer over it to get class probabilities: The aim of the [CLS] token is to encode information related to general entity related semantics participating in the recipe (sentence priors). We then use a single linear layer to learn sentence priors and entity priors independently, without strong interaction. We call this model GPT$_{indep}$. Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models ::: Entity Based Attention Second, we explore a more fine-grained way of using the GPT model outputs. Specifically, we use bilinear attention between $g_e$ and the transformer output for the process tokens $X$ to get a contextual representation $c_{e,t}$ for a given entity. Finally, using a feed-forward network followed by softmax layer gives us the class probabilities: The bilinear attention over the contextual representations of the process tokens allows the model to fetch token content relevant to that particular entity. We call this model GPT$_{attn}$. Studying Basic Transformer Representations for Entity Tracking ::: Results and Observations We evaluate the discussed post-conditioning models on the ingredient detection task of the Recipes dataset. To benchmark the performance, we compare to three rule-based baselines. This includes (i) Majority Class, (ii) Exact Match of an ingredient $e$ in recipe step $s_t$, and (iii) First Occurrence, where we predict the ingredient to be present in all steps following the first exact match. These latter two baselines capture natural modes of reasoning about the dataset: an ingredient is used when it is directly mentioned, or it is used in every step after it is mentioned, reflecting the assumption that a recipe is about incrementally adding ingredients to an ever-growing mixture. We also construct a LSTM baseline to evaluate the performance of ELMo embeddings (ELMo$_{token}$ and ELMo$_{sent}$) BIBREF22 compared to GPT. Table TABREF10 compares the performance of the discussed models against the baselines, evaluating per-step entity prediction performance. 
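Before turning to those results, the following is a minimal PyTorch sketch of the GPT$_{attn}$ head described above: bilinear attention between the entity embedding $g_e$ and the transformer outputs $X$, followed by a feed-forward classifier. The hidden size, class count, and initialization are placeholder assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PostConditioningAttnHead(nn.Module):
    """Sketch of a GPT_attn-style head: bilinear attention between the entity
    embedding g_e and the transformer outputs X, then a feed-forward classifier."""

    def __init__(self, hidden_size=768, num_classes=2):
        super().__init__()
        self.bilinear = nn.Parameter(torch.randn(hidden_size, hidden_size) * 0.02)
        self.classifier = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.ReLU(),
            nn.Linear(hidden_size, num_classes),
        )

    def forward(self, X, g_e):
        # X: (seq_len, hidden) transformer outputs; g_e: (hidden,) summed entity embeddings
        scores = X @ self.bilinear @ g_e              # (seq_len,) bilinear scores
        alpha = torch.softmax(scores, dim=0)          # attention over process tokens
        c_et = alpha @ X                              # entity-specific context vector
        return self.classifier(c_et)                  # class logits (softmax applied outside)
```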
Using the ground truth about ingredient's state, we also report the uncombined (UR) and combined (CR) recalls, which are per-timestep ingredient recall distinguished by whether the ingredient is explicitly mentioned (uncombined) or part of a mixture (combined). Note that Exact Match and First Occ baselines represent high-precision and high-recall regimes for this task, respectively. As observed from the results, the post-conditioning frameworks underperform compared to the First Occ baseline. While the CR values appear to be high, which would suggest that the model is capturing the addition of ingredients to the mixture, we note that this value is also lower than the corresponding value for First Occ. This result suggests that the model may be approximating the behavior of this baseline, but doing so poorly. The unconditional self-attention mechanism of the transformers does not seem sufficient to capture the entity details at each time step beyond simple presence or absence. Moreover, we see that GPT$_{indep}$ performs somewhat comparably to GPT$_{attn}$, suggesting that consuming the transformer's output with simple attention is not able to really extract the right entity representation. For ProPara, we observe similar performance trends where the post-conditioning model performed below par with the state-of-the-art architectures. Entity-Conditioned Models The post-conditioning framework assumes that the transformer network can form strong representations containing entity information accessible in a shallow way based on the target entity. We now propose a model architecture which more strongly conditions on the entity as a part of the intrinsic self-attention mechanism of the transformers. Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. The input sequence begins with a [START] token, then the entity under consideration, then a [SEP] token. After each sentence, a [CLS] token is used to anchor the prediction for that sentence. In this model, the transformer can always observe the entity it should be primarily “attending to” from the standpoint of building representations. We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. These variants are naturally more computationally-intensive than post-conditioned models, as we need to rerun the transformer for each distinct entity we want to make a prediction for. Entity-Conditioned Models ::: Sentence Level vs. Document Level As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model). In a sentence level model, we formulate each pair of entity $e$ and process step $t$ as a separate instance for our classification task. Thus, for a process with $T$ steps and $m$ entities we get $T \times m$ input sequences for fine tuning our classification task. Entity-Conditioned Models ::: Training Details In most experiments, we initialize the network with the weights of the standard pre-trained GPT model, then subsequently do either domain specific LM fine-tuning and supervised task specific fine-tuning. 
Entity-Conditioned Models ::: Training Details ::: Domain Specific LM fine-tuning For some procedural domains, we have access to additional unlabeled data. To adapt the LM to capture domain intricacies, we fine-tune the transformer network on this unlabeled corpus. Entity-Conditioned Models ::: Training Details ::: Supervised Task Fine-Tuning After the domain specific LM fine-tuning, we fine-tune our network parameters for the end task of entity tracking. For task-specific fine-tuning, we have a labelled dataset which we denote by $\mathcal {C}$, the set of labelled pairs $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$ for a given process. The input is converted according to our chosen entity conditioning procedure, then fed through the pre-trained network. In addition, we observed that adding the language model loss during task specific fine-tuning leads to better performance as well, possibly because it adapts the LM to our task-specific input formulation. Thus, we optimize a combined objective that sums the task classification loss and the language modeling loss. Entity-Conditioned Models ::: Experiments: Ingredient Detection We first evaluate the proposed entity conditioned self-attention model on the Recipes dataset to compare the performance with the post-conditioning variants. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Systems to Compare We use the pre-trained GPT architecture in the proposed entity conditioned framework with all its variants. BERT mainly differs in that it is bidirectional, though we also use the pre-trained [CLS] and [SEP] tokens instead of introducing new tokens in the input vocabulary and training them from scratch during fine-tuning. Owing to the lengths of the processes, all our experiments are performed on BERT$_{BASE}$. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Systems to Compare ::: Neural Process Networks The most significant prior work on this dataset is the work of BIBREF15. However, their data condition differs significantly from ours: they train on a large noisy training set and do not use any of the high-quality labeled data, instead treating it as dev and test data. Consequently, their model achieves low performance, roughly 56 $F_1$, while ours achieves $82.5$ $F_1$ (though these are not the exact same test set). Moreover, theirs underperforms the first occurrence baseline, which calls into question the value of that training data. Therefore, we do not compare to this model directly. We use the small set of human-annotated data for our probing task. Our train/dev/test split consists of $600/100/175$ recipes, respectively. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Results Table TABREF20 compares the overall performance of our proposed models. Our best ET$_{GPT}$ model achieves an $F_1$ score of $82.50$. Compared to the baselines (Majority through First) and post-conditioned models, we see that early entity conditioning is critical to achieving high performance. Although the First model still achieves the highest CR, due to operating in a high-recall regime, we see that the ET$_{GPT}$ models all significantly outperform the post-conditioning models on this metric, indicating better modeling of these compositions. Both recall and precision are substantially increased compared to these baseline models. Interestingly, the ELMo-based model underperforms the first-occurrence baseline, indicating that the LSTM model is not learning much in terms of recognizing complex entity semantics grounded in long-term contexts.
Comparing the four variants of structuring the input in the proposed architectures as discussed in Section SECREF4, we observe that the document-level, entity-first model is the best performing variant. Given the left-to-right unidirectional transformer architecture, this model notably forms target-specific representations for all process tokens, compared to using the transformer self-attention only to extract entity specific information at the end of the process. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations We perform ablations to evaluate the model's dependency on the context and on the target ingredient. Table TABREF23 shows the results for these ablations. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations ::: Ingredient Specificity In the “no ingredient” baseline (w/o ing.), the model is not provided with the specific ingredient information. Table TABREF23 shows that while not being a strong baseline, the model achieves decent overall accuracy, with the drop in UR being higher compared to CR. This indicates that there are some generic indicators (mixture) that it can pick up to try to guess at overall ingredient presence or absence. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations ::: Context Importance We compare with a “no context” model (w/o context) which ignores the previous context and only uses the current recipe step in determining the ingredient's presence. Table TABREF23 shows that such a model is able to perform surprisingly well, nearly as well as the first occurrence baseline. This is because the model can often recognize words like verbs (for example, add) or nouns (for example, mixture) that indicate many ingredients are being used, and can do well without really tracking any specific entity as desired for the task. Entity-Conditioned Models ::: State Change Detection (ProPara) Next, we focus on a structured task to evaluate the performance of the entity tracking architecture in capturing the structural information in the continuous self-attention framework. For this, we use the ProPara dataset and evaluate our proposed model on the comprehension task. Figure FIGREF2b shows an example of a short instance from the ProPara dataset. The task of identifying state change follows a structure satisfying the existence cycle; for example, an entity cannot be created after destruction. Our prior work BIBREF19 proposed a structured model for the task that achieved state-of-the-art performance. We adapt our proposed entity tracking transformer models to this structured prediction framework, capturing creation, movement, existence (distinct from movement or creation), destruction, and non-existence. We use the standard evaluation scheme of the ProPara dataset, which is framed as answering the following categories of questions: (Cat-1) Is e created (destroyed, moved) in the process?, (Cat-2) When (step #) is e created (destroyed, moved)?, (Cat-3) Where is e created (destroyed, moved) from/to? Entity-Conditioned Models ::: State Change Detection (ProPara) ::: Systems to Compare We compare our proposed models to the previous work on the ProPara dataset. This includes the entity specific MRC models, EntNet BIBREF23, QRN BIBREF24, and KG-MRC BIBREF17. Also, BIBREF14 proposed two task specific models, ProLocal and ProGlobal, as baselines for the dataset. Finally, we compare against our past neural CRF entity tracking model (NCET) BIBREF19 which uses ELMo embeddings in a neural CRF architecture.
For the proposed GPT architecture, we use the task specific [CLS] token to generate tag potentials instead of class probabilities as we did previously. For BERT, we perform a similar modification as described in the previous task to utilize the pre-trained [CLS] token to generate tag potentials. Finally, we perform a Viterbi decoding at inference time to infer the most likely valid tag sequence. Entity-Conditioned Models ::: State Change Detection (ProPara) ::: Results Table TABREF28 compares the performance of the proposed entity tracking models on the sentence level task. Since, we are considering the classification aspect of the task, we compare our model performance for Cat-1 and Cat-2. As shown, the structured document level, entity first ET$_{GPT}$ and ET$_{BERT}$ models achieve state-of-the-art results. We observe that the major source of performance gain is attributed to the improvement in identifying the exact step(s) for the state changes (Cat-2). This shows that the model are able to better track the entities by identifying the exact step of state change (Cat-2) accurately rather than just detecting the presence of such state changes (Cat-1). This task is more highly structured and in some ways more non-local than ingredient prediction; the high performance here shows that the ET$_{GPT}$ model is able to capture document level structural information effectively. Further, the structural constraints from the CRF also aid in making better predictions. For example, in the process “higher pressure causes the sediment to heat up. the heat causes chemical processes. the material becomes a liquid. is known as oil.”, the material is a by-product of the chemical process but there's no direct mention of it. However, the material ceases to exist in the next step, and because the model is able to predict this correctly, maintaining consistency results in the model finally predicting the entire state change correctly as well. Challenging Task Phenomena Based on the results in the previous section, our models clearly achieve strong performance compared to past approaches. We now revisit the challenging cases discussed in Section SECREF2 to see if our entity tracking approaches are modeling sophisticated entity phenomena as advertised. For both datasets and associated tasks, we isolate the specific set of challenging cases grounded in tracking (i) intermediate compositions formed as part of combination of entities leading to no explicit mention, and (ii) implicit events which change entities' states without explicit mention of the affects. Challenging Task Phenomena ::: Ingredient Detection For Recipes, we mainly want to investigate cases of ingredients getting re-engaged in the recipe not in a raw form but in a combined nature with other ingredients and henceforth no explicit mention. For example, eggs in step 4 of Figure FIGREF2a exemplifies this case. The performance in such cases is indicative of how strongly the model can track compositional entities. We also examine the performance for cases where the ingredient is referred by some other name. Challenging Task Phenomena ::: Ingredient Detection ::: Intermediate Compositions Formally, we pick the set of examples where the ground truth is a transition from $0 \rightarrow 1$ (not present to present) and the 1 is a “combined” case. Table TABREF31 shows the model's performance on this subset of cases, of which there are 1049 in the test set. 
The model achieves an accuracy of 51.1% on these bigrams, which is relatively low given the overall model performance. In the error cases, the model defaults to the $1\rightarrow 1$ pattern indicative of the First Occ baseline. Challenging Task Phenomena ::: Ingredient Detection ::: Hypernymy and Synonymy We observe the model is able to capture ingredients based on their hypernyms (nuts $\rightarrow $ pecans, salad $\rightarrow $ lettuce) and rough synonymy (bourbon $\rightarrow $ scotch). This performance can be partially attributed to the language model pre-training. We can isolate these cases by filtering for uncombined ingredients when there is no matching ingredient token in the step. Out of 552 such cases in the test set, the model predicts 375 correctly giving a recall of $67.9$. This is lower than overall UR; if pre-training behaves as advertised, we expect little degradation in this case, but instead we see performance significantly below the average on uncombined ingredients. Challenging Task Phenomena ::: Ingredient Detection ::: Impact of external data One question we can ask of the model's capabilities is to what extent they arise from domain knowledge in the large pre-trained data. We train transformer models from scratch and additionally investigate using the large corpus of unlabeled recipes for our LM pre-training. As can be seen in Table TABREF35, the incorporation of external data leads to major improvements in the overall performance. This gain is largely due to the increase in combined recall. One possible reason could be that external data leads to better understanding of verb semantics and in turn the specific ingredients forming part of the intermediate compositions. Figure FIGREF37 shows that verbs are a critical clue the model relies on to make predictions. Performing LM fine-tuning on top of GPT also gives gains. Challenging Task Phenomena ::: State Change Detection For ProPara, Table TABREF28 shows that the model does not significantly outperform the SOTA models in state change detection (Cat-1). However, for those correctly detected events, the transformer model outperforms the previous models for detecting the exact step of state change (Cat-2), primarily based on verb semantics. We do a finer-grained study in Table TABREF36 by breaking down the performance for the three state changes: creation (C), movement (M), and destruction (D), separately. Across the three state changes, the model suffers a loss of performance in the movement cases. This is owing to the fact that the movement cases require a deeper compositional and implicit event tracking. Also, a majority of errors leading to false negatives are due to the the formation of new sub-entities which are then mentioned with other names. For example, when talking about weak acid in “the water becomes a weak acid. the water dissolves limestone” the weak acid is also considered to move to the limestone. Analysis The model's performance on these challenging task cases suggests that even though it outperforms baselines, it may not be capturing deep reasoning about entities. To understand what the model actually does, we perform analysis of the model's behavior with respect to the input to understand what cues it is picking up on. Analysis ::: Gradient based Analysis One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. 
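A minimal sketch of this kind of gradient attribution is given below, assuming a PyTorch classifier that accepts pre-computed input embeddings (an HF-style inputs_embeds keyword is assumed here); the exact attribution method used in the cited work may differ, for example integrated gradients rather than plain gradients.

```python
import torch

def saliency(model, input_embeds, target_class):
    """Sketch of gradient-based attribution: the gradient norm of the target-class
    score with respect to each input token embedding gives a per-token importance."""
    input_embeds = input_embeds.clone().detach().requires_grad_(True)
    logits = model(inputs_embeds=input_embeds)        # assumed signature: returns (1, num_classes)
    logits[0, target_class].backward()
    return input_embeds.grad.norm(dim=-1).squeeze(0)  # (seq_len,) per-token importance scores
```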
Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves. It further shows that the model learns to extract shallow clues of identifying actions exerted upon only the entity being tracked, regardless of other entities, by leveraging verb semantics. In an ideal scenario, we would want the model to track constituent entities by translating the “focus” to track their newly formed compositions with other entities, often aliased by other names like mixture, blend, paste etc. However, the low performance on such cases shown in Section SECREF5 gives further evidence that the model is not doing this. Analysis ::: Input Ablations We can study which inputs are important more directly by explicitly removing specific certain words from the input process paragraph and evaluating the performance of the resulting input under the current model setup. We mainly did experiments to examine the importance of: (i) verbs, and (ii) other ingredients. Table TABREF40 presents these ablation studies. We only observe a minor performance drop from $84.59$ to $82.71$ (accuracy) when other ingredients are removed entirely. Removing verbs dropped the performance to $79.08$ and further omitting both leads to $77.79$. This shows the model’s dependence on verb semantics over tracking the other ingredients. Conclusion In this paper, we examined the capabilities of transformer networks for capturing entity state semantics. First, we show that the conventional framework of using the transformer networks is not rich enough to capture entity semantics in these cases. We then propose entity-centric ways to formulate richer transformer encoding of the process paragraph, guiding the self-attention in a target entity oriented way. This approach leads to significant performance improvements, but examining model performance more deeply, we conclude that these models still do not model the intermediate compositional entities and perform well by largely relying on surface entity mentions and verb semantics. Acknowledgments This work was partially supported by NSF Grant IIS-1814522 and an equipment grant from NVIDIA. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research. Results presented in this paper were obtained using the Chameleon testbed supported by the National Science Foundation. Thanks as well to the anonymous reviewers for their helpful comments.
Using model gradients with respect to input features, they show that the most important model inputs are verbs associated with the tracked entity (along with the entity's own mentions), which indicates that the model attends to shallow context clues.
43f86cd8aafe930ebb35ca919ada33b74b36c7dd
43f86cd8aafe930ebb35ca919ada33b74b36c7dd_0
Q: In what way is the input restructured? Text: Introduction Transformer based pre-trained language models BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 have been shown to perform remarkably well on a range of tasks, including entity-related tasks like coreference resolution BIBREF5 and named entity recognition BIBREF0. This performance has been generally attributed to the robust transfer of lexical semantics to downstream tasks. However, these models are still better at capturing syntax than they are at more entity-focused aspects like coreference BIBREF6, BIBREF7; moreover, existing state-of-the-art architectures for such tasks often perform well looking at only local entity mentions BIBREF8, BIBREF9, BIBREF10 rather than forming truly global entity representations BIBREF11, BIBREF12. Thus, performance on these tasks does not form sufficient evidence that these representations strongly capture entity semantics. Better understanding the models' capabilities requires testing them in domains involving complex entity interactions over longer texts. One such domain is that of procedural language, which is strongly focused on tracking the entities involved and their interactions BIBREF13, BIBREF14, BIBREF15. This paper investigates the question of how transformer-based models form entity representations and what these representations capture. We expect that after fine-tuning on a target task, a transformer's output representations should somehow capture relevant entity properties, in the sense that these properties can be extracted by shallow classification either from entity tokens or from marker tokens. However, we observe that such “post-conditioning” approaches don't perform significantly better than rule-based baselines on the tasks we study. We address this by proposing entity-centric ways of structuring input to the transformer networks, using the entity to guide the intrinsic self-attention and form entity-centric representations for all the tokens. We find that our proposed methods lead to a significant improvement in performance over baselines. Although our entity-specific application of transformers is more effective at the entity tracking tasks we study, we perform additional analysis and find that these tasks still do not encourage transformers to form truly deep entity representations. Our performance gain is largely from better understanding of verb semantics in terms of associating process actions with entity the paragraph is conditioned on. The model also does not specialize in “tracking” composed entities per se, again using surface clues like verbs to identify the components involved in a new composition. We evaluate our models on two datasets specifically designed to invoke procedural understanding: (i) Recipes BIBREF16, and (ii) ProPara BIBREF14. For the Recipes dataset, we classify whether an ingredient was affected in a certain step, which requires understanding when ingredients are combined or the focus of the recipe shifts away from them. The ProPara dataset involves answering a more complex set of questions about physical state changes of components in scientific processes. To handle this more structured setting, our transformer produces potentials consumed by a conditional random field which predicts entity states over time. Using a unidirectional GPT-based architecture, we achieve state-of-the-art results on both the datasets; nevertheless, analysis shows that our approach still falls short of capturing the full space of entity interactions. 
Background: Process Understanding Procedural text is a domain of text involved with understanding some kind of process, such as a phenomenon arising in nature or a set of instructions to perform a task. Entity tracking is a core component of understanding such texts. BIBREF14 introduced the ProPara dataset to probe understanding of scientific processes. The goal is to track the sequence of physical state changes (creation, destruction, and movement) entites undergo over long sequences of process steps. Past work involves both modeling entities across time BIBREF17 and capturing structural constraints inherent in the processes BIBREF18, BIBREF19 Figure FIGREF2b shows an example of the dataset posed as a structured prediction task, as in BIBREF19. For such a domain, it is crucial to capture implicit event occurrences beyond explicit entity mentions. For example, in fuel goes into the generator. The generator converts mechanical energy into electrical energy”, the fuel is implicitly destroyed in the process. BIBREF15 introduced the task of detecting state changes in recipes in the Recipes dataset and proposed an entity-centric memory network neural architecture for simulating action dynamics. Figure FIGREF2a shows an example from the Recipes dataset with a grid showing ingredient presence. We focus specifically on this core problem of ingredient detection; while only one of the sub-tasks associated with their dataset, it reflects some complex semantics involving understanding the current state of the recipe. Tracking of ingredients in the cooking domain is challenging owing to the compositional nature of recipes whereby ingredients mix together and are aliased as intermediate compositions. We pose both of these procedural understanding tasks as classification problems, predicting the state of the entity at each timestep from a set of pre-defined classes. In Figure FIGREF2, these classes correspond to either the presence (1) or absence (0) or the sequence of state changes create (C), move (M), destroy (D), exists (E), and none (O). State-of-the-art approaches on these tasks are inherently entity-centric. Separately, it has been shown that entity-centric language modeling in a continuous framework can lead to better performance for LM related tasks BIBREF20, BIBREF21. Moreover, external data has shown to be useful for modeling process understanding tasks in prior work BIBREF18, BIBREF15, suggesting that pre-trained models may be effective. With such tasks in place, a strong model will ideally learn to form robust entity-centric representation at each time step instead of solely relying on extracting information from the local entity mentions. This expectation is primarily due to the evolving nature of the process domain where entities undergo complex interactions, form intermediate compositions, and are often accompanied by implicit state changes. We now investigate to what extent this is true in a standard application of transformer models to this problem. Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models The most natural way to use the pre-trained transformer architectures for the entity tracking tasks is to simply encode the text sequence and then attempt to “read off” entity states from the contextual transformer representation. We call this approach post-conditioning: the transformer runs with no knowledge of which entity or entities we are going to make predictions on, but we only condition on the target entity after the transformer stage. 
Figure FIGREF4 depicts this model. Formally, for a labelled pair $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$, we encode the tokenized sequence of steps up to the current timestep (the sentences are separated by using a special [SEP] token), independent of the entity. We denote by $X=[h_{1}, h_{2},\dots , h_{m}]$ the contextualized hidden representation of the $m$ input tokens from the last layer, and by $\textstyle g_{e}\!=\!\!\!\sum \limits _{\text{ent toks}}\!emb(e_i)$ the entity representation for post conditioning. We now use one of the following two ways to make an entity-specific prediction: Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models ::: Task Specific Input Token We append a $\texttt {[CLS]}$ token to the input sequence and use the output representation of the $\texttt {[CLS]}$ token denoted by $h_{ \texttt {[CLS]}}$ concatenated with the learned BPE embeddings of the entity as the representation $c_{e,t}$ for our entity tracking system. We then use a linear layer over it to get class probabilities: The aim of the [CLS] token is to encode information related to general entity related semantics participating in the recipe (sentence priors). We then use a single linear layer to learn sentence priors and entity priors independently, without strong interaction. We call this model GPT$_{indep}$. Studying Basic Transformer Representations for Entity Tracking ::: Post-conditioning Models ::: Entity Based Attention Second, we explore a more fine-grained way of using the GPT model outputs. Specifically, we use bilinear attention between $g_e$ and the transformer output for the process tokens $X$ to get a contextual representation $c_{e,t}$ for a given entity. Finally, using a feed-forward network followed by softmax layer gives us the class probabilities: The bilinear attention over the contextual representations of the process tokens allows the model to fetch token content relevant to that particular entity. We call this model GPT$_{attn}$. Studying Basic Transformer Representations for Entity Tracking ::: Results and Observations We evaluate the discussed post-conditioning models on the ingredient detection task of the Recipes dataset. To benchmark the performance, we compare to three rule-based baselines. This includes (i) Majority Class, (ii) Exact Match of an ingredient $e$ in recipe step $s_t$, and (iii) First Occurrence, where we predict the ingredient to be present in all steps following the first exact match. These latter two baselines capture natural modes of reasoning about the dataset: an ingredient is used when it is directly mentioned, or it is used in every step after it is mentioned, reflecting the assumption that a recipe is about incrementally adding ingredients to an ever-growing mixture. We also construct a LSTM baseline to evaluate the performance of ELMo embeddings (ELMo$_{token}$ and ELMo$_{sent}$) BIBREF22 compared to GPT. Table TABREF10 compares the performance of the discussed models against the baselines, evaluating per-step entity prediction performance. Using the ground truth about ingredient's state, we also report the uncombined (UR) and combined (CR) recalls, which are per-timestep ingredient recall distinguished by whether the ingredient is explicitly mentioned (uncombined) or part of a mixture (combined). Note that Exact Match and First Occ baselines represent high-precision and high-recall regimes for this task, respectively. 
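For reference, the two lexical baselines above can be sketched as follows; steps is a list of tokenized recipe steps and ingredient_tokens the tokens of the target ingredient (both names are illustrative, and the token-overlap test is a simple approximation of exact matching).

```python
def exact_match(steps, ingredient_tokens):
    """Predict 1 only at steps that literally mention one of the ingredient's tokens."""
    return [int(any(tok in step for tok in ingredient_tokens)) for step in steps]

def first_occurrence(steps, ingredient_tokens):
    """Predict 1 at the first literal mention and at every step after it."""
    preds, seen = [], False
    for step in steps:
        seen = seen or any(tok in step for tok in ingredient_tokens)
        preds.append(int(seen))
    return preds
```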
As observed from the results, the post-conditioning frameworks underperform compared to the First Occ baseline. While the CR values appear to be high, which would suggest that the model is capturing the addition of ingredients to the mixture, we note that this value is also lower than the corresponding value for First Occ. This result suggests that the model may be approximating the behavior of this baseline, but doing so poorly. The unconditional self-attention mechanism of the transformers does not seem sufficient to capture the entity details at each time step beyond simple presence or absence. Moreover, we see that GPT$_{indep}$ performs somewhat comparably to GPT$_{attn}$, suggesting that consuming the transformer's output with simple attention is not able to really extract the right entity representation. For ProPara, we observe similar performance trends where the post-conditioning model performed below par with the state-of-the-art architectures. Entity-Conditioned Models The post-conditioning framework assumes that the transformer network can form strong representations containing entity information accessible in a shallow way based on the target entity. We now propose a model architecture which more strongly conditions on the entity as a part of the intrinsic self-attention mechanism of the transformers. Our approach consists of structuring input to the transformer network to use and guide the self-attention of the transformers, conditioning it on the entity. Our main mode of encoding the input, the entity-first method, is shown in Figure FIGREF4. The input sequence begins with a [START] token, then the entity under consideration, then a [SEP] token. After each sentence, a [CLS] token is used to anchor the prediction for that sentence. In this model, the transformer can always observe the entity it should be primarily “attending to” from the standpoint of building representations. We also have an entity-last variant where the entity is primarily observed just before the classification token to condition the [CLS] token's self-attention accordingly. These variants are naturally more computationally-intensive than post-conditioned models, as we need to rerun the transformer for each distinct entity we want to make a prediction for. Entity-Conditioned Models ::: Sentence Level vs. Document Level As an additional variation, we can either run the transformer once per document with multiple [CLS] tokens (a document-level model as shown in Figure FIGREF4) or specialize the prediction to a single timestep (a sentence-level model). In a sentence level model, we formulate each pair of entity $e$ and process step $t$ as a separate instance for our classification task. Thus, for a process with $T$ steps and $m$ entities we get $T \times m$ input sequences for fine tuning our classification task. Entity-Conditioned Models ::: Training Details In most experiments, we initialize the network with the weights of the standard pre-trained GPT model, then subsequently do either domain specific LM fine-tuning and supervised task specific fine-tuning. Entity-Conditioned Models ::: Training Details ::: Domain Specific LM fine-tuning For some procedural domains, we have access to additional unlabeled data. To adapt the LM to capture domain intricacies, we fine-tune the transformer network on this unlabeled corpus. 
Entity-Conditioned Models ::: Training Details ::: Supervised Task Fine-Tuning After the domain specific LM fine-tuning, we fine-tune our network parameters for the end task of entity tracking. For task-specific fine-tuning, we have a labelled dataset which we denote by $\mathcal {C}$, the set of labelled pairs $(\lbrace s_1, s_2, \dots , s_t\rbrace , y_{et})$ for a given process. The input is converted according to our chosen entity conditioning procedure, then fed through the pre-trained network. In addition, we observed that adding the language model loss during task specific fine-tuning leads to better performance as well, possibly because it adapts the LM to our task-specific input formulation. Thus, we optimize a combined objective that sums the task classification loss and the language modeling loss. Entity-Conditioned Models ::: Experiments: Ingredient Detection We first evaluate the proposed entity conditioned self-attention model on the Recipes dataset to compare the performance with the post-conditioning variants. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Systems to Compare We use the pre-trained GPT architecture in the proposed entity conditioned framework with all its variants. BERT mainly differs in that it is bidirectional, though we also use the pre-trained [CLS] and [SEP] tokens instead of introducing new tokens in the input vocabulary and training them from scratch during fine-tuning. Owing to the lengths of the processes, all our experiments are performed on BERT$_{BASE}$. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Systems to Compare ::: Neural Process Networks The most significant prior work on this dataset is the work of BIBREF15. However, their data condition differs significantly from ours: they train on a large noisy training set and do not use any of the high-quality labeled data, instead treating it as dev and test data. Consequently, their model achieves low performance, roughly 56 $F_1$, while ours achieves $82.5$ $F_1$ (though these are not the exact same test set). Moreover, theirs underperforms the first occurrence baseline, which calls into question the value of that training data. Therefore, we do not compare to this model directly. We use the small set of human-annotated data for our probing task. Our train/dev/test split consists of $600/100/175$ recipes, respectively. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Results Table TABREF20 compares the overall performance of our proposed models. Our best ET$_{GPT}$ model achieves an $F_1$ score of $82.50$. Compared to the baselines (Majority through First) and post-conditioned models, we see that early entity conditioning is critical to achieving high performance. Although the First model still achieves the highest CR, due to operating in a high-recall regime, we see that the ET$_{GPT}$ models all significantly outperform the post-conditioning models on this metric, indicating better modeling of these compositions. Both recall and precision are substantially increased compared to these baseline models. Interestingly, the ELMo-based model underperforms the first-occurrence baseline, indicating that the LSTM model is not learning much in terms of recognizing complex entity semantics grounded in long-term contexts. Comparing the four variants of structuring the input in the proposed architectures as discussed in Section SECREF4, we observe that the document-level, entity-first model is the best performing variant.
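To make this winning input format concrete, the sketch below assembles a document-level, entity-first sequence as described earlier: a [START] token, the entity, a [SEP] token, and then each sentence followed by a [CLS] anchor. The literal token strings are placeholders for whatever the pre-trained vocabulary actually provides.

```python
def build_entity_first_input(entity_tokens, step_tokens):
    """Document-level, entity-first encoding: the entity is visible to the
    self-attention from the start, and one [CLS] per sentence anchors that
    step's prediction. Returns the token sequence and the [CLS] positions."""
    tokens = ["[START]"] + list(entity_tokens) + ["[SEP]"]
    cls_positions = []
    for step in step_tokens:            # step_tokens: list of tokenized sentences
        tokens.extend(step)
        tokens.append("[CLS]")
        cls_positions.append(len(tokens) - 1)
    return tokens, cls_positions

# e.g. build_entity_first_input(["butter"], [["melt", "the", "butter"], ["add", "flour"]])
```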
Given the left-to-right unidirectional transformer architecture, this model notably forms target-specific representations for all process tokens, compared to using the transformer self-attention only to extract entity specific information at the end of the process. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations We perform ablations to evaluate the model's dependency on the context and on the target ingredient. Table TABREF23 shows the results for these ablations. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations ::: Ingredient Specificity In the “no ingredient” baseline (w/o ing.), the model is not provided with the specific ingredient information. Table TABREF23 shows that while not being a strong baseline, the model achieves decent overall accuracy, with the drop in UR being higher compared to CR. This indicates that there are some generic indicators (mixture) that it can pick up to try to guess at overall ingredient presence or absence. Entity-Conditioned Models ::: Experiments: Ingredient Detection ::: Ablations ::: Context Importance We compare with a “no context” model (w/o context) which ignores the previous context and only uses the current recipe step in determining the ingredient's presence. Table TABREF23 shows that such a model is able to perform surprisingly well, nearly as well as the first occurrence baseline. This is because the model can often recognize words like verbs (for example, add) or nouns (for example, mixture) that indicate many ingredients are being used, and can do well without really tracking any specific entity as desired for the task. Entity-Conditioned Models ::: State Change Detection (ProPara) Next, we focus on a structured task to evaluate the performance of the entity tracking architecture in capturing the structural information in the continuous self-attention framework. For this, we use the ProPara dataset and evaluate our proposed model on the comprehension task. Figure FIGREF2b shows an example of a short instance from the ProPara dataset. The task of identifying state change follows a structure satisfying the existence cycle; for example, an entity cannot be created after destruction. Our prior work BIBREF19 proposed a structured model for the task that achieved state-of-the-art performance. We adapt our proposed entity tracking transformer models to this structured prediction framework, capturing creation, movement, existence (distinct from movement or creation), destruction, and non-existence. We use the standard evaluation scheme of the ProPara dataset, which is framed as answering the following categories of questions: (Cat-1) Is e created (destroyed, moved) in the process?, (Cat-2) When (step #) is e created (destroyed, moved)?, (Cat-3) Where is e created (destroyed, moved) from/to? Entity-Conditioned Models ::: State Change Detection (ProPara) ::: Systems to Compare We compare our proposed models to the previous work on the ProPara dataset. This includes the entity specific MRC models, EntNet BIBREF23, QRN BIBREF24, and KG-MRC BIBREF17. Also, BIBREF14 proposed two task specific models, ProLocal and ProGlobal, as baselines for the dataset. Finally, we compare against our past neural CRF entity tracking model (NCET) BIBREF19 which uses ELMo embeddings in a neural CRF architecture. For the proposed GPT architecture, we use the task specific [CLS] token to generate tag potentials instead of class probabilities as we did previously.
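A minimal sketch of the structured decoding is shown below: per-step tag potentials (emissions) taken from the [CLS] outputs are combined with a transition matrix in which invalid moves (for example, creation after destruction) are set to a very negative score, and the best valid tag sequence is recovered with Viterbi. The tag inventory and the constraint encoding here are illustrative, not the paper's exact CRF.

```python
import torch

TAGS = ["O", "C", "E", "M", "D"]  # none, create, exist, move, destroy (illustrative order)

def viterbi_decode(emissions, transitions):
    """emissions: (T, num_tags) per-step tag potentials from the [CLS] outputs;
    transitions: (num_tags, num_tags) scores, with -inf (or a large negative value)
    for invalid moves such as anything but 'O' after destruction.
    Returns the best-scoring tag index sequence."""
    T, K = emissions.shape
    score = emissions[0].clone()
    backptr = []
    for t in range(1, T):
        # total[i, j]: best path ending in tag i at t-1, then transitioning to tag j
        total = score.unsqueeze(1) + transitions + emissions[t].unsqueeze(0)
        score, idx = total.max(dim=0)
        backptr.append(idx)
    best = [int(score.argmax())]
    for idx in reversed(backptr):
        best.append(int(idx[best[-1]]))
    return list(reversed(best))
```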
For BERT, we perform a similar modification as described in the previous task to utilize the pre-trained [CLS] token to generate tag potentials. Finally, we perform a Viterbi decoding at inference time to infer the most likely valid tag sequence. Entity-Conditioned Models ::: State Change Detection (ProPara) ::: Results Table TABREF28 compares the performance of the proposed entity tracking models on the sentence level task. Since, we are considering the classification aspect of the task, we compare our model performance for Cat-1 and Cat-2. As shown, the structured document level, entity first ET$_{GPT}$ and ET$_{BERT}$ models achieve state-of-the-art results. We observe that the major source of performance gain is attributed to the improvement in identifying the exact step(s) for the state changes (Cat-2). This shows that the model are able to better track the entities by identifying the exact step of state change (Cat-2) accurately rather than just detecting the presence of such state changes (Cat-1). This task is more highly structured and in some ways more non-local than ingredient prediction; the high performance here shows that the ET$_{GPT}$ model is able to capture document level structural information effectively. Further, the structural constraints from the CRF also aid in making better predictions. For example, in the process “higher pressure causes the sediment to heat up. the heat causes chemical processes. the material becomes a liquid. is known as oil.”, the material is a by-product of the chemical process but there's no direct mention of it. However, the material ceases to exist in the next step, and because the model is able to predict this correctly, maintaining consistency results in the model finally predicting the entire state change correctly as well. Challenging Task Phenomena Based on the results in the previous section, our models clearly achieve strong performance compared to past approaches. We now revisit the challenging cases discussed in Section SECREF2 to see if our entity tracking approaches are modeling sophisticated entity phenomena as advertised. For both datasets and associated tasks, we isolate the specific set of challenging cases grounded in tracking (i) intermediate compositions formed as part of combination of entities leading to no explicit mention, and (ii) implicit events which change entities' states without explicit mention of the affects. Challenging Task Phenomena ::: Ingredient Detection For Recipes, we mainly want to investigate cases of ingredients getting re-engaged in the recipe not in a raw form but in a combined nature with other ingredients and henceforth no explicit mention. For example, eggs in step 4 of Figure FIGREF2a exemplifies this case. The performance in such cases is indicative of how strongly the model can track compositional entities. We also examine the performance for cases where the ingredient is referred by some other name. Challenging Task Phenomena ::: Ingredient Detection ::: Intermediate Compositions Formally, we pick the set of examples where the ground truth is a transition from $0 \rightarrow 1$ (not present to present) and the 1 is a “combined” case. Table TABREF31 shows the model's performance on this subset of cases, of which there are 1049 in the test set. The model achieves an accuracy of 51.1% on these bigrams, which is relatively low given the overall model performance. In the error cases, the model defaults to the $1\rightarrow 1$ pattern indicative of the First Occ baseline. 
Challenging Task Phenomena ::: Ingredient Detection ::: Hypernymy and Synonymy We observe that the model is able to capture ingredients based on their hypernyms (nuts $\rightarrow $ pecans, salad $\rightarrow $ lettuce) and rough synonymy (bourbon $\rightarrow $ scotch). This performance can be partially attributed to the language model pre-training. We can isolate these cases by filtering for uncombined ingredients where there is no matching ingredient token in the step. Out of 552 such cases in the test set, the model predicts 375 correctly, giving a recall of $67.9$. This is lower than overall UR; if pre-training behaves as advertised, we would expect little degradation in this case, but instead we see performance significantly below the average on uncombined ingredients. Challenging Task Phenomena ::: Ingredient Detection ::: Impact of external data One question we can ask of the model's capabilities is to what extent they arise from domain knowledge in the large pre-training data. We train transformer models from scratch and additionally investigate using the large corpus of unlabeled recipes for our LM pre-training. As can be seen in Table TABREF35, the incorporation of external data leads to major improvements in overall performance. This gain is largely due to the increase in combined recall. One possible reason is that external data leads to a better understanding of verb semantics and, in turn, of the specific ingredients forming part of the intermediate compositions. Figure FIGREF37 shows that verbs are a critical cue the model relies on to make predictions. Performing LM fine-tuning on top of GPT also gives gains. Challenging Task Phenomena ::: State Change Detection For ProPara, Table TABREF28 shows that the model does not significantly outperform the SOTA models in state change detection (Cat-1). However, for those correctly detected events, the transformer model outperforms the previous models in detecting the exact step of state change (Cat-2), primarily based on verb semantics. We do a finer-grained study in Table TABREF36 by breaking down the performance for the three state changes separately: creation (C), movement (M), and destruction (D). Across the three state changes, the model suffers a loss of performance on the movement cases. This is owing to the fact that the movement cases require deeper compositional and implicit event tracking. Also, a majority of errors leading to false negatives are due to the formation of new sub-entities which are then mentioned under other names. For example, when talking about weak acid in “the water becomes a weak acid. the water dissolves limestone”, the weak acid is also considered to move to the limestone. Analysis The model's performance on these challenging task cases suggests that even though it outperforms baselines, it may not be capturing deep reasoning about entities. To understand what the model actually does, we analyze the model's behavior with respect to the input to understand what cues it is picking up on. Analysis ::: Gradient based Analysis One way to analyze the model is to compute model gradients with respect to input features BIBREF26, BIBREF25. Figure FIGREF37 shows that in this particular example, the most important model inputs are verbs possibly associated with the entity butter, in addition to the entity's mentions themselves.
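The gradient-based analysis above can be approximated with a simple input-saliency computation: take the gradient of the prediction score with respect to the input token embeddings and reduce it to one score per token. The sketch below is only an illustration under stated assumptions: it assumes a PyTorch classification model that accepts precomputed embeddings (as Hugging Face models do via inputs_embeds) and returns an output with a .logits field; the model and target index are placeholders, not the authors' exact setup.

```python
import torch

def token_saliency(model, input_ids: torch.Tensor, target_index: int) -> torch.Tensor:
    """Return one saliency score per input token for the chosen output logit.
    Assumes `model(inputs_embeds=...)` returns an object with 2-D `.logits`."""
    model.eval()
    embeddings = model.get_input_embeddings()(input_ids)     # (1, seq_len, dim)
    embeddings = embeddings.detach().requires_grad_(True)
    logits = model(inputs_embeds=embeddings).logits           # (1, num_classes) assumed
    score = logits[0, target_index]
    grads, = torch.autograd.grad(score, embeddings)
    # Gradient-times-input, summed over the embedding dimension, gives a
    # signed importance score for each token in the paragraph.
    return (grads * embeddings).sum(dim=-1).squeeze(0).detach()
```

In the butter example, the tokens with the largest absolute scores would correspond to the highlighted verbs and entity mentions discussed above.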
The figure further shows that the model learns to extract shallow cues for identifying actions exerted on only the entity being tracked, regardless of other entities, by leveraging verb semantics. In an ideal scenario, we would want the model to track constituent entities by shifting its “focus” to track their newly formed compositions with other entities, often aliased by other names like mixture, blend, paste, etc. However, the low performance on such cases shown in Section SECREF5 gives further evidence that the model is not doing this. Analysis ::: Input Ablations We can study which inputs are important more directly by explicitly removing certain words from the input process paragraph and evaluating the model on the resulting input under the current setup. We mainly ran experiments to examine the importance of (i) verbs and (ii) other ingredients. Table TABREF40 presents these ablation studies. We observe only a minor performance drop, from $84.59$ to $82.71$ (accuracy), when other ingredients are removed entirely. Removing verbs drops the performance to $79.08$, and omitting both leads to $77.79$. This shows the model’s dependence on verb semantics over tracking the other ingredients. Conclusion In this paper, we examined the capabilities of transformer networks for capturing entity state semantics. First, we show that the conventional framework of using transformer networks is not rich enough to capture entity semantics in these cases. We then propose entity-centric ways to formulate richer transformer encodings of the process paragraph, guiding the self-attention in a target-entity-oriented way. This approach leads to significant performance improvements, but examining model performance more deeply, we conclude that these models still do not model intermediate compositional entities and perform well largely by relying on surface entity mentions and verb semantics. Acknowledgments This work was partially supported by NSF Grant IIS-1814522 and an equipment grant from NVIDIA. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research. Results presented in this paper were obtained using the Chameleon testbed supported by the National Science Foundation. Thanks as well to the anonymous reviewers for their helpful comments.
In four entity-centric ways - entity-first, entity-last, document-level and sentence-level
aa60b0a6c1601e09209626fd8c8bdc463624b0b3
aa60b0a6c1601e09209626fd8c8bdc463624b0b3_0
Q: What are their results on the entity recognition task? Text: Introduction The increasing use of social media and microblogging services has broken new ground in the field of Information Extraction (IE) from user-generated content (UGC). Understanding the information contained in users' content has become one of the main goals for many applications, due to the uniqueness and the variety of this data BIBREF0. However, the highly informal and noisy nature of these sources makes it difficult to apply techniques proposed by the NLP community for dealing with formal and structured content BIBREF1. In this work, we analyze a set of tweets related to a specific classical music radio channel, BBC Radio 3, with the aim of detecting two types of musical named entities, Contributor and Musical Work. The method proposed makes use of the information extracted from the radio schedule to create links between users' tweets and broadcast tracks. Thanks to this linking, we aim to detect when users refer to entities included in the schedule. Apart from that, we consider a series of linguistic features, partly taken from the NLP literature and partly specifically designed for this task, for building statistical models able to recognize the musical entities. To that aim, we perform several experiments with a supervised learning model, Support Vector Machine (SVM), and a recurrent neural network architecture, a bidirectional LSTM with a CRF layer (biLSTM-CRF). The contributions of this work are summarized as follows: The paper is structured as follows. In Section 2, we present a review of previous work related to Named Entity Recognition, focusing on its application to UGC and MIR. Afterwards, Section 3 presents the methodology of this work, describing the dataset and the method proposed. In Section 4, the results obtained are shown. Finally, in Section 5 conclusions are discussed. Related Work Named Entity Recognition (NER), or alternatively Named Entity Recognition and Classification (NERC), is the task of detecting entities in an input text and assigning them to a specific class. The task was first defined in the early '80s, and over the years several approaches have been proposed BIBREF2. Early systems were based on handcrafted rule-based algorithms, while more recently several contributions from the Machine Learning community have helped in integrating probabilistic models into NER systems. In particular, new developments in neural architectures have become an important resource for this task. Their main advantages are that they do not need language-specific knowledge resources BIBREF3, and they are robust to the noisy and short nature of social media messages BIBREF4. Indeed, according to a performance analysis of several Named Entity Recognition and Linking systems presented in BIBREF5, poor capitalization is one of the main issues when dealing with microblog content. Apart from that, typographic errors and the ubiquitous occurrence of out-of-vocabulary (OOV) words also cause drops in NER recall and precision, together with shortenings and slang, which are particularly pronounced in tweets. Music Information Retrieval (MIR) is an interdisciplinary field which borrows tools from several disciplines, such as signal processing, musicology, machine learning, psychology and many others, for extracting knowledge from musical objects (be they audio, texts, etc.) BIBREF6.
In the last decade, several MIR tasks have benefited from NLP, such as sound and music recommendation BIBREF7, automatic summarization of song reviews BIBREF8, artist similarity BIBREF9 and genre classification BIBREF10. In the field of IE, a first approach for detecting musical named entities from raw text, based on Hidden Markov Models, was proposed in BIBREF11. In BIBREF12, the authors combine state-of-the-art Entity Linking (EL) systems to tackle the problem of detecting musical entities from raw texts. The method proposed relies on the argumentum ad populum intuition: if two or more different EL systems make the same prediction when linking a named entity mention, that prediction is more likely to be correct. In detail, the off-the-shelf systems used are DBpedia Spotlight BIBREF13, TagMe BIBREF14 and Babelfy BIBREF15. Moreover, a first Musical Entity Linking system, MEL, has been presented in BIBREF16, which combines different state-of-the-art NLP libraries with SimpleBrainz, an RDF knowledge base created from MusicBrainz after a simplification process. Furthermore, Twitter has also been at the center of many studies by the MIR community. As an example, for building a music recommender system, BIBREF17 analyzes tweets containing keywords like nowplaying or listeningto. In BIBREF9, a similar dataset is used for discovering cultural listening patterns. Publicly available Twitter corpora built for MIR investigations have also been created, among others the Million Musical Tweets dataset BIBREF18 and the #nowplaying dataset BIBREF19. Methodology We propose a hybrid method which recognizes musical entities in UGC using both contextual and linguistic information. We focus on detecting two types of entities: Contributor: a person who is related to a musical work (composer, performer, conductor, etc.). Musical Work: a musical composition or recording (symphony, concerto, overture, etc.). As a case study, we have chosen to analyze tweets extracted from the channel of a classical music radio station, BBC Radio 3. The choice to focus on classical music is mostly motivated by the particular discrepancy between the informal language used on the social platform and the formal nomenclature of contributors and musical works. Indeed, when referring to a musician or to a classical piece in a tweet, users rarely use the full name of the person or of the work, as shown in Table 2. We extract information from the radio schedule to recreate the musical context in which user-generated tweets are analyzed, detecting when they refer to a specific work or contributor recently played. We associate a list of entities with every broadcast track, thanks to the tweets automatically posted by the BBC Radio3 Music Bot, which describe the track currently on air. In Table 3, examples of bot-generated tweets are shown. Afterwards, we detect the entities in the user-generated content by means of two methods: on one side, we use the entities extracted from the radio schedule to generate candidate entities in the user-generated tweets, thanks to a matching algorithm based on time proximity and string similarity. On the other side, we create a statistical model capable of detecting entities directly from the UGC, aimed at modeling the informal language of the raw texts. In Figure 1, an overview of the proposed system is presented.
Dataset In May 2018, we crawled Twitter using the Python library Tweepy, creating two datasets in which Contributor and Musical Work entities have been manually annotated using IOB tags. The first set contains user-generated tweets related to the BBC Radio 3 channel. It represents the source of user-generated content on which we aim to predict the named entities. We create it by filtering messages containing hashtags related to BBC Radio 3, such as #BBCRadio3 or #BBCR3. We obtain a set of 2,225 unique user-generated tweets. The second set consists of the messages automatically generated by the BBC Radio 3 Music Bot. This set contains 5,093 automatically generated tweets, from which we have recreated the schedule. In Table 4, the number of tokens and the corresponding annotated entities are reported for the two datasets. For evaluation purposes, both sets are randomly split into a training part (80%) and two test sets (10% each). Within the user-generated corpus, annotated entities account for only about 5% of the total number of tokens. In the case of the automatically generated tweets, the percentage is significantly greater and entities represent about 50%. NER system According to the literature reviewed, state-of-the-art NER systems proposed by the NLP community are not tailored to detecting musical entities in user-generated content. Consequently, our first objective has been to understand how to adapt existing systems to achieve significant results in this task. In the following sections, we describe separately the features, the word embeddings and the models considered. All the resources used are publicly available. We define a set of features for characterizing the text at the token level. We mix standard linguistic features, such as Part-Of-Speech (POS) and chunk tags, with several gazetteers specifically built for classical music, and a series of features representing each token's left and right context. For extracting the POS and the chunk tag we use the Python library twitter_nlp, presented in BIBREF1. In total, we define 26 features for describing each token: 1) POS tag; 2) chunk tag; 3) position of the token within the text, normalized between 0 and 1; 4) whether the token starts with a capital letter; 5) whether the token is a digit. Gazetteers: 6) Contributor first names; 7) Contributor last names; 8) Contributor types ("soprano", "violinist", etc.); 9) classical work types ("symphony", "overture", etc.); 10) musical instruments; 11) opus forms ("op", "opus"); 12) work number forms ("no", "number"); 13) work keys ("C", "D", "E", "F", "G", "A", "B", "flat", "sharp"); 14) work modes ("major", "minor", "m"). Finally, we complete each token's description by including as features the surface form, the POS tag and the chunk tag of the previous and the following two tokens (12 features). We consider two sets of GloVe word embeddings BIBREF20 for training the neural architecture: one pre-trained on 2B tweets and publicly downloadable, and one trained on a corpus of 300K tweets collected during the 2014-2017 BBC Proms Festivals, disjoint from the data used in our experiments. The first model considered for this task is John Platt's sequential minimal optimization algorithm for training a support vector classifier BIBREF21, implemented in WEKA BIBREF22. Indeed, in BIBREF23 results showed that SVM outperforms other machine learning models, such as Decision Trees and Naive Bayes, obtaining the best accuracy when detecting named entities in the user-generated tweets.
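As a rough illustration of how the token-level features listed above might be assembled, a sketch in Python follows; the gazetteer contents, feature names, and the upstream POS/chunk tagger outputs are placeholders, not the authors' exact implementation.

```python
from typing import Dict, List, Set

def token_features(tokens: List[str], pos: List[str], chunk: List[str],
                   i: int, gazetteers: Dict[str, Set[str]]) -> Dict[str, object]:
    """Build a feature dict for token i, mirroring the 26 features described above.
    `gazetteers` maps names like 'contributor_first' to lowercased term sets."""
    tok = tokens[i]
    feats: Dict[str, object] = {
        "pos": pos[i],
        "chunk": chunk[i],
        "position": i / max(len(tokens) - 1, 1),   # normalized between 0 and 1
        "is_capitalized": tok[:1].isupper(),
        "is_digit": tok.isdigit(),
    }
    # Gazetteer membership features (contributor names, work types, keys, modes, ...).
    for name, terms in gazetteers.items():
        feats[f"gaz_{name}"] = tok.lower() in terms
    # Context window: surface form, POS and chunk tag of the two previous
    # and two following tokens (padded at the sentence boundaries).
    for offset in (-2, -1, 1, 2):
        j = i + offset
        inside = 0 <= j < len(tokens)
        feats[f"word_{offset}"] = tokens[j] if inside else "<PAD>"
        feats[f"pos_{offset}"] = pos[j] if inside else "<PAD>"
        feats[f"chunk_{offset}"] = chunk[j] if inside else "<PAD>"
    return feats
```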
However, recent advances in Deep Learning have shown that the NER task can benefit from the use of neural architectures, such as biLSTM networks BIBREF3, BIBREF4. We use the implementation proposed in BIBREF24 for conducting three different experiments. In the first, we train the model using only the word embeddings as features. In the second, together with the word embeddings we use the POS and chunk tags. In the third, all the features previously defined are included, in addition to the word embeddings. For every experiment, we use both the pre-trained embeddings and the ones that we created from our Twitter corpus. In Section 4, results obtained from the various experiments are reported. Schedule matching The bot-generated tweets present a predefined structure and a formal language, which facilitates entity detection. In this dataset, our goal is to assign to each track played on the radio, represented by a tweet, a list of entities extracted from the raw tweet text. To achieve that, we experiment with the algorithms and features presented previously, obtaining a high level of accuracy, as presented in Section 4. The hypothesis considered is that when a radio listener posts a tweet, it is possible that she is referring to a track which has been played a relatively short time before. In these cases, we want to show that knowing the radio schedule can help improve the results when detecting entities. Once a list of entities has been assigned to each track, we perform two types of matching. Firstly, among the tracks we identify the ones which have been played within a fixed range of time (t) before and after the generation of the user's tweet. Using the resulting tracks, we create a list of candidate entities on which we perform string similarity. The matching score based on string similarity is computed as the ratio of the number of tokens in common between an entity and the input tweet to the total number of tokens of the entity: $score_{str}(e, tweet) = \frac{|tokens(e) \cap tokens(tweet)|}{|tokens(e)|}$. In order to exclude trivial matches, tokens in a list of stop words are not considered while performing string matching. The final score is a weighted combination of the string matching score and the time proximity of the track, aimed at enhancing matches from tracks played closer to the time when the user posts the tweet. The performance of the algorithm depends, apart from the time proximity threshold t, on two other thresholds related to the string matching, one for the Musical Work (w) and one for the Contributor (c) entities. These are necessary to avoid including candidate entities matched against the schedule with a low score, often a source of false positives or negatives. Consequently, as a last step, Contributor and Musical Work candidate entities with a string matching score lower than c and w, respectively, are filtered out. In Figure 2, an example of a Musical Work entity recognized in a user-generated tweet using the schedule information is presented. The entities recognized from the schedule matching are joined with the ones obtained directly from the statistical models. In the joined results, the criterion is to give priority to the entities recognized by the machine learning techniques. If they do not return any entities, the entities predicted by the schedule matching are considered. Our strategy is justified by the poorer results obtained by the NER based only on the schedule matching, compared to the other models used in the experiments, to be presented in the next section.
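To make the matching described above concrete, below is a minimal sketch of the candidate scoring. The string-similarity part follows the ratio defined above; the stop-word list, the weight alpha, and the exact way time proximity is folded into the final score are illustrative assumptions, since the precise combination is not spelled out here.

```python
from typing import Set

STOP_WORDS: Set[str] = {"the", "a", "of", "in", "and"}   # illustrative list only

def string_match_score(entity: str, tweet: str) -> float:
    """Ratio of the entity's (non-stop-word) tokens that also appear in the tweet."""
    ent_tokens = {t for t in entity.lower().split() if t not in STOP_WORDS}
    tweet_tokens = {t for t in tweet.lower().split() if t not in STOP_WORDS}
    if not ent_tokens:
        return 0.0
    return len(ent_tokens & tweet_tokens) / len(ent_tokens)

def candidate_score(entity: str, tweet: str, minutes_from_play: float,
                    t: float = 30.0, alpha: float = 0.7) -> float:
    """Weighted combination of string similarity and time proximity.
    minutes_from_play is the absolute time gap between tweet and track;
    tracks outside the window t are not considered at all."""
    if minutes_from_play > t:
        return 0.0
    time_proximity = 1.0 - minutes_from_play / t   # 1 when just played, 0 at the window edge
    return alpha * string_match_score(entity, tweet) + (1 - alpha) * time_proximity

# Example: a candidate Musical Work from the schedule vs. a user tweet.
print(candidate_score("Symphony No. 5 in C minor",
                      "loving this symphony in c minor on @BBCRadio3", 12.0))
```

Candidates whose string score falls below the per-type thresholds (c for Contributor, w for Musical Work) would then be filtered out, as described above.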
Results The performance of the NER experiments is reported separately for three different parts of the proposed system. Table 6 presents the comparison of the various methods when performing NER on the bot-generated corpus and the user-generated corpus. Results show that, in the first case, the F1 score on the training set is always greater than 97%, with a maximum of 99.65%. With both test sets, performance decreases, varying between 94% and 97%. In the case of UGC, comparing the F1 scores we can observe that performance decreases significantly. This can be considered a natural consequence of the complex nature of the users' informal language in comparison to the structured messages created by the bot. In Table 7, results of the schedule matching are reported. We can observe that the quality of the linking performed by the algorithm is correlated with the choice of the three thresholds. Indeed, the Precision score increases when the time threshold decreases, admitting fewer candidates as entities during the matching, and when the string similarity thresholds increase, accepting only candidates with a higher degree of similarity. The behaviour of the Recall score is the inverse. Finally, we test the impact of using the schedule matching together with a biLSTM-CRF network. In this experiment, we consider the network trained using all the proposed features and the embeddings that were not pre-trained. Table 8 reports the results obtained. We can observe that the system generally benefits from the use of the schedule information. Especially in the test sets, where the neural network recognizes entities with less accuracy, the explicit information contained in the schedule can be exploited to identify the entities to which users are referring while listening to the radio and posting the tweets. Conclusion We have presented in this work a novel method for detecting musical entities from user-generated content, modelling linguistic features with statistical models and extracting contextual information from a radio schedule. We analyzed tweets related to a classical music radio station, integrating its schedule to connect users' messages to broadcast tracks. We focused on the recognition of two kinds of entities related to the music field, Contributor and Musical Work. According to the results obtained, we have seen a pronounced difference in system performance when dealing with Contributor as opposed to Musical Work entities. Indeed, the former type of entity has been shown to be more easily detected than the latter, and we identify several reasons behind this fact. Firstly, Contributor entities are less prone to being shortened or modified, while, due to their length, Musical Work entities often represent only a part of the complete title of a musical piece. Furthermore, Musical Work titles are typically composed of more tokens, including common words which can easily be misclassified. The low performance obtained in the case of Musical Work entities can be a consequence of these observations. On the other hand, when referring to a Contributor users often use only the surname, but in most cases this is enough for the system to recognize the entity. From the experiments we have seen that the biLSTM-CRF architecture generally outperforms the SVM model. The benefit of using the whole set of features is evident in the training part, but at test time the inclusion of the features does not always lead to better results.
In addition, some of the features designed in our experiments are tailored to the case of classical music, hence they might not be representative if applied to other fields. We do not exclude that our method can be adapted to detect other kinds of entities, but the features might need to be redefined according to the case considered. Similarly, no particular advantage was found in using the pre-trained embeddings instead of the ones trained on our corpus. Furthermore, we verified the statistical significance of our experiments using the Wilcoxon Rank-Sum Test, finding no significant difference between the various models considered at test time. The information extracted from the schedule also presents several limitations. In fact, the hypothesis that a tweet refers to a broadcast track is not always verified. Even if it is common for radio listeners to comment on tracks played, or to give suggestions to the radio host about what they would like to listen to, it is also true that they might refer to a Contributor or Musical Work unrelated to the radio schedule.
With both test sets performances decrease, varying between 94-97%
3837ae1e91a4feb27f11ac3b14963e9a12f0c05e
3837ae1e91a4feb27f11ac3b14963e9a12f0c05e_0
Q: What task-specific features are used?
6)Contributor first names, 7)Contributor last names, 8)Contributor types ("soprano", "violinist", etc.), 9)Classical work types ("symphony", "overture", etc.), 10)Musical instruments, 11)Opus forms ("op", "opus"), 12)Work number forms ("no", "number"), 13)Work keys ("C", "D", "E", "F" , "G" , "A", "B", "flat", "sharp"), 14)Work Modes ("major", "minor", "m")
ef4d6c9416e45301ea1a4d550b7c381f377cacd9
ef4d6c9416e45301ea1a4d550b7c381f377cacd9_0
Q: What kind of corpus-based features are taken into account?
In the last decade, several MIR tasks have benefited from NLP, such as sound and music recommendation BIBREF7 , automatic summary of song review BIBREF8 , artist similarity BIBREF9 and genre classification BIBREF10 . In the field of IE, a first approach for detecting musical named entities from raw text, based on Hidden Markov Models, has been proposed in BIBREF11 . In BIBREF12 , the authors combine state-of-the-art Entity Linking (EL) systems to tackle the problem of detecting musical entities from raw texts. The method proposed relies on the argumentum ad populum intuition, so if two or more different EL systems perform the same prediction in linking a named entity mention, the more likely this prediction is to be correct. In detail, the off-the-shelf systems used are: DBpedia Spotlight BIBREF13 , TagMe BIBREF14 , Babelfy BIBREF15 . Moreover, a first Musical Entity Linking, MEL has been presented in BIBREF16 which combines different state-of-the-art NLP libraries and SimpleBrainz, an RDF knowledge base created from MusicBrainz after a simplification process. Furthermore, Twitter has also been at the center of many studies done by the MIR community. As example, for building a music recommender system BIBREF17 analyzes tweets containing keywords like nowplaying or listeningto. In BIBREF9 , a similar dataset it is used for discovering cultural listening patterns. Publicly available Twitter corpora built for MIR investigations have been created, among others the Million Musical Tweets dataset BIBREF18 and the #nowplaying dataset BIBREF19 . Methodology We propose a hybrid method which recognizes musical entities in UGC using both contextual and linguistic information. We focus on detecting two types of entities: Contributor: person who is related to a musical work (composer, performer, conductor, etc). Musical Work: musical composition or recording (symphony, concerto, overture, etc). As case study, we have chosen to analyze tweets extracted from the channel of a classical music radio, BBC Radio 3. The choice to focus on classical music has been mostly motivated by the particular discrepancy between the informal language used in the social platform and the formal nomenclature of contributors and musical works. Indeed, users when referring to a musician or to a classical piece in a tweet, rarely use the full name of the person or of the work, as shown in Table 2. We extract information from the radio schedule for recreating the musical context to analyze user-generated tweets, detecting when they are referring to a specific work or contributor recently played. We manage to associate to every track broadcasted a list of entities, thanks to the tweets automatically posted by the BBC Radio3 Music Bot, where it is described the track actually on air in the radio. In Table 3, examples of bot-generated tweets are shown. Afterwards, we detect the entities on the user-generated content by means of two methods: on one side, we use the entities extracted from the radio schedule for generating candidates entities in the user-generated tweets, thanks to a matching algorithm based on time proximity and string similarity. On the other side, we create a statistical model capable of detecting entities directly from the UGC, aimed to model the informal language of the raw texts. In Figure 1, an overview of the system proposed is presented. 
Dataset In May 2018, we crawled Twitter using the Python library Tweepy, creating two datasets on which Contributor and Musical Work entities have been manually annotated, using IOB tags. The first set contains user-generated tweets related to the BBC Radio 3 channel. It represents the source of user-generated content on which we aim to predict the named entities. We create it filtering the messages containing hashtags related to BBC Radio 3, such as #BBCRadio3 or #BBCR3. We obtain a set of 2,225 unique user-generated tweets. The second set consists of the messages automatically generated by the BBC Radio 3 Music Bot. This set contains 5,093 automatically generated tweets, thanks to which we have recreated the schedule. In Table 4, the amount of tokens and relative entities annotated are reported for the two datasets. For evaluation purposes, both sets are split in a training part (80%) and two test sets (10% each one) randomly chosen. Within the user-generated corpora, entities annotated are only about 5% of the whole amount of tokens. In the case of the automatically generated tweets, the percentage is significantly greater and entities represent about the 50%. NER system According to the literature reviewed, state-of-the-art NER systems proposed by the NLP community are not tailored to detect musical entities in user-generated content. Consequently, our first objective has been to understand how to adapt existing systems for achieving significant results in this task. In the following sections, we describe separately the features, the word embeddings and the models considered. All the resources used are publicy available. We define a set of features for characterizing the text at the token level. We mix standard linguistic features, such as Part-Of-Speech (POS) and chunk tag, together with several gazetteers specifically built for classical music, and a series of features representing tokens' left and right context. For extracting the POS and the chunk tag we use the Python library twitter_nlp, presented in BIBREF1 . In total, we define 26 features for describing each token: 1)POS tag; 2)Chunk tag; 3)Position of the token within the text, normalized between 0 and 1; 4)If the token starts with a capital letter; 5)If the token is a digit. Gazetteers: 6)Contributor first names; 7)Contributor last names; 8)Contributor types ("soprano", "violinist", etc.); 9)Classical work types ("symphony", "overture", etc.); 10)Musical instruments; 11)Opus forms ("op", "opus"); 12)Work number forms ("no", "number"); 13)Work keys ("C", "D", "E", "F" , "G" , "A", "B", "flat", "sharp"); 14)Work Modes ("major", "minor", "m"). Finally, we complete the tokens' description including as token's features the surface form, the POS and the chunk tag of the previous and the following two tokens (12 features). We consider two sets of GloVe word embeddings BIBREF20 for training the neural architecture, one pre-trained with 2B of tweets, publicy downloadable, one trained with a corpora of 300K tweets collected during the 2014-2017 BBC Proms Festivals and disjoint from the data used in our experiments. The first model considered for this task has been the John Platt's sequential minimal optimization algorithm for training a support vector classifier BIBREF21 , implemented in WEKA BIBREF22 . Indeed, in BIBREF23 results shown that SVM outperforms other machine learning models, such as Decision Trees and Naive Bayes, obtaining the best accuracy when detecting named entities from the user-generated tweets. 
However, recent advances in Deep Learning techniques have shown that the NER task can benefit from the use of neural architectures, such as biLSTM-networks BIBREF3 , BIBREF4 . We use the implementation proposed in BIBREF24 for conducting three different experiments. In the first, we train the model using only the word embeddings as feature. In the second, together with the word embeddings we use the POS and chunk tag. In the third, all the features previously defined are included, in addition to the word embeddings. For every experiment, we use both the pre-trained embeddings and the ones that we created with our Twitter corpora. In section 4, results obtained from the several experiments are reported. Schedule matching The bot-generated tweets present a predefined structure and a formal language, which facilitates the entities detection. In this dataset, our goal is to assign to each track played on the radio, represented by a tweet, a list of entities extracted from the tweet raw text. For achieving that, we experiment with the algorithms and features presented previously, obtaining an high level of accuracy, as presented in section 4. The hypothesis considered is that when a radio listener posts a tweet, it is possible that she is referring to a track which has been played a relatively short time before. In this cases, we want to show that knowing the radio schedule can help improving the results when detecting entities. Once assigned a list of entities to each track, we perform two types of matching. Firstly, within the tracks we identify the ones which have been played in a fixed range of time (t) before and after the generation of the user's tweet. Using the resulting tracks, we create a list of candidates entities on which performing string similarity. The score of the matching based on string similarity is computed as the ratio of the number of tokens in common between an entity and the input tweet, and the total number of token of the entity: DISPLAYFORM0 In order to exclude trivial matches, tokens within a list of stop words are not considered while performing string matching. The final score is a weighted combination of the string matching score and the time proximity of the track, aimed to enhance matches from tracks played closer to the time when the user is posting the tweet. The performance of the algorithm depends, apart from the time proximity threshold t, also on other two thresholds related to the string matching, one for the Musical Work (w) and one for the Contributor (c) entities. It has been necessary for avoiding to include candidate entities matched against the schedule with a low score, often source of false positives or negatives. Consequently, as last step Contributor and Musical Work candidates entities with respectively a string matching score lower than c and w, are filtered out. In Figure 2, an example of Musical Work entity recognized in an user-generated tweet using the schedule information is presented. The entities recognized from the schedule matching are joined with the ones obtained directly from the statistical models. In the joined results, the criteria is to give priority to the entities recognized from the machine learning techniques. If they do not return any entities, the entities predicted by the schedule matching are considered. Our strategy is justified by the poorer results obtained by the NER based only on the schedule matching, compared to the other models used in the experiments, to be presented in the next section. 
Results The performance of the NER experiments is reported separately for the three parts of the proposed system. Table 6 compares the various methods when performing NER on the bot-generated and the user-generated corpora. The results show that, in the first case, the F1 score on the training set is always greater than 97%, with a maximum of 99.65%; on both test sets performance decreases, varying between 94% and 97%. In the case of UGC, comparing the F1 scores we observe a significant drop in performance, a natural consequence of the complexity of the users' informal language compared to the structured messages created by the bot. In Table 7, results of the schedule matching are reported. The quality of the linking performed by the algorithm correlates with the choice of the three thresholds: Precision increases when the time threshold decreases, admitting fewer candidates during matching, and when the string-similarity thresholds increase, accepting only candidates with a higher degree of similarity. Recall behaves in the opposite way. Finally, we test the impact of combining the schedule matching with a biLSTM-CRF network. In this experiment, we consider the network trained with all the proposed features and with the embeddings that were not pre-trained. Table 8 reports the results obtained. The system generally benefits from the schedule information: especially on the test sets, where the neural network is less accurate, the explicit information contained in the schedule can be exploited to identify the entities users refer to while listening to the radio and posting tweets. Conclusion In this work we have presented a novel method for detecting musical entities in user-generated content, modelling linguistic features with statistical models and extracting contextual information from a radio schedule. We analyzed tweets related to a classical music radio station, integrating its schedule to connect users' messages to the tracks broadcast. We focused on the recognition of two kinds of music-related entities, Contributor and Musical Work. According to the results obtained, there is a pronounced difference in system performance between Contributor and Musical Work entities: the former are detected much more easily than the latter, and we identify several reasons for this. Firstly, Contributor entities are less prone to being shortened or modified, whereas Musical Work entities, because of their length, often represent only part of the complete title of a musical piece. Furthermore, Musical Work titles are typically composed of more tokens, including common words that can easily be misclassified. The low performance obtained on Musical Work entities is likely a consequence of these observations. On the other hand, when referring to a Contributor, users often use only the surname, but in most cases this is enough for the system to recognize the entity. From the experiments we have seen that the biLSTM-CRF architecture generally outperforms the SVM model. The benefit of using the whole feature set is evident on the training data, but at test time including the features does not always lead to better results.
In addition, some of the features designed in our experiments are tailored to the case of classical music, so they might not be representative if applied to other fields. We do not exclude that our method can be adapted to detect other kinds of entities, but the features might need to be redefined according to the case considered. Similarly, we found no particular advantage in using the pre-trained embeddings instead of the ones trained on our corpora. Furthermore, we verified the statistical significance of our experiments with the Wilcoxon Rank-Sum Test, finding no significant difference between the various models considered at test time. The information extracted from the schedule also presents several limitations. The hypothesis that a tweet refers to a broadcast track does not always hold: even if radio listeners commonly comment on the tracks played, or suggest to the radio host what they would like to listen to, they might also refer to a Contributor or Musical Work unrelated to the radio schedule.
standard linguistic features, such as Part-Of-Speech (POS) and chunk tag, series of features representing tokens' left and right context
689d1d0c4653a8fa87fd0e01fa7e12f75405cd38
689d1d0c4653a8fa87fd0e01fa7e12f75405cd38_0
Q: Which machine learning algorithms did they explore?
biLSTM-networks
7920f228de6ef4c685f478bac4c7776443f19f39
7920f228de6ef4c685f478bac4c7776443f19f39_0
Q: What language is the Twitter content in?
English
41844d1d1ee6d6d38f31b3a17a2398f87566ed92
41844d1d1ee6d6d38f31b3a17a2398f87566ed92_0
Q: What is the architecture of the siamese neural network? Text: Introduction One of the challenges of processing real-world spoken content, such as media broadcasts, is the potential presence of different dialects of a language in the material. Dialect identification can be a useful capability to identify which dialect is being spoken during a recording. Dialect identification can be regarded as a special case of language recognition, requiring an ability to discriminate between different members within the same language family, as opposed to across language families (i.e., for language recognition). The dominant approach, based on i-vector extraction, has proven to be very effective for both language and speaker recognition BIBREF0 . Recently, phonetically aware deep neural models have also been found to be effective in combination with i-vectors BIBREF1 , BIBREF2 , BIBREF3 . Phonetically aware models could be beneficial for dialect identification, since they provide a mechanism to focus attention on small phonetic differences between dialects with predominantly common phonetic inventories. Since 2015, the Arabic Multi-Genre Broadcast (MGB) Challenge tasks have provided a valuable resource for researchers interested in processing multi-dialectal Arabic speech. For the ASRU 2017 MGB-3 Challenge, there were two possible tasks. The first task was aimed at developing an automatic speech recognition system for Arabic dialectal speech based on a multi-genre broadcast audio dataset. The second task was aimed at developing an Arabic Dialect Identification (ADI) capability for five major Arabic dialects. This paper reports our experimentation efforts for the ADI task. While the MGB-3 Arabic ASR task included seven different genres from the broadcast domain, the ADI task focused solely on broadcast news. Participants were provided high-quality Aljazeera news broadcasts as well as transcriptions generated by a multi-dialect ASR system created from the MGB-2 dataset BIBREF4 . The biggest difference from previous MGB challenges is that only a relatively small development set of in-domain data is provided for adaptation to the test set (i.e., the training data is mismatched with the test data). For the ADI baseline, participants were also provided with i-vector features from the audio dataset, and lexical features from the transcripts. Evaluation software was shared with all participants using baseline features available via Github. The evaluation scenario for the MGB-3 ADI task can be viewed as channel and domain mismatch because the recording environment of the training data is different from the development and test data. In general, channel or domain mismatch between training and test data can be a significant factor affecting system performance. Differences in channel, genre, language, topic etc. produce shifts in low-dimensional projections of the corresponding speech and ultimately cause performance degradations on evaluation data. In order to address performance degradation of speaker and language recognition systems due to domain mismatches, researchers have proposed various approaches to compensate for, and to adapt to the mismatch BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . 
For the MGB-3 ADI task, we utilized the development data to adapt to the recording domain of the test data, and investigated approaches to improve ADI performance in both the domain-mismatched and the matched scenario, using a recursive whitening transformation, a weighted dialect i-vector model, and a Siamese Neural Network. In contrast to the language recognition scenario, where the linguistic units differ across languages, dialects typically share a common phonetic inventory and written language. Thus, we can potentially use ASR outputs such as phones, characters, and lexical items as features. N-gram histograms of phonemes, characters and lexical items can be used directly as feature vectors, and indeed, a lexicon-based n-gram feature vector was provided for the MGB-3 ADI baseline. The linguistic feature space is, naturally, completely different from the audio feature space, so fusing the results from both feature representations has previously been shown to be beneficial BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . Moreover, linguistic features have an advantage in channel-mismatch situations because the transcription itself does not reflect the recording environment and contains only linguistic information. In this paper, we describe our work for the MGB-3 ADI Challenge. The final MIT-QCRI submitted system is a combination of audio and linguistic feature-based systems, and includes multiple approaches to address the challenging mismatched conditions. According to the official results, this system achieved the best performance among all participants. The following sections describe our research in greater detail. MGB-3 Arabic Dialect Identification For the MGB-3 ADI task, the challenge organizers provided 13,825 utterances (53.6 hours) for the training (TRN) set, 1,524 utterances (10 hours) for a development (DEV) set, and 1,492 utterances (10.1 hours) for a test (TST) set. Each dataset consists of five Arabic dialects: Egyptian (EGY), Levantine (LEV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA). Detailed statistics of the ADI dataset can be found in BIBREF23 . Table TABREF3 shows some facts about the evaluation conditions and data properties. Note that the development set is relatively small compared to the training set; however, it is matched with the test set channel domain, so it provides valuable information for adapting to, or compensating for, the channel (recording) domain mismatch between the training and test sets. Dialect Identification Task & System The MGB-3 ADI task asks participants to classify speech as one of five dialects, specifying one dialect for each audio file in their submission. Performance is evaluated via three indices: overall accuracy, average precision, and average recall over the five dialects. Baseline ADI System The challenge organizers provided features and code for a baseline ADI system. The features consist of 400-dimensional i-vectors for each audio file (based on bottleneck-feature inputs for the frame-level acoustic representation), as well as lexical features using bigrams generated from the transcriptions BIBREF23 . For baseline dialect identification, a multi-class Support Vector Machine (SVM) was used. The baseline i-vector performance was 57.3%, 60.8%, and 58.0% for accuracy, precision and recall, respectively; the lexical features achieved 48.4%, 51.0%, and 49.3%, respectively.
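As a rough illustration of the baseline setup just described, the sketch below trains a multi-class SVM on precomputed 400-dimensional i-vectors. The file layout and the use of scikit-learn are assumptions for the example; the challenge baseline ships its own scripts.

```python
# Hedged sketch of a baseline-style multi-class SVM on precomputed i-vectors.
# The .npy/.txt layout and the scikit-learn choice are assumptions for illustration.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, precision_score, recall_score

DIALECTS = ["EGY", "GLF", "LEV", "MSA", "NOR"]

def load_split(ivec_path, label_path):
    X = np.load(ivec_path)                       # shape: (n_utterances, 400)
    y = np.loadtxt(label_path, dtype=str)        # one dialect label per utterance
    return X, y

X_train, y_train = load_split("train_ivectors.npy", "train_labels.txt")
X_test, y_test = load_split("test_ivectors.npy", "test_labels.txt")

clf = LinearSVC(C=1.0)                           # one-vs-rest multi-class by default
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred, average="macro", labels=DIALECTS))
print("recall   :", recall_score(y_test, pred, average="macro", labels=DIALECTS))
```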
While the audio-based features achieved better performance than the lexical features, both systems only obtained approximately 50% accuracy, indicating that this ADI task is difficult, considering that there are only five classes to choose from. Siamese Neural Network-based ADI To further distinguish speech from different Arabic dialects, while making speech from the same dialect more similar, we adopted a Siamese neural network architecture BIBREF24 operating on the i-vector feature space. The Siamese neural network has two parallel convolutional networks, $g(\cdot)$, that share the same set of weights, $W$, as shown in Figure FIGREF5 (a). Let $x_1$ and $x_2$ be a pair of i-vectors for which we wish to compute a distance, and let $y$ be the label for the pair, where $y = 1$ if $x_1$ and $x_2$ belong to the same dialect and $y = 0$ otherwise. To optimize the network, we use a Euclidean loss between the label and the cosine distance, $L = \Vert y - D_{\cos}(g(x_1), g(x_2)) \Vert ^2$, where $D_{\cos}(a, b) = \frac{a \cdot b}{\Vert a \Vert \, \Vert b \Vert}$. For training, i-vector pairs and their corresponding labels are formed from combinations of i-vectors in the training dataset. The trained convolutional network $g(\cdot)$ transforms an i-vector $x$ into a low-dimensional subspace that is more robust for distinguishing dialects. A detailed illustration of the convolutional network $g(\cdot)$ is shown in Figure FIGREF5 (b). The final transformed i-vector, $g(x)$, is a 200-dimensional vector. No nonlinear activation function was used on the fully connected layer, and a cosine distance is used for scoring. i-vector Post-Processing In this section we describe the domain adaptation techniques we investigated using the development set to help adapt our models to the test set. Although the baseline system used an SVM classifier, Cosine Distance Scoring (CDS) is a fast, simple, and effective method for measuring the similarity between an enrolled i-vector dialect model and a test-utterance i-vector. Under CDS, ZT-norm or S-norm can also be applied for score normalization BIBREF25 . Dialect enrollment is obtained by averaging the i-vectors of each dialect, giving the i-vector dialect model $\bar{w}_d = \frac{1}{N_d} \sum _{i=1}^{N_d} w_i^{(d)}$, where $N_d$ is the number of utterances for dialect $d$. Since we have two datasets for dialect enrollment, $\bar{w}_d^{\mathrm {TRN}}$ from the training set and $\bar{w}_d^{\mathrm {DEV}}$ from the development set, we use an interpolation approach with parameter $\gamma $, $\bar{w}_d = \gamma \, \bar{w}_d^{\mathrm {TRN}} + (1 - \gamma ) \, \bar{w}_d^{\mathrm {DEV}}$. We observed that the mismatched training set is useful when combined with the matched development set. Figure FIGREF7 shows performance as a function of $\gamma $ under the same experimental conditions as System 2 in Section 4.3. This approach can be thought of as analogous to score fusion across different systems; however, score fusion is usually performed at the score level, while this approach combines knowledge of in-domain and out-of-domain i-vectors through the weight $\gamma $ within a single system.
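A minimal numpy sketch of the interpolated dialect model and cosine distance scoring described above follows; the array shapes and the particular value of the interpolation weight are assumptions for illustration, not the submitted system.

```python
import numpy as np

def l2norm(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def dialect_models(train_ivecs, dev_ivecs, gamma=0.7):
    """Interpolated i-vector dialect models.

    train_ivecs / dev_ivecs: dict mapping dialect -> (n_utt, dim) array of i-vectors.
    gamma weights the (mismatched) training-set mean against the (matched) dev-set mean.
    """
    models = {}
    for dialect in train_ivecs:
        m_trn = train_ivecs[dialect].mean(axis=0)
        m_dev = dev_ivecs[dialect].mean(axis=0)
        models[dialect] = gamma * m_trn + (1.0 - gamma) * m_dev
    return models

def cds_classify(test_ivec, models):
    """Cosine Distance Scoring: choose the dialect model closest to the test i-vector."""
    scores = {d: float(np.dot(l2norm(m), l2norm(test_ivec))) for d, m in models.items()}
    return max(scores, key=scores.get), scores
```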
For i-vector-based speaker and language recognition, a whitening transformation followed by length normalization is considered essential BIBREF26 . Since length normalization is an inherently nonlinear, non-whitening operation, a recursive whitening transformation has recently been proposed to reduce the residual un-whitened components in the i-vector space, as illustrated in Figure FIGREF10 BIBREF14 . In this approach, the data subset that best matches the test data is used at each iteration to compute the whitening transformation. In our ADI experiments, we applied one to three levels of recursive whitening using the training and development data. Phoneme Features Phoneme feature extraction consists of extracting the phone sequence and phone-duration statistics using four different speech recognizers: Czech, Hungarian, and Russian narrowband models, and an English broadband model BIBREF27 . We evaluated the four systems using a Support Vector Machine (SVM) with an l2 penalty and C = 0.01, training the SVM on the training data and testing on the development data. Table TABREF13 shows the results for the four phoneme recognizers; the Hungarian phoneme recognizer obtained the best results, so we used it for the final system combination. Character Features Word sequences are extracted using a state-of-the-art Arabic speech-to-text transcription system built as part of MGB-2 BIBREF28 . The system is a combination of Time-Delay Neural Network (TDNN), Long Short-Term Memory (LSTM) and Bidirectional LSTM acoustic models, followed by 4-gram and Recurrent Neural Network (RNN) language model rescoring, and uses a grapheme lexicon during both training and decoding. The acoustic models are trained on 1,200 hours of Arabic broadcast speech, with data augmentation (speed and volume perturbation) yielding three times the original training data; for more details see the system description paper BIBREF4 . We kept the <UNK> token produced by the ASR system, which indicates out-of-vocabulary (OOV) words, replacing it with a special symbol, and inserted a space between all characters, including at word boundaries. An SVM classifier was trained similarly to the one used for the phoneme systems, achieving 52% accuracy, 51.2% precision and 51.8% recall. The confusion matrices of the phoneme and character classifiers differ, which motivates using both in the final system combination. Score Calibration All scores are calibrated to lie between 0 and 1. Linear calibration is done with the Bosaris toolkit BIBREF29 , and fusion is also performed linearly. ADI Experiments For experiments and evaluation, we use the i-vectors and transcriptions provided by the challenge organizers; please refer to BIBREF23 for descriptions of the i-vector extraction and the Arabic speech-to-text configuration. Using Training Data for Training Our first experiment used only the training data to develop the ADI system, so the interpolated i-vector dialect model cannot be used in this condition. Table TABREF14 shows the performance of dimension-reduced i-vectors using the Siamese network (Siam i-vector) and Linear Discriminant Analysis (LDA i-vector), compared to the baseline i-vector system. LDA reduces the 400-dimensional i-vector to 4 dimensions, while the Siamese network reduces it from 400 to 200. Since the Siamese network uses a cosine distance in its loss function, the Siam i-vector performed better with the CDS scoring method, while the others performed better with an SVM. The best system, using the Siam i-vector, showed overall 10% better accuracy than the baseline.
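The sketch below illustrates the recursive whitening plus length-normalization idea in numpy terms: whiten, length-normalize, then re-estimate a whitening transform on the result and repeat. It is a simplified reading of the approach (a single reference data matrix, no per-iteration subset selection), with parameter choices assumed for illustration.

```python
import numpy as np

def length_norm(X):
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def whitening_transform(X, eps=1e-8):
    """ZCA-style whitening estimated on X: returns (mean, transform matrix)."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return mu, W

def recursive_whiten(X_ref, X_test, levels=3):
    """Apply `levels` rounds of whitening + length normalization.

    X_ref: data used to estimate each whitening transform (e.g., TRN + DEV i-vectors).
    X_test: i-vectors transformed with the same parameters at every level.
    """
    for _ in range(levels):
        mu, W = whitening_transform(X_ref)
        X_ref = length_norm((X_ref - mu) @ W)
        X_test = length_norm((X_test - mu) @ W)
    return X_test
```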
Using Training and Development Data for Training In our second experiment, both the training and development data were used for training. Development-set results for the phoneme and character features are shown in Table TABREF15 , and i-vector results in Table TABREF16 . The interpolated dialect model gives significant improvements on all three metrics. The recursive whitening transformation gives slight improvements on the original i-vector, but not after LDA or the Siamese network. The best system is the original i-vector with recursive whitening and an interpolated i-vector dialect model, which achieves over 20% accuracy improvement over the baseline. While the Siamese i-vector network helped in the training-data-only experiments, it shows no advantage over the baseline i-vector in this condition. We suspect this result is due to the composition of the data used to train the Siamese network: the i-vector pairs were selected from both the training and development datasets, but if we could put more emphasis on the development data, the Siamese i-vector network might be more robust on the test data. We plan to further examine the effect of different data compositions in the future. Performance Evaluation of Submission Tables TABREF21 and TABREF22 show detailed performance evaluations of our three submitted systems. System 1 was trained using only the training data (Table TABREF21 ), while Systems 2 and 3 were trained using both the training and development sets (Table TABREF22 ). To prevent over-fitting, the linear fusion weights were chosen based on System 1: 0.7, 0.2 and 0.1 for the i-vector, character, and phoneme-based scores, respectively, and the same weights were applied to Systems 2 and 3. From Table TABREF21 , we see that the Siamese network is effective on both the development and test sets without using any information about the test domain. The interpolated i-vector dialect model likewise captures test-domain information well, as shown by Systems 2 and 3 in Table TABREF22 . Although we expected the linguistic features to be unaffected by the domain mismatch, the character and phoneme features contribute usefully to all systems. We believe the performance degradation of Systems 2 and 3 on the development data after fusion stems from the fusion rule: the rule derived from System 1 is not optimal for Systems 2 and 3 when evaluating on the development set. Because Systems 2 and 3 include the development data in their training, they are overfit to it, which is why we used the fusion rule from System 1. Given the excellent fusion performance of Systems 2 and 3 on the test data, we believe the System 1 fusion rule prevented an over-fitted result. Conclusion In this paper, we describe the MIT-QCRI ADI system using both audio and linguistic features for the MGB-3 challenge. We studied several approaches to address dialect variability and domain mismatch between the training and test sets.
Without knowledge of the test domain where the system will be applied, i-vector dimensionality reduction using a Siamese network was found to be useful, while an interpolated i-vector dialect model showed effectiveness with relatively small amounts of test domain information from the development data. Under both conditions, fusion of audio and linguistic features yielded substantial improvements in dialect identification. As these approaches are not limited to dialect identification, we plan to explore their utility on other speaker and language recognition problems in the future.
two parallel convolutional networks, INLINEFORM0 , that share the same set of weights
ae17066634bd2731a07cd60e9ca79fc171692585
ae17066634bd2731a07cd60e9ca79fc171692585_0
Q: How do they explore domain mismatch? Text: Introduction One of the challenges of processing real-world spoken content, such as media broadcasts, is the potential presence of different dialects of a language in the material. Dialect identification can be a useful capability to identify which dialect is being spoken during a recording. Dialect identification can be regarded as a special case of language recognition, requiring an ability to discriminate between different members within the same language family, as opposed to across language families (i.e., for language recognition). The dominant approach, based on i-vector extraction, has proven to be very effective for both language and speaker recognition BIBREF0 . Recently, phonetically aware deep neural models have also been found to be effective in combination with i-vectors BIBREF1 , BIBREF2 , BIBREF3 . Phonetically aware models could be beneficial for dialect identification, since they provide a mechanism to focus attention on small phonetic differences between dialects with predominantly common phonetic inventories. Since 2015, the Arabic Multi-Genre Broadcast (MGB) Challenge tasks have provided a valuable resource for researchers interested in processing multi-dialectal Arabic speech. For the ASRU 2017 MGB-3 Challenge, there were two possible tasks. The first task was aimed at developing an automatic speech recognition system for Arabic dialectal speech based on a multi-genre broadcast audio dataset. The second task was aimed at developing an Arabic Dialect Identification (ADI) capability for five major Arabic dialects. This paper reports our experimentation efforts for the ADI task. While the MGB-3 Arabic ASR task included seven different genres from the broadcast domain, the ADI task focused solely on broadcast news. Participants were provided high-quality Aljazeera news broadcasts as well as transcriptions generated by a multi-dialect ASR system created from the MGB-2 dataset BIBREF4 . The biggest difference from previous MGB challenges is that only a relatively small development set of in-domain data is provided for adaptation to the test set (i.e., the training data is mismatched with the test data). For the ADI baseline, participants were also provided with i-vector features from the audio dataset, and lexical features from the transcripts. Evaluation software was shared with all participants using baseline features available via Github. The evaluation scenario for the MGB-3 ADI task can be viewed as channel and domain mismatch because the recording environment of the training data is different from the development and test data. In general, channel or domain mismatch between training and test data can be a significant factor affecting system performance. Differences in channel, genre, language, topic etc. produce shifts in low-dimensional projections of the corresponding speech and ultimately cause performance degradations on evaluation data. In order to address performance degradation of speaker and language recognition systems due to domain mismatches, researchers have proposed various approaches to compensate for, and to adapt to the mismatch BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . 
For the MGB-3 ADI task, we utilized the development data to adapt to the test data recording domain, and investigated approaches to improve ADI performance both on the domain mismatched scenario, and the matching scenario, by using a recursive whitening transformation, a weighted dialect i-vector model, and a Siamese Neural Network. In contrast to the language recognition scenario, where there are different linguistic units across languages, language dialects typically share a common phonetic inventory and written language. Thus, we can potentially use ASR outputs such as phones, characters, and lexicons as features. N-gram histograms of phonemes, characters and lexicons can be used as feature vectors directly, and indeed, a lexicon-based n-gram feature vector was provided for the MGB-3 ADI baseline. The linguistic feature space is, naturally, completely different to the audio feature space, so a fusion of the results from both feature representations has been previously shown to be beneficial BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . Moreover, the linguistic feature has an advantage in channel domain mismatch situations because the transcription itself does not reflect the recording environment, and only contains linguistic information. In this paper, we describe our work for the MGB-3 ADI Challenge. The final MIT-QCRI submitted system is a combination of audio and linguistic feature-based systems, and includes multiple approaches to address the challenging mismatched conditions. From the official results, this system achieved the best performance among all participants. The following sections describe our research in greater detail. MGB-3 Arabic Dialect Identification For the MGB-3 ADI task, the challenge organizers provided 13,825 utterances (53.6 hours) for the training (TRN) set, 1,524 utterances (10 hours) for a development (DEV) set, and 1,492 utterances (10.1 hours) for a test (TST) set. Each dataset consisted of five Arabic dialects: Egyptian (EGY), Levantine (LEV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA). Detailed statistics of the ADI dataset can be found in BIBREF23 . Table TABREF3 shows some facts about the evaluation conditions and data properties. Note that the development set is relatively small compared to the training set. However, it is matched with the test set channel domain. Thus, the development set provides valuable information to adapt or compensate the channel (recording) domain mismatch between the train and test sets. Dialect Identification Task & System The MGB-3 ADI task asks participants to classify speech as one of five dialects, by specifying one dialect for each audio file for their submission. Performance is evaluated via three indices: overall accuracy, average precision, and average recall for the five dialects. Baseline ADI System The challenge organizers provided features and code for a baseline ADI system. The features consisted of 400 dimensional i-vector features for each audio file (based on bottleneck feature inputs for their frame-level acoustic representation), as well as lexical features using bigrams generated from transcriptions BIBREF23 . For baseline dialect identification, a multi-class Support Vector Machine (SVM) was used. The baseline i-vector performance was 57.3%, 60.8%, and 58.0% for accuracy, precision and recall respectively. Lexical features achieved 48.4%, 51.0%, and 49.3%, respectively. 
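A minimal sketch of the kind of multi-class SVM baseline and evaluation described above is given below; the 400-dimensional i-vectors and labels are random stand-ins, and the classifier settings are assumptions rather than the organizers' exact configuration.

# Sketch: multi-class SVM over 400-dimensional i-vectors, scored with the
# three challenge metrics (accuracy, average precision, average recall).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 400))        # stand-in training i-vectors
y_train = rng.integers(0, 5, size=200)       # 5 dialect labels
X_dev   = rng.normal(size=(50, 400))
y_dev   = rng.integers(0, 5, size=50)

svm = LinearSVC().fit(X_train, y_train)
pred = svm.predict(X_dev)
print(accuracy_score(y_dev, pred),
      precision_score(y_dev, pred, average="macro", zero_division=0),
      recall_score(y_dev, pred, average="macro", zero_division=0))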
While the audio-based features achieved better performance than the lexical features, both systems only obtained approximately 50% accuracy, indicating that this ADI task is difficult, considering that there are only five classes to choose from. Siamese Neural Network-based ADI To further distinguish speech from different Arabic dialects, while making speech from the same dialect more similar, we adopted a Siamese neural network architecture BIBREF24 based on an i-vector feature space. The Siamese neural network has two parallel convolutional networks, INLINEFORM0, that share the same set of weights, INLINEFORM1, as shown in Figure FIGREF5 (a). Let INLINEFORM2 and INLINEFORM3 be a pair of i-vectors for which we wish to compute a distance. Let INLINEFORM4 be the label for the pair, where INLINEFORM5 = 1 if the i-vectors INLINEFORM6 and INLINEFORM7 belong to the same dialect, and INLINEFORM8 otherwise. To optimize the network, we use a Euclidean distance loss function between the label and the cosine distance INLINEFORM9, where INLINEFORM10. For training, i-vector pairs and their corresponding labels can be generated from combinations of i-vectors in the training dataset. The trained convolutional network INLINEFORM0 transforms an i-vector INLINEFORM1 to a low-dimensional subspace that is more robust for distinguishing dialects. A detailed illustration of the convolutional network INLINEFORM2 is shown in Figure FIGREF5 (b). The final transformed i-vector, INLINEFORM3, is a 200-dimensional vector. No nonlinear activation function was used on the fully connected layer. A cosine distance is used for scoring. i-vector Post-Processing In this section, we describe the domain adaptation techniques we investigated using the development set to help adapt our models to the test set. Although the baseline system used an SVM classifier, Cosine Distance Scoring (CDS) is a fast, simple, and effective method to measure the similarity between an enrolled i-vector dialect model and a test utterance i-vector. Under CDS, ZT-norm or S-norm can also be applied for score normalization BIBREF25. Dialect enrollment can be obtained from the mean of the i-vectors for each dialect, and is called the i-vector dialect model: INLINEFORM0, where INLINEFORM1 is the number of utterances for each dialect INLINEFORM2. Since we have two datasets for dialect enrollment, INLINEFORM3 for the training set and INLINEFORM4 for the development set, we use an interpolation approach with parameter INLINEFORM5, where INLINEFORM6. We observed that the mismatched training set is useful when combined with the matched development set. Figure FIGREF7 shows the performance as a function of the parameter INLINEFORM0 under the same experimental conditions as System 2 in Section 4.3. This approach can be thought of as analogous to score fusion of two different systems. However, score fusion is usually performed at the system score level, while this approach combines knowledge of in-domain and out-of-domain i-vectors with a gamma weight within a single system. For i-vector-based speaker and language recognition approaches, a whitening transformation and length normalization are considered essential BIBREF26. Since length normalization is inherently a nonlinear, non-whitening operation, a recursive whitening transformation has recently been proposed to reduce residual un-whitened components in the i-vector space, as illustrated in Figure FIGREF10 BIBREF14.
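The Siamese training objective described above can be sketched as follows in PyTorch. The convolutional architecture of Figure FIGREF5 is not reproduced here; a small fully connected embedder with a linear 200-dimensional output stands in, and the label encoding (1 for same-dialect pairs, 0 otherwise) and the use of cosine similarity inside a squared-error loss are assumptions consistent with, but not necessarily identical to, the description above.

# Sketch (PyTorch): Siamese embedding of i-vectors with a squared loss between
# the pair label and the cosine similarity of the two embeddings.
import torch
import torch.nn as nn

class IvectorEmbedder(nn.Module):
    def __init__(self, in_dim=400, out_dim=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 300), nn.ReLU(),
            nn.Linear(300, out_dim),          # no nonlinearity on the output layer
        )
    def forward(self, x):
        return self.net(x)

embedder = IvectorEmbedder()                  # shared weights for both branches
opt = torch.optim.Adam(embedder.parameters(), lr=1e-3)
cos = nn.CosineSimilarity(dim=1)

x1, x2 = torch.randn(8, 400), torch.randn(8, 400)   # a toy batch of i-vector pairs
y = torch.randint(0, 2, (8,)).float()                # 1 = same dialect, 0 = different

opt.zero_grad()
e1, e2 = embedder(x1), embedder(x2)
loss = ((cos(e1, e2) - y) ** 2).mean()
loss.backward()
opt.step()
print(float(loss))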
In this approach, the data subset that best matches the test data is used at each iteration to calculate the whitening transformation. In our ADI experiments, we applied 1 to 3 levels of recursive whitening transformation using the training and development data. Phoneme Features Phoneme feature extraction consists of extracting the phone sequence, and phone duration statistics using four different speech recognizers: Czech, Hungarian, and Russian using narrowband model, and English using a broadband model BIBREF27 . We evaluated the four systems using a Support Vector Machine (SVM). The hyper-parameters for the SVM are distance from the hyperplane (C is 0.01), and penalty l2. We used the training data for training the SVM and the development data for testing. Table TABREF13 shows the results for the four phoneme recognizers. The Hungarian phoneme recognition obtained the best results, so we used it for the final system combination. Character Features Word sequences are extracted using a state-of-the-art Arabic speech-to-text transcription system built as part of the MGB-2 BIBREF28 . The system is a combination of a Time Delayed Neural Network (TDNN), a Long Short-Term Memory Recurrent Neural Network (LSTM) and Bidirectional LSTM acoustic models, followed by 4-gram and Recurrent Neural Network (RNN) language model rescoring. Our system uses a grapheme lexicon during both training and decoding. The acoustic models are trained on 1,200 hours of Arabic broadcast speech. We also perform data augmentation (speed and volume perturbation) which gives us three times the original training data. For more details see the system description paper BIBREF4 . We kept the <UNK> from the ASR system, which indicates out-of-vocabulary (OOV) words, we replaced it with special symbol. Space was inserted between all characters including the word boundaries. An SVM classifier was trained similarly to the one used for the phoneme ASR systems, and we achieved 52% accuracy, 51.2% precision and 51.8% recall. The confusion matrix is different between the phoneme classifier and the character classifier systems, which motivates us to use both of them in the final system combination. Score Calibration All scores are calibrated to be between 0 and 1. A linear calibration is done by the Bosaris toolkit BIBREF29 . Fusion is also done in a linear manner. ADI Experiments For experiments and evaluation, we use i-vectors and transcriptions that are provided by the challenge organizers. Please refer to BIBREF23 for descriptions of i-vector extraction and Arabic speech-to-text configuration. Using Training Data for Training The first experiment we conducted used only the training data for developing the ADI system. Thus, the interpolated i-vector dialect model cannot be used for this experimental condition. Table TABREF14 shows the performance on dimension reduced i-vectors using the Siamese network (Siam i-vector), and Linear Discriminant Analysis (LDA i-vector), as compared to the baseline i-vector system. LDA reduces the 400-dimension i-vector to 4, while the Siamese network reduces it from 400 to 200. Since the Siamese network used a cosine distance for the loss function, the Siam i-vector showed better performance with the CDS scoring method, while others achieved better performance with an SVM. The best system using Siam i-vector showed overall 10% better performance accuracy, as compared to the baseline. 
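One plausible reading of the recursive whitening procedure is sketched below: each pass estimates a whitening transform on a reference subset, applies it, and re-applies length normalization before the next pass. The choice and ordering of reference subsets (training first, then development), the covariance regularization, and the reduced toy dimensionality are assumptions for illustration, not the exact recipe of BIBREF14.

# Sketch: recursive whitening + length normalization of i-vectors.
import numpy as np

def length_norm(X):
    return X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)

def whitening_transform(R):
    # Whitening transform (mean and symmetric whitening matrix) estimated on R.
    mu = R.mean(axis=0)
    cov = np.cov(R, rowvar=False) + 1e-6 * np.eye(R.shape[1])
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return mu, W

def recursive_whiten(X, refs):
    # One whitening + length-normalization pass per reference set, ordered from
    # least to most test-like (e.g., [train_ivectors, dev_ivectors]).
    refs = [np.asarray(R, dtype=float) for R in refs]
    for i in range(len(refs)):
        mu, W = whitening_transform(refs[i])
        X = length_norm((X - mu) @ W)
        refs = [length_norm((R - mu) @ W) for R in refs]
    return X

rng = np.random.default_rng(1)
train, dev, test = (rng.normal(size=(n, 60)) for n in (300, 100, 50))  # 60 dims for the toy example
print(recursive_whiten(test, [train, dev]).shape)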
Using Training and Development Data for Training For our second experiment, both the training and development data were used for training. For phoneme and character features, we show development set experimental results in Table TABREF15 . For i-vector experiments, we show results in Table TABREF16 . In the table we see that the interpolated dialect model gave significant improvements in all three metrics. The recursive whitening transformation gave slight improvements on the original i-vector, but not after LDA and the Siamese network. The best system is the original i-vector with recursive whitening, and an interpolated i-vector dialect model, which achieves over 20% accuracy improvement over the baseline. While the Siamese i-vector network helped in the training data only experiments, it does not show any advantage over the baseline i-vector for this condition. We suspect this result is due to the composition of the data used for training the Siamese network. To train the network, i-vector pairs are chosen from from training dataset. We selected the pairs using both the training and development datasets. However, if we could put more emphasis on the development data, we suspect the Siamese i-vector network would be more robust on the test data. We plan to further examine the performances due to different compositions of data in the future. Performance Evaluation of Submission Tables TABREF21 and TABREF22 show detailed performance evaluations of our three submitted systems. System 1 was trained using only the training data as shown in Table TABREF21 . Systems 2 and 3 were trained using both the training and development sets as shown in Table TABREF22 . We found the best linear fusion weight based on System 1 to prevent over-fitting was 0.7, 0.2 and 0.1 for i-vector, character, and phonetic based scores respectively. We applied the same weights to Systems 2 and 3 for fusion. From Table TABREF21 , we see that the Siamese network demonstrates its effectiveness on both the development and test sets without using any information of the test domain. The interpolated i-vector dialect model also demonstrates that it reflects test domain information well as shown by Systems 2 and 3 in Table TABREF22 . Although we expected that the linguistic features would not affected by the domain mismatch, character and phoneme features show useful contributions for all systems. We believe the reason for the performance degradation of Systems 2 and 3 after fusion on the development data can be seen in the fusion rule. We applied the fusion rule derived from System 1 which was not optimal for Systems 2 and 3, considering the development set evaluation. By including the development data as part of their training, Systems 2 and 3 are subsequently overfit on the development data, which was why we used the fusion rule of System 1. From the excellent fusion performance on the test data for Systems 2 and 3, we believe that the fusion rule from System 1 prevented an over-fitted result. Conclusion In this paper, we describe the MIT-QCRI ADI system using both audio and linguistic features for the MGB-3 challenge. We studied several approaches to address dialect variability and domain mismatches between the training and test sets. 
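The interpolated i-vector dialect model with cosine distance scoring can be sketched as follows. Since the original interpolation formula is not reproduced in the text above, the gamma-weighted combination of per-dialect training and development means, and the gamma value itself, are assumptions for illustration.

# Sketch: interpolated i-vector dialect model scored with cosine distance (CDS).
import numpy as np

def dialect_means(X, y, dialects):
    return {d: X[y == d].mean(axis=0) for d in dialects}

def cds(model, x):
    return float(model @ x / (np.linalg.norm(model) * np.linalg.norm(x) + 1e-12))

dialects = ["EGY", "GLF", "LEV", "MSA", "NOR"]
rng = np.random.default_rng(2)
X_trn, y_trn = rng.normal(size=(500, 400)), rng.choice(dialects, 500)
X_dev, y_dev = rng.normal(size=(100, 400)), rng.choice(dialects, 100)

gamma = 0.4                                    # hypothetical weight on the training-set model
m_trn = dialect_means(X_trn, y_trn, dialects)
m_dev = dialect_means(X_dev, y_dev, dialects)
model = {d: gamma * m_trn[d] + (1 - gamma) * m_dev[d] for d in dialects}

x_test = rng.normal(size=400)
print(max(dialects, key=lambda d: cds(model[d], x_test)))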
Without knowledge of the test domain where the system will be applied, i-vector dimensionality reduction using a Siamese network was found to be useful, while an interpolated i-vector dialect model showed effectiveness with relatively small amounts of test domain information from the development data. Under both conditions, fusion of audio and linguistic features yielded substantial improvements in dialect identification. As these approaches are not limited to dialect identification, we plan to explore their utility on other speaker and language recognition problems in the future.
Unanswerable
4fa2faa08eeabc09d78d89aaf0ea86bb36328172
4fa2faa08eeabc09d78d89aaf0ea86bb36328172_0
Q: How do they explore dialect variability? Text: Introduction One of the challenges of processing real-world spoken content, such as media broadcasts, is the potential presence of different dialects of a language in the material. Dialect identification can be a useful capability to identify which dialect is being spoken during a recording. Dialect identification can be regarded as a special case of language recognition, requiring an ability to discriminate between different members within the same language family, as opposed to across language families (i.e., for language recognition). The dominant approach, based on i-vector extraction, has proven to be very effective for both language and speaker recognition BIBREF0 . Recently, phonetically aware deep neural models have also been found to be effective in combination with i-vectors BIBREF1 , BIBREF2 , BIBREF3 . Phonetically aware models could be beneficial for dialect identification, since they provide a mechanism to focus attention on small phonetic differences between dialects with predominantly common phonetic inventories. Since 2015, the Arabic Multi-Genre Broadcast (MGB) Challenge tasks have provided a valuable resource for researchers interested in processing multi-dialectal Arabic speech. For the ASRU 2017 MGB-3 Challenge, there were two possible tasks. The first task was aimed at developing an automatic speech recognition system for Arabic dialectal speech based on a multi-genre broadcast audio dataset. The second task was aimed at developing an Arabic Dialect Identification (ADI) capability for five major Arabic dialects. This paper reports our experimentation efforts for the ADI task. While the MGB-3 Arabic ASR task included seven different genres from the broadcast domain, the ADI task focused solely on broadcast news. Participants were provided high-quality Aljazeera news broadcasts as well as transcriptions generated by a multi-dialect ASR system created from the MGB-2 dataset BIBREF4 . The biggest difference from previous MGB challenges is that only a relatively small development set of in-domain data is provided for adaptation to the test set (i.e., the training data is mismatched with the test data). For the ADI baseline, participants were also provided with i-vector features from the audio dataset, and lexical features from the transcripts. Evaluation software was shared with all participants using baseline features available via Github. The evaluation scenario for the MGB-3 ADI task can be viewed as channel and domain mismatch because the recording environment of the training data is different from the development and test data. In general, channel or domain mismatch between training and test data can be a significant factor affecting system performance. Differences in channel, genre, language, topic etc. produce shifts in low-dimensional projections of the corresponding speech and ultimately cause performance degradations on evaluation data. In order to address performance degradation of speaker and language recognition systems due to domain mismatches, researchers have proposed various approaches to compensate for, and to adapt to the mismatch BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . 
For the MGB-3 ADI task, we utilized the development data to adapt to the test data recording domain, and investigated approaches to improve ADI performance both on the domain mismatched scenario, and the matching scenario, by using a recursive whitening transformation, a weighted dialect i-vector model, and a Siamese Neural Network. In contrast to the language recognition scenario, where there are different linguistic units across languages, language dialects typically share a common phonetic inventory and written language. Thus, we can potentially use ASR outputs such as phones, characters, and lexicons as features. N-gram histograms of phonemes, characters and lexicons can be used as feature vectors directly, and indeed, a lexicon-based n-gram feature vector was provided for the MGB-3 ADI baseline. The linguistic feature space is, naturally, completely different to the audio feature space, so a fusion of the results from both feature representations has been previously shown to be beneficial BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . Moreover, the linguistic feature has an advantage in channel domain mismatch situations because the transcription itself does not reflect the recording environment, and only contains linguistic information. In this paper, we describe our work for the MGB-3 ADI Challenge. The final MIT-QCRI submitted system is a combination of audio and linguistic feature-based systems, and includes multiple approaches to address the challenging mismatched conditions. From the official results, this system achieved the best performance among all participants. The following sections describe our research in greater detail. MGB-3 Arabic Dialect Identification For the MGB-3 ADI task, the challenge organizers provided 13,825 utterances (53.6 hours) for the training (TRN) set, 1,524 utterances (10 hours) for a development (DEV) set, and 1,492 utterances (10.1 hours) for a test (TST) set. Each dataset consisted of five Arabic dialects: Egyptian (EGY), Levantine (LEV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA). Detailed statistics of the ADI dataset can be found in BIBREF23 . Table TABREF3 shows some facts about the evaluation conditions and data properties. Note that the development set is relatively small compared to the training set. However, it is matched with the test set channel domain. Thus, the development set provides valuable information to adapt or compensate the channel (recording) domain mismatch between the train and test sets. Dialect Identification Task & System The MGB-3 ADI task asks participants to classify speech as one of five dialects, by specifying one dialect for each audio file for their submission. Performance is evaluated via three indices: overall accuracy, average precision, and average recall for the five dialects. Baseline ADI System The challenge organizers provided features and code for a baseline ADI system. The features consisted of 400 dimensional i-vector features for each audio file (based on bottleneck feature inputs for their frame-level acoustic representation), as well as lexical features using bigrams generated from transcriptions BIBREF23 . For baseline dialect identification, a multi-class Support Vector Machine (SVM) was used. The baseline i-vector performance was 57.3%, 60.8%, and 58.0% for accuracy, precision and recall respectively. Lexical features achieved 48.4%, 51.0%, and 49.3%, respectively. 
While the audio-based features achieved better performance than the lexical features, both systems only obtained approximately 50% accuracy, indicating that this ADI task is difficult, considering that there are only five classes to choose from. Siamese Neural Network-based ADI To further distinguish speech from different Arabic dialects, while making speech from the same dialect more similar, we adopted a Siamese neural network architecture BIBREF24 based on an i-vector feature space. The Siamese neural network has two parallel convolutional networks, INLINEFORM0 , that share the same set of weights, INLINEFORM1 , as shown in Figure FIGREF5 (a). Let INLINEFORM2 and INLINEFORM3 be a pair of i-vectors for which we wish to compute a distance. Let INLINEFORM4 be the label for the pair, where INLINEFORM5 = 1 if the i-vectors INLINEFORM6 and INLINEFORM7 belong to same dialect, and INLINEFORM8 otherwise. To optimize the network, we use a Euclidean distance loss function between the label and the cosine distance, INLINEFORM9 , where INLINEFORM10 For training, i-vector pairs and their corresponding labels can be processed by combinations of i-vectors from the training dataset. The trained convolutional network INLINEFORM0 transforms an i-vector INLINEFORM1 to a low-dimensional subspace that is more robust for distinguishing dialects. A detailed illustration of the convolutional network INLINEFORM2 is shown in Figure FIGREF5 (b). The final transformed i-vector, INLINEFORM3 , is a 200-dimensional vector. No nonlinear activation function was used on the fully connected layer. A cosine distance is used for scoring. i-vector Post-Processing In this section we describe the domain adaptation techniques we investigated using the development set to help adapt our models to the test set. Although the baseline system used an SVM classifier, Cosine Distance Scoring (CDS) is a fast, simple, and effective method to measure the similarity between an enrolled i-vector dialect model, and a test utterance i-vector. Under CDS, ZT-norm or S-norm can be also applied for score normalization BIBREF25 . Dialect enrollment can be obtained by means of i-vectors for each dialect, and is called the i-vector dialect model: INLINEFORM0 , where INLINEFORM1 is the number of utterances for each dialect INLINEFORM2 . Since we have two datasets for dialect enrollment, INLINEFORM3 for the training set, and INLINEFORM4 for the development set, we use an interpolation approach with parameter INLINEFORM5 , where INLINEFORM6 We observed that the mismatched training set is useful when combined with matched development set. Figure FIGREF7 shows the performance evaluation by parameter INLINEFORM0 on the same experimental conditions of System 2 in Section 4.3. This approach can be thought of as exactly the same as score fusion for different system. However, score fusion is usually performed at the system score level, while this approach uses a combination of knowledge of in-domain and out-of-domain i-vectors with a gamma weight on a single system. For i-vector-based speaker and language recognition approaches, a whitening transformation and length normalization is considered essential BIBREF26 . Since length normalization is inherently a nonlinear, non-whitening operation, recently, a recursive whitening transformation has been proposed to reduce residual un-whitened components in the i-vector space, as illustrated in Figure FIGREF10 BIBREF14 . 
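As a sketch of the score normalization mentioned above, the following implements symmetric normalization (S-norm) on top of cosine scoring; the cohort of i-vectors and its size are placeholders, and this is a generic formulation rather than the authors' exact setup.

# Sketch: S-norm score normalization over cosine scores against a cohort.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def s_norm(score, enroll, test, cohort):
    # Symmetric normalization using cohort statistics for both sides of the trial.
    e_scores = np.array([cosine(enroll, c) for c in cohort])
    t_scores = np.array([cosine(test, c) for c in cohort])
    return 0.5 * ((score - e_scores.mean()) / (e_scores.std() + 1e-12)
                  + (score - t_scores.mean()) / (t_scores.std() + 1e-12))

rng = np.random.default_rng(3)
enroll, test = rng.normal(size=400), rng.normal(size=400)   # dialect model, test i-vector
cohort = rng.normal(size=(50, 400))                          # placeholder cohort i-vectors
print(s_norm(cosine(enroll, test), enroll, test, cohort))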
In this approach, the data subset that best matches the test data is used at each iteration to calculate the whitening transformation. In our ADI experiments, we applied 1 to 3 levels of recursive whitening transformation using the training and development data. Phoneme Features Phoneme feature extraction consists of extracting the phone sequence, and phone duration statistics using four different speech recognizers: Czech, Hungarian, and Russian using narrowband model, and English using a broadband model BIBREF27 . We evaluated the four systems using a Support Vector Machine (SVM). The hyper-parameters for the SVM are distance from the hyperplane (C is 0.01), and penalty l2. We used the training data for training the SVM and the development data for testing. Table TABREF13 shows the results for the four phoneme recognizers. The Hungarian phoneme recognition obtained the best results, so we used it for the final system combination. Character Features Word sequences are extracted using a state-of-the-art Arabic speech-to-text transcription system built as part of the MGB-2 BIBREF28 . The system is a combination of a Time Delayed Neural Network (TDNN), a Long Short-Term Memory Recurrent Neural Network (LSTM) and Bidirectional LSTM acoustic models, followed by 4-gram and Recurrent Neural Network (RNN) language model rescoring. Our system uses a grapheme lexicon during both training and decoding. The acoustic models are trained on 1,200 hours of Arabic broadcast speech. We also perform data augmentation (speed and volume perturbation) which gives us three times the original training data. For more details see the system description paper BIBREF4 . We kept the <UNK> from the ASR system, which indicates out-of-vocabulary (OOV) words, we replaced it with special symbol. Space was inserted between all characters including the word boundaries. An SVM classifier was trained similarly to the one used for the phoneme ASR systems, and we achieved 52% accuracy, 51.2% precision and 51.8% recall. The confusion matrix is different between the phoneme classifier and the character classifier systems, which motivates us to use both of them in the final system combination. Score Calibration All scores are calibrated to be between 0 and 1. A linear calibration is done by the Bosaris toolkit BIBREF29 . Fusion is also done in a linear manner. ADI Experiments For experiments and evaluation, we use i-vectors and transcriptions that are provided by the challenge organizers. Please refer to BIBREF23 for descriptions of i-vector extraction and Arabic speech-to-text configuration. Using Training Data for Training The first experiment we conducted used only the training data for developing the ADI system. Thus, the interpolated i-vector dialect model cannot be used for this experimental condition. Table TABREF14 shows the performance on dimension reduced i-vectors using the Siamese network (Siam i-vector), and Linear Discriminant Analysis (LDA i-vector), as compared to the baseline i-vector system. LDA reduces the 400-dimension i-vector to 4, while the Siamese network reduces it from 400 to 200. Since the Siamese network used a cosine distance for the loss function, the Siam i-vector showed better performance with the CDS scoring method, while others achieved better performance with an SVM. The best system using Siam i-vector showed overall 10% better performance accuracy, as compared to the baseline. 
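The LDA-based dimensionality reduction from 400 to 4 dimensions described above (4 being the maximum for 5 classes) can be sketched with scikit-learn as follows; the random i-vectors and the back-end SVM settings are stand-ins.

# Sketch: LDA reduction of 400-dimensional i-vectors to 4 dimensions, then an SVM.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
X, y = rng.normal(size=(500, 400)), rng.integers(0, 5, size=500)

lda = LinearDiscriminantAnalysis(n_components=4).fit(X, y)   # 400 -> 4
X_lda = lda.transform(X)
clf = LinearSVC().fit(X_lda, y)
print(X_lda.shape, clf.score(X_lda, y))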
Using Training and Development Data for Training For our second experiment, both the training and development data were used for training. For phoneme and character features, we show development set experimental results in Table TABREF15 . For i-vector experiments, we show results in Table TABREF16 . In the table we see that the interpolated dialect model gave significant improvements in all three metrics. The recursive whitening transformation gave slight improvements on the original i-vector, but not after LDA and the Siamese network. The best system is the original i-vector with recursive whitening, and an interpolated i-vector dialect model, which achieves over 20% accuracy improvement over the baseline. While the Siamese i-vector network helped in the training data only experiments, it does not show any advantage over the baseline i-vector for this condition. We suspect this result is due to the composition of the data used for training the Siamese network. To train the network, i-vector pairs are chosen from from training dataset. We selected the pairs using both the training and development datasets. However, if we could put more emphasis on the development data, we suspect the Siamese i-vector network would be more robust on the test data. We plan to further examine the performances due to different compositions of data in the future. Performance Evaluation of Submission Tables TABREF21 and TABREF22 show detailed performance evaluations of our three submitted systems. System 1 was trained using only the training data as shown in Table TABREF21 . Systems 2 and 3 were trained using both the training and development sets as shown in Table TABREF22 . We found the best linear fusion weight based on System 1 to prevent over-fitting was 0.7, 0.2 and 0.1 for i-vector, character, and phonetic based scores respectively. We applied the same weights to Systems 2 and 3 for fusion. From Table TABREF21 , we see that the Siamese network demonstrates its effectiveness on both the development and test sets without using any information of the test domain. The interpolated i-vector dialect model also demonstrates that it reflects test domain information well as shown by Systems 2 and 3 in Table TABREF22 . Although we expected that the linguistic features would not affected by the domain mismatch, character and phoneme features show useful contributions for all systems. We believe the reason for the performance degradation of Systems 2 and 3 after fusion on the development data can be seen in the fusion rule. We applied the fusion rule derived from System 1 which was not optimal for Systems 2 and 3, considering the development set evaluation. By including the development data as part of their training, Systems 2 and 3 are subsequently overfit on the development data, which was why we used the fusion rule of System 1. From the excellent fusion performance on the test data for Systems 2 and 3, we believe that the fusion rule from System 1 prevented an over-fitted result. Conclusion In this paper, we describe the MIT-QCRI ADI system using both audio and linguistic features for the MGB-3 challenge. We studied several approaches to address dialect variability and domain mismatches between the training and test sets. 
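The remark above about putting more emphasis on the development data when choosing Siamese training pairs could be realized, for example, by oversampling development i-vectors before pair selection; the sketch below is one hypothetical way to do this and is not the authors' procedure.

# Sketch: sampling same/different-dialect i-vector pairs, with the development
# data oversampled by a (hypothetical) emphasis factor.
import numpy as np

def sample_pairs(X, y, n_pairs, rng):
    i = rng.integers(0, len(X), size=n_pairs)
    j = rng.integers(0, len(X), size=n_pairs)
    labels = (y[i] == y[j]).astype(float)     # 1 = same dialect, 0 = different
    return X[i], X[j], labels

rng = np.random.default_rng(5)
X_trn, y_trn = rng.normal(size=(500, 400)), rng.integers(0, 5, 500)
X_dev, y_dev = rng.normal(size=(100, 400)), rng.integers(0, 5, 100)

dev_emphasis = 3                               # hypothetical oversampling factor
X_all = np.vstack([X_trn] + [X_dev] * dev_emphasis)
y_all = np.concatenate([y_trn] + [y_dev] * dev_emphasis)
a, b, lab = sample_pairs(X_all, y_all, n_pairs=1000, rng=rng)
print(a.shape, b.shape, lab.mean())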
Without knowledge of the test domain where the system will be applied, i-vector dimensionality reduction using a Siamese network was found to be useful, while an interpolated i-vector dialect model showed effectiveness with relatively small amounts of test domain information from the development data. Under both conditions, fusion of audio and linguistic features yielded substantial improvements in dialect identification. As these approaches are not limited to dialect identification, we plan to explore their utility on other speaker and language recognition problems in the future.
Unanswerable
e87f47a293e0b49ab8b15fc6633d9ca6dc9de071
e87f47a293e0b49ab8b15fc6633d9ca6dc9de071_0
Q: Which are the four Arabic dialects? Text: Introduction One of the challenges of processing real-world spoken content, such as media broadcasts, is the potential presence of different dialects of a language in the material. Dialect identification can be a useful capability to identify which dialect is being spoken during a recording. Dialect identification can be regarded as a special case of language recognition, requiring an ability to discriminate between different members within the same language family, as opposed to across language families (i.e., for language recognition). The dominant approach, based on i-vector extraction, has proven to be very effective for both language and speaker recognition BIBREF0 . Recently, phonetically aware deep neural models have also been found to be effective in combination with i-vectors BIBREF1 , BIBREF2 , BIBREF3 . Phonetically aware models could be beneficial for dialect identification, since they provide a mechanism to focus attention on small phonetic differences between dialects with predominantly common phonetic inventories. Since 2015, the Arabic Multi-Genre Broadcast (MGB) Challenge tasks have provided a valuable resource for researchers interested in processing multi-dialectal Arabic speech. For the ASRU 2017 MGB-3 Challenge, there were two possible tasks. The first task was aimed at developing an automatic speech recognition system for Arabic dialectal speech based on a multi-genre broadcast audio dataset. The second task was aimed at developing an Arabic Dialect Identification (ADI) capability for five major Arabic dialects. This paper reports our experimentation efforts for the ADI task. While the MGB-3 Arabic ASR task included seven different genres from the broadcast domain, the ADI task focused solely on broadcast news. Participants were provided high-quality Aljazeera news broadcasts as well as transcriptions generated by a multi-dialect ASR system created from the MGB-2 dataset BIBREF4 . The biggest difference from previous MGB challenges is that only a relatively small development set of in-domain data is provided for adaptation to the test set (i.e., the training data is mismatched with the test data). For the ADI baseline, participants were also provided with i-vector features from the audio dataset, and lexical features from the transcripts. Evaluation software was shared with all participants using baseline features available via Github. The evaluation scenario for the MGB-3 ADI task can be viewed as channel and domain mismatch because the recording environment of the training data is different from the development and test data. In general, channel or domain mismatch between training and test data can be a significant factor affecting system performance. Differences in channel, genre, language, topic etc. produce shifts in low-dimensional projections of the corresponding speech and ultimately cause performance degradations on evaluation data. In order to address performance degradation of speaker and language recognition systems due to domain mismatches, researchers have proposed various approaches to compensate for, and to adapt to the mismatch BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . 
For the MGB-3 ADI task, we utilized the development data to adapt to the test data recording domain, and investigated approaches to improve ADI performance both on the domain mismatched scenario, and the matching scenario, by using a recursive whitening transformation, a weighted dialect i-vector model, and a Siamese Neural Network. In contrast to the language recognition scenario, where there are different linguistic units across languages, language dialects typically share a common phonetic inventory and written language. Thus, we can potentially use ASR outputs such as phones, characters, and lexicons as features. N-gram histograms of phonemes, characters and lexicons can be used as feature vectors directly, and indeed, a lexicon-based n-gram feature vector was provided for the MGB-3 ADI baseline. The linguistic feature space is, naturally, completely different to the audio feature space, so a fusion of the results from both feature representations has been previously shown to be beneficial BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . Moreover, the linguistic feature has an advantage in channel domain mismatch situations because the transcription itself does not reflect the recording environment, and only contains linguistic information. In this paper, we describe our work for the MGB-3 ADI Challenge. The final MIT-QCRI submitted system is a combination of audio and linguistic feature-based systems, and includes multiple approaches to address the challenging mismatched conditions. From the official results, this system achieved the best performance among all participants. The following sections describe our research in greater detail. MGB-3 Arabic Dialect Identification For the MGB-3 ADI task, the challenge organizers provided 13,825 utterances (53.6 hours) for the training (TRN) set, 1,524 utterances (10 hours) for a development (DEV) set, and 1,492 utterances (10.1 hours) for a test (TST) set. Each dataset consisted of five Arabic dialects: Egyptian (EGY), Levantine (LEV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA). Detailed statistics of the ADI dataset can be found in BIBREF23 . Table TABREF3 shows some facts about the evaluation conditions and data properties. Note that the development set is relatively small compared to the training set. However, it is matched with the test set channel domain. Thus, the development set provides valuable information to adapt or compensate the channel (recording) domain mismatch between the train and test sets. Dialect Identification Task & System The MGB-3 ADI task asks participants to classify speech as one of five dialects, by specifying one dialect for each audio file for their submission. Performance is evaluated via three indices: overall accuracy, average precision, and average recall for the five dialects. Baseline ADI System The challenge organizers provided features and code for a baseline ADI system. The features consisted of 400 dimensional i-vector features for each audio file (based on bottleneck feature inputs for their frame-level acoustic representation), as well as lexical features using bigrams generated from transcriptions BIBREF23 . For baseline dialect identification, a multi-class Support Vector Machine (SVM) was used. The baseline i-vector performance was 57.3%, 60.8%, and 58.0% for accuracy, precision and recall respectively. Lexical features achieved 48.4%, 51.0%, and 49.3%, respectively. 
While the audio-based features achieved better performance than the lexical features, both systems only obtained approximately 50% accuracy, indicating that this ADI task is difficult, considering that there are only five classes to choose from. Siamese Neural Network-based ADI To further distinguish speech from different Arabic dialects, while making speech from the same dialect more similar, we adopted a Siamese neural network architecture BIBREF24 based on an i-vector feature space. The Siamese neural network has two parallel convolutional networks, INLINEFORM0 , that share the same set of weights, INLINEFORM1 , as shown in Figure FIGREF5 (a). Let INLINEFORM2 and INLINEFORM3 be a pair of i-vectors for which we wish to compute a distance. Let INLINEFORM4 be the label for the pair, where INLINEFORM5 = 1 if the i-vectors INLINEFORM6 and INLINEFORM7 belong to same dialect, and INLINEFORM8 otherwise. To optimize the network, we use a Euclidean distance loss function between the label and the cosine distance, INLINEFORM9 , where INLINEFORM10 For training, i-vector pairs and their corresponding labels can be processed by combinations of i-vectors from the training dataset. The trained convolutional network INLINEFORM0 transforms an i-vector INLINEFORM1 to a low-dimensional subspace that is more robust for distinguishing dialects. A detailed illustration of the convolutional network INLINEFORM2 is shown in Figure FIGREF5 (b). The final transformed i-vector, INLINEFORM3 , is a 200-dimensional vector. No nonlinear activation function was used on the fully connected layer. A cosine distance is used for scoring. i-vector Post-Processing In this section we describe the domain adaptation techniques we investigated using the development set to help adapt our models to the test set. Although the baseline system used an SVM classifier, Cosine Distance Scoring (CDS) is a fast, simple, and effective method to measure the similarity between an enrolled i-vector dialect model, and a test utterance i-vector. Under CDS, ZT-norm or S-norm can be also applied for score normalization BIBREF25 . Dialect enrollment can be obtained by means of i-vectors for each dialect, and is called the i-vector dialect model: INLINEFORM0 , where INLINEFORM1 is the number of utterances for each dialect INLINEFORM2 . Since we have two datasets for dialect enrollment, INLINEFORM3 for the training set, and INLINEFORM4 for the development set, we use an interpolation approach with parameter INLINEFORM5 , where INLINEFORM6 We observed that the mismatched training set is useful when combined with matched development set. Figure FIGREF7 shows the performance evaluation by parameter INLINEFORM0 on the same experimental conditions of System 2 in Section 4.3. This approach can be thought of as exactly the same as score fusion for different system. However, score fusion is usually performed at the system score level, while this approach uses a combination of knowledge of in-domain and out-of-domain i-vectors with a gamma weight on a single system. For i-vector-based speaker and language recognition approaches, a whitening transformation and length normalization is considered essential BIBREF26 . Since length normalization is inherently a nonlinear, non-whitening operation, recently, a recursive whitening transformation has been proposed to reduce residual un-whitened components in the i-vector space, as illustrated in Figure FIGREF10 BIBREF14 . 
In this approach, the data subset that best matches the test data is used at each iteration to calculate the whitening transformation. In our ADI experiments, we applied 1 to 3 levels of recursive whitening transformation using the training and development data. Phoneme Features Phoneme feature extraction consists of extracting the phone sequence, and phone duration statistics using four different speech recognizers: Czech, Hungarian, and Russian using narrowband model, and English using a broadband model BIBREF27 . We evaluated the four systems using a Support Vector Machine (SVM). The hyper-parameters for the SVM are distance from the hyperplane (C is 0.01), and penalty l2. We used the training data for training the SVM and the development data for testing. Table TABREF13 shows the results for the four phoneme recognizers. The Hungarian phoneme recognition obtained the best results, so we used it for the final system combination. Character Features Word sequences are extracted using a state-of-the-art Arabic speech-to-text transcription system built as part of the MGB-2 BIBREF28 . The system is a combination of a Time Delayed Neural Network (TDNN), a Long Short-Term Memory Recurrent Neural Network (LSTM) and Bidirectional LSTM acoustic models, followed by 4-gram and Recurrent Neural Network (RNN) language model rescoring. Our system uses a grapheme lexicon during both training and decoding. The acoustic models are trained on 1,200 hours of Arabic broadcast speech. We also perform data augmentation (speed and volume perturbation) which gives us three times the original training data. For more details see the system description paper BIBREF4 . We kept the <UNK> from the ASR system, which indicates out-of-vocabulary (OOV) words, we replaced it with special symbol. Space was inserted between all characters including the word boundaries. An SVM classifier was trained similarly to the one used for the phoneme ASR systems, and we achieved 52% accuracy, 51.2% precision and 51.8% recall. The confusion matrix is different between the phoneme classifier and the character classifier systems, which motivates us to use both of them in the final system combination. Score Calibration All scores are calibrated to be between 0 and 1. A linear calibration is done by the Bosaris toolkit BIBREF29 . Fusion is also done in a linear manner. ADI Experiments For experiments and evaluation, we use i-vectors and transcriptions that are provided by the challenge organizers. Please refer to BIBREF23 for descriptions of i-vector extraction and Arabic speech-to-text configuration. Using Training Data for Training The first experiment we conducted used only the training data for developing the ADI system. Thus, the interpolated i-vector dialect model cannot be used for this experimental condition. Table TABREF14 shows the performance on dimension reduced i-vectors using the Siamese network (Siam i-vector), and Linear Discriminant Analysis (LDA i-vector), as compared to the baseline i-vector system. LDA reduces the 400-dimension i-vector to 4, while the Siamese network reduces it from 400 to 200. Since the Siamese network used a cosine distance for the loss function, the Siam i-vector showed better performance with the CDS scoring method, while others achieved better performance with an SVM. The best system using Siam i-vector showed overall 10% better performance accuracy, as compared to the baseline. 
Using Training and Development Data for Training For our second experiment, both the training and development data were used for training. For phoneme and character features, we show development set experimental results in Table TABREF15 . For i-vector experiments, we show results in Table TABREF16 . In the table we see that the interpolated dialect model gave significant improvements in all three metrics. The recursive whitening transformation gave slight improvements on the original i-vector, but not after LDA and the Siamese network. The best system is the original i-vector with recursive whitening, and an interpolated i-vector dialect model, which achieves over 20% accuracy improvement over the baseline. While the Siamese i-vector network helped in the training data only experiments, it does not show any advantage over the baseline i-vector for this condition. We suspect this result is due to the composition of the data used for training the Siamese network. To train the network, i-vector pairs are chosen from from training dataset. We selected the pairs using both the training and development datasets. However, if we could put more emphasis on the development data, we suspect the Siamese i-vector network would be more robust on the test data. We plan to further examine the performances due to different compositions of data in the future. Performance Evaluation of Submission Tables TABREF21 and TABREF22 show detailed performance evaluations of our three submitted systems. System 1 was trained using only the training data as shown in Table TABREF21 . Systems 2 and 3 were trained using both the training and development sets as shown in Table TABREF22 . We found the best linear fusion weight based on System 1 to prevent over-fitting was 0.7, 0.2 and 0.1 for i-vector, character, and phonetic based scores respectively. We applied the same weights to Systems 2 and 3 for fusion. From Table TABREF21 , we see that the Siamese network demonstrates its effectiveness on both the development and test sets without using any information of the test domain. The interpolated i-vector dialect model also demonstrates that it reflects test domain information well as shown by Systems 2 and 3 in Table TABREF22 . Although we expected that the linguistic features would not affected by the domain mismatch, character and phoneme features show useful contributions for all systems. We believe the reason for the performance degradation of Systems 2 and 3 after fusion on the development data can be seen in the fusion rule. We applied the fusion rule derived from System 1 which was not optimal for Systems 2 and 3, considering the development set evaluation. By including the development data as part of their training, Systems 2 and 3 are subsequently overfit on the development data, which was why we used the fusion rule of System 1. From the excellent fusion performance on the test data for Systems 2 and 3, we believe that the fusion rule from System 1 prevented an over-fitted result. Conclusion In this paper, we describe the MIT-QCRI ADI system using both audio and linguistic features for the MGB-3 challenge. We studied several approaches to address dialect variability and domain mismatches between the training and test sets. 
Without knowledge of the test domain where the system will be applied, i-vector dimensionality reduction using a Siamese network was found to be useful, while an interpolated i-vector dialect model showed effectiveness with relatively small amounts of test domain information from the development data. Under both conditions, fusion of audio and linguistic features yielded substantial improvements in dialect identification. As these approaches are not limited to dialect identification, we plan to explore their utility on other speaker and language recognition problems in the future.
Egyptian (EGY), Levantine (LEV), Gulf (GLF), North African (NOR)
7426a6e800d6c11795941616fc4a243e75716a10
7426a6e800d6c11795941616fc4a243e75716a10_0
Q: What factors contribute to interpretive biases according to this research? Text: Introduction Bias is generally considered to be a negative term: a biased story is seen as one that perverts or subverts the truth by offering a partial or incomplete perspective on the facts. But bias is in fact essential to understanding: one cannot interpret a set of facts—something humans are disposed to try to do even in the presence of data that is nothing but noise [38]—without relying on a bias or hypothesis to guide that interpretation. Suppose someone presents you with the sequence INLINEFORM0 and tells you to guess the next number. To make an educated guess, you must understand this sequence as instantiating a particular pattern; otherwise, every possible continuation of the sequence will be equally probable for you. Formulating a hypothesis about what pattern is at work will allow you to predict how the sequence will play out, putting you in a position to make a reasonable guess as to what comes after 3. Formulating the hypothesis that this sequence is structured by the Fibonacci function (even if you don't know its name), for example, will lead you to guess that the next number is 5; formulating the hypothesis that the sequence is structured by the successor function but that every odd successor is repeated once will lead you to guess that it is 3. Detecting a certain pattern allows you to determine what we will call a history: a set of given entities or eventualities and a set of relations linking those entities together. The sequence of numbers INLINEFORM1 and the set of relation instances that the Fibonacci sequence entails as holding between them is one example of a history. Bias, then, is the set of features, constraints, and assumptions that lead an interpreter to select one history—one way of stitching together a set of observed data—over another. Bias is also operative in linguistic interpretation. An interpreter's bias surfaces, for example, when the interpreter connects bits of information content together to resolve ambiguities. Consider: . Julie isn't coming. The meeting has been cancelled. While these clauses are not explicitly connected, an interpreter will typically have antecedent biases that lead her to interpret eventualities described by the two clauses as figuring in one of two histories: one in which the eventuality described by the first clause caused the second, or one in which the second caused the first. Any time that structural connections are left implicit by speakers—and this is much if not most of the time in text— interpreters will be left to infer these connections and thereby potentially create their own history or version of events. Every model of data, every history over that data, comes with a bias that allows us to use observed facts to make predictions; bias even determines what kind of predictions the model is meant to make. Bayesian inference, which underlies many powerful models of inference and machine learning, likewise relies on bias in several ways: the estimate of a state given evidence depends upon a prior probability distribution over states, on assumptions about what parameters are probabilistically independent, and on assumptions about the kind of conditional probability distribution that each parameter abides by (e.g., normal distribution, noisy-or, bimodal). Each of these generates a (potentially different) history. Objective of the paper In this paper, we propose a program for research on bias. 
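Returning to the number-sequence example from the introduction above, the two biases can be made concrete as two prediction rules over the same history; the sketch below is purely illustrative and not part of the original text.

# Sketch: two "biases" (hypotheses) over the same history 1, 1, 2, 3 yield
# different predictions for the next element.
def fibonacci_bias(seq):
    return seq[-1] + seq[-2]                 # next = sum of the last two elements

def repeated_successor_bias(seq):
    # Successor function, but every odd successor is repeated once.
    if seq[-1] % 2 == 1 and seq[-2] != seq[-1]:
        return seq[-1]                       # repeat the odd element once
    return seq[-1] + 1

history = [1, 1, 2, 3]
print(fibonacci_bias(history), repeated_successor_bias(history))   # 5 vs. 3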
We will show how to model various types of bias as well as the way in which bias leads to the selection of a history for a set of data, where the data might be a set of nonlinguistic entities or a set of linguistically expressed contents. In particular, we'll look at what people call “unbiased” histories. For us these also involve a bias, what we call a “truth seeking bias”. This is a bias that gets at the truth or acceptably close to it. Our model can show us what such a bias looks like. And we will examine the question of whether it is possible to find such a truth oriented bias for a set of facts, and if so, under what conditions. Can we detect and avoid biases that don't get at the truth but are devised for some other purpose? Our study of interpretive bias relies on three key premises. The first premise is that histories are discursive interpretations of a set of data in the sense that, like discourse interpretations, they link together a set of entities with semantically meaningful relations. As such they are amenable to an analysis using the tools used to model a discourse's content and structure. The second is that a bias consists of a purpose or goal that the histories it generates are built to achieve and that agents build histories for many different purposes—to discover the truth or to understand, but also to conceal the truth, to praise or disparage, to persuade or to dissuade. To properly model histories and the role of biases in creating them, we need a model of the discourse purposes to whose end histories are constructed and of the way that they, together with prior assumptions, shape and determine histories. The third key premise of our approach is that bias is manifested in and conveyed through histories, and so studying histories is crucial for a better understanding of bias. Some examples of bias Let's consider the following example of biased interpretation of a conversation. Here is an example analyzed in BIBREF0 to which we will return in the course of the paper.
(a) Reporter: On a different subject is there a reason that the Senator won't say whether or not someone else bought some suits for him?
(b) Sheehan: Rachel, the Senator has reported every gift he has ever received.
(c) Reporter: That wasn't my question, Cullen.
(d) Sheehan: (i) The Senator has reported every gift he has ever received. (ii) We are not going to respond to unnamed sources on a blog.
(e) Reporter: So Senator Coleman's friend has not bought these suits for him? Is that correct?
(f) Sheehan: The Senator has reported every gift he has ever received.
Sheehan continues to repeat, “The Senator has reported every gift he has ever received” seven more times in two minutes to every follow-up question by the reporter corps. http://www.youtube.com/watch?v=VySnpLoaUrI. For convenience, we denote this sentence uttered by Sheehan (which is an EDU in the language of SDRT, as we shall see presently) as INLINEFORM0 . Now imagine two “juries,” onlookers or judges who interpret what was said and evaluate the exchange, yielding differing interpretations. The interpretations differ principally in how the different contributions of Sheehan and the reporter hang together. In other words, the different interpretations provide different discourse structures that we show schematically in the graphs below. The first is one in which Sheehan's response INLINEFORM0 in SECREF3 b is somewhat puzzling and not taken as an answer to the reporter's question in SECREF3 a. In effect this “jury” could be the reporter herself.
This Jury then interprets the move in SECREF3 c as a correction of the prior exchange. The repetition of INLINEFORM1 in SECREF3 d.ii is taken tentatively as a correction of the prior exchange (that is, the moves SECREF3 a, SECREF3 b and SECREF3 c together), which the Jury then takes the reporter to try to establish with SECREF3 e. When Sheehan repeats SECREF3 a again in SECREF3 f, this jury might very well take Sheehan to be evading all questions on the subject. A different Jury, however, might have a different take on the conversation as depicted in the discourse structure below. Such a jury might take INLINEFORM0 to be at least an indirect answer to the question posed in SECREF3 a, and as a correction to the Reporter's evidently not taking INLINEFORM1 as an answer. The same interpretation of INLINEFORM2 would hold for this Jury when it is repeated in SECREF3 f. Such a Jury would be a supporter of Sheehan or even Sheehan himself. What accounts for these divergent discourse structures? We will argue that it is the biases of the two Juries that create these different interpretations. And these biases are revealed at least implicitly in how they interpret the story: Jury 1 is at the outset at least guarded, if not skeptical, in its appraisal of Sheehan's interest in answering the reporter's questions. On the other hand, Jury 2 is fully convinced of Sheehan's position and thus interprets his responses much more charitably. BIBREF0 shows formally that there is a co-dependence between biases and interpretations; a certain interpretation created because of a certain bias can in turn strengthen that bias, and we will sketch some of the details of this story below. The situation of our two juries applies to a set of nonlinguistic facts. In such a case we take our “jury” to be the author of a history over that set of facts. The jury in this case evaluates and interprets the facts just as our juries did above concerning linguistic messages. To tell a history about a set of facts is to connect them together just as discourse constituents are connected together. And these connections affect and may even determine the way the facts are conceptualized BIBREF1 . Facts typically do not wear their connections to other facts on their sleeves and so how one takes those connections to be is often subject to bias. Even if their characterization and their connections to other facts are “intuitively clear”, our jury may choose to pick only certain connections to convey a particular history or even to make up connections that might be different. One jury might build a history over the set of facts that conveys one set of ideas, while the other might build a quite different history with a different message. Such histories reflect the purposes and assumptions that were exploited to create that structure. As an example of this, consider the lead paragraphs of articles from the New York Times, Townhall and Newsbusters concerning the March for Science held in April, 2017. The March for Science on April 22 may or may not accomplish the goals set out by its organizers. But it has required many people who work in a variety of scientific fields — as well as Americans who are passionate about science — to grapple with the proper role of science in our civic life. The discussion was evident in thousands of responses submitted to NYTimes.com ahead of the march, both from those who will attend and those who are sitting it out. –New York Times Do you have march fatigue yet? 
The left, apparently, does not, so we're in for some street theater on Earth Day, April 22, with the so-called March for Science. It's hard to think of a better way to undermine the public's faith in science than to stage demonstrations in Washington, D.C., and around the country modeled on the Women's March on Washington that took place in January. The Women's March was an anti-Donald Trump festival. Science, however, to be respected, must be purely the search for truth. The organizers of this “March for Science" – by acknowledging that their demonstration is modeled on the Women's March – are contributing to the politicization of science, exactly what true upholders of science should be at pains to avoid. –Townhall Thousands of people have expressed interest in attending the “March for Science” this Earth Day, but internally the event was fraught with conflict and many actual scientists rejected the march and refused to participate. –Newsbusters These different articles begin with some of the same basic facts: the date and purpose of the march, and the fact that the march's import for the science community is controversial, for example. But bias led the reporters to stitch together very different histories. The New York Times, for instance, interprets the controversy as generating a serious discussion about “the proper role of science in our civic life,” while Townhall interprets the march as a political stunt that does nothing but undermine science. While the choice of wording helps to convey bias, just as crucial is the way that the reporters portray the march as being related to other events. Which events authors choose to include in their history, which they leave out, and the way the events chosen relate to the march are crucial factors in conveying bias. Townhall's bias against the March of Science expressed in the argument that it politicizes science cannot be traced back to negative opinion words; it relies on a comparison between the March for Science and the Women's March, which is portrayed as a political, anti-Trump event. Newsbusters takes a different track: the opening paragraph conveys an overall negative perspective on the March for Science, despite its neutral language, but it achieves this by contrasting general interest in the march with a claimed negative view of the march by many “actual scientists.” On the other hand, the New York Times points to an important and presumably positive outcome of the march, despite its controversiality: a renewed look into the role of science in public life and politics. Like Newsbusters, it lacks any explicit evaluative language and relies on the structural relations between events to convey an overall positive perspective; it contrasts the controversy surrounding the march with a claim that the march has triggered an important discussion, which is in turn buttressed by the reporter's mentioning of the responses of the Times' readership. A formally precise account of interpretive bias will thus require an analysis of histories and their structure and to this end, we exploit Segmented Discourse Representation Theory or SDRT BIBREF2 , BIBREF3 . As the most precise and well-studied formal model of discourse structure and interpretation to date, SDRT enables us to characterize and to compare histories in terms of their structure and content. 
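To make the kind of structure at issue concrete, here is a schematic rendering, as labelled edge lists in Python, of the two Juries' readings of the Sheehan exchange above. It is a sketch only: the relation labels and groupings are our reconstruction from the prose description, and the original graphs are not reproduced here.

# Schematic reconstruction of the two discourse structures described for the
# Sheehan exchange (turns a-f). Relation labels are informal assumptions.

jury_1_reading = [                       # guarded / skeptical Jury
    ("c", "Correction", "a+b"),          # "That wasn't my question" corrects the exchange
    ("d", "Correction", "a+b+c"),        # the repeated sentence read, tentatively, as another correction
    ("e", "Confirmation-Question", "d"), # the reporter tries to pin the correction down
    ("f", "Non-Answer", "e"),            # the repetition read as evading the question
]

jury_2_reading = [                       # charitable / supporting Jury
    ("b", "Indirect-Answer", "a"),
    ("d", "Correction", "c"),            # corrects the reporter's refusal to take b as an answer
    ("f", "Indirect-Answer", "e"),
]

SDRT supplies the formal vocabulary and semantics for structures of this kind.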
But neither SDRT nor any other, extant theoretical or computational approach to discourse interpretation can adequately deal with the inherent subjectivity and interest relativity of interpretation, which our study of bias will illuminate. Message Exchange (ME) Games, a theory of games that builds on SDRT, supplements SDRT with an analysis of the purposes and assumptions that figure in bias. While epistemic game theory in principle can supply an analysis of these assumptions, it lacks linguistic constraints and fails to reflect the basic structure of conversations BIBREF4 . ME games will enable us not only to model the purposes and assumptions behind histories but also to evaluate their complexity and feasibility in terms of the existence of winning strategies. Bias has been studied in cognitive psychology and empirical economics BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF5 , BIBREF13 . Since the seminal work of Kahneman and Tversky and the economist Allais, psychologists and empirical economists have provided valuable insights into cognitive biases in simple decision problems and simple mathematical tasks BIBREF14 . Some of this work, for example the bias of framing effects BIBREF7 , is directly relevant to our theory of interpretive bias. A situation is presented using certain lexical choices that lead to different “frames”: INLINEFORM0 of the people will live if you do INLINEFORM1 (frame 1) versus INLINEFORM2 of the people will die if you do INLINEFORM3 (frame 2). In fact, INLINEFORM4 , the total population in question; so the two consequents of the conditionals are equivalent. Each frame elaborates or “colors” INLINEFORM5 in a way that affects an interpreter's evaluation of INLINEFORM6 . These frames are in effect short histories whose discourse structure explains their coloring effect. Psychologists, empirical economists and statisticians have also investigated cases of cognitive bias in which subjects deviate from prescriptively rational or independently given objective outcomes in quantitative decision making and frequency estimation, even though they arguably have the goal of seeking an optimal or “true” solution. In a general analysis of interpretive bias like ours, however, it is an open question whether there is an objective norm or not, whether it is attainable and, if so, under what conditions, and whether an agent builds a history for attaining that norm or for some other purpose. Organization of the paper Our paper is organized as follows. Section SECREF2 introduces our model of interpretive bias. Section SECREF3 looks forward towards some consequences of our model for learning and interpretation. We then draw some conclusions in Section SECREF4 . A detailed and formal analysis of interpretive bias has important social implications. Questions of bias are not only timely but also pressing for democracies that are having a difficult time dealing with campaigns of disinformation and a society whose information sources are increasingly fragmented and whose biases are often concealed. Understanding linguistic and cognitive mechanisms for bias precisely and algorithmically can yield valuable tools for navigating in an informationally bewildering world. The model of interpretive bias As mentioned in Section SECREF1 , understanding interpretive bias requires two ingredients. First, we need to know what it is to interpret a text or to build a history over a set of facts. 
Our answer comes from analyzing discourse structure and interpretation in SDRT BIBREF2 , BIBREF3 . A history for a text connects its elementary information units, units that convey propositions or describe events, using semantic relations that we call discourse relations to construct a coherent and connected whole. Among such relations are logical, causal, evidential, sequential and resemblance relations as well as relations that link one unit with an elaboration of its content. It has been shown in the literature that discourse structure is an important factor in accurately extracting sentiments and opinions from text BIBREF15 , BIBREF16 , BIBREF17 , and our examples show that this is the case for interpretive bias as well. Epistemic ME games The second ingredient needed to understand interpretive bias is the connection between on the one hand the purpose and assumption behind telling a story and on the other the particular way in which that story is told. A history puts the entities to be understood into a structure that serves certain purposes or conversational goals BIBREF18 . Sometimes the history attempts to get at the “truth”, the true causal and taxonomic structure of a set of events. But a history may also serve other purposes—e.g., to persuade, or to dupe an audience. Over the past five years, BIBREF4 , BIBREF19 , BIBREF20 , BIBREF21 have developed an account of conversational purposes or goals and how they guide strategic reasoning in a framework called Message Exchange (ME) Games. ME games provide a general and formally precise framework for not only the analysis of conversational purposes and conversational strategies, but also for the typology of dialogue games from BIBREF22 and finally for the analysis of strategies for achieving what we would intuitively call “unbiased interpretation”, as we shall see in the next section. In fact in ME Games, conversational goals are analyzed as properties, and hence sets, of conversations; these are the conversations that “go well” for the player. ME games bring together the linguistic analysis of SDRT with a game theoretic approach to strategic reasoning; in an ME game, players alternate making sequences of discourse moves such as those described in SDRT, and a player wins if the conversation constructed belongs to her winning condition, which is a subset of the set of all possible conversational plays. ME games are designed to analyze the interaction between conversational structure, purposes and assumptions, in the absence of assumptions about cooperativity or other cognitive hypotheses, which can cause problems of interpretability in other frameworks BIBREF23 . ME games also assume a Jury that sets the winning conditions and thus evaluates whether the conversational moves made by players or conversationalists are successful or not. The Jury can be one or both of the players themselves or some exogenous body. To define an ME game, we first fix a finite set of players INLINEFORM0 and let INLINEFORM1 range over INLINEFORM2 . For simplicity, we consider here the case where there are only two players, that is INLINEFORM3 , but the notions can be easily lifted to the case where there are more than two players. Here, Player INLINEFORM4 will denote the opponent of Player INLINEFORM5 . We need a vocabulary INLINEFORM6 of moves or actions; these are the discourse moves as defined by the language of SDRT. 
The intuitive idea behind an ME game is that a conversation proceeds in turns where in each turn one of the players `speaks' or plays a string of elements from INLINEFORM7 . In addition, in the case of conversations, it is essential to keep track of “who says what”. To model this, each player INLINEFORM8 was assigned a copy INLINEFORM9 of the vocabulary INLINEFORM10 which is simply given as INLINEFORM11 . As BIBREF4 argues, a conversation may proceed indefinitely, and so conversations correspond to plays of ME games, typically denoted as INLINEFORM12 , which are the union of finite or infinite sequences in INLINEFORM13 , denoted as INLINEFORM14 and INLINEFORM15 respectively. The set of all possible conversations is thus INLINEFORM16 and is denoted as INLINEFORM17 . [ME game BIBREF4 ] A Message Exchange game (ME game), INLINEFORM18 , is a tuple INLINEFORM19 where INLINEFORM20 is a Jury. Due to the ambiguities in language, discourse moves in SDRT are underspecified formulas that may yield more than one fully specified discourse structure or histories for the conversation; a resulting play in an ME game thus forms one or more histories or complete discourse structures for the entire conversation. To make ME games into a truly realistic model of conversation requires taking account of the limited information available to conversational participants. BIBREF0 imported the notion of a type space from epistemic game theory BIBREF24 to take account of this. The type of a player INLINEFORM0 or the Jury is an abstract object that is used to code-up anything and everything about INLINEFORM1 or the Jury, including her behavior, the way she strategizes, her personal biases, etc. BIBREF24 . Let INLINEFORM2 denote the set of strategies for Player INLINEFORM3 in an ME game; let INLINEFORM4 ; and let INLINEFORM5 be the set of strategies of INLINEFORM6 given play INLINEFORM7 . [Harsanyi type space BIBREF24 ] A Harsanyi type space for INLINEFORM8 is a tuple INLINEFORM9 such that INLINEFORM10 and INLINEFORM11 , for each INLINEFORM12 , are non-empty (at-most countable) sets called the Jury-types and INLINEFORM13 -types respectively and INLINEFORM14 and INLINEFORM15 are the beliefs of Player INLINEFORM16 and the Jury respectively at play INLINEFORM17 . BIBREF0 defines the beliefs of the players and Jury using the following functions. [Belief function] For every play INLINEFORM18 the (first order) belief INLINEFORM19 of player INLINEFORM20 at INLINEFORM21 is a pair of measurable functions INLINEFORM22 where INLINEFORM23 is the belief function and INLINEFORM24 is the interpretation function defined as: INLINEFORM25 INLINEFORM26 where INLINEFORM0 is the set of probability distributions over the corresponding set. Similarly the (first order) belief INLINEFORM1 of the Jury is a pair of measurable functions INLINEFORM2 where the belief function INLINEFORM3 and the interpretation function INLINEFORM4 are defined as: INLINEFORM5 INLINEFORM6 Composing INLINEFORM0 and INLINEFORM1 together over their respective outputs reveals a correspondence between interpretations of plays and types for a fixed Jury type INLINEFORM2 : every history yields a distribution over types for the players and every tuple of types for the players and the Jury fixes a distribution over histories. We'll call this the types/history correspondence. An epistemic ME game is an ME game with a Harsanyi type space and a type/history correspondence as we've defined it. 
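Since the definitions above turn on a fair amount of notation, it may help to restate them in explicit symbols; the symbols below are our own choices for readability rather than necessarily those of the original formulation. With $V_i = V \times \{i\}$ the labelled copies of the vocabulary and plays drawn from $(V_0 \cup V_1)^{\ast} \cup (V_0 \cup V_1)^{\omega}$, an ME game can be written as $\mathcal{G} = \langle (V_0 \cup V_1)^{\infty}, \mathcal{J} \rangle$ with $\mathcal{J}$ the Jury, and a Harsanyi type space for $\mathcal{G}$ as $\langle T_{\mathcal{J}}, (T_i)_{i \in N}, (\beta_i^{\rho})_{i \in N}, \beta_{\mathcal{J}}^{\rho} \rangle$, where $T_{\mathcal{J}}$ and the $T_i$ are the at most countable sets of Jury-types and $i$-types and $\beta_i^{\rho} = (b_i^{\rho}, I_i^{\rho})$ pairs a belief function with an interpretation function at the play $\rho$. Roughly, $b_i^{\rho}$ takes a history compatible with $\rho$ to a probability distribution over the other agents' types, and $I_i^{\rho}$ takes a tuple of types to a probability distribution over the histories compatible with $\rho$; composing the two gives the types/history correspondence just described.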
By adding types to an ME game, we provide the beginnings of a game theoretic model of interpretive bias that we believe is completely new. Our definition of bias is now: [Interpretive Bias] An interpretive bias in an epistemic ME game is the probability distribution over types given by the belief function of the conversationalists or players, or the Jury. Note that in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury. Outside of language, statisticians study bias; and sample bias is currently an important topic. To do so, they exploit statistical models with a set of parameters and random variables, which play the role of our types in interpretive bias. But for us, the interpretive process is already well underway once the model, with its constraints, features and explanatory hypotheses, is posited; at least a partial history, or set of histories, has already been created. The ME model in BIBREF0 not only makes histories dependent on biases but also conditionally updates an agent's bias, the probability distribution, given the interpretation of the conversation or more generally a course of events as it has so far unfolded and crucially as the agent has so far interpreted it. This means that certain biases are reinforced as a history develops, and in turn strengthen the probability of histories generated by such biases in virtue of the types/histories correspondence (a schematic sketch of this reinforcement loop is given below). We now turn to an analysis of SECREF3 discussed in BIBREF4 , BIBREF0 where arguably this happens. Generalizing from the case study The Sheehan case study in BIBREF0 shows the interactions of interpretation and probability distributions over types. We will refer to content that exploits assumptions about types as epistemic content. SECREF3 also offers a case of a self-confirming bias with Jury INLINEFORM0 . But the analysis proposed by BIBREF0 leaves open an important question about what types are relevant to constructing a particular history and only examines one of many cases of biased interpretation. In epistemic game models, the relevant types are typically given exogenously, and Harsanyi's type space construction is silent on this question. The question seems a priori very hard to answer, because anything and everything might be relevant to constructing a history. In SECREF3 , the relevant types have to do with the interpreters' or Juries' attitudes towards the commitments of the spokesman and Coleman. These attitudes might reinforce or be a product of other beliefs, like beliefs about the spokesman's political affiliations. But we will put forward the following simplifying hypothesis: Hypothesis 1: epistemic content is based on assumptions about types defined by different attitudes to commitments by the players and/or the Jury to the contents of a discourse move or sequence of discourse moves. Hypothesis 2: These assumptions can be represented as probability distributions over types. In SECREF3 , we've only looked at epistemic content from the point of view of the interpreter, which involves types for the Jury defined in terms of probability distributions over types for the speaker. But we can look at subjective interpretations from the perspective of the speaker as well. In other words, we look at how the speaker might conceptualize the discourse situation, in particular her audience. We illustrate this with another type of content based on types.
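The reinforcement loop mentioned above can be sketched in a few lines of Python; the types, likelihoods and moves below are invented for illustration and stand in for the formal belief functions of the epistemic ME game.

# Minimal sketch of bias as a probability distribution over speaker types,
# conditionalized on each interpreted move. Types, likelihoods and moves are
# invented for illustration only.

SPEAKER_TYPES = ["committed_to_answering", "evading_commitment"]

# P(observed move | speaker type): how expected each move is under each type
LIKELIHOOD = {
    "repeats_same_sentence": {"committed_to_answering": 0.2, "evading_commitment": 0.8},
    "gives_direct_answer":   {"committed_to_answering": 0.9, "evading_commitment": 0.1},
}

def update_bias(prior, move):
    # conditionalize the distribution over types on the interpreted move
    posterior = {t: prior[t] * LIKELIHOOD[move][t] for t in SPEAKER_TYPES}
    total = sum(posterior.values())
    return {t: p / total for t, p in posterior.items()}

bias = {"committed_to_answering": 0.5, "evading_commitment": 0.5}  # a guarded prior
for move in ["repeats_same_sentence"] * 3:
    bias = update_bias(bias, move)
print(bias)  # the 'evading' type now dominates: the bias is self-confirming

Returning to the speaker's perspective, the example below illustrates content that depends instead on the speaker's beliefs about the hearer's type.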
Consider the following move by Marion Le Pen, a leader of the French nationalist, right-wing party le Front National in which she recently said: . La France était la fille aînée de l'église. Elle est en passe de devenir la petite nièce de l'Islam. (France was once the eldest daughter of the Catholic church. It is now becoming the little niece of Islam.) SECREF8 appeals to what the speaker takes to be her intended audience's beliefs about Islam, Catholicism and France. In virtue of these beliefs, this discourse move takes on a loaded racist meaning, conveying an assault on France and its once proud status by people of North African descent. Without those background beliefs, however, Le Pen's statement might merely be considered a somewhat curious description of a recent shift in religious majorities. This is known as a “dog whistle,” in which a discourse move communicates a content other than its grammatically determined content to a particular audience BIBREF25 . While BIBREF26 proposes that such messages are conventional implicatures, BIBREF25 , BIBREF27 show that dog whistle content doesn't behave like other conventional implicatures; in terms of tests about “at issue content”, dog whistle content patterns with other at issue content, not with the content associated with conventional implicatures in the sense of BIBREF28 . This also holds of content that resolves ambiguities as in SECREF3 . The dogwhistle content seems to be driven by the hearer's type in SECREF8 or the speaker's beliefs about the interpreter's or hearer's type. Generalizing from BIBREF29 , the use of the historical expression la fille ainée de l'église contrasted with la petite nièce has come to encode a type, in much the same way that dropping the final g in present participles and gerunds has come to signify a type BIBREF29 , for the speaker INLINEFORM0 about hearer INLINEFORM1 ; e.g., INLINEFORM2 will believe that INLINEFORM3 has the strategy of using just this language to access the loaded interpretation and moreover will identify with its content. Because this meaning comes about in virtue of the hearer's type, the speaker is in a position to plausibly deny that they committed to conveying a racist meaning, which is a feature of such dog whistles. In fact, we might say that all dogwhistle content is so determined. We can complicate the analysis by considering the speaker's types, the interlocutor's types and types for the Jury when these three components of an ME game are distinct (i.e. the Jury is distinct from the interlocutors). A case like this is the Bronston example discussed in BIBREF0 . By looking at dogwhistles, we've now distinguished two kinds of epistemic content that depends on an interpreters' type. The epistemic content may as in SECREF3 fill out the meaning of an underspecified play to produce a determinate history. Dog whistles add content to a specific discourse unit that goes beyond its grammatically determined meaning. More formally, we can define these two kinds of epistemic content using the machinery of ME games. Given that plays in an ME game are sequences of discourse moves, we can appeal to the semantics of these moves and a background consequence relation INLINEFORM0 defined as usual. In addition, a play INLINEFORM1 in an ME game may itself be a fully specified history or a sequence of discourse moves that is compatible with several fully specified histories given a particular interpreter's or Jury's type INLINEFORM2 . 
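The type-dependence at work here can be pictured as a mapping from hearer types to interpreted contents; the type names and glosses in the following sketch are invented for illustration and are not an analysis of the example.

# Sketch of type-dependent interpretation for a dog whistle: the grammatically
# determined content is shared, the loaded content arises only for one type.
# Hearer types and glosses are invented for illustration.

GRAMMATICAL_CONTENT = "France, once the Church's eldest daughter, is becoming Islam's little niece."

def interpret(content, hearer_type):
    if hearer_type == "holds_targeted_background_beliefs":
        return content + " [loaded reading: an assault on France and its status]"
    return content + " [neutral reading: a description of religious demographics]"

for t in ["holds_targeted_background_beliefs", "other_hearer"]:
    print(t, "->", interpret(GRAMMATICAL_CONTENT, t))

# Because the loaded reading is supplied by the hearer's type rather than the
# grammar, the speaker retains deniability about having committed to it.

Making this precise requires fixing, for a given play and type, the set of fully specified histories compatible with them.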
Let INLINEFORM3 be the set of histories (FLFs) compatible with a play INLINEFORM4 given an interpreter or Jury type INLINEFORM5 . INLINEFORM6 will be ambiguous and open to epistemic content supplementation just in case: (i) INLINEFORM7 for any type INLINEFORM8 for a linguistically competent jury, and (ii) there are INLINEFORM9 , such that INLINEFORM10 and INLINEFORM11 are semantically distinct (neither one entails the other). Now suppose that a play INLINEFORM12 gives rise through the grammar to a history, INLINEFORM13 . Then INLINEFORM14 is a dog whistle for INLINEFORM15 just in case: (i) INLINEFORM16 , (ii) INLINEFORM17 and (iii) there is a INLINEFORM18 that can positively affect some jury perhaps distinct from INLINEFORM19 and such that INLINEFORM20 . On this definition, a player who utters such a play INLINEFORM21 always has the excuse that what he/she actually meant was INLINEFORM22 when challenged—which seems to be one essential feature of a dog whistle. Plays with such semantic features may not be a pervasive feature of conversation; not every element is underspecified or is given a content over and above its linguistically determined one. But in interpreting a set of nonlinguistic facts INLINEFORM0 or data not already connected together in a history, that is in constructing a history over INLINEFORM1 , an interpreter INLINEFORM2 , who in this case is a speaker or writer, must appeal to her beliefs, which includes her beliefs about the Jury to whom her discourse actions are directed. So certainly the type of INLINEFORM3 , which includes beliefs about the Jury for the text, is relevant to what history emerges. The facts in INLINEFORM4 don't wear their relational properties to other facts on their sleeves so to speak, and so INLINEFORM5 has to supply the connections to construct the history. In effect for a set of non linguistically given facts, “ambiguities of attachment,” whose specification determines how the facts in INLINEFORM6 are related to each other, are ubiquitous and must be resolved in constructing a history. The speaker or “history creator” INLINEFORM7 's background beliefs determine the play and the history an interpreter INLINEFORM8 takes away. In the case of constructing a history over a set of nonlinguistic facts INLINEFORM0 , the interpreter INLINEFORM1 's task of getting the history INLINEFORM2 has constructed will not reliably succeed unless one of two conditions are met: either INLINEFORM3 and INLINEFORM4 just happen to share the relevant beliefs (have close enough types) so that they construct the same histories from INLINEFORM5 , or INLINEFORM6 uses linguistic devices to signal the history. ME games require winning conversations, and by extension texts, to be (mostly) coherent, which means that the discourse connections between the elements in the history must be largely determined in any successful play, or can be effectively determined by INLINEFORM14 . This means that INLINEFORM15 will usually reveal relevant information about her type through her play, in virtue of the type/history correspondence, enough to reconstruct the history or much of it. In the stories on the March for Science, for example, the reporters evoke very different connections between the march and other facts. The Townhall reporter, for instance, connects the March for Science to the Women's march and “leftwing” political manifestations and manifests a negative attitude toward the March. 
But he does so so unambiguously that little subjective interpretation on the part of the interpreter or Jury is needed to construct the history or assign a high probability to a type for INLINEFORM16 that drives the story. This discussion leads to the following observations. To construct a history over a set of disconnected nonlinguistic facts INLINEFORM0 , in general a Jury needs to exploit linguistic pointers to the connections between elements of INLINEFORM1 , if the speaker is to achieve the goal of imparting a (discourse) coherent story, unless the speaker knows that the Jury or interpreter has detailed knowledge of her type. The speaker may choose to leave certain elements underspecified or ambiguous, or use a specified construction, to invoke epistemic content for a particular type that she is confident the Jury instantiates. How much so depends on her confidence in the type of the Jury. This distribution or confidence level opens a panoply of options about the uses of epistemic content: at one end there are histories constructed from linguistic cues with standard, grammatically encoded meanings; at the other end there are histories generated by a code shared with only a few people whose types are mutually known. As the conversation proceeds as we have seen, probabilities about types are updated and so the model should predict that a speaker may resort to more code-like messages in the face of feedback confirming her hypotheses about the Jury's type (if such feedback can be given) and that the speaker may revert to a more message exploiting grammatical cues in the face of feedback disconfirming her hypotheses about the Jury's type. Thus, the epistemic ME model predicts a possible change in register as the speaker receives more information about the Jury's type, though this change is subject to other conversational goals coded in the speaker's victory condition for the ME game. ME persuasion games We've now seen how histories in ME games bring an interpretive bias, the bias of the history's creator, to the understanding of a certain set of facts. We've also seen how epistemic ME games allow for the introduction of epistemic content in the interpretation of plays. Each such epistemic interpretation is an instance of a bias that goes beyond the grammatically determined meaning of the play and is dependent upon the Jury's or interpreter's type. We now make explicit another crucial component of ME games and their relation to bias: the players' winning conditions or discourse goals. Why is this relevant to a study of bias? The short answer is that players' goals tells us whether two players' biases on a certain subject are compatible or resolvable or not. Imagine that our two Juries in SECREF3 shared the same goal—of getting at the truth behind the Senator's refusal to comment about the suits. They might still have come up with the opposing interpretations that they did in our discussion above. But they could have discussed their differences, and eventually would have come to agreement, as we show below in Proposition SECREF19 . However, our two Juries might have different purposes too. One Jury might have the purpose of finding out about the suits, like the reporters; the other might have the purpose just to see Senator Coleman defended, a potentially quite different winning condition and collection of histories. In so doing we would identify Jury 1 with the reporters or at least Rachel, and Jury 2 with Sheehan. 
Such different discourse purposes have to be taken into account in attempting to make a distinction between good and bad biases. From the perspective of subjective rationality or rationalizability (an important criterion in epistemic game theory BIBREF33 ), good biases for a particular conversation should be those that lead to histories in the winning condition, histories that fulfill the discourse purpose; bad biases lead to histories that do not achieve the winning condition. The goals that a Jury or interpreter INLINEFORM0 adopts and her biases go together; INLINEFORM1 's interpretive bias is good for speaker INLINEFORM2 , if it helps INLINEFORM3 achieve her winning condition. Hence, INLINEFORM4 's beliefs about INLINEFORM5 are crucial to her success and rationalizable behavior. Based on those beliefs INLINEFORM6 's behavior is rationalizable in the sense we have just discussed. If she believes Jury 2 is the one whose winning condition she should satisfy, there is no reason for her to change that behavior. Furthermore, suppose Jury 1 and Jury 2 discuss their evaluations; given that they have different goals, there is no reason for them to come to an agreement with the other's point of view either. Both interpretations are rationalizable as well, if the respective Juries have the goals they do above. A similar story applies to constructing histories over a set of facts, in so far as they had different conceptions of winning conditions set by their respective Juries. In contrast to Aumann's dictum BIBREF32 , in our scenario there is every reason to agree to disagree! Understanding such discourse goals is crucial to understanding bias for at least two reasons. The first is that together with the types that are conventionally coded in discourse moves, they fix the space of relevant types. In SECREF3 , Jury 1 is sensitive to a winning condition in which the truth about the suits is revealed, what we call a truth oriented goal. The goal of Jury 2, on the other hand, is to see that Coleman is successfully defended, what we call a persuasion goal. In fact, we show below that a truth oriented goal is a kind of persuasion goal. Crucial to the accomplishment of either of these goals is for the Jury INLINEFORM0 to decide whether the speaker INLINEFORM1 is committing to a definite answer that she will defend (or better yet an answer that she believes) on a given move to a question from her interlocutor or is INLINEFORM2 trying to avoid any such commitments. If it's the latter, then INLINEFORM3 would be epistemically rash to be persuaded. But the two possibilities are just the two types for Sheehan that are relevant to the interpretation of the ambiguous moves in SECREF3 . Because persuasive goals are almost ubiquitous at least as parts of speaker goals, not only in conversation but also for texts (think of how the reporters in the examples on the March for Science are seeking to convince us of a particular view of the event), we claim that these two types are relevant to the interpretation of many, if not all, conversations. In general we conjecture that the relevant types for interpretation may all rely on epistemic requirements for meeting various kinds of conversational goals. The second reason that discourse goals are key to understanding bias is that by analyzing persuasion goals in more detail we get to the heart of what bias is. 
Imagine a kind of ME game played between two players, E(loïse) and A(belard), where E proposes and tries to defend a particular interpretation of some set of facts INLINEFORM0 , and A tries to show the interpretation is incorrect, misguided, based on prejudice or whatever will convince the Jury to be dissuaded from adopting E's interpretation of INLINEFORM1 . As in all ME games, E's victory condition in an ME persuasion game is a set of histories determined by the Jury, but it crucially depends on E's and A's beliefs about the Jury: E has to provide a history INLINEFORM2 over INLINEFORM3 ; A has to attack that history in ways that accord with her beliefs about the Jury; and E has to defend INLINEFORM4 in ways that will, given her beliefs, dispose the Jury favorably to it. An ME persuasion game is one where E and A each present elements of INLINEFORM0 and may also make argumentative or attack moves in their conversation. At each turn of the game, A can argue about the history constructed by E over the facts given so far, challenge it with new facts or attack its assumptions, with the result that E may rethink and redo portions of her history over INLINEFORM1 (though not abandon the original history entirely) in order to render A's attack moot. E wins if the history she finally settles on for the facts in INLINEFORM2 allows her to rebut every attack by A; A wins otherwise. A reasonable precisification of this victory condition is that the proportion of good unanswered attacks on the latest version of E's history, relative to the total number of attacks, at some point begins to diminish and eventually goes to 0 (the sketch following this passage shows one way to check this condition on finite prefixes). This is a sort of limit condition: if we think of the initial segments INLINEFORM3 of E's play as producing an “initial” history INLINEFORM4 over INLINEFORM5 , as INLINEFORM6 , INLINEFORM7 has no unanswered counterattacks by A that affect the Jury. Such winning histories are extremely difficult to construct; as one can see from inspection, no finite segment of an infinite play guarantees such a winning condition. We shall call a history segment that is part of a history in INLINEFORM8 's winning condition, as we have just characterized it, E-defensible. The notion of an ME persuasion game opens the door to a study of attacks, a study that can draw on work in argumentation and game theory BIBREF34 , BIBREF35 , BIBREF36 . ME games and ME persuasion games in particular go beyond the work just cited, however, because our notion of an effective attack involves the type of the Jury as a crucial parameter; the effectiveness of an attack for a Jury relies on its prejudices, technically its priors about the game's players' types (and hence their beliefs and motives). For instance, an uncovering of an agent's racist bias when confronted with a dog whistle like that in SECREF8 is an effective attack technique if the respondent's type for the Jury is such that it is sensitive to such accusations, while it will fail if the Jury is insensitive to such accusations. ME games make plain the importance in a persuasion game of accurately gauging the beliefs of the Jury! ME truth games We now turn to a special kind of ME persuasion game with what we call a disinterested Jury. The intuition behind a disinterested Jury is simple: such a Jury judges the persuasion game based only on the public commitments that follow from the discourse moves that the players make. It is not predisposed to either player in the game.
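The limit condition above can only be approximated on finite prefixes of a play; the following sketch shows one such check. The attack-record format is invented for illustration.

# Finite-prefix check of the persuasion-game winning condition sketched above:
# the proportion of good attacks left unanswered by E should shrink toward 0.
# No finite segment of an infinite play can guarantee the limit condition.

def unanswered_ratio(attacks):
    # attacks: list of dicts with boolean fields 'good' and 'answered'
    good = [a for a in attacks if a["good"]]
    if not good:
        return 0.0
    return sum(1 for a in good if not a["answered"]) / len(good)

def looks_E_defensible(prefixes, tolerance=0.05):
    # heuristic: the ratio is non-increasing over successive prefixes and ends
    # below the tolerance
    ratios = [unanswered_ratio(p) for p in prefixes]
    non_increasing = all(r2 <= r1 for r1, r2 in zip(ratios, ratios[1:]))
    return non_increasing and ratios[-1] <= tolerance

prefixes = [
    [{"good": True, "answered": False}],
    [{"good": True, "answered": True}, {"good": True, "answered": False}],
    [{"good": True, "answered": True}, {"good": True, "answered": True}],
]
print(looks_E_defensible(prefixes))  # -> True

A disinterested Jury, in particular, is one that scores such attacks purely on their merits.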
While it is difficult to define such a disinterested Jury in terms of its credences, its probability distribution over types, we can establish some necessary conditions. We first define the notion of the dual of a play of an ME game. Let INLINEFORM0 be an element of the labeled vocabulary with player INLINEFORM1 . Define its dual as: INLINEFORM2 The dual of a play INLINEFORM0 then is simply the lifting of this operator over the entire sequence of INLINEFORM1 . That is, if INLINEFORM2 , where INLINEFORM3 , then INLINEFORM4 . Then, a disinterested Jury must necessarily satisfy: Indifference towards player identity: A Jury INLINEFORM0 is unbiased only if for every INLINEFORM1 , INLINEFORM2 iff INLINEFORM3 . Symmetry of prior belief: A Jury is unbiased only if it has symmetrical prior beliefs about the player types. (The sketch below renders these two conditions schematically.) Clearly, the Jury INLINEFORM0 does not have symmetrical prior beliefs nor is it indifferent to player identity, while Jury INLINEFORM1 arguably has symmetrical beliefs about the participants in SECREF3 . Note also that while Symmetry of prior beliefs is satisfied by a uniform distribution over all types, it does not entail such a uniform distribution. Symmetry is closely related to the principle of maximum entropy used in fields as diverse as physics and computational linguistics BIBREF37 , according to which, in the absence of any information about the players, one should adopt a uniform probability distribution over types. A disinterested Jury should evaluate a conversation based solely on the strength of the points put forth by the participants. But also crucially it should evaluate the conversation in light of the right points. So for instance, ad hominem attacks by A or colorful insults should not sway the Jury in favor of A. It should evaluate only based on how the points brought forward affect its credences under conditionalization. A disinterested Jury is impressed only by certain attacks from A, ones based on evidence (E's claims aren't supported by the facts) and on formal properties of coherence, consistency and explanatory or predictive power. In such a game it is common knowledge that attacks based on information about E's type that is not relevant either to the evidential support or formal properties of her history are ignored by the Jury, and the participants know this. The same goes for E: counterattacks by her on A that are not based on evidence or on the formal properties mentioned above are likewise ignored. BIBREF4 discusses the formal properties of coherence and consistency in detail, and we say more about explanatory and predictive power below. The evidential criterion, however, is also particularly important, and it is one that a disinterested Jury must attend to. Luckily for us, formal epistemologists have formulated constraints like cognitive skill and safety or anti-luck on beliefs that are relevant to characterizing this evidential criterion BIBREF38 , BIBREF39 . Cognitive skill is a factor that affects the success (accuracy) of an agent's beliefs: the success of an agent's beliefs is the result of her cognitive skill, exactly to the extent that the reasoning process that produces them makes evidential factors (how weighty, specific, misleading, etc., the agent's evidence is) comparatively important for explaining that success, and makes non-evidential factors comparatively unimportant. In addition, we will require that the relevant evidential factors are those that have been demonstrated to be effective in the relevant areas of inquiry.
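One way to render the dual operation and the indifference condition schematically is the following sketch; the representation of moves as (player, move) pairs and the example Jury predicate are assumptions made for illustration.

# Dual of a play and the indifference-to-player-identity check. A play is a
# sequence of (player, move) pairs with players 0 and 1; this representation
# and the example Jury predicate are assumptions.

def dual_move(labelled_move):
    player, move = labelled_move
    return (1 - player, move)          # same move, attributed to the opponent

def dual_play(play):
    return [dual_move(m) for m in play]

def indifferent_to_player_identity(jury_accepts, plays):
    # the Jury passes only if it accepts a play exactly when it accepts its dual
    return all(jury_accepts(p) == jury_accepts(dual_play(p)) for p in plays)

# A Jury that accepts a play only when player 0 has the last word fails the check:
partial_jury = lambda play: play[-1][0] == 0
print(indifferent_to_player_identity(partial_jury, [[(0, "claim"), (1, "attack")]]))  # -> False

These structural conditions sit alongside the evidential ones: cognitive skill and safety constrain which attacks may move a disinterested Jury at all.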
So if a Jury measures the success of a persuasion game in virtue of a criterion of cognitive ability on the part of the participants and this is common knowledge among the participants (something we will assume throughout here), then, for instance, A's attacks have to be about the particular evidence adduced to support E's history, the way it was collected or verifiable errors in measurements, etc.; general skeptical claims are precluded from counting as credible attacks in such a game. These epistemic components thus engender more relevant types for interpretation: are the players using cognitive skill and anti-luck conditions or not? More particularly, most climate skeptics' attacks on climate change science, using general doubts about the evidence without using any credible scientific criteria attacking specific evidential bases, would consequently be ruled irrelevant in virtue of a property like cognitive skill. But this criterion may also affect the Jury's interpretation of the conversation. A Jury whose beliefs are constrained by cognitive ability will adjust its beliefs about player types and about interpretation only in the light of relevant evidential factors. Safety is a feature of beliefs that says that conditionalizing on circumstances that could have been otherwise without one's evidence changing should not affect the strength of one's beliefs. Safety rules out belief profiles in which luck or mere hunches play a role. The notion of a disinterested jury is formally a complicated one. Consider an interpretation of a conversation between two players E and A. Bias can be understood as a sort of modal operator over an agent's first order and higher order beliefs. So a disinterested Jury in an ME game means that neither its beliefs about A nor about E involve an interested bias; nor do its beliefs about A's beliefs about E's beliefs or E's beliefs about A's beliefs about E's beliefs, and so on up the epistemic hierarchy. Thus, a disinterested Jury in this setting involves an infinitary conjunction of modal statements, which is intuitively (and mathematically) a complex condition on beliefs. And since this disinterestedness must be common knowledge amongst the players, E and A have equally complex beliefs. We are interested in ME persuasion games in which the truth may emerge. Is an ME persuasion game with a disinterested Jury sufficient to ensure such an outcome? No: there may be a fatal flaw in E's history that INLINEFORM0 does not uncover and that the Jury does not see. We have to suppose certain abilities on the part of INLINEFORM1 and/or the Jury—namely, that if E has covered up some evidence, falsely constructed evidence or introduced an inconsistency in her history, eventually A will uncover it. Further, if there is an unexplained leap, an incoherence in the history, then INLINEFORM2 will eventually find it. Endowing INLINEFORM3 with such capacities would suffice to ensure that a history in E's winning condition is the best possible approximation to the truth, a sort of Peircean ideal. Even if we assume only that INLINEFORM4 is a competent and skilled practitioner of her art, we have something like a good approximation of the truth for any history in E's winning condition. We call a persuasion game with such a disinterested Jury and such a winning condition for INLINEFORM5 an ME truth game. In an ME truth game, a player or a Jury may not be completely disinterested because of skewed priors.
But she may still be interested in finding out the truth and thus adjusting her priors in the face of evidence. We put some constraints on the revision of beliefs of a truth interested player. Suppose such a player INLINEFORM0 has a prior INLINEFORM1 on INLINEFORM2 such that INLINEFORM5 , but in a play INLINEFORM6 of an ME truth game it is revealed that INLINEFORM7 has no confirming evidence for INLINEFORM8 that the opponent INLINEFORM9 cannot attack without convincing rebuttal. Then a truth interested player INLINEFORM10 should update her beliefs INLINEFORM11 after INLINEFORM12 so that INLINEFORM13 . On the other hand, if INLINEFORM14 cannot rebut the confirming evidence that INLINEFORM15 has for INLINEFORM16 , then INLINEFORM17 . Where INLINEFORM18 is infinite, we put a condition on the prefixes INLINEFORM19 of INLINEFORM20 : INLINEFORM21 . Given our concepts of truth interested players and an ME truth game, we can show the following. If the two players of a 2 history ME truth game INLINEFORM22 , have access to all the facts in INLINEFORM23 , and are truth interested but have incompatible histories for INLINEFORM24 based on distinct priors, they will eventually agree to a common history for INLINEFORM25 . To prove this, we note that our players will note the disagreement and try to overcome it since they have a common interest, in the truth about INLINEFORM26 . Then it suffices to look at two cases: in case one, one player INLINEFORM27 converges to the INLINEFORM28 's beliefs in the ME game because INLINEFORM29 successfully attacks the grounds on which INLINEFORM30 's incompatible interpretation is based; in case two, neither INLINEFORM31 nor INLINEFORM32 is revealed to have good evidential grounds for their conflicting beliefs and so they converge to common revised beliefs that assign an equal probability to the prior beliefs that were in conflict. Note that the difference with BIBREF32 is that we need to assume that players interested in the truth conditionalize upon outcomes of discussion in an ME game in the same way. Players who do not do this need not ever agree. There are interesting variants of an ME truth game where one has to do with approximations. ME truth games are infinitary games, in which getting a winning history is something E may or may not achieve in the limit. But typically we want the right, or “good enough” interpretation sooner rather than later. We can also appeal to discounted ME games developed in BIBREF21 , in which the scores are assigned to individual discourse moves in context which diminish as the game progresses, to investigate cases where getting things right, or right enough, early on in an ME truth game is crucial. In another variant of an ME truth game, which we call a 2-history ME truth game, we pit two biases one for E and one for A, and the two competing histories they engender, about a set of facts against each other. Note that such a game is not necessarily win-lose as is the original ME truth game, because neither history the conversationalists develop and defend may satisfy the disinterested Jury. That is, both E and A may lose in such a game. Is it also possible that they both win? Can both E and A revise their histories so that their opponents have in the end no telling attacks against their histories? We think not at least in the case where the histories make or entail contradictory claims: in such a case they should both lose because they cannot defeat the opposing possibility. 
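Stated with explicit symbols (ours, not necessarily the original ones), the revision constraints above read as follows: let $P$ be a truth-interested player's prior and suppose $P(\theta)$ is high for some claim $\theta$. If a play $\rho$ of an ME truth game reveals that the player has no confirming evidence for $\theta$ that her opponent cannot attack without convincing rebuttal, the revised belief $P_{\rho}$ should satisfy $P_{\rho}(\theta) < P(\theta)$; if instead the opponent cannot rebut the confirming evidence, then $P_{\rho}(\theta) \ge P(\theta)$. For an infinite play, the same requirement is imposed on the beliefs revised after its finite prefixes $\rho_n$.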
Suppose INLINEFORM0 wants to win an ME truth game and to construct a truthful history. Let's assume that the set of facts INLINEFORM1 over which the history is constructed is finite. What should she do? Is it possible for her to win? How hard is it for her to win? Does INLINEFORM2 have a winning strategy? As an ME truth game is win-lose, if the winning condition is Borel definable, it will be determined BIBREF4 ; either INLINEFORM3 has a winning strategy or INLINEFORM4 does. Whether INLINEFORM5 has a winning strategy or not is important: if she does, there is a method for finding an optimal history in the winning set; if she doesn't, an optimal history from the point of view of a truth-seeking goal in the ME truth game is not always attainable. To construct a history from ambiguous signals for a history over INLINEFORM0 , the interpreter must rely on her beliefs about the situation and her interlocutors to estimate the right history. So the question of getting at truthful interpretations of histories depends at least in part on the right answer to the question, what are the right beliefs about the situation and the participants that should be invoked in interpretation? Given that beliefs are probabilistic, the space of possible beliefs is vast. The right set of beliefs will typically form a very small set with respect to the set of all possible beliefs about a typical conversational setting. Assuming that one will be in such a position “by default” without any further argumentation is highly implausible, as a simple measure theoretic argument ensures that the set of possible interpretations are almost always biased away from a winning history in an ME truth game. What is needed for E-defensibility and a winning strategy in an ME truth game? BIBREF4 argued that consistency and coherence (roughly, the elements of the history have to be semantically connected in relevant ways BIBREF3 ) are necessary conditions on all winning conditions and would thus apply to such histories. A necessary additional property is completeness, an accounting of all or sufficiently many of the facts the history is claimed to cover. We've also mentioned the care that has to be paid to the evidence and how it supports the history. Finally, it became apparent when we considered a variant of an ME truth game in which two competing histories were pitted against each other that a winning condition for each player is that they must be able to defeat the opposing view or at least cast doubt on it. More particularly, truth seeking biases should provide predictive and explanatory power, which are difficult to define. But we offer the following encoding of predictiveness and explanatory power as constraints on continuations of a given history in an ME truth game. [Predictiveness] A history INLINEFORM0 developed in an ME game for a set of facts INLINEFORM1 is predictive just in case when INLINEFORM2 is presented with a set of facts INLINEFORM3 relevantly similar to INLINEFORM4 , INLINEFORM5 implies a E-defensible extension INLINEFORM6 of INLINEFORM7 to all the facts in INLINEFORM8 . A similar definition can be given for the explanatory power of a history. Does INLINEFORM0 have a strategy for constructing a truthful history that can guarantee all of these things? Well, if the facts INLINEFORM1 it is supposed to relate are sufficiently simple or sufficiently unambiguous in the sense that they determine just one history and E is effectively able to build and defend such a history, then yes she does. 
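The Predictiveness condition can likewise be written out explicitly (again with our own symbols): a history $h$ built in an ME game for a set of facts $F$ is predictive just in case, whenever E is presented with a set of facts $F^{\prime}$ relevantly similar to $F$, $h$ implies an E-defensible extension $h^{\prime}$ of $h$ covering all the facts in $F^{\prime}$. Whether E can actually supply such an extension depends, again, on how simple and unambiguous the underlying facts are.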
So very simple cases like establishing whether your daughter has a snack for after school in the morning or not are easy to determine, and the history is equally simple, once you have the right evidence: yes she has a snack, or no she doesn't. A text which is unambiguous similarly determines only one history, and linguistic competence should suffice to determine what that history is. On the other hand, it is also possible that INLINEFORM2 may determine the right history INLINEFORM3 from a play INLINEFORM4 when INLINEFORM5 depends on the type of the relevant players of INLINEFORM6 . For INLINEFORM7 can have a true “type” for the players relevant to INLINEFORM8 . In general whether or not a player has a winning strategy will depend on the structure of the optimal history targeted, as well as on the resources and constraints on the players in an ME truth game. In the more general case, however, whether INLINEFORM0 has a winning strategy in an ME truth game becomes non-trivial. At least in a relative sort of way, E can construct a model satisfying her putative history at each stage to show consistency (relative to ZF or some other background theory); coherence can be verified by inspection over the finite discourse graph of the relevant history at each stage and ensuing attacks. Finally, completeness and evidential support can be guaranteed at each stage in the history's construction, if E has the right sort of beliefs. If all this can be guaranteed at each stage, von Neumann's minimax theorem or its extension in BIBREF40 guarantees that E has a winning strategy for E-defensibility. In future work, we plan to analyze in detail some complicated examples like the ongoing debate about climate change, where there is large-scale scientific agreement but where disagreement exists because of distinct winning conditions. Looking ahead An ME truth game suggests a certain notion of truth: the truth is a winning history in an ME persuasion game with a disinterested Jury. This is a Peircean “best attainable” approximation of the truth, an “internal” notion of truth based on consistency, coherence with the available evidence and explanatory and predictive power. But we could also investigate a more external view of truth. Such a view would suppose that the Jury has in its possession the “true” history over a set of facts INLINEFORM0 , to which the history eventually constructed by E should converge within a certain margin of error in the limit. We think ME games are a promising tool for investigating bias, and in this section we mention some possible applications and open questions that ME games might help us answer. ME truth games allow us to analyze extant strategies for eliminating bias. For instance, given two histories for a given set of facts, it is a common opinion that one finds a less biased history by splitting the difference between them. This is a strategy perhaps distantly inspired by the idea that the truth lies in the golden mean between extremes. But is this really true? ME games should allow us to encode this strategy and find out. Another connection that our approach can exploit is the one between games and reinforcement learning BIBREF44 , BIBREF45 , BIBREF46 .
While reinforcement learning is traditionally understood as a problem involving a single agent, and is not powerful enough to capture the dynamics of competing biases of agents with different winning conditions, there is a direct connection made in BIBREF45 between evolutionary games with replicator dynamics and the stochastic learning theory of BIBREF47 , with links to multiagent reinforcement learning. BIBREF44 , BIBREF46 provide a foundation for multiagent reinforcement learning in stochastic games. The connection between ME games and stochastic and evolutionary games has not been explored, but some victory conditions in ME games can be an objective that a replicator dynamics converges to, and epistemic ME games already encompass a stochastic component. Thus, our research will be able to draw on relevant results in these areas. A typical assumption we make as scientists is that rationality would lead us to always prefer to have a more complete and more accurate history for our world. But bias isn't so simple, as an analysis of ME games can show. ME games are played for many purposes, and non-truth-seeking biases that lead to histories that are not a best approximation to the truth may be the rational or optimal choice, if the winning condition in the game is other than that defined in an ME truth game. This has real political and social relevance; for example, a plausible hypothesis is that those who argue that climate change is a hoax are building an alternative history, not to get at the truth but for other political purposes. Even a truth-interested player can at least initially fail to generate histories that are in the winning condition of an ME truth game. Suppose E, motivated by truth interest, has constructed for facts INLINEFORM0 a history INLINEFORM1 that meets constraints including coherence, consistency, and completeness, and that provides explanatory and predictive power for at least a large subset INLINEFORM2 of INLINEFORM3 . E's conceptualization of INLINEFORM4 can still go wrong, and E may fail to have a winning strategy in interesting ways. First, INLINEFORM5 can mischaracterize INLINEFORM6 with high confidence in virtue of evidence only from INLINEFORM7 BIBREF48 ; especially if INLINEFORM8 is large and hence INLINEFORM9 is simply very “long”, it is intuitively more difficult even for truth-seeking players to come to accept that an alternative history is the correct one. Second, INLINEFORM10 may lack or be incompatible with concepts that would be needed to be aware of facts in INLINEFORM11 . BIBREF55 , BIBREF23 investigate a special case of this, a case of unawareness. To succeed, E would have to learn the requisite concepts first. All of this has important implications for learning. We can represent learning in terms of ME games as follows. It is common to represent making a prediction Y from data X as a zero-sum game between our player E and Nature: E wins if, for data X provided by Nature, E makes a prediction that the Jury judges to be correct. More generally, an iterated learning process is a repeated zero-sum game, in which E makes predictions in virtue of some history, which one might also call a model or a set of hypotheses; if she makes a correct prediction at round n, she reinforces her belief in her current history; if she makes a wrong prediction, she adjusts it. The winning condition may be defined in terms of some function of the scores at each learning round or in terms of some global convergence property.
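The following schematic sketch (again our own illustration; the candidate models and the reinforcement scheme are hypothetical placeholders) renders this picture of learning as a repeated zero-sum game against Nature, in which E strengthens or weakens her confidence in her current history after each prediction:

```python
# Learning as a repeated prediction game: E predicts from her currently favoured
# history (model), the outcome is scored, and her confidence is reinforced or adjusted.
def same_as_last(seq):
    return seq[-1]

def add_fixed_step(seq):
    return seq[-1] + (seq[-1] - seq[-2])

models = {"repeat": same_as_last, "linear": add_fixed_step}
confidence = {name: 1.0 for name in models}     # E's current degrees of belief

def learning_round(observed, actual_next):
    favoured = max(confidence, key=confidence.get)
    prediction = models[favoured](observed)
    if prediction == actual_next:
        confidence[favoured] *= 1.5             # correct: reinforce the current history
    else:
        confidence[favoured] *= 0.5             # wrong: shift weight away from it
    return favoured, prediction

nature = [2, 4, 6, 8, 10, 12]                   # data supplied by Nature
for i in range(2, len(nature) - 1):
    print(learning_round(nature[:i + 1], nature[i + 1]), dict(confidence))
```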
Learning conceived in this way is a variant of a simple ME truth game in which costs are assigned to individual discourse moves as in discounted ME games. In an ME truth game, where E develops a history INLINEFORM0 over a set of facts INLINEFORM1 while A argues for an alternative history INLINEFORM2 over INLINEFORM3 , A can successfully defend history INLINEFORM4 as long as either the true history INLINEFORM5 is (a) not learnable or (b) not uniquely learnable. In case (a), E cannot convince the Jury that INLINEFORM6 is the right history; in case (b) A can justify INLINEFORM7 as an alternative interpretation. Consider the bias of a hardened climate change skeptic: the ME model predicts that simply presenting new facts to the agent will not induce him to change his history, even if to a disinterested Jury his history is clearly not in his winning condition. He may either simply refuse to be convinced because he is not truth interested, or because he thinks his alternative history INLINEFORM8 can explain all of the data in INLINEFORM9 just as well as E's climate science history INLINEFORM10 . Thus, ME games open up an unexplored research area of unlearnable histories for certain agents. Conclusions In this paper, we have put forward the foundations of a formal model of interpretive bias. Our approach differs from philosophical and AI work on dialogue that links dialogue understanding to the recovery of speaker intentions and beliefs BIBREF56 , BIBREF57 . Studies of multimodal interactions in Human Robot Interaction (HRI) have also followed the Gricean tradition BIBREF58 , BIBREF59 , BIBREF60 . BIBREF61 , BIBREF4 , BIBREF62 ), offer many reasons why a Gricean program for dialogue understanding is difficult for dialogues in which there is not a shared task and a strong notion of co-operativity. Our model is not in the business of intention and belief recovery, but rather works from what contents agents explicitly commit to with their actions, linguistic and otherwise, to determine a rational reconstruction of an underlying interpretive bias and what goals a bias would satisfy. In this we also go beyond what current theories of discourse structure like SDRT can accomplish. Our theoretical work also requires an empirical component on exactly how bias is manifested to be complete. This has links to the recent interest in fake news. Modeling interpretive bias can help in detecting fake news by providing relevant types to check in interpretation and by providing an epistemic foundation for fake news detection by exploiting ME truth games where one can draw from various sources to check the credibility of a story. In a future paper, we intend to investigate these connections thoroughly. References Asher, N., Lascarides, A.: Strategic conversation. Semantics and Pragmatics 6(2), http:// dx.doi.org/10.3765/sp.6.2. (2013) Asher, N., Paul, S.: Evaluating conversational success: Weighted message exchange games. In: Hunter, J., Simons, M., Stone, M. (eds.) 20th workshop on the semantics and pragmatics of dialogue (SEMDIAL). New Jersey, USA (July 2016) Asher, N.: Reference to Abstract Objects in Discourse. Kluwer Academic Publishers (1993) Asher, N., Lascarides, A.: Logics of Conversation. Cambridge University Press (2003) Asher, N., Paul, S.: Conversations and incomplete knowledge. In: Proceedings of Semdial Conference. pp. 173–176. Amsterdam (December 2013) Asher, N., Paul, S.: Conversation and games. In: Ghosh, S., Prasad, S. (eds.) 
Logic and Its Applications: 7th Indian Conference, ICLA 2017, Kanpur, India, January 5-7, 2017, Proceedings. vol. 10119, pp. 1–18. Springer, Kanpur, India (January 2017) Asher, N., Paul, S.: Strategic conversation under imperfect information: epistemic Message Exchange games (2017), accepted for publication in Journal of Logic, Language and Information Asher, N., Paul, S., Venant, A.: Message exchange games in strategic conversations. Journal of Philosophical Logic 46.4, 355–404 (2017), http://dx.doi.org/10.1007/s10992-016-9402-1 Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Machine learning 47(2-3), 235–256 (2002) Aumann, R.J.: Agreeing to disagree. The Annals of Statistics 4(6), 1236–1239 (1976) Banks, J.S., Sundaram, R.K.: Switching costs and the gittins index. Econometrica: Journal of the Econometric Society pp. 687–694 (1994) Baron, J.: Thinking and deciding. Cambridge University Press (2000) Battigalli, P.: Rationalizability in infinite, dynamic games with incomplete information. Research in Economics 57(1), 1–38 (2003) Berger, A.L., Pietra, V.J.D., Pietra, S.A.D.: A maximum entropy approach to natural language processing. Computational linguistics 22(1), 39–71 (1996) Besnard, P., Hunter, A.: Elements of argumentation, vol. 47. MIT press Cambridge (2008) Blackwell, D.: An analog of the minimax theorem for vector payoffs. Pacific Journal of Mathematics 6(1), 1–8 (1956) Börgers, T., Sarin, R.: Learning through reinforcement and replicator dynamics. Journal of Economic Theory 77(1), 1–14 (1997) Burnetas, A.N., Katehakis, M.N.: Optimal adaptive policies for markov decision processes. Mathematics of Operations Research 22(1), 222–255 (1997) Burnett, H.: Sociolinguistic interaction and identity construction: The view from game-theoretic pragmatics. Journal of Sociolinguistics 21(2), 238–271 (2017) Bush, R.R., Mosteller, F.: Stochastic models for learning. John Wiley & Sons, Inc. (1955) Cadilhac, A., Asher, N., Benamara, F., Lascarides, A.: Commitments to preferences in dialogue. In: Proceedings of the 12th Annual SIGDIAL Meeting on Discourse and Dialogue. pp. 204–215 (2011) Cadilhac, A., Asher, N., Benamara, F., Lascarides, A.: Grounding strategic conversation: Using negotiation dialogues to predict trades in a win-lose game. In: Proceedings of EMNLP. pp. 357–368. Seattle (2013) Cadilhac, A., Asher, N., Benamara, F., Popescu, V., Seck, M.: Preference extraction form negotiation dialogues. In: Biennial European Conference on Artificial Intelligence (ECAI) (2012) Chambers, N., Allen, J., Galescu, L., Jung, H.: A dialogue-based approach to multi-robot team control. In: The 3rd International Multi-Robot Systems Workshop. Washington, DC (2005) Dung, P.M.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial intelligence 77(2), 321–357 (1995) Erev, I., Wallsten, T.S., Budescu, D.V.: Simultaneous over-and underconfidence: The role of error in judgment processes. Psychological review 101(3), 519 (1994) Foster, M.E., Petrick, R.P.A.: Planning for social interaction with sensor uncertainty. In: The ICAPS 2014 Scheduling and Planning Applications Workshop (SPARK). pp. 19–20. Portsmouth, New Hampshire, USA (Jun 2014) Garivier, A., Cappé, O.: The kl-ucb algorithm for bounded stochastic bandits and beyond. In: COLT. pp. 359–376 (2011) Glazer, J., Rubinstein, A.: On optimal rules of persuasion. Econometrica 72(6), 119–123 (2004) Grice, H.P.: Utterer's meaning and intentions. 
Philosophical Review 68(2), 147–177 (1969) Grice, H.P.: Logic and conversation. In: Cole, P., Morgan, J.L. (eds.) Syntax and Semantics Volume 3: Speech Acts, pp. 41–58. Academic Press (1975) Grosz, B., Sidner, C.: Attention, intentions and the structure of discourse. Computational Linguistics 12, 175–204 (1986) Harsanyi, J.C.: Games with incomplete information played by “bayesian” players, parts i-iii. Management science 14, 159–182 (1967) Henderson, R., McCready, E.: Dogwhistles and the at-issue/non-at-issue distinction. Published on Semantics Archive (2017) Hilbert, M.: Toward a synthesis of cognitive biases: how noisy information processing can bias human decision making. Psychological bulletin 138(2), 211 (2012) Hintzman, D.L.: Minerva 2: A simulation model of human memory. Behavior Research Methods, Instruments, & Computers 16(2), 96–101 (1984) Hintzman, D.L.: Judgments of frequency and recognition memory in a multiple-trace memory model. Psychological review 95(4), 528 (1988) Hu, J., Wellman, M.P.: Multiagent reinforcement learning: theoretical framework and an algorithm. In: ICML. vol. 98, pp. 242–250 (1998) Hunter, J., Asher, N., Lascarides, A.: Situated conversation (2017), submitted to Semantics and Pragmatics Khoo, J.: Code words in political discourse. Philosophical Topics 45(2), 33–64 (2017) Konek, J.: Probabilistic knowledge and cognitive ability. Philosophical Review 125(4), 509–587 (2016) Lai, T.L., Robbins, H.: Asymptotically efficient adaptive allocation rules. Advances in applied mathematics 6(1), 4–22 (1985) Lakkaraju, H., Kamar, E., Caruana, R., Horvitz, E.: Discovering blind spots of predictive models: Representations and policies for guided exploration. arXiv preprint arXiv:1610.09064 (2016) Lee, M., Solomon, N.: Unreliable Sources: A Guide to Detecting Bias in News Media. Lyle Smart, New York (1990) Lepore, E., Stone, M.: Imagination and Convention: Distinguishing Grammar and Inference in Language. Oxford University Press (2015) Littman, M.L.: Markov games as a framework for multi-agent reinforcement learning. In: Proceedings of the eleventh international conference on machine learning. vol. 157, pp. 157–163 (1994) Morey, M., Muller, P., Asher, N.: A dependency perspective on rst discourse parsing and evaluation (2017), submitted to Computational Linguistics Moss, S.: Epistemology formalized. Philosophical Review 122(1), 1–43 (2013) Perret, J., Afantenos, S., Asher, N., Morey, M.: Integer linear programming for discourse parsing. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp. 99–109. Association for Computational Linguistics, San Diego, California (June 2016), http://www.aclweb.org/anthology/N16-1013 Perzanowski, D., Schultz, A., Adams, W., Marsh, E., Bugajska, M.: Building a multimodal human-robot interface. Intelligent Systems 16(1), 16–21 (2001) Potts, C.: The logic of conventional implicatures. Oxford University Press Oxford (2005) Recanati, F.: Literal Meaning. Cambridge University Press (2004) Sperber, D., Wilson, D.: Relevance. Blackwells (1986) Stanley, J.: How propaganda works. Princeton University Press (2015) Tversky, A., Kahneman, D.: Availability: A heuristic for judging frequency and probability. Cognitive psychology 5(2), 207–232 (1973) Tversky, A., Kahneman, D.: Judgment under uncertainty: Heuristics and biases. In: Utility, probability, and human decision making, pp. 141–162. 
Springer (1975) Tversky, A., Kahneman, D.: Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological review 90(4), 293 (1983) Tversky, A., Kahneman, D.: The framing of decisions and the psychology of choice. In: Environmental Impact Assessment, Technology Assessment, and Risk Analysis, pp. 107–129. Springer (1985) Venant, A.: Structures, Semantics and Games in Strategic Conversations. Ph.D. thesis, Université Paul Sabatier, Toulouse (2016) Venant, A., Asher, N., Muller, P., Denis, P., Afantenos, S.: Expressivity and comparison of models of discourse structure. In: Proceedings of the SIGDIAL 2013 Conference. pp. 2–11. Association for Computational Linguistics, Metz, France (August 2013), http://www.aclweb.org/anthology/W13-4002 Venant, A., Degremont, C., Asher, N.: Semantic similarity. In: LENLS 10. Tokyo, Japan (2013) Walton, D.N.: Logical dialogue-games. University Press of America (1984) Whittle, P.: Multi-armed bandits and the gittins index. Journal of the Royal Statistical Society. Series B (Methodological) pp. 143–149 (1980) Wilkinson, N., Klaes, M.: An introduction to behavioral economics. Palgrave Macmillan (2012)
Which events authors choose to include in their history, which they leave out, and the way the events chosen relate to the march
da4535b75e360604e3ce4bb3631b0ba96f4dadd3
da4535b75e360604e3ce4bb3631b0ba96f4dadd3_0
Q: Which interpretative biases are analyzed in this paper? Text: Introduction Bias is generally considered to be a negative term: a biased story is seen as one that perverts or subverts the truth by offering a partial or incomplete perspective on the facts. But bias is in fact essential to understanding: one cannot interpret a set of facts—something humans are disposed to try to do even in the presence of data that is nothing but noise [38]—without relying on a bias or hypothesis to guide that interpretation. Suppose someone presents you with the sequence INLINEFORM0 and tells you to guess the next number. To make an educated guess, you must understand this sequence as instantiating a particular pattern; otherwise, every possible continuation of the sequence will be equally probable for you. Formulating a hypothesis about what pattern is at work will allow you to predict how the sequence will play out, putting you in a position to make a reasonable guess as to what comes after 3. Formulating the hypothesis that this sequence is structured by the Fibonacci function (even if you don't know its name), for example, will lead you to guess that the next number is 5; formulating the hypothesis that the sequence is structured by the successor function but that every odd successor is repeated once will lead you to guess that it is 3. Detecting a certain pattern allows you to determine what we will call a history: a set of given entities or eventualities and a set of relations linking those entities together. The sequence of numbers INLINEFORM1 and the set of relation instances that the Fibonacci sequence entails as holding between them is one example of a history. Bias, then, is the set of features, constraints, and assumptions that lead an interpreter to select one history—one way of stitching together a set of observed data—over another. Bias is also operative in linguistic interpretation. An interpreter's bias surfaces, for example, when the interpreter connects bits of information content together to resolve ambiguities. Consider: . Julie isn't coming. The meeting has been cancelled. While these clauses are not explicitly connected, an interpreter will typically have antecedent biases that lead her to interpret eventualities described by the two clauses as figuring in one of two histories: one in which the eventuality described by the first clause caused the second, or one in which the second caused the first. Any time that structural connections are left implicit by speakers—and this is much if not most of the time in text— interpreters will be left to infer these connections and thereby potentially create their own history or version of events. Every model of data, every history over that data, comes with a bias that allows us to use observed facts to make predictions; bias even determines what kind of predictions the model is meant to make. Bayesian inference, which underlies many powerful models of inference and machine learning, likewise relies on bias in several ways: the estimate of a state given evidence depends upon a prior probability distribution over states, on assumptions about what parameters are probabilistically independent, and on assumptions about the kind of conditional probability distribution that each parameter abides by (e.g., normal distribution, noisy-or, bimodal). Each of these generates a (potentially different) history. Objective of the paper In this paper, we propose a program for research on bias. 
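Before laying that program out, the sequence-guessing example above can be made concrete with a minimal sketch; we assume the observed sequence is 1, 1, 2, 3, since the guess concerns what comes after 3, and the two functions below stand in for the two hypotheses (biases) just described:

```python
# Two different biases impose two different patterns, and hence two different
# predictions, on the same observed data (the sequence here is our assumption).
seq = [1, 1, 2, 3]

def fibonacci_bias(s):
    # hypothesis: each element is the sum of the two preceding ones
    return s[-1] + s[-2]

def repeated_odd_successor_bias(s):
    # hypothesis: successor function, but every odd successor is repeated once
    return s[-1] if s[-1] % 2 == 1 and s.count(s[-1]) < 2 else s[-1] + 1

print(fibonacci_bias(seq))               # -> 5
print(repeated_odd_successor_bias(seq))  # -> 3
```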
We will show how to model various types of bias as well as the way in which bias leads to the selection of a history for a set of data, where the data might be a set of nonlinguistic entities or a set of linguistically expressed contents. In particular, we'll look at what people call “unbiased” histories. For us these also involve a bias, what we call a “truth seeking bias”. This is a bias that gets at the truth or acceptably close to it. Our model can show us what such a bias looks like. And we will examine the question of whether it is possible to find such a truth oriented bias for a set of facts, and if so, under what conditions. Can we detect and avoid biases that don't get at the truth but are devised for some other purpose? Our study of interpretive bias relies on three key premises. The first premise is that histories are discursive interpretations of a set of data in the sense that like discourse interpretations, they link together a set of entities with semantically meaningful relations. As such they are amenable to an analysis using the tools used to model a discourse's content and structure. The second is that a bias consists of a purpose or goal that the histories it generates are built to achieve and that agents build histories for many different purposes—to discover the truth or to understand, but also to conceal the truth, to praise or disparage, to persuade or to dissuade. To properly model histories and the role of biases in creating them, we need a model of the discourse purposes to whose end histories are constructed and of the way that they, together with prior assumptions, shape and determine histories. The third key premise of our approach is that bias is manifested in and conveyed through histories, and so studying histories is crucial for a better understanding of bias. Some examples of bias Let's consider the following example of biased interpretation of a conversation. Here is an example analyzed in BIBREF0 to which we will return in the course of the paper. . Sa Reporter: On a different subject is there a reason that the Senator won't say whether or not someone else bought some suits for him? Sheehan: Rachel, the Senator has reported every gift he has ever received. Reporter: That wasn't my question, Cullen. Sheehan: (i) The Senator has reported every gift he has ever received. (ii) We are not going to respond to unnamed sources on a blog. . Reporter: So Senator Coleman's friend has not bought these suits for him? Is that correct? Sheehan: The Senator has reported every gift he has ever received. Sheehan continues to repeat, “The Senator has reported every gift he has ever received” seven more times in two minutes to every follow up question by the reporter corps. http://www.youtube.com/watch?v=VySnpLoaUrI. For convenience, we denote this sentence uttered by Sheehan (which is an EDU in the languare of SDRT as we shall see presently) as INLINEFORM0 . Now imagine two “juries,” onlookers or judges who interpret what was said and evaluate the exchange, yielding differing interpretations. The interpretations differ principally in how the different contributions of Sheehan and the reporter hang together. In other words, the different interpretations provide different discourse structures that we show schematically in the graphs below. The first is one in which Sheehan's response INLINEFORM0 in SECREF3 b is somewhat puzzling and not taken as an answer to the reporter's question in SECREF3 a. In effect this “jury” could be the reporter herself. 
This Jury then interprets the move in SECREF3 c as a correction of the prior exchange. The repetition of INLINEFORM1 in SECREF3 d.ii is taken tentatively as a correction of the prior exchange (that is, the moves SECREF3 a, SECREF3 b and SECREF3 c together), which the Jury then takes the reporter to try to establish with SECREF3 e. When Sheehan repeats SECREF3 a again in SECREF3 f, this jury might very well take Sheehan to be evading all questions on the subject. A different Jury, however, might have a different take on the conversation as depicted in the discourse structure below. Such a jury might take INLINEFORM0 to be at least an indirect answer to the question posed in SECREF3 a, and as a correction to the Reporter's evidently not taking INLINEFORM1 as an answer. The same interpretation of INLINEFORM2 would hold for this Jury when it is repeated in SECREF3 f. Such a Jury would be a supporter of Sheehan or even Sheehan himself. What accounts for these divergent discourse structures? We will argue that it is the biases of the two Juries that create these different interpretations. And these biases are revealed at least implicitly in how they interpret the story: Jury 1 is at the outset at least guarded, if not skeptical, in its appraisal of Sheehan's interest in answering the reporter's questions. On the other hand, Jury 2 is fully convinced of Sheehan's position and thus interprets his responses much more charitably. BIBREF0 shows formally that there is a co-dependence between biases and interpretations; a certain interpretation created because of a certain bias can in turn strengthen that bias, and we will sketch some of the details of this story below. The situation of our two juries applies to a set of nonlinguistic facts. In such a case we take our “jury” to be the author of a history over that set of facts. The jury in this case evaluates and interprets the facts just as our juries did above concerning linguistic messages. To tell a history about a set of facts is to connect them together just as discourse constituents are connected together. And these connections affect and may even determine the way the facts are conceptualized BIBREF1 . Facts typically do not wear their connections to other facts on their sleeves and so how one takes those connections to be is often subject to bias. Even if their characterization and their connections to other facts are “intuitively clear”, our jury may choose to pick only certain connections to convey a particular history or even to make up connections that might be different. One jury might build a history over the set of facts that conveys one set of ideas, while the other might build a quite different history with a different message. Such histories reflect the purposes and assumptions that were exploited to create that structure. As an example of this, consider the lead paragraphs of articles from the New York Times, Townhall and Newsbusters concerning the March for Science held in April, 2017. The March for Science on April 22 may or may not accomplish the goals set out by its organizers. But it has required many people who work in a variety of scientific fields — as well as Americans who are passionate about science — to grapple with the proper role of science in our civic life. The discussion was evident in thousands of responses submitted to NYTimes.com ahead of the march, both from those who will attend and those who are sitting it out. –New York Times Do you have march fatigue yet? 
The left, apparently, does not, so we're in for some street theater on Earth Day, April 22, with the so-called March for Science. It's hard to think of a better way to undermine the public's faith in science than to stage demonstrations in Washington, D.C., and around the country modeled on the Women's March on Washington that took place in January. The Women's March was an anti-Donald Trump festival. Science, however, to be respected, must be purely the search for truth. The organizers of this “March for Science" – by acknowledging that their demonstration is modeled on the Women's March – are contributing to the politicization of science, exactly what true upholders of science should be at pains to avoid. –Townhall Thousands of people have expressed interest in attending the “March for Science” this Earth Day, but internally the event was fraught with conflict and many actual scientists rejected the march and refused to participate. –Newsbusters These different articles begin with some of the same basic facts: the date and purpose of the march, and the fact that the march's import for the science community is controversial, for example. But bias led the reporters to stitch together very different histories. The New York Times, for instance, interprets the controversy as generating a serious discussion about “the proper role of science in our civic life,” while Townhall interprets the march as a political stunt that does nothing but undermine science. While the choice of wording helps to convey bias, just as crucial is the way that the reporters portray the march as being related to other events. Which events authors choose to include in their history, which they leave out, and the way the events chosen relate to the march are crucial factors in conveying bias. Townhall's bias against the March of Science expressed in the argument that it politicizes science cannot be traced back to negative opinion words; it relies on a comparison between the March for Science and the Women's March, which is portrayed as a political, anti-Trump event. Newsbusters takes a different track: the opening paragraph conveys an overall negative perspective on the March for Science, despite its neutral language, but it achieves this by contrasting general interest in the march with a claimed negative view of the march by many “actual scientists.” On the other hand, the New York Times points to an important and presumably positive outcome of the march, despite its controversiality: a renewed look into the role of science in public life and politics. Like Newsbusters, it lacks any explicit evaluative language and relies on the structural relations between events to convey an overall positive perspective; it contrasts the controversy surrounding the march with a claim that the march has triggered an important discussion, which is in turn buttressed by the reporter's mentioning of the responses of the Times' readership. A formally precise account of interpretive bias will thus require an analysis of histories and their structure and to this end, we exploit Segmented Discourse Representation Theory or SDRT BIBREF2 , BIBREF3 . As the most precise and well-studied formal model of discourse structure and interpretation to date, SDRT enables us to characterize and to compare histories in terms of their structure and content. 
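As a schematic illustration (ours, not SDRT's actual formal language), a history can be encoded as a set of elementary discourse units linked by discourse relations; the particular segmentation and relation labels chosen below for the Newsbusters lead are only one plausible reading:

```python
# A toy encoding of a "history": elementary discourse units plus discourse relations.
edus = {
    "e1": "Thousands of people have expressed interest in attending the March for Science",
    "e2": "internally the event was fraught with conflict",
    "e3": "many actual scientists rejected the march and refused to participate",
}
relations = [
    ("e1", "Contrast", "e2"),       # the contrast carries the negative perspective
    ("e2", "Continuation", "e3"),
]

def attached_to(unit):
    """Return the relation edges in which a given discourse unit participates."""
    return [(s, r, t) for (s, r, t) in relations if unit in (s, t)]

for s, r, t in relations:
    print(f"{s} --{r}--> {t}")
print(attached_to("e2"))
```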
But neither SDRT nor any other, extant theoretical or computational approach to discourse interpretation can adequately deal with the inherent subjectivity and interest relativity of interpretation, which our study of bias will illuminate. Message Exchange (ME) Games, a theory of games that builds on SDRT, supplements SDRT with an analysis of the purposes and assumptions that figure in bias. While epistemic game theory in principle can supply an analysis of these assumptions, it lacks linguistic constraints and fails to reflect the basic structure of conversations BIBREF4 . ME games will enable us not only to model the purposes and assumptions behind histories but also to evaluate their complexity and feasibility in terms of the existence of winning strategies. Bias has been studied in cognitive psychology and empirical economics BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF5 , BIBREF13 . Since the seminal work of Kahneman and Tversky and the economist Allais, psychologists and empirical economists have provided valuable insights into cognitive biases in simple decision problems and simple mathematical tasks BIBREF14 . Some of this work, for example the bias of framing effects BIBREF7 , is directly relevant to our theory of interpretive bias. A situation is presented using certain lexical choices that lead to different “frames”: INLINEFORM0 of the people will live if you do INLINEFORM1 (frame 1) versus INLINEFORM2 of the people will die if you do INLINEFORM3 (frame 2). In fact, INLINEFORM4 , the total population in question; so the two consequents of the conditionals are equivalent. Each frame elaborates or “colors” INLINEFORM5 in a way that affects an interpreter's evaluation of INLINEFORM6 . These frames are in effect short histories whose discourse structure explains their coloring effect. Psychologists, empirical economists and statisticians have also investigated cases of cognitive bias in which subjects deviate from prescriptively rational or independently given objective outcomes in quantitative decision making and frequency estimation, even though they arguably have the goal of seeking an optimal or “true” solution. In a general analysis of interpretive bias like ours, however, it is an open question whether there is an objective norm or not, whether it is attainable and, if so, under what conditions, and whether an agent builds a history for attaining that norm or for some other purpose. Organization of the paper Our paper is organized as follows. Section SECREF2 introduces our model of interpretive bias. Section SECREF3 looks forward towards some consequences of our model for learning and interpretation. We then draw some conclusions in Section SECREF4 . A detailed and formal analysis of interpretive bias has important social implications. Questions of bias are not only timely but also pressing for democracies that are having a difficult time dealing with campaigns of disinformation and a society whose information sources are increasingly fragmented and whose biases are often concealed. Understanding linguistic and cognitive mechanisms for bias precisely and algorithmically can yield valuable tools for navigating in an informationally bewildering world. The model of interpretive bias As mentioned in Section SECREF1 , understanding interpretive bias requires two ingredients. First, we need to know what it is to interpret a text or to build a history over a set of facts. 
Our answer comes from analyzing discourse structure and interpretation in SDRT BIBREF2 , BIBREF3 . A history for a text connects its elementary information units, units that convey propositions or describe events, using semantic relations that we call discourse relations to construct a coherent and connected whole. Among such relations are logical, causal, evidential, sequential and resemblance relations as well as relations that link one unit with an elaboration of its content. It has been shown in the literature that discourse structure is an important factor in accurately extracting sentiments and opinions from text BIBREF15 , BIBREF16 , BIBREF17 , and our examples show that this is the case for interpretive bias as well. Epistemic ME games The second ingredient needed to understand interpretive bias is the connection between on the one hand the purpose and assumption behind telling a story and on the other the particular way in which that story is told. A history puts the entities to be understood into a structure that serves certain purposes or conversational goals BIBREF18 . Sometimes the history attempts to get at the “truth”, the true causal and taxonomic structure of a set of events. But a history may also serve other purposes—e.g., to persuade, or to dupe an audience. Over the past five years, BIBREF4 , BIBREF19 , BIBREF20 , BIBREF21 have developed an account of conversational purposes or goals and how they guide strategic reasoning in a framework called Message Exchange (ME) Games. ME games provide a general and formally precise framework for not only the analysis of conversational purposes and conversational strategies, but also for the typology of dialogue games from BIBREF22 and finally for the analysis of strategies for achieving what we would intuitively call “unbiased interpretation”, as we shall see in the next section. In fact in ME Games, conversational goals are analyzed as properties, and hence sets, of conversations; these are the conversations that “go well” for the player. ME games bring together the linguistic analysis of SDRT with a game theoretic approach to strategic reasoning; in an ME game, players alternate making sequences of discourse moves such as those described in SDRT, and a player wins if the conversation constructed belongs to her winning condition, which is a subset of the set of all possible conversational plays. ME games are designed to analyze the interaction between conversational structure, purposes and assumptions, in the absence of assumptions about cooperativity or other cognitive hypotheses, which can cause problems of interpretability in other frameworks BIBREF23 . ME games also assume a Jury that sets the winning conditions and thus evaluates whether the conversational moves made by players or conversationalists are successful or not. The Jury can be one or both of the players themselves or some exogenous body. To define an ME game, we first fix a finite set of players INLINEFORM0 and let INLINEFORM1 range over INLINEFORM2 . For simplicity, we consider here the case where there are only two players, that is INLINEFORM3 , but the notions can be easily lifted to the case where there are more than two players. Here, Player INLINEFORM4 will denote the opponent of Player INLINEFORM5 . We need a vocabulary INLINEFORM6 of moves or actions; these are the discourse moves as defined by the language of SDRT. 
The intuitive idea behind an ME game is that a conversation proceeds in turns where in each turn one of the players `speaks' or plays a string of elements from INLINEFORM7 . In addition, in the case of conversations, it is essential to keep track of “who says what”. To model this, each player INLINEFORM8 was assigned a copy INLINEFORM9 of the vocabulary INLINEFORM10 which is simply given as INLINEFORM11 . As BIBREF4 argues, a conversation may proceed indefinitely, and so conversations correspond to plays of ME games, typically denoted as INLINEFORM12 , which are the union of finite or infinite sequences in INLINEFORM13 , denoted as INLINEFORM14 and INLINEFORM15 respectively. The set of all possible conversations is thus INLINEFORM16 and is denoted as INLINEFORM17 . [ME game BIBREF4 ] A Message Exchange game (ME game), INLINEFORM18 , is a tuple INLINEFORM19 where INLINEFORM20 is a Jury. Due to the ambiguities in language, discourse moves in SDRT are underspecified formulas that may yield more than one fully specified discourse structure or histories for the conversation; a resulting play in an ME game thus forms one or more histories or complete discourse structures for the entire conversation. To make ME games into a truly realistic model of conversation requires taking account of the limited information available to conversational participants. BIBREF0 imported the notion of a type space from epistemic game theory BIBREF24 to take account of this. The type of a player INLINEFORM0 or the Jury is an abstract object that is used to code-up anything and everything about INLINEFORM1 or the Jury, including her behavior, the way she strategizes, her personal biases, etc. BIBREF24 . Let INLINEFORM2 denote the set of strategies for Player INLINEFORM3 in an ME game; let INLINEFORM4 ; and let INLINEFORM5 be the set of strategies of INLINEFORM6 given play INLINEFORM7 . [Harsanyi type space BIBREF24 ] A Harsanyi type space for INLINEFORM8 is a tuple INLINEFORM9 such that INLINEFORM10 and INLINEFORM11 , for each INLINEFORM12 , are non-empty (at-most countable) sets called the Jury-types and INLINEFORM13 -types respectively and INLINEFORM14 and INLINEFORM15 are the beliefs of Player INLINEFORM16 and the Jury respectively at play INLINEFORM17 . BIBREF0 defines the beliefs of the players and Jury using the following functions. [Belief function] For every play INLINEFORM18 the (first order) belief INLINEFORM19 of player INLINEFORM20 at INLINEFORM21 is a pair of measurable functions INLINEFORM22 where INLINEFORM23 is the belief function and INLINEFORM24 is the interpretation function defined as: INLINEFORM25 INLINEFORM26 where INLINEFORM0 is the set of probability distributions over the corresponding set. Similarly the (first order) belief INLINEFORM1 of the Jury is a pair of measurable functions INLINEFORM2 where the belief function INLINEFORM3 and the interpretation function INLINEFORM4 are defined as: INLINEFORM5 INLINEFORM6 Composing INLINEFORM0 and INLINEFORM1 together over their respective outputs reveals a correspondence between interpretations of plays and types for a fixed Jury type INLINEFORM2 : every history yields a distribution over types for the players and every tuple of types for the players and the Jury fixes a distribution over histories. We'll call this the types/history correspondence. An epistemic ME game is an ME game with a Harsanyi type space and a type/history correspondence as we've defined it. 
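The following minimal sketch (our own computational rendering; the paper's definitions are more general, and the type names, likelihoods and history labels below are hypothetical) illustrates the types/history correspondence for an ambiguous play: a Jury's belief over speaker types induces a distribution over candidate histories, and settling on a history conditionalizes the belief over types in turn:

```python
# Beliefs as distributions over types, histories as interpretations of an ambiguous
# play, and Bayesian conditionalization linking the two.
speaker_types = ["committing", "evading"]           # hypothetical relevant types
prior = {"committing": 0.5, "evading": 0.5}         # a Jury's belief over speaker types

# P(history | speaker_type) for two candidate histories of an ambiguous play
likelihood = {
    "committing": {"h_answer": 0.9, "h_deflect": 0.1},
    "evading":    {"h_answer": 0.2, "h_deflect": 0.8},
}

def history_distribution(belief):
    """Distribution over histories induced by the Jury's belief over types."""
    return {h: sum(belief[t] * likelihood[t][h] for t in belief)
            for h in ("h_answer", "h_deflect")}

def update_on_history(belief, history):
    """Conditionalize the belief over types on the history the Jury settled on."""
    unnorm = {t: belief[t] * likelihood[t][history] for t in belief}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

print(history_distribution(prior))
print(update_on_history(prior, "h_deflect"))   # the bias toward the "evading" type strengthens
```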
By adding types to an ME game, we provide the beginnings of a game-theoretic model of interpretive bias that we believe is completely new. Our definition of bias is now: [Interpretive Bias] An interpretive bias in an epistemic ME game is the probability distribution over types given by the belief function of the conversationalists or players, or the Jury. Note that in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury. Outside of language, statisticians study bias, and sample bias is currently an important topic. To study it, they exploit statistical models with a set of parameters and random variables, which play the role of our types in interpretive bias. But for us, the interpretive process is already well underway once the model, with its constraints, features and explanatory hypotheses, is posited; at least a partial history, or set of histories, has already been created. The ME model in BIBREF0 not only makes histories dependent on biases but also conditionally updates an agent's bias, the probability distribution, given the interpretation of the conversation or more generally a course of events as it has so far unfolded and, crucially, as the agent has so far interpreted it. This means that certain biases are reinforced as a history develops, and in turn strengthen the probability of histories generated by such biases in virtue of the types/histories correspondence. We now turn to an analysis of SECREF3 discussed in BIBREF4 , BIBREF0 , where arguably this happens. Generalizing from the case study The Sheehan case study in BIBREF0 shows the interactions of interpretation and probability distributions over types. We'll refer to content that exploits assumptions about types as epistemic content. SECREF3 also offers a case of a self-confirming bias with Jury INLINEFORM0 . But the analysis proposed by BIBREF0 leaves open an important question about what types are relevant to constructing a particular history, and it only examines one of many cases of biased interpretation. In epistemic game models, the relevant types are typically given exogenously, and Harsanyi's type space construction is silent on this question. The question seems a priori very hard to answer, because anything and everything might be relevant to constructing a history. In SECREF3 , the relevant types have to do with the interpreters' or Juries' attitudes towards the commitments of the spokesman and Coleman. These attitudes might reinforce or be a product of other beliefs, like beliefs about the spokesman's political affiliations. But we will put forward the following simplifying hypotheses: Hypothesis 1: epistemic content is based on assumptions about types defined by different attitudes to commitments by the players and/or the Jury to the contents of a discourse move or sequence of discourse moves. Hypothesis 2: These assumptions can be represented as probability distributions over types. In SECREF3 , we've only looked at epistemic content from the point of view of the interpreter, which involves types for the Jury defined in terms of probability distributions over types for the speaker. But we can look at subjective interpretations from the perspective of the speaker as well. In other words, we look at how the speaker might conceptualize the discourse situation, in particular her audience. We illustrate this with another type of content based on types.
Consider the following move by Marion Le Pen, a leader of the French nationalist, right-wing party le Front National in which she recently said: . La France était la fille aînée de l'église. Elle est en passe de devenir la petite nièce de l'Islam. (France was once the eldest daughter of the Catholic church. It is now becoming the little niece of Islam.) SECREF8 appeals to what the speaker takes to be her intended audience's beliefs about Islam, Catholicism and France. In virtue of these beliefs, this discourse move takes on a loaded racist meaning, conveying an assault on France and its once proud status by people of North African descent. Without those background beliefs, however, Le Pen's statement might merely be considered a somewhat curious description of a recent shift in religious majorities. This is known as a “dog whistle,” in which a discourse move communicates a content other than its grammatically determined content to a particular audience BIBREF25 . While BIBREF26 proposes that such messages are conventional implicatures, BIBREF25 , BIBREF27 show that dog whistle content doesn't behave like other conventional implicatures; in terms of tests about “at issue content”, dog whistle content patterns with other at issue content, not with the content associated with conventional implicatures in the sense of BIBREF28 . This also holds of content that resolves ambiguities as in SECREF3 . The dogwhistle content seems to be driven by the hearer's type in SECREF8 or the speaker's beliefs about the interpreter's or hearer's type. Generalizing from BIBREF29 , the use of the historical expression la fille ainée de l'église contrasted with la petite nièce has come to encode a type, in much the same way that dropping the final g in present participles and gerunds has come to signify a type BIBREF29 , for the speaker INLINEFORM0 about hearer INLINEFORM1 ; e.g., INLINEFORM2 will believe that INLINEFORM3 has the strategy of using just this language to access the loaded interpretation and moreover will identify with its content. Because this meaning comes about in virtue of the hearer's type, the speaker is in a position to plausibly deny that they committed to conveying a racist meaning, which is a feature of such dog whistles. In fact, we might say that all dogwhistle content is so determined. We can complicate the analysis by considering the speaker's types, the interlocutor's types and types for the Jury when these three components of an ME game are distinct (i.e. the Jury is distinct from the interlocutors). A case like this is the Bronston example discussed in BIBREF0 . By looking at dogwhistles, we've now distinguished two kinds of epistemic content that depends on an interpreters' type. The epistemic content may as in SECREF3 fill out the meaning of an underspecified play to produce a determinate history. Dog whistles add content to a specific discourse unit that goes beyond its grammatically determined meaning. More formally, we can define these two kinds of epistemic content using the machinery of ME games. Given that plays in an ME game are sequences of discourse moves, we can appeal to the semantics of these moves and a background consequence relation INLINEFORM0 defined as usual. In addition, a play INLINEFORM1 in an ME game may itself be a fully specified history or a sequence of discourse moves that is compatible with several fully specified histories given a particular interpreter's or Jury's type INLINEFORM2 . 
Let INLINEFORM3 be the set of histories (FLFs) compatible with a play INLINEFORM4 given an interpreter or Jury type INLINEFORM5 . INLINEFORM6 will be ambiguous and open to epistemic content supplementation just in case: (i) INLINEFORM7 for any type INLINEFORM8 for a linguistically competent jury, and (ii) there are INLINEFORM9 , such that INLINEFORM10 and INLINEFORM11 are semantically distinct (neither one entails the other). Now suppose that a play INLINEFORM12 gives rise through the grammar to a history, INLINEFORM13 . Then INLINEFORM14 is a dog whistle for INLINEFORM15 just in case: (i) INLINEFORM16 , (ii) INLINEFORM17 and (iii) there is a INLINEFORM18 that can positively affect some jury perhaps distinct from INLINEFORM19 and such that INLINEFORM20 . On this definition, a player who utters such a play INLINEFORM21 always has the excuse that what he/she actually meant was INLINEFORM22 when challenged—which seems to be one essential feature of a dog whistle. Plays with such semantic features may not be a pervasive feature of conversation; not every element is underspecified or is given a content over and above its linguistically determined one. But in interpreting a set of nonlinguistic facts INLINEFORM0 or data not already connected together in a history, that is in constructing a history over INLINEFORM1 , an interpreter INLINEFORM2 , who in this case is a speaker or writer, must appeal to her beliefs, which includes her beliefs about the Jury to whom her discourse actions are directed. So certainly the type of INLINEFORM3 , which includes beliefs about the Jury for the text, is relevant to what history emerges. The facts in INLINEFORM4 don't wear their relational properties to other facts on their sleeves so to speak, and so INLINEFORM5 has to supply the connections to construct the history. In effect for a set of non linguistically given facts, “ambiguities of attachment,” whose specification determines how the facts in INLINEFORM6 are related to each other, are ubiquitous and must be resolved in constructing a history. The speaker or “history creator” INLINEFORM7 's background beliefs determine the play and the history an interpreter INLINEFORM8 takes away. In the case of constructing a history over a set of nonlinguistic facts INLINEFORM0 , the interpreter INLINEFORM1 's task of getting the history INLINEFORM2 has constructed will not reliably succeed unless one of two conditions are met: either INLINEFORM3 and INLINEFORM4 just happen to share the relevant beliefs (have close enough types) so that they construct the same histories from INLINEFORM5 , or INLINEFORM6 uses linguistic devices to signal the history. ME games require winning conversations, and by extension texts, to be (mostly) coherent, which means that the discourse connections between the elements in the history must be largely determined in any successful play, or can be effectively determined by INLINEFORM14 . This means that INLINEFORM15 will usually reveal relevant information about her type through her play, in virtue of the type/history correspondence, enough to reconstruct the history or much of it. In the stories on the March for Science, for example, the reporters evoke very different connections between the march and other facts. The Townhall reporter, for instance, connects the March for Science to the Women's march and “leftwing” political manifestations and manifests a negative attitude toward the March. 
But he does this so unambiguously that little subjective interpretation on the part of the interpreter or Jury is needed to construct the history or to assign a high probability to a type for INLINEFORM16 that drives the story. This discussion leads to the following observations. To construct a history over a set of disconnected nonlinguistic facts INLINEFORM0 , in general a Jury needs to exploit linguistic pointers to the connections between elements of INLINEFORM1 , if the speaker is to achieve the goal of imparting a (discourse) coherent story, unless the speaker knows that the Jury or interpreter has detailed knowledge of her type. The speaker may choose to leave certain elements underspecified or ambiguous, or use a specified construction, to invoke epistemic content for a particular type that she is confident the Jury instantiates. How much she does so depends on her confidence in the type of the Jury. This distribution or confidence level opens a panoply of options about the uses of epistemic content: at one end there are histories constructed from linguistic cues with standard, grammatically encoded meanings; at the other end there are histories generated by a code shared with only a few people whose types are mutually known. As the conversation proceeds, as we have seen, probabilities about types are updated, and so the model should predict that a speaker may resort to more code-like messages in the face of feedback confirming her hypotheses about the Jury's type (if such feedback can be given), and that the speaker may revert to messages exploiting more grammatical cues in the face of feedback disconfirming her hypotheses about the Jury's type. Thus, the epistemic ME model predicts a possible change in register as the speaker receives more information about the Jury's type, though this change is subject to other conversational goals coded in the speaker's victory condition for the ME game. ME persuasion games We've now seen how histories in ME games bring an interpretive bias, the bias of the history's creator, to the understanding of a certain set of facts. We've also seen how epistemic ME games allow for the introduction of epistemic content in the interpretation of plays. Each such epistemic interpretation is an instance of a bias that goes beyond the grammatically determined meaning of the play and is dependent upon the Jury's or interpreter's type. We now make explicit another crucial component of ME games and their relation to bias: the players' winning conditions or discourse goals. Why is this relevant to a study of bias? The short answer is that players' goals tell us whether two players' biases on a certain subject are compatible or resolvable or not. Imagine that our two Juries in SECREF3 shared the same goal of getting at the truth behind the Senator's refusal to comment about the suits. They might still have come up with the opposing interpretations that they did in our discussion above. But they could have discussed their differences, and eventually would have come to agreement, as we show below in Proposition SECREF19 . However, our two Juries might have different purposes too. One Jury might have the purpose of finding out about the suits, like the reporters; the other might have the purpose just to see Senator Coleman defended, a potentially quite different winning condition and collection of histories. In so doing we would identify Jury 1 with the reporters, or at least Rachel, and Jury 2 with Sheehan.
Such different discourse purposes have to be taken into account in attempting to make a distinction between good and bad biases. From the perspective of subjective rationality or rationalizability (an important criterion in epistemic game theory BIBREF33 ), good biases for a particular conversation should be those that lead to histories in the winning condition, histories that fulfill the discourse purpose; bad biases lead to histories that do not achieve the winning condition. The goals that a Jury or interpreter INLINEFORM0 adopts and her biases go together; INLINEFORM1 's interpretive bias is good for speaker INLINEFORM2 , if it helps INLINEFORM3 achieve her winning condition. Hence, INLINEFORM4 's beliefs about INLINEFORM5 are crucial to her success and rationalizable behavior. Based on those beliefs INLINEFORM6 's behavior is rationalizable in the sense we have just discussed. If she believes Jury 2 is the one whose winning condition she should satisfy, there is no reason for her to change that behavior. Furthermore, suppose Jury 1 and Jury 2 discuss their evaluations; given that they have different goals, there is no reason for them to come to an agreement with the other's point of view either. Both interpretations are rationalizable as well, if the respective Juries have the goals they do above. A similar story applies to constructing histories over a set of facts, in so far as they had different conceptions of winning conditions set by their respective Juries. In contrast to Aumann's dictum BIBREF32 , in our scenario there is every reason to agree to disagree! Understanding such discourse goals is crucial to understanding bias for at least two reasons. The first is that together with the types that are conventionally coded in discourse moves, they fix the space of relevant types. In SECREF3 , Jury 1 is sensitive to a winning condition in which the truth about the suits is revealed, what we call a truth oriented goal. The goal of Jury 2, on the other hand, is to see that Coleman is successfully defended, what we call a persuasion goal. In fact, we show below that a truth oriented goal is a kind of persuasion goal. Crucial to the accomplishment of either of these goals is for the Jury INLINEFORM0 to decide whether the speaker INLINEFORM1 is committing to a definite answer that she will defend (or better yet an answer that she believes) on a given move to a question from her interlocutor or is INLINEFORM2 trying to avoid any such commitments. If it's the latter, then INLINEFORM3 would be epistemically rash to be persuaded. But the two possibilities are just the two types for Sheehan that are relevant to the interpretation of the ambiguous moves in SECREF3 . Because persuasive goals are almost ubiquitous at least as parts of speaker goals, not only in conversation but also for texts (think of how the reporters in the examples on the March for Science are seeking to convince us of a particular view of the event), we claim that these two types are relevant to the interpretation of many, if not all, conversations. In general we conjecture that the relevant types for interpretation may all rely on epistemic requirements for meeting various kinds of conversational goals. The second reason that discourse goals are key to understanding bias is that by analyzing persuasion goals in more detail we get to the heart of what bias is. 
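As an illustrative sketch of how different discourse goals classify the same play differently (the predicates below are hypothetical stand-ins, not the paper's formal victory conditions), consider a toy encoding of the Sheehan exchange with a truth-oriented Jury and a persuasion-oriented Jury:

```python
# One play, two winning conditions: the truth-oriented Jury requires a direct answer
# about the suits, while the persuasion-oriented Jury only requires a consistent
# defense with no damaging admissions.
play = [
    ("Reporter", "question_about_suits"),
    ("Sheehan", "reported_every_gift"),
    ("Reporter", "follow_up_question"),
    ("Sheehan", "reported_every_gift"),
]

def truth_jury_wins(p):
    # winning condition 1: some move directly answers the question about the suits
    return any(move == "direct_answer_about_suits" for (_, move) in p)

def persuasion_jury_wins(p):
    # winning condition 2: Sheehan never concedes and keeps a consistent defense
    sheehan_moves = [m for (speaker, m) in p if speaker == "Sheehan"]
    return bool(sheehan_moves) and all(m == "reported_every_gift" for m in sheehan_moves)

print(truth_jury_wins(play), persuasion_jury_wins(play))   # -> False True
```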
Imagine a kind of ME game played between two players, E(loïse) and A(belard), where E proposes and tries to defend a particular interpretation of some set of facts INLINEFORM0, and A tries to show the interpretation is incorrect, misguided, based on prejudice, or whatever will convince the Jury to be dissuaded from adopting E's interpretation of INLINEFORM1. As in all ME games, E's victory condition in an ME persuasion game is a set of histories determined by the Jury, but it crucially depends on E's and A's beliefs about the Jury: E has to provide a history INLINEFORM2 over INLINEFORM3; A has to attack that history in ways that accord with her beliefs about the Jury; and E has to defend INLINEFORM4 in ways that will, given her beliefs, dispose the Jury favorably to it. An ME persuasion game is one where E and A each present elements of INLINEFORM0 and may also make argumentative or attack moves in their conversation. At each turn of the game, A can argue about the history constructed by E over the facts given so far, challenge it with new facts or attack its assumptions, with the result that E may rethink and redo portions of her history over INLINEFORM1 (though not abandon the original history entirely) in order to render A's attack moot. E wins if the history she finally settles on for the facts in INLINEFORM2 allows her to rebut every attack by A; A wins otherwise. A reasonable precisification of this victory condition is that the proportion of good unanswered attacks on the latest version of E's history, relative to the total number of attacks, at some point begins to diminish and eventually goes to 0. This is a sort of limit condition: if we think of the initial segments INLINEFORM3 of E's play as producing an “initial” history INLINEFORM4 over INLINEFORM5, then as INLINEFORM6, INLINEFORM7 has no unanswered counterattacks by A that affect the Jury. Such winning histories are extremely difficult to construct; as one can see from inspection, no finite segment of an infinite play guarantees such a winning condition. We shall call a history segment that is part of a history in INLINEFORM8's winning condition, as we have just characterized it, E-defensible. The notion of an ME persuasion game opens the door to a study of attacks, a study that can draw on work in argumentation and game theory BIBREF34, BIBREF35, BIBREF36. ME games, and ME persuasion games in particular, go beyond the work just cited, however, because our notion of an effective attack involves the type of the Jury as a crucial parameter; the effectiveness of an attack for a Jury relies on its prejudices, technically its priors about the game's players' types (and hence their beliefs and motives). For instance, uncovering an agent's racist bias when confronted with a dog whistle like that in SECREF8 is an effective attack technique if the respondent's type for the Jury is such that it is sensitive to such accusations, while it will fail if the Jury is insensitive to such accusations. ME games make plain the importance in a persuasion game of accurately gauging the beliefs of the Jury! ME truth games We now turn to a special kind of ME persuasion game with what we call a disinterested Jury. The intuition behind a disinterested Jury is simple: such a Jury judges the persuasion game based only on the public commitments that follow from the discourse moves that the players make. It is not predisposed to either player in the game.
While it is difficult to define such a disinterested Jury in terms of its credences, its probability distribution over types, we can establish some necessary conditions. We first define the notion of the dual of a play of an ME game. Let INLINEFORM0 be an element of the labeled vocabulary with player INLINEFORM1. Define its dual as: INLINEFORM2 The dual of a play INLINEFORM0 then is simply the lifting of this operator over the entire sequence of INLINEFORM1. That is, if INLINEFORM2, where INLINEFORM3, then INLINEFORM4 Then, a disinterested Jury must necessarily satisfy: Indifference towards player identity: A Jury INLINEFORM0 is unbiased only if for every INLINEFORM1, INLINEFORM2 iff INLINEFORM3. Symmetry of prior belief: A Jury is unbiased only if it has symmetrical prior beliefs about the player types. Clearly, the Jury INLINEFORM0 does not have symmetrical prior beliefs nor is it indifferent to player identity, while Jury INLINEFORM1 arguably has symmetrical beliefs about the participants in SECREF3. Note also that while Symmetry of prior belief is satisfied by a uniform distribution over all types, it does not entail such a uniform distribution. Symmetry is closely related to the principle of maximum entropy used in fields as diverse as physics and computational linguistics BIBREF37, according to which, in the absence of any information about the players, one would assume a uniform probability distribution over types. A disinterested Jury should evaluate a conversation based solely on the strength of the points put forth by the participants. But, crucially, it should also evaluate the conversation in light of the right points. So, for instance, ad hominem attacks or colorful insults by A should not sway the Jury in favor of A. The Jury should evaluate only on the basis of how the points brought forward affect its credences under conditionalization. A disinterested Jury is impressed only by certain attacks from A, ones based on evidence (E's claims aren't supported by the facts) and on the formal properties of coherence, consistency and explanatory or predictive power. In such a game it is common knowledge that attacks based on information about E's type that is not relevant either to the evidential support or to the formal properties of her history are ignored by the Jury. The same goes for counterattacks by E on A that are not based on evidence or the formal properties mentioned above. BIBREF4 discusses the formal properties of coherence and consistency in detail, and we say more about explanatory and predictive power below. The evidential criterion, however, is also particularly important, and it is one that a disinterested Jury must attend to. Luckily for us, formal epistemologists have formulated constraints on beliefs like cognitive skill and safety or anti-luck that are relevant to characterizing this evidential criterion BIBREF38, BIBREF39. Cognitive skill is a factor that affects the success (accuracy) of an agent's beliefs: the success of an agent's beliefs is the result of her cognitive skill exactly to the extent that the reasoning process that produces them makes evidential factors (how weighty, specific, misleading, etc., the agent's evidence is) comparatively important for explaining that success, and makes non-evidential factors comparatively unimportant. In addition, we will require that the relevant evidential factors are those that have been demonstrated to be effective in the relevant areas of inquiry.
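To make the dual operator and the two necessary conditions on a disinterested Jury concrete, here is a minimal sketch. It assumes, purely for illustration, that labeled vocabulary elements are (player, move) pairs, that a play is a finite sequence of such pairs, and that the Jury's verdict is given as a boolean function on plays; none of these representation choices come from the text.

```python
# Illustrative sketch (not from the paper): plays as sequences of (player, move)
# pairs, with the dual operator swapping the two players E and A.

def dual_move(labeled_move):
    """Swap the player label of a single labeled vocabulary element."""
    player, move = labeled_move
    return ("A" if player == "E" else "E", move)

def dual_play(play):
    """Lift the dual operator over a whole play (a sequence of labeled moves)."""
    return tuple(dual_move(m) for m in play)

def indifferent_to_identity(jury_wins, plays):
    """Necessary condition 1: the Jury accepts a play iff it accepts its dual."""
    return all(jury_wins(p) == jury_wins(dual_play(p)) for p in plays)

def symmetric_priors(prior, swap_roles):
    """Necessary condition 2 (sketch): the prior over type profiles is invariant
    under exchanging the roles of the two players."""
    return all(abs(prior[t] - prior[swap_roles(t)]) < 1e-9 for t in prior)

# Toy usage: a Jury that only counts unanswered attacks is identity-indifferent.
plays = [(("E", "claim"), ("A", "attack"), ("E", "rebut"))]
jury = lambda p: sum(1 for player, move in p if move == "attack") <= 1
print(indifferent_to_identity(jury, plays))  # True
```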
So if a Jury measures the success of a persuasion game in virtue of a criterion of cognitive ability on the part of the participants, and this is common knowledge among the participants (something we will assume throughout here), then, for instance, A's attacks have to be about the particular evidence adduced to support E's history, the way it was collected, verifiable errors in measurements, etc.; general skeptical claims are precluded from counting as credible attacks in such a game. These epistemic components thus engender more relevant types for interpretation: are the players using cognitive skill and anti-luck conditions or not? More particularly, most climate skeptics' attacks on climate change science, which voice general doubts about the evidence without using any credible scientific criteria to attack specific evidential bases, would consequently be ruled irrelevant in virtue of a property like cognitive skill. But this criterion may also affect the Jury's interpretation of the conversation. A Jury whose beliefs are constrained by cognitive ability will adjust its beliefs about player types and about interpretation only in the light of relevant evidential factors. Safety is a feature of beliefs that says that conditionalizing on circumstances that could have been otherwise, without one's evidence changing, should not affect the strength of one's beliefs. Safety rules out belief profiles in which luck or mere hunches play a role. The notion of a disinterested Jury is formally a complicated one. Consider an interpretation of a conversation between two players E and A. Bias can be understood as a sort of modal operator over an agent's first order and higher order beliefs. So a disinterested Jury in an ME game means that neither its beliefs about A nor its beliefs about E involve an interested bias; nor do its beliefs about A's beliefs about E's beliefs, or E's beliefs about A's beliefs about E's beliefs, and so on up the epistemic hierarchy. Thus, a disinterested Jury in this setting involves an infinitary conjunction of modal statements, which is intuitively (and mathematically) a complex condition on beliefs. And since this disinterestedness must be common knowledge amongst the players, E and A have equally complex beliefs. We are interested in ME persuasion games in which the truth may emerge. Is an ME persuasion game with a disinterested Jury sufficient to ensure such an outcome? No: there may be a fatal flaw in E's history that INLINEFORM0 does not uncover and that the Jury does not see. We have to suppose certain abilities on the part of INLINEFORM1 and/or the Jury—namely, that if E has covered up some evidence, falsely constructed evidence, or introduced an inconsistency in her history, A will eventually uncover it. Further, if there is an unexplained leap, an incoherence in the history, then INLINEFORM2 will eventually find it. Endowing INLINEFORM3 with such capacities would suffice to ensure that a history in E's winning condition is the best possible approximation to the truth, a sort of Peircean ideal. Even if we assume only that INLINEFORM4 is a competent and skilled practitioner of her art, we have something like a good approximation of the truth for any history in E's winning condition. We call a persuasion game with such a disinterested Jury and such a winning condition for INLINEFORM5 an ME truth game. In an ME truth game, a player or a Jury may not be completely disinterested because of skewed priors.
But she may still be interested in finding out the truth and thus adjusting her priors in the face of evidence. We put some constraints on the revision of beliefs of a truth interested player. Suppose such a player INLINEFORM0 has a prior INLINEFORM1 on INLINEFORM2 such that INLINEFORM5 , but in a play INLINEFORM6 of an ME truth game it is revealed that INLINEFORM7 has no confirming evidence for INLINEFORM8 that the opponent INLINEFORM9 cannot attack without convincing rebuttal. Then a truth interested player INLINEFORM10 should update her beliefs INLINEFORM11 after INLINEFORM12 so that INLINEFORM13 . On the other hand, if INLINEFORM14 cannot rebut the confirming evidence that INLINEFORM15 has for INLINEFORM16 , then INLINEFORM17 . Where INLINEFORM18 is infinite, we put a condition on the prefixes INLINEFORM19 of INLINEFORM20 : INLINEFORM21 . Given our concepts of truth interested players and an ME truth game, we can show the following. If the two players of a 2 history ME truth game INLINEFORM22 , have access to all the facts in INLINEFORM23 , and are truth interested but have incompatible histories for INLINEFORM24 based on distinct priors, they will eventually agree to a common history for INLINEFORM25 . To prove this, we note that our players will note the disagreement and try to overcome it since they have a common interest, in the truth about INLINEFORM26 . Then it suffices to look at two cases: in case one, one player INLINEFORM27 converges to the INLINEFORM28 's beliefs in the ME game because INLINEFORM29 successfully attacks the grounds on which INLINEFORM30 's incompatible interpretation is based; in case two, neither INLINEFORM31 nor INLINEFORM32 is revealed to have good evidential grounds for their conflicting beliefs and so they converge to common revised beliefs that assign an equal probability to the prior beliefs that were in conflict. Note that the difference with BIBREF32 is that we need to assume that players interested in the truth conditionalize upon outcomes of discussion in an ME game in the same way. Players who do not do this need not ever agree. There are interesting variants of an ME truth game where one has to do with approximations. ME truth games are infinitary games, in which getting a winning history is something E may or may not achieve in the limit. But typically we want the right, or “good enough” interpretation sooner rather than later. We can also appeal to discounted ME games developed in BIBREF21 , in which the scores are assigned to individual discourse moves in context which diminish as the game progresses, to investigate cases where getting things right, or right enough, early on in an ME truth game is crucial. In another variant of an ME truth game, which we call a 2-history ME truth game, we pit two biases one for E and one for A, and the two competing histories they engender, about a set of facts against each other. Note that such a game is not necessarily win-lose as is the original ME truth game, because neither history the conversationalists develop and defend may satisfy the disinterested Jury. That is, both E and A may lose in such a game. Is it also possible that they both win? Can both E and A revise their histories so that their opponents have in the end no telling attacks against their histories? We think not at least in the case where the histories make or entail contradictory claims: in such a case they should both lose because they cannot defeat the opposing possibility. 
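The two cases in the agreement argument above can be sketched as follows. The concrete revision rule — adopting the other player's credence when one's own evidential grounds are defeated, and moving to equal credence in the formerly conflicting beliefs when neither side's grounds survive — is one reading of the constraints, offered only as an illustration.

```python
# Illustrative sketch of the two convergence cases for truth interested players.
# The exact numeric revision rule is an assumption for illustration, not a
# definition from the text.
def revise_after_debate(p_e, p_a, e_grounds_survive, a_grounds_survive):
    """Return the post-debate credences of two truth interested players E and A."""
    if e_grounds_survive and not a_grounds_survive:
        return p_e, p_e          # case one: A converges to E's beliefs
    if a_grounds_survive and not e_grounds_survive:
        return p_a, p_a          # case one (mirror): E converges to A's beliefs
    return 0.5, 0.5              # case two: neither side's evidence holds up;
                                 # both assign equal probability to the formerly
                                 # conflicting beliefs

print(revise_after_debate(0.9, 0.2, e_grounds_survive=False, a_grounds_survive=False))
# (0.5, 0.5) -- agreement is reached
```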
Suppose INLINEFORM0 wants to win an ME truth game and to construct a truthful history. Let's assume that the set of facts INLINEFORM1 over which the history is constructed is finite. What should she do? Is it possible for her to win? How hard is it for her to win? Does INLINEFORM2 have a winning strategy? As an ME truth game is win-lose, if the winning condition is Borel definable, it will be determined BIBREF4 ; either INLINEFORM3 has a winning strategy or INLINEFORM4 does. Whether INLINEFORM5 has a winning strategy or not is important: if she does, there is a method for finding an optimal history in the winning set; if she doesn't, an optimal history from the point of view of a truth-seeking goal in the ME truth game is not always attainable. To construct a history from ambiguous signals for a history over INLINEFORM0 , the interpreter must rely on her beliefs about the situation and her interlocutors to estimate the right history. So the question of getting at truthful interpretations of histories depends at least in part on the right answer to the question, what are the right beliefs about the situation and the participants that should be invoked in interpretation? Given that beliefs are probabilistic, the space of possible beliefs is vast. The right set of beliefs will typically form a very small set with respect to the set of all possible beliefs about a typical conversational setting. Assuming that one will be in such a position “by default” without any further argumentation is highly implausible, as a simple measure theoretic argument ensures that the set of possible interpretations are almost always biased away from a winning history in an ME truth game. What is needed for E-defensibility and a winning strategy in an ME truth game? BIBREF4 argued that consistency and coherence (roughly, the elements of the history have to be semantically connected in relevant ways BIBREF3 ) are necessary conditions on all winning conditions and would thus apply to such histories. A necessary additional property is completeness, an accounting of all or sufficiently many of the facts the history is claimed to cover. We've also mentioned the care that has to be paid to the evidence and how it supports the history. Finally, it became apparent when we considered a variant of an ME truth game in which two competing histories were pitted against each other that a winning condition for each player is that they must be able to defeat the opposing view or at least cast doubt on it. More particularly, truth seeking biases should provide predictive and explanatory power, which are difficult to define. But we offer the following encoding of predictiveness and explanatory power as constraints on continuations of a given history in an ME truth game. [Predictiveness] A history INLINEFORM0 developed in an ME game for a set of facts INLINEFORM1 is predictive just in case when INLINEFORM2 is presented with a set of facts INLINEFORM3 relevantly similar to INLINEFORM4 , INLINEFORM5 implies a E-defensible extension INLINEFORM6 of INLINEFORM7 to all the facts in INLINEFORM8 . A similar definition can be given for the explanatory power of a history. Does INLINEFORM0 have a strategy for constructing a truthful history that can guarantee all of these things? Well, if the facts INLINEFORM1 it is supposed to relate are sufficiently simple or sufficiently unambiguous in the sense that they determine just one history and E is effectively able to build and defend such a history, then yes she does. 
So very simple cases, like establishing whether or not your daughter has a snack for after school in the morning, are easy to determine, and the history is equally simple, once you have the right evidence: yes she has a snack, or no she doesn't. A text which is unambiguous similarly determines only one history, and linguistic competence should suffice to determine what that history is. On the other hand, it is also possible that INLINEFORM2 may determine the right history INLINEFORM3 from a play INLINEFORM4 when INLINEFORM5 depends on the type of the relevant players of INLINEFORM6. For INLINEFORM7 can have a true “type” for the players relevant to INLINEFORM8. In general, whether or not a player has a winning strategy will depend on the structure of the optimal history targeted, as well as on the resources and constraints on the players in an ME truth game. In the more general case, however, whether INLINEFORM0 has a winning strategy in an ME truth game becomes nontrivial. At least in a relative sort of way, E can construct a model satisfying her putative history at each stage to show consistency (relative to ZF or some other background theory); coherence can be verified by inspection over the finite discourse graph of the relevant history at each stage and the ensuing attacks. Finally, completeness and evidential support can be guaranteed at each stage in the history's construction, if E has the right sort of beliefs. If all this can be guaranteed at each stage, von Neumann's minimax theorem or its extension in BIBREF40 guarantees that E has a winning strategy for E-defensibility. In future work, we plan to analyze in detail some complicated examples like the ongoing debate about climate change, where there is large-scale scientific agreement but where disagreement exists because of distinct winning conditions. Looking ahead An ME truth game suggests a certain notion of truth: the truth is a winning history in an ME persuasion game with a disinterested Jury. This is a Peircean “best attainable” approximation of the truth, an “internal” notion of truth based on consistency, coherence with the available evidence, and explanatory and predictive power. But we could also investigate a more external view of truth. Such a view would suppose that the Jury has in its possession the “true” history over a set of facts INLINEFORM0, to which the history eventually constructed by E should converge within a certain margin of error in the limit. We think ME games are a promising tool for investigating bias, and in this section we mention some possible applications and open questions that ME games might help us answer. ME truth games allow us to analyze extant strategies for eliminating bias. For instance, given two histories for a given set of facts, it is a common opinion that one finds a less biased history by splitting the difference between them. This is a strategy perhaps distantly inspired by the idea that the truth lies in the golden mean between extremes. But is this really true? ME games should allow us to encode this strategy and find out. Another connection that our approach can exploit is the one between games and reinforcement learning BIBREF44, BIBREF45, BIBREF46.
While reinforcement learning is traditionally understood as a problem involving a single agent and is not powerful enough to capture the dynamics of competing biases of agents with different winning conditions, BIBREF45 makes a direct connection between evolutionary games with replicator dynamics and the stochastic learning theory of BIBREF47, with links to multiagent reinforcement learning. BIBREF44, BIBREF46 provide a foundation for multiagent reinforcement learning in stochastic games. The connection between ME games and stochastic and evolutionary games has not been explored, but some victory conditions in ME games can be an objective that a replicator dynamics converges to, and epistemic ME games already encompass a stochastic component. Thus, our research will be able to draw on relevant results in these areas. A typical assumption we make as scientists is that rationality would lead us to always prefer to have a more complete and more accurate history for our world. But bias isn't so simple, as an analysis of ME games can show. ME games are played for many purposes, and non-truth-seeking biases that lead to histories that are not a best approximation to the truth may be the rational or optimal choice, if the winning condition in the game is other than that defined in an ME truth game. This has real political and social relevance; for example, a plausible hypothesis is that those who argue that climate change is a hoax are building an alternative history, not to get at the truth but for other political purposes. Even a truth interested player can at least initially fail to generate histories that are in the winning condition of an ME truth game. Suppose E, motivated by truth interest, has constructed for facts INLINEFORM0 a history INLINEFORM1 that meets constraints including coherence, consistency, and completeness, and that provides explanatory and predictive power for at least a large subset INLINEFORM2 of INLINEFORM3. E's conceptualization of INLINEFORM4 can still go wrong, and E may fail to have a winning strategy in interesting ways. First, INLINEFORM5 can mischaracterize INLINEFORM6 with high confidence in virtue of evidence only from INLINEFORM7 BIBREF48. Especially if INLINEFORM8 is large and hence INLINEFORM9 is simply very “long”, it is intuitively more difficult even for truth seeking players to come to accept that an alternative history is the correct one. Second, INLINEFORM10 may lack or be incompatible with concepts that would be needed to be aware of facts in INLINEFORM11. BIBREF55, BIBREF23 investigate a special case of this, a case of unawareness. To succeed, E would have to learn the requisite concepts first. All of this has important implications for learning. We can represent learning as ME games of the following kind. It is common to represent making a prediction Y from data X as a zero-sum game between our player E and Nature: E wins if, for data X provided by Nature, she makes a prediction that the Jury judges to be correct. More generally, an iterated learning process is a repeated zero-sum game, in which E makes predictions in virtue of some history, which one might also call a model or a set of hypotheses; if she makes a correct prediction at round n, she reinforces her belief in her current history; if she makes a wrong prediction, she adjusts it. The winning condition may be defined in terms of some function of the scores at each learning round or in terms of some global convergence property.
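The repeated zero-sum learning game just described can be sketched as follows; the particular hypotheses, payoff and reinforcement rule are illustrative assumptions rather than definitions from the text.

```python
# Illustrative sketch: iterated learning as a repeated zero-sum game between
# E and Nature. E's "history" is reduced to a weight over candidate hypotheses;
# the scoring and reinforcement rule are assumptions for the sake of the example.
import random

def iterated_learning_game(data, hypotheses, rounds=100, lr=0.1, seed=0):
    rng = random.Random(seed)
    weights = {h: 1.0 / len(hypotheses) for h in hypotheses}
    score = 0
    for _ in range(rounds):
        x, y = rng.choice(data)               # Nature provides a datum X with label Y
        best = max(weights, key=weights.get)  # E predicts with her current best hypothesis
        correct = best(x) == y                # the Jury judges the prediction
        score += 1 if correct else -1         # zero-sum round payoff
        delta = lr if correct else -lr        # reinforce if right, shift weight away if wrong
        weights[best] = max(1e-6, weights[best] + delta)
        total = sum(weights.values())
        weights = {h: w / total for h, w in weights.items()}
    return score, weights

# Toy usage: learning whether a number is even.
data = [(n, n % 2 == 0) for n in range(20)]
hyps = [lambda n: n % 2 == 0, lambda n: n > 10]
print(iterated_learning_game(data, hyps)[0])
```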
Learning conceived in this way is a variant of a simple ME truth game in which costs are assigned to individual discourse moves as in discounted ME games. In an ME truth game, where E develops a history INLINEFORM0 over a set of facts INLINEFORM1 while A argues for an alternative history INLINEFORM2 over INLINEFORM3 , A can successfully defend history INLINEFORM4 as long as either the true history INLINEFORM5 is (a) not learnable or (b) not uniquely learnable. In case (a), E cannot convince the Jury that INLINEFORM6 is the right history; in case (b) A can justify INLINEFORM7 as an alternative interpretation. Consider the bias of a hardened climate change skeptic: the ME model predicts that simply presenting new facts to the agent will not induce him to change his history, even if to a disinterested Jury his history is clearly not in his winning condition. He may either simply refuse to be convinced because he is not truth interested, or because he thinks his alternative history INLINEFORM8 can explain all of the data in INLINEFORM9 just as well as E's climate science history INLINEFORM10 . Thus, ME games open up an unexplored research area of unlearnable histories for certain agents. Conclusions In this paper, we have put forward the foundations of a formal model of interpretive bias. Our approach differs from philosophical and AI work on dialogue that links dialogue understanding to the recovery of speaker intentions and beliefs BIBREF56 , BIBREF57 . Studies of multimodal interactions in Human Robot Interaction (HRI) have also followed the Gricean tradition BIBREF58 , BIBREF59 , BIBREF60 . BIBREF61 , BIBREF4 , BIBREF62 ), offer many reasons why a Gricean program for dialogue understanding is difficult for dialogues in which there is not a shared task and a strong notion of co-operativity. Our model is not in the business of intention and belief recovery, but rather works from what contents agents explicitly commit to with their actions, linguistic and otherwise, to determine a rational reconstruction of an underlying interpretive bias and what goals a bias would satisfy. In this we also go beyond what current theories of discourse structure like SDRT can accomplish. Our theoretical work also requires an empirical component on exactly how bias is manifested to be complete. This has links to the recent interest in fake news. Modeling interpretive bias can help in detecting fake news by providing relevant types to check in interpretation and by providing an epistemic foundation for fake news detection by exploiting ME truth games where one can draw from various sources to check the credibility of a story. In a future paper, we intend to investigate these connections thoroughly. References Asher, N., Lascarides, A.: Strategic conversation. Semantics and Pragmatics 6(2), http:// dx.doi.org/10.3765/sp.6.2. (2013) Asher, N., Paul, S.: Evaluating conversational success: Weighted message exchange games. In: Hunter, J., Simons, M., Stone, M. (eds.) 20th workshop on the semantics and pragmatics of dialogue (SEMDIAL). New Jersey, USA (July 2016) Asher, N.: Reference to Abstract Objects in Discourse. Kluwer Academic Publishers (1993) Asher, N., Lascarides, A.: Logics of Conversation. Cambridge University Press (2003) Asher, N., Paul, S.: Conversations and incomplete knowledge. In: Proceedings of Semdial Conference. pp. 173–176. Amsterdam (December 2013) Asher, N., Paul, S.: Conversation and games. In: Ghosh, S., Prasad, S. (eds.) 
Logic and Its Applications: 7th Indian Conference, ICLA 2017, Kanpur, India, January 5-7, 2017, Proceedings. vol. 10119, pp. 1–18. Springer, Kanpur, India (January 2017) Asher, N., Paul, S.: Strategic conversation under imperfect information: epistemic Message Exchange games (2017), accepted for publication in Journal of Logic, Language and Information Asher, N., Paul, S., Venant, A.: Message exchange games in strategic conversations. Journal of Philosophical Logic 46.4, 355–404 (2017), http://dx.doi.org/10.1007/s10992-016-9402-1 Auer, P., Cesa-Bianchi, N., Fischer, P.: Finite-time analysis of the multiarmed bandit problem. Machine learning 47(2-3), 235–256 (2002) Aumann, R.J.: Agreeing to disagree. The Annals of Statistics 4(6), 1236–1239 (1976) Banks, J.S., Sundaram, R.K.: Switching costs and the gittins index. Econometrica: Journal of the Econometric Society pp. 687–694 (1994) Baron, J.: Thinking and deciding. Cambridge University Press (2000) Battigalli, P.: Rationalizability in infinite, dynamic games with incomplete information. Research in Economics 57(1), 1–38 (2003) Berger, A.L., Pietra, V.J.D., Pietra, S.A.D.: A maximum entropy approach to natural language processing. Computational linguistics 22(1), 39–71 (1996) Besnard, P., Hunter, A.: Elements of argumentation, vol. 47. MIT press Cambridge (2008) Blackwell, D.: An analog of the minimax theorem for vector payoffs. Pacific Journal of Mathematics 6(1), 1–8 (1956) Börgers, T., Sarin, R.: Learning through reinforcement and replicator dynamics. Journal of Economic Theory 77(1), 1–14 (1997) Burnetas, A.N., Katehakis, M.N.: Optimal adaptive policies for markov decision processes. Mathematics of Operations Research 22(1), 222–255 (1997) Burnett, H.: Sociolinguistic interaction and identity construction: The view from game-theoretic pragmatics. Journal of Sociolinguistics 21(2), 238–271 (2017) Bush, R.R., Mosteller, F.: Stochastic models for learning. John Wiley & Sons, Inc. (1955) Cadilhac, A., Asher, N., Benamara, F., Lascarides, A.: Commitments to preferences in dialogue. In: Proceedings of the 12th Annual SIGDIAL Meeting on Discourse and Dialogue. pp. 204–215 (2011) Cadilhac, A., Asher, N., Benamara, F., Lascarides, A.: Grounding strategic conversation: Using negotiation dialogues to predict trades in a win-lose game. In: Proceedings of EMNLP. pp. 357–368. Seattle (2013) Cadilhac, A., Asher, N., Benamara, F., Popescu, V., Seck, M.: Preference extraction form negotiation dialogues. In: Biennial European Conference on Artificial Intelligence (ECAI) (2012) Chambers, N., Allen, J., Galescu, L., Jung, H.: A dialogue-based approach to multi-robot team control. In: The 3rd International Multi-Robot Systems Workshop. Washington, DC (2005) Dung, P.M.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial intelligence 77(2), 321–357 (1995) Erev, I., Wallsten, T.S., Budescu, D.V.: Simultaneous over-and underconfidence: The role of error in judgment processes. Psychological review 101(3), 519 (1994) Foster, M.E., Petrick, R.P.A.: Planning for social interaction with sensor uncertainty. In: The ICAPS 2014 Scheduling and Planning Applications Workshop (SPARK). pp. 19–20. Portsmouth, New Hampshire, USA (Jun 2014) Garivier, A., Cappé, O.: The kl-ucb algorithm for bounded stochastic bandits and beyond. In: COLT. pp. 359–376 (2011) Glazer, J., Rubinstein, A.: On optimal rules of persuasion. Econometrica 72(6), 119–123 (2004) Grice, H.P.: Utterer's meaning and intentions. 
Philosophical Review 68(2), 147–177 (1969) Grice, H.P.: Logic and conversation. In: Cole, P., Morgan, J.L. (eds.) Syntax and Semantics Volume 3: Speech Acts, pp. 41–58. Academic Press (1975) Grosz, B., Sidner, C.: Attention, intentions and the structure of discourse. Computational Linguistics 12, 175–204 (1986) Harsanyi, J.C.: Games with incomplete information played by “bayesian” players, parts i-iii. Management science 14, 159–182 (1967) Henderson, R., McCready, E.: Dogwhistles and the at-issue/non-at-issue distinction. Published on Semantics Archive (2017) Hilbert, M.: Toward a synthesis of cognitive biases: how noisy information processing can bias human decision making. Psychological bulletin 138(2), 211 (2012) Hintzman, D.L.: Minerva 2: A simulation model of human memory. Behavior Research Methods, Instruments, & Computers 16(2), 96–101 (1984) Hintzman, D.L.: Judgments of frequency and recognition memory in a multiple-trace memory model. Psychological review 95(4), 528 (1988) Hu, J., Wellman, M.P.: Multiagent reinforcement learning: theoretical framework and an algorithm. In: ICML. vol. 98, pp. 242–250 (1998) Hunter, J., Asher, N., Lascarides, A.: Situated conversation (2017), submitted to Semantics and Pragmatics Khoo, J.: Code words in political discourse. Philosophical Topics 45(2), 33–64 (2017) Konek, J.: Probabilistic knowledge and cognitive ability. Philosophical Review 125(4), 509–587 (2016) Lai, T.L., Robbins, H.: Asymptotically efficient adaptive allocation rules. Advances in applied mathematics 6(1), 4–22 (1985) Lakkaraju, H., Kamar, E., Caruana, R., Horvitz, E.: Discovering blind spots of predictive models: Representations and policies for guided exploration. arXiv preprint arXiv:1610.09064 (2016) Lee, M., Solomon, N.: Unreliable Sources: A Guide to Detecting Bias in News Media. Lyle Smart, New York (1990) Lepore, E., Stone, M.: Imagination and Convention: Distinguishing Grammar and Inference in Language. Oxford University Press (2015) Littman, M.L.: Markov games as a framework for multi-agent reinforcement learning. In: Proceedings of the eleventh international conference on machine learning. vol. 157, pp. 157–163 (1994) Morey, M., Muller, P., Asher, N.: A dependency perspective on rst discourse parsing and evaluation (2017), submitted to Computational Linguistics Moss, S.: Epistemology formalized. Philosophical Review 122(1), 1–43 (2013) Perret, J., Afantenos, S., Asher, N., Morey, M.: Integer linear programming for discourse parsing. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp. 99–109. Association for Computational Linguistics, San Diego, California (June 2016), http://www.aclweb.org/anthology/N16-1013 Perzanowski, D., Schultz, A., Adams, W., Marsh, E., Bugajska, M.: Building a multimodal human-robot interface. Intelligent Systems 16(1), 16–21 (2001) Potts, C.: The logic of conventional implicatures. Oxford University Press Oxford (2005) Recanati, F.: Literal Meaning. Cambridge University Press (2004) Sperber, D., Wilson, D.: Relevance. Blackwells (1986) Stanley, J.: How propaganda works. Princeton University Press (2015) Tversky, A., Kahneman, D.: Availability: A heuristic for judging frequency and probability. Cognitive psychology 5(2), 207–232 (1973) Tversky, A., Kahneman, D.: Judgment under uncertainty: Heuristics and biases. In: Utility, probability, and human decision making, pp. 141–162. 
Springer (1975) Tversky, A., Kahneman, D.: Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological review 90(4), 293 (1983) Tversky, A., Kahneman, D.: The framing of decisions and the psychology of choice. In: Environmental Impact Assessment, Technology Assessment, and Risk Analysis, pp. 107–129. Springer (1985) Venant, A.: Structures, Semantics and Games in Strategic Conversations. Ph.D. thesis, Université Paul Sabatier, Toulouse (2016) Venant, A., Asher, N., Muller, P., Denis, P., Afantenos, S.: Expressivity and comparison of models of discourse structure. In: Proceedings of the SIGDIAL 2013 Conference. pp. 2–11. Association for Computational Linguistics, Metz, France (August 2013), http://www.aclweb.org/anthology/W13-4002 Venant, A., Degremont, C., Asher, N.: Semantic similarity. In: LENLS 10. Tokyo, Japan (2013) Walton, D.N.: Logical dialogue-games. University Press of America (1984) Whittle, P.: Multi-armed bandits and the gittins index. Journal of the Royal Statistical Society. Series B (Methodological) pp. 143–149 (1980) Wilkinson, N., Klaes, M.: An introduction to behavioral economics. Palgrave Macmillan (2012)
in an ME game there are typically several interpretive biases at work: each player has her own bias, as does the Jury
4d30c2223939b31216f2e90ef33fe0db97e962ac
4d30c2223939b31216f2e90ef33fe0db97e962ac_0
Q: How many words are coded in the dictionary? Text: Introduction Swiss German refers to any of the German varieties that are spoken in about two thirds of Switzerland BIBREF0. Besides at least one of those dialectal varieties, Swiss German people also master standard (or 'High') German, which is taught in school as the official language of communication. Swiss German varies strongly. Many differences exist in the dialectal continuum of the German speaking part of Switzerland. Besides pronunciation, it also varies a lot in writing. Standard German used to be the exclusive language for writing in Switzerland. Writing in Swiss German has only come up rather recently (notably in text messaging). Because of this, there are no orthographic conventions for Swiss German varieties. Even people speaking the same dialect can, and often do, write phonetically identical words differently. In this paper, we present a dictionary of written standard German words paired with their pronunciations in Swiss German. Additionally, Swiss German spontaneous writings, i.e. writings as they may be used in text messages by native speakers, are paired with Swiss German pronunciations. The primary motivation for building this dictionary is rendering Swiss German accessible for technologies such as Automatic Speech Recognition (ASR). This is the first publicly described Swiss German dictionary shared for research purposes. Furthermore, this is the first dictionary that combines pronunciations of Swiss German with spontaneous writings. Related Work This dictionary complements previously developed resources for Swiss German, which share some common information. Spontaneous noisy writing has already been recorded in text corpora BIBREF1, BIBREF2, BIBREF3, some of which are also normalized. These resources contain relatively large lexicons of words used in context, but they do not contain any information about pronunciation. The features of speech are represented in other resources, such as BIBREF4, BIBREF5, BIBREF6, which, on the other hand, contain relatively small lexicons (small sets of words known to vary across dialects). The ArchiMob corpus does contain a large lexicon of speech and writing (Dieth transcription), but the spoken part is available in audio sources only, without phonetic transcription. This dictionary is the first resource to combine all the relevant information together. A relatively large lexicon has been constructed in which phonetic transcriptions (in the SAMPA alphabet) are mapped to various spontaneous writings, controlling for the regional distribution. Some of the representations in this dictionary are produced manually, while others are added using automatic processing. Automatic word-level conversion between various writings in Swiss German has been addressed in several projects, mostly for the purpose of writing normalization BIBREF7, BIBREF2, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF0, BIBREF12. The task of normalization consists of mapping multiple variants of a single lexical item into a single writing, usually identical to standard German (an example would be the Swiss German words aarbet and arbäit, which both map to standard German arbeit ('work')). Early data sets were processed manually (SMS). This was followed by an implementation of character-level statistical machine translation models BIBREF13, BIBREF14 and, more recently, by neural sequence-to-sequence technology.
The solution by Lusetti et al. (2018) employs soft-attention encoder-decoder recurrent networks enhanced with synchronous multilevel decoding. Ruzsics et al. (2019) develop these models further to integrate linguistic (PoS) features. A slightly different task, of translating between standard German and Swiss dialects, was first addressed with finite state technology BIBREF15. More recently, Honnet et al. (2017) test convolutional neural networks on several data sets. We continue the work on using neural networks for modeling word-level conversion. Unlike previous work, which dealt with written forms only, we train models for mapping phonetic representations to various possible writings. The proposed solution relies on the latest framework for sequence-to-sequence tasks — transformer networks BIBREF16. Dictionary Content and access We pair 11'248 standard German written words with their phonetic representations in six different Swiss dialects: Zürich, St. Gallen, Basel, Bern, Visp, and Stans (Figure FIGREF1). The phonetic words were written in a modified version of the Speech Assessment Methods Phonetic Alphabet (SAMPA). The Swiss German phonetic words are also paired with Swiss German writings in the Latin alphabet. (From here onwards, a phonetic representation of a Swiss German word will be called a SAMPA and a written Swiss German word will be called a GSW.) This dictionary comes in two versions, as we used two differently sized sets of SAMPA characters. Our extended set, including 137 phones, allows for a detailed and adequate representation of the diverse pronunciation in Switzerland. The smaller set of 59 phones is easier to compute. The phone reduction was mainly done by splitting up combined SAMPA characters such as diphthongs. UI s t r { tt @ and U I s t r { t t @, for example, are both representations of the Stans pronunciation of the standard German word austreten ('step out'). The latter representation belongs to the dictionary based on the smaller phone set. Table TABREF2 shows an example of five dictionary entries based on the bigger phone set. For a subset of 9000 of the 11'248 standard German words, we have manually annotated GSWs for Visp (9000) and for Zurich (2 x 9000, done by two different annotators). For a smaller subset of 600 of those standard German words, we have manually annotated GSWs for the four other dialects of St. Gallen, Basel, Bern, and Stans. The remaining writing variants are generated using automatic methods described below. The dictionary is freely available for research purposes under the creative commons share-alike non-commercial licence via this website http://tiny.uzh.ch/11X. Construction of the dictionary In the following we present the steps of construction of our dictionary, also detailing how we chose the six dialects to represent Swiss German and how, starting with a list of standard German words, we retrieved the mapping SAMPAs and GSWs. Construction of the dictionary ::: Discretising continuous variation To be able to represent Swiss German by only a few dialects which differ considerably, it is necessary to discretize linguistic varieties, because, as mentioned earlier, regional language variation in Switzerland is continuous. For this identification of different varieties we used a dialectometric analysis BIBREF17. This analysis is based on lexical, phonological, and morphological data of the German speaking areas of Switzerland BIBREF4.
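As an illustration of the phone reduction described above, here is a minimal sketch: combined SAMPA symbols are split into sequences of simpler ones. The mapping fragment below is hypothetical and only mirrors the austreten example; it is not the dictionary's actual mapping.

```python
# Illustrative sketch of the reduction from the extended 137-phone set to the
# smaller 59-phone set: combined SAMPA symbols (e.g. diphthongs or geminates)
# are split into sequences of simpler symbols. SPLIT_MAP is a made-up fragment.
SPLIT_MAP = {
    "UI": ["U", "I"],   # diphthong  -> two monophthongs
    "tt": ["t", "t"],   # geminate   -> two singletons
}

def reduce_phones(sampa_tokens):
    reduced = []
    for tok in sampa_tokens:
        reduced.extend(SPLIT_MAP.get(tok, [tok]))
    return reduced

# 'austreten' (Stans), extended vs. reduced representation:
extended = ["UI", "s", "t", "r", "{", "tt", "@"]
print(" ".join(reduce_phones(extended)))   # U I s t r { t t @
```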
As we worked with word lists and not sentences, we discounted syntactical influences on area boundaries that are also described in that analysis. We represent six differentiated linguistic varieties. We considered working with ten linguistic varieties, because this number of areas was the 'best cut' in the dialectometric analysis BIBREF17. Yet, due to time constraints and considerable overlap between some of the linguistic varieties, we reduced this number to six. We also made some adjustments to the chosen varieties in order to correspond better to the perception of speakers and in favor of more densely populated areas. One way to represent the six individualized linguistic varieties would have been to annotate the dialectal centers, i.e. those places that have the average values of dialectal properties within the area where the variety is spoken. However, we chose to represent the linguistic varieties by the most convenient urban places. Those were the dialects of the cities of Zurich, St. Gallen, Basel, Bern, Visp, and Stans. Construction of the dictionary ::: Manual annotation ::: SAMPAs For each standard German word in our dictionary we manually annotated its phonetic representation in the six chosen dialects. The information about the pronunciation of Swiss German words is partially available also from other sources, but not fully accessible BIBREF4, BIBREF7. To help with pronunciation, our annotators first used their knowledge as native speakers (for Zurich and Visp). Secondly, they consulted dialect specific grammars BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22 as well as dialect specific lexica BIBREF23, BIBREF24, BIBREF25. They also considered existing Swiss German dictionaries BIBREF7, BIBREF4, listened to recordings BIBREF0 and conferred with friends and acquaintances originating from the respective locations. Construction of the dictionary ::: Manual annotation ::: GSWs 9000 GSWs for Visp German and 2 x 9000 GSWs for Zurich German were annotated by native speakers of the respective dialect. Our annotators created the GSWs while looking at standard German words and without looking at the corresponding SAMPAs for Visp and Zurich. Through this independence from SAMPAs we are able to avoid biases concerning the phonetics as well as the meaning of the word in generating GSWs. At a later stage of our work, we added 600 GSWs each for the four dialects of St. Gallen, Basel, Bern, and Stans in order to improve our phoneme-to-grapheme (p2g) model (see next section). For the manual annotation of these dialects we had no native speakers. Therefore, when writing the GSWs, our annotators relied on the corresponding SAMPAs of these dialects, which they had created beforehand. Construction of the dictionary ::: Automatic annotation In order to account for the mentioned variety of everyday Swiss German writing, we aimed for more than one GSW per SAMPA. The heterogeneous writing style makes the SAMPA→GSW mapping a one-to-many relation instead of the regular one-to-one relation that speakers of standard languages are accustomed to. To save time in generating the many GSWs, we opted for an automatic process. We first tried to automate the generation of GSWs with a rule-based program. Via SAMPAs together with phoneme-to-grapheme mappings we tried to obtain all possible GSWs. Yet, this yielded mostly impossible writings and also failed to produce all the writings we had already created manually. We then set up a phoneme-to-grapheme (p2g) model to generate the most likely spellings.
Construction of the dictionary ::: Automatic annotation ::: Transformer-based Phoneme to Grapheme (p2g) The process of generating written forms from a given SAMPA can be viewed as a sequence-to-sequence problem, where the input is a sequence of phonemes and the output is a sequence of graphemes. We decided to use a Transformer-based model for the phoneme-to-grapheme (p2g) task. The reason for this is twofold. First, the Transformer has shown great success in seq2seq tasks and it has outperformed LSTM and CNN-based models. Second, it is computationally more efficient than LSTM and CNN networks. The Transformer consists of an encoder and a decoder part. The encoder generates a contextual representation for each input SAMPA that is then fed into the decoder together with the previously decoded grapheme. They both have N identical layers. In the encoder, each layer has a multi-head self-attention layer and a position-wise fully-connected feed-forward layer. In the decoder, in addition to these two layers, there is also a multi-headed attention layer that uses the output of the encoder BIBREF16. We are using a PyTorch implementation of the Transformer. As a result of the small size of the dataset, we are using a smaller model with only 2 layers and 2 heads. The dimension of the key (d_k) and value (d_v) is 32, the dimension of the model (d_model) and the word vectors (d_word_vec) is 50, and the hidden inner dimension (d_inner_hid) is 400. The model is trained for 55 epochs with a batch size of 64 and a dropout of 0.2. For decoding the output of the model, we are using beam search with beam size 10. We experimented with different beam sizes, but we saw that they do not have a significant influence on the result. The training set consists of 24'000 phoneme-to-grapheme pairs, which are the result of transcribing 8'000 High German words into two Zurich forms and one Visp form. Those transcriptions were made independently by three native speakers. Due to the scarcity of data, we decided not to distinguish between dialects. Hence, a single model receives a sequence of SAMPA symbols and learns to generate a matching sequence of characters. Construction of the dictionary ::: Automatic annotation ::: Test set and evaluation Our team of Swiss German annotators evaluated a test set of 1000 words. We aimed to exclude only very far-off forms (tagged '0'), such that they are very likely to be seen as false by Swiss German speakers. The accepted writings (tagged '1') might include some that seem off to the Swiss German reader. In order to rate the output consistently, the criteria shown in table TABREF4 were followed. A GSW was tagged '0' if there was at least one letter added, missing, or changed without comprehensible phonetic reason. GSWs were also tagged '0' if there were at least two mistakes that our annotators saw as minor. 'Minor mistakes' are substitutions of related sounds or spellings, added or omitted geminates, and changes in vowel length. For each of the 1000 words in the test set, five GSW predictions in all six dialects were given to our annotators. For Visp and Zurich, they tagged 1000x5 GSW predictions per dialect with 1 or 0. For St. Gallen, Basel, Bern, and Stans, they evaluated 200x5 per dialect. In Table TABREF13 we show the results of this evaluation. We count the number of correct GSWs (labeled as '1') among the top 5 candidates generated by the p2g model, where the first candidate is the most relevant, then the second one, and so on.
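For readers who want a concrete starting point, the following is a rough sketch of a p2g model with hyperparameters close to those reported above. It uses the generic torch.nn.Transformer rather than the implementation used in the paper, so d_k and d_v are not set explicitly (nn.Transformer derives head dimensions from d_model and nhead), and the vocabulary sizes are invented.

```python
# Rough sketch only: a p2g seq2seq model with settings close to those reported
# above (2 layers, 2 heads, d_model 50, inner dim 400, dropout 0.2). Not the
# paper's own implementation; vocabulary sizes and padding index are assumptions.
import torch
import torch.nn as nn

class P2G(nn.Module):
    def __init__(self, n_phones, n_chars, d_model=50, pad_idx=0):
        super().__init__()
        self.src_emb = nn.Embedding(n_phones, d_model, padding_idx=pad_idx)
        self.tgt_emb = nn.Embedding(n_chars, d_model, padding_idx=pad_idx)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=2,
            num_encoder_layers=2, num_decoder_layers=2,
            dim_feedforward=400, dropout=0.2, batch_first=True)
        self.out = nn.Linear(d_model, n_chars)

    def forward(self, sampa_ids, char_ids):
        # Causal mask so each grapheme position only attends to earlier ones.
        tgt_mask = self.transformer.generate_square_subsequent_mask(char_ids.size(1))
        h = self.transformer(self.src_emb(sampa_ids), self.tgt_emb(char_ids),
                             tgt_mask=tgt_mask)
        return self.out(h)

# Toy forward pass with invented vocabulary sizes (59 phones + specials, ~40 chars).
model = P2G(n_phones=64, n_chars=40)
logits = model(torch.randint(1, 64, (8, 12)), torch.randint(1, 40, (8, 15)))
print(logits.shape)  # torch.Size([8, 15, 40])
```

Training for 55 epochs with batch size 64 and decoding with beam size 10, as reported above, would sit on top of such a module; the beam search itself is not shown here.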
The evaluation was done at a stage where our model was trained only on GSWs for Zurich and Visp (see sec. SECREF8). The numbers of correct predictions are lower for the dialects of St. Gallen, Basel, Bern, and Stans, mainly because there were some special SAMPA characters we used for those dialects and the model did not have the corresponding Latin character strings. After the evaluation, we added 600 GSWs each for the four dialects of St. Gallen, Basel, Bern, and Stans to improve the model. Construction of the dictionary ::: Automatic annotation ::: Grapheme to Phoneme (g2p) and its benefits for ASR Automatic speech recognition (ASR) systems are the main use case for our dictionary. ASR systems convert spoken language into text. Today, they are widely used in different domains, from customer and help centers to voice-controlled assistants and devices. The main resources needed for an ASR system are audio, transcriptions and a phonetic dictionary. The quality of the ASR system is highly dependent on the quality of the dictionary. With our resource we provide such a phonetic dictionary. To increase the benefits of our data for ASR systems, we also trained a grapheme-to-phoneme (g2p) model: out-of-vocabulary words can be a problem for ASR systems. For those out-of-vocabulary words we need a model that can generate pronunciations from a written form, in real time. This is why we train a grapheme-to-phoneme (g2p) model that generates a sequence of phonemes for a given word. We train the g2p model using our dictionary and compare its performance with a widely used joint-sequence g2p model, Sequitur BIBREF26. For the g2p model we are using the same architecture as for the p2g model. The only difference is the input and output vocabulary. Sequitur and our model use the dictionary with the same train (19'898 samples), test (2'412 samples), and validation (2'212 samples) split. Additionally, we also test their performance only on the items from the Zurich and Visp dialects, because most of the samples are from these two dialects. In Table TABREF15 we show the results of the comparison of the two models. We compute the edit distance between the predicted and the true pronunciation and report the number of exact matches. In the first column we have the result using the whole test set with all the dialects, and in the second and third columns we show the number of exact matches only on the samples from the test set that are from the Zurich and Visp dialects. Here we can clearly see that our model performs better than the Sequitur model. The reason why we have fewer matches in the Visp dialect compared to Zurich is that most of our data is from the Zurich dialect. Discussion One of our objectives was to map phonetic words to their writings. There are some mismatches between SAMPAs and GSWs in our dictionary, especially when the GSWs were done manually and independently of the SAMPAs. Those mismatches occur where there is no straightforward correspondence between a standard German and a Swiss German word. Two kinds of such missing correspondence can be distinguished. First, there are ambiguous standard German words. And that is necessarily so, as our dictionary is based on a list of standard German words without sentential or any other context. An example of a (morphologically) ambiguous word is standard German liebe. As we did not differentiate upper- and lower-case, it can mean either (a) 'I love' or (b) 'the love'. As evident from table 1, liebe (a) and liebi (b) were mixed in our dictionary.
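The g2p evaluation described above (edit distance between predicted and reference pronunciations plus exact-match counts) can be sketched as follows; the example pairs are invented.

```python
# Sketch of the evaluation protocol: Levenshtein distance between predicted and
# reference SAMPA sequences, plus the number of exact matches.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def evaluate(predictions, references):
    exact = sum(p == r for p, r in zip(predictions, references))
    avg_dist = sum(edit_distance(p, r) for p, r in zip(predictions, references)) / len(references)
    return exact, avg_dist

preds = [["b", "i", "n"], ["k", "a", "N", "@"]]
refs  = [["b", "i", "n"], ["g", "a", "N", "@"]]
print(evaluate(preds, refs))  # (1, 0.5)
```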
The same is the case for standard German frage, which means either (a) 'I ask' or (b) 'the question'. Swiss German fröge, froge, fregu (a) and fraag, froog (b) were mixed. (For both examples, see table 1.) The second case of missing straightforward correspondence is the distance between standard German and Swiss German. For one, lexical preferences in Swiss German differ from those in standard German. To express that food is 'tasty' in standard German, the word lecker is used. This is also possible in Swiss German, yet the word fein is much more common. Another example is that the standard German word rasch ('swiftly') is uncommon in Swiss German – synonyms of the word are preferred. Both of these cases show up in the variety of options our annotators chose for those words (see table 1). Also, the same standard German word may have several dialectal versions in Swiss German. For example, there are a short and a long version of the standard German word grossvater, namely grospi and grossvatter. A second aim was to represent the way Swiss German speaking people write spontaneously. However, as our annotators wrote the spontaneous GSWs mostly while looking at standard German words, our GSWs might be biased towards standard German orthography. Yet, there is potentially also a standard German influence in the way Swiss German is actually written. We partly revised our dictionary in order to adapt to everyday writing: we introduced explicit boundary marking into our SAMPAs. We inserted an _ in the SAMPA where there would usually be a space in writing. An example where people would conventionally add a space are the forms corresponding to standard German preterite forms, for example 'ging'. The corresponding Swiss German past participle constructions – here isch gange – would (most often) be written separately. So entries like b i n k a N @ in table 1 were changed to b i n _ k a N @. Conclusion In this work we introduced the first Swiss German dictionary. Through its dual nature – both spontaneous written forms in multiple dialects and accompanying phonetic representations – we believe it will become a valuable resource for multiple tasks, including automated speech recognition (ASR). This resource was created using a combination of manual and automated work, in a collaboration between linguists and data scientists that leverages the best of two worlds – domain knowledge and a data-driven focus on likely character combinations. Through the combination of complementary skills we overcame the difficulty posed by the considerable variation in written Swiss German and generated a resource that adds value to downstream tasks. We show that the SAMPA-to-written-Swiss-German conversion is useful in speech recognition and can replace the previous state of the art. Moreover, the written-form-to-SAMPA conversion is promising and has applications in areas like text-to-speech. We make the dictionary freely available for researchers to expand and use. Acknowledgements We would like to thank our collaborators Alina Mächler and Raphael Tandler for their valuable contribution.
11'248
7b47aa6ba247874eaa8ab74d7cb6205251c01eb5
7b47aa6ba247874eaa8ab74d7cb6205251c01eb5_0
Q: Is the model evaluated on the graphemes-to-phonemes task? Text: Introduction Swiss German refers to any of the German varieties that are spoken in about two thirds of Switzerland BIBREF0. Besides at least one of those dialectal varieties, Swiss German people also master standard (or 'High') German which is taught in school as the official language of communication. Swiss German is varies strongly. Many differences exist in the dialectal continuum of the German speaking part of Switzerland. Besides pronunciation, it also varies a lot in writing. Standard German used to be the exclusive language for writing in Switzerland. Writing in Swiss German has only come up rather recently (notably in text messaging). Because of this, there are no orthographic conventions for Swiss German varieties. Even people speaking the same dialect can, and often do, write phonetically identical words differently. In this paper, we present a dictionary of written standard German words paired with their pronunciation in Swiss German words. Additionally Swiss German spontaneous writings, i.e. writings as they may be used in text messages by native speakers, are paired with Swiss German pronunciations. The primary motivation for building this dictionary is rendering Swiss German accessible for technologies such as Automatic Speech Recognition (ASR). This is the first publicly described Swiss German dictionary shared for research purposes. Furthermore, this is the first dictionary that combines pronunciations of Swiss German with spontaneous writings. Related Work This dictionary complements previously developed resources for Swiss German, which share some common information. Spontaneous noisy writing has already been recorded in text corpora BIBREF1, BIBREF2, BIBREF3, some of which are also normalized. These resources contain relatively large lexicons of words used in context, but they do not contain any information about pronunciation. The features of speech are represented in other resources, such as BIBREF4, BIBREF5, BIBREF6, which, on the other hand, contain relatively small lexicons (small set of words known to vary across dialects). The ArchiMob corpus does contain a large lexicon of speech and writing (Dieth transcription), but the spoken part is available in audio sources only, without phonetic transcription. This dictionary is the first resource to combine all the relevant information together. A relatively large lexicon has been constructed in which phonetic transcriptions (in the SAMPA alphabet) are mapped to various spontaneous writings controlling for the regional distribution. Some of the representations in this dictionary are produced manually, while others are added using automatic processing. Automatic word-level conversion between various writings in Swiss German has been addressed in several projects, mostly for the purpose of writing normalization BIBREF7, BIBREF2, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF0, BIBREF12. The task of normalization consist of mapping multiple variants of a single lexical item into a single writing usually identical to standard German (an example would be the Swiss German words aarbet and arbäit which both map to standard German arbeit ('work')). Early data sets were processed manually (SMS). This was followed by an implementation of character-level statistical machine translation models BIBREF13, BIBREF14 and, more recently, with neural sequence-to-sequence technology. 
The solution by lusettietal18 employs soft-attention encoder-decoder recurrent networks enhanced with synchronous multilevel decoding. ruzsicsetal19 develop these models further to integrate linguistic (PoS) features. A slightly different task of translating between standard German and Swiss dialects was first addressed with finite state technology BIBREF15. More recently, honnet-etal17 test convolutional neural networks on several data sets. We continue the work on using neural networks for modeling word-level conversion. Unlike previous work, which dealt with written forms only, we train models for mapping phonetic representations to various possible writings. The proposed solution relies on the latest framework for sequence-to-sequence tasks: transformer networks BIBREF16. Dictionary Content and access We pair 11'248 standard German written words with their phonetic representations in six different Swiss dialects: Zürich, St. Gallen, Basel, Bern, Visp, and Stans (Figure FIGREF1). The phonetic words were written in a modified version of the Speech Assessment Methods Phonetic Alphabet (SAMPA). The Swiss German phonetic words are also paired with Swiss German writings in the Latin alphabet. (From here onwards, a phonetic representation of a Swiss German word will be called a SAMPA and a written Swiss German word will be called a GSW.) This dictionary comes in two versions, as we used two differently sized sets of SAMPA characters. Our extended set, including 137 phones, allows for a detailed and adequate representation of the diverse pronunciation in Switzerland. The smaller set of 59 phones is easier to compute with. The phone reduction was mainly done by splitting up combined SAMPA characters such as diphthongs. UI s t r { tt @ and U I s t r { t t @, for example, are both representations of the Stans pronunciation of the standard German word austreten ('step out'). The latter representation belongs to the dictionary based on the smaller phoneset. Table TABREF2 shows an example of five dictionary entries based on the bigger phoneset. For a subset of 9000 of the 11'248 standard German words, we have manually annotated GSWs for Visp (9000) and for Zurich (2 x 9000, done by two different annotators). For a further subset of 600 of those standard German words we have manually annotated GSWs for the four other dialects of St. Gallen, Basel, Bern, and Stans. The remaining writing variants are generated using automatic methods described below. The dictionary is freely available for research purposes under the creative commons share-alike non-commercial licence via this website http://tiny.uzh.ch/11X. Construction of the dictionary In the following we present the steps of the construction of our dictionary, also detailing how we chose the six dialects to represent Swiss German and how, starting with a list of standard German words, we retrieved the mapping SAMPAs and GSWs. Construction of the dictionary ::: Discretising continuous variation To be able to represent Swiss German by only a few dialects which differ considerably, it is necessary to discretize linguistic varieties, because, as mentioned earlier, regional language variation in Switzerland is continuous. For this identification of different varieties we used a dialectometric analysis BIBREF17. This analysis is based on lexical, phonological, and morphological data of the German-speaking areas of Switzerland BIBREF4. 
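To make the phone-set reduction concrete, the following is a minimal sketch in Python (assumed for illustration, not the authors' code): combined SAMPA symbols such as diphthongs or geminates are split into their component phones. The SPLITS table below only covers the two symbols from the austreten example; the real mapping would cover all combined symbols of the 137-phone set.

# Minimal sketch (assumed) of the phone-set reduction: split combined SAMPA symbols.
SPLITS = {
    "UI": ["U", "I"],   # diphthong
    "tt": ["t", "t"],   # geminate
}

def reduce_phoneset(sampa_tokens):
    reduced = []
    for phone in sampa_tokens:
        reduced.extend(SPLITS.get(phone, [phone]))
    return reduced

print(" ".join(reduce_phoneset("UI s t r { tt @".split())))
# -> U I s t r { t t @

Returning to the dialectometric analysis used to select the dialects: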
As we worked with word-lists and not sentences, we discounted syntactical influences on area boundaries that are also described in that analysis. We represent six differentiated linguistic varieties. We considered working with ten linguistic varieties because this number of areas was the 'best cut' in the dialectometric analysis BIBREF17. Yet, due to time constraints and considerable overlap between some of the linguistic varieties, we reduced this number to six. We also made some adjustments to the chosen varieties in order to correspond better to the perception of speakers and in favor of more densely populated areas. One way to represent the six individualized linguistic varieties would have been to annotate the dialectal centers, i.e. those places that have the average values of dialectal properties within the area where the variety is spoken. However, we chose to represent the linguistic varieties by the most convenient urban places. Those were the dialects of the cities of Zurich, St. Gallen, Basel, Bern, Visp, and Stans. Construction of the dictionary ::: Manual annotation ::: SAMPAs For each standard German word in our dictionary we manually annotated its phonetic representation in the six chosen dialects. The information about the pronunciation of Swiss German words is also partially available from other sources but not fully accessible BIBREF4 BIBREF7. To help with pronunciation, our annotators first used their knowledge as native speakers (for Zurich and Visp). Secondly, they consulted dialect-specific grammars BIBREF18 BIBREF19 BIBREF20 BIBREF21 BIBREF22 as well as dialect-specific lexica BIBREF23 BIBREF24 BIBREF25. They also considered existing Swiss German dictionaries BIBREF7 BIBREF4, listened to recordings BIBREF0 and conferred with friends and acquaintances originating from the respective locations. Construction of the dictionary ::: Manual annotation ::: GSWs 9000 GSWs for Visp German and 2 x 9000 GSWs for Zurich German were annotated by native speakers of the respective dialect. Our annotators created the GSWs while looking at standard German words and without looking at the corresponding SAMPAs for Visp and Zurich. Through this independence from the SAMPAs we are able to avoid biases concerning the phonetics as well as the meaning of the word when generating GSWs. At a later stage of our work, we added 600 GSWs each for the four dialects of St. Gallen, Basel, Bern, and Stans in order to improve our phoneme-to-grapheme (p2g) model (see next section). For the manual annotation of these dialects we had no native speakers. Therefore, when writing the GSWs, our annotators relied on the corresponding SAMPAs of these dialects, which they had created before. Construction of the dictionary ::: Automatic annotation In order to account for the mentioned variety of everyday Swiss German writing, we aimed for more than one GSW per SAMPA. The heterogeneous writing style makes the SAMPA → GSW correspondence a one-to-many relation instead of the regular one-to-one relation that speakers of standard languages are accustomed to. To save time in generating the many GSWs, we opted for an automatic process. We first tried to automate the generation of GSWs with a rule-based program. Via the SAMPAs together with phoneme-to-grapheme mappings, we tried to obtain all possible GSWs. Yet, this yielded mostly impossible writings and also failed to produce all the writings we had already created manually. We then set up a phoneme-to-grapheme (p2g) model to generate the most likely spellings. 
Construction of the dictionary ::: Automatic annotation ::: Transformer-based Phoneme to Grapheme (p2g) The process of generating written forms from a given SAMPA can be viewed as a sequence-to-sequence problem, where the input is a sequence of phonemes and the output is a sequence of graphemes. We decided to use a Transformer-based model for the phoneme-to-grapheme (p2g) task. The reason for this is twofold. First, the Transformer has shown great success in seq2seq tasks and has outperformed LSTM and CNN-based models. Second, it is computationally more efficient than LSTM and CNN networks. The Transformer consists of an encoder and a decoder part. The encoder generates a contextual representation for each input SAMPA symbol that is then fed into the decoder together with the previously decoded grapheme. Both parts have N identical layers. In the encoder, each layer has a multi-head self-attention layer and a position-wise fully-connected feed-forward layer. In the decoder, in addition to these two layers, there is a further multi-head attention layer that uses the output of the encoder BIBREF16. We use a PyTorch implementation of the Transformer. As a result of the small size of the dataset, we use a smaller model with only 2 layers and 2 heads. The dimension of the key (d_k) and value (d_v) is 32, the dimension of the model (d_model) and the word vectors (d_word_vec) is 50, and the hidden inner dimension (d_inner_hid) is 400. The model is trained for 55 epochs with a batch size of 64 and a dropout of 0.2. For decoding the output of the model, we use beam search with beam size 10. We experimented with different beam sizes, but saw that the beam size does not have a significant influence on the results. The training set is made of 24'000 phoneme-to-grapheme pairs, which are the result of transcribing 8'000 High German words into two Zurich forms and one Visp form. Those transcriptions were made independently by three native speakers. Due to the scarcity of data, we decided not to distinguish between dialects. Hence, a single model receives a sequence of SAMPA symbols and learns to generate a matching sequence of characters. Construction of the dictionary ::: Automatic annotation ::: Test set and evaluation Our team of Swiss German annotators evaluated a test set of 1000 words. We aimed to exclude only very far-off forms (tagged '0'), such that they would very probably be seen as false by Swiss German speakers. The accepted writings (tagged '1') might include some that seem off to the Swiss German reader. In order to rate the output consistently, the criteria shown in table TABREF4 were followed. A GSW was tagged '0' if there was at least one letter added, missing, or changed without comprehensible phonetic reason. GSWs were also tagged '0' if there were at least two mistakes that our annotators saw as minor. 'Minor mistakes' are substitutions of related sounds or spellings, added or omitted geminates, and changes in vowel length. For each of the 1000 words in the test set, five GSW predictions in all six dialects were given to our annotators. For Visp and Zurich they tagged 1000x5 GSW predictions each with 1 or 0. For St. Gallen, Basel, Bern, and Stans, they evaluated 200x5. In Table TABREF13 we show the results of this evaluation. We count the number of correct GSWs (labeled as '1') among the top 5 candidates generated by the p2g model, where the first candidate is the most relevant, then the second one and so on. 
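As a concrete reference point, the following is a minimal sketch of a p2g model with roughly the reported configuration. It is an assumed illustration, not the authors' code: the paper only names a PyTorch implementation of the Transformer plus the hyperparameters above, so the class and argument names here are ours, torch.nn.Transformer ties d_k and d_v to d_model/n_heads (25 instead of the reported 32), and beam-search decoding with beam size 10 is omitted for brevity.

# Minimal sketch (assumed) of a Transformer-based p2g model; not the authors' implementation.
import torch
import torch.nn as nn

class P2GTransformer(nn.Module):
    def __init__(self, n_phonemes, n_graphemes, d_model=50, n_heads=2,
                 n_layers=2, d_ff=400, dropout=0.2, max_len=64):
        super().__init__()
        self.src_emb = nn.Embedding(n_phonemes, d_model)   # SAMPA symbol embeddings
        self.tgt_emb = nn.Embedding(n_graphemes, d_model)  # grapheme embeddings
        self.pos_emb = nn.Embedding(max_len, d_model)      # learned positional embeddings
        # Note: nn.Transformer uses d_k = d_v = d_model / n_heads (= 25 here),
        # a simplification relative to the reported d_k = d_v = 32.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=n_heads,
            num_encoder_layers=n_layers, num_decoder_layers=n_layers,
            dim_feedforward=d_ff, dropout=dropout, batch_first=True)
        self.out = nn.Linear(d_model, n_graphemes)

    def forward(self, src, tgt):
        # src: (batch, src_len) phoneme ids; tgt: (batch, tgt_len) grapheme ids
        pos_s = torch.arange(src.size(1), device=src.device)
        pos_t = torch.arange(tgt.size(1), device=tgt.device)
        s = self.src_emb(src) + self.pos_emb(pos_s)
        t = self.tgt_emb(tgt) + self.pos_emb(pos_t)
        causal = self.transformer.generate_square_subsequent_mask(tgt.size(1)).to(src.device)
        h = self.transformer(s, t, tgt_mask=causal)   # causal mask on the decoder side
        return self.out(h)                            # per-position grapheme logits

# Training would run for 55 epochs with batch size 64 and dropout 0.2, as reported;
# decoding would use beam search with beam size 10.

Returning to the evaluation of the generated GSWs: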
The evaluation was done at a stage where our model was trained only on GSWs for Zurich and Visp (see sec. SECREF8). The number of correct predictions is lower for the dialects of St. Gallen, Basel, Bern, and Stans, mainly because we used some special SAMPA characters for those dialects and the model did not have the corresponding Latin character strings. After the evaluation, we added 600 GSWs each for the four dialects of St. Gallen, Basel, Bern, and Stans to improve the model. Construction of the dictionary ::: Automatic annotation ::: Grapheme to Phoneme (g2p) and its benefits for ASR Automatic speech recognition (ASR) systems are the main use case for our dictionary. ASR systems convert spoken language into text. Today, they are widely used in different domains, from customer and help centers to voice-controlled assistants and devices. The main resources needed for an ASR system are audio, transcriptions and a phonetic dictionary. The quality of the ASR system is highly dependent on the quality of the dictionary. With our resource we provide such a phonetic dictionary. To increase the benefits of our data for ASR systems, we also trained a grapheme-to-phoneme (g2p) model: out-of-vocabulary words can be a problem for ASR systems. For those out-of-vocabulary words we need a model that can generate pronunciations from a written form, in real time. This is why we train a grapheme-to-phoneme (g2p) model that generates a sequence of phonemes for a given word. We train the g2p model using our dictionary and compare its performance with a widely used joint-sequence g2p model, Sequitur BIBREF26. For the g2p model we use the same architecture as for the p2g model. The only difference is the input and output vocabulary. Sequitur and our model use the dictionary with the same train (19'898 samples), test (2'412 samples) and validation (2'212 samples) split. Additionally, we also test their performance only on the items from the Zurich and Visp dialects, because most of the samples are from these two dialects. In Table TABREF15 we show the results of the comparison of the two models. We compute the edit distance between the predicted and the true pronunciation and report the number of exact matches. In the first column we report the result on the whole test set with all the dialects, and in the second and third columns we show the number of exact matches only on the samples from the test set that are from the Zurich and Visp dialects. Here we can clearly see that our model performs better than the Sequitur model. The reason why we have fewer matches for the Visp dialect compared to Zurich is that most of our data is from the Zurich dialect. Discussion One of our objectives was to map phonetic words with their writings. There are some mismatches between SAMPAs and GSWs in our dictionary, especially when the GSWs were done manually and independently from the SAMPAs. Those mismatches occur where there is no straightforward correspondence between a standard German and a Swiss German word. Two kinds of such missing correspondence can be distinguished. First, there are ambiguous standard German words. And that is necessarily so, as our dictionary is based on a list of standard German words without sentential or any other context. An example of a (morphologically) ambiguous word is standard German liebe. As we did not differentiate upper- and lower-case, it can mean either (a) 'I love' or (b) 'the love'. As evident from table 1, liebe (a) and liebi (b) were mixed. 
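The comparison metric for the two g2p models is straightforward; the following is a minimal sketch (assumed for illustration, not the authors' evaluation script) of computing the token-level edit distance between predicted and reference SAMPA sequences and counting exact matches, as reported in Table TABREF15.

# Minimal sketch (assumed) of the g2p evaluation: token-level edit distance and exact matches.
def edit_distance(pred, gold):
    # standard Levenshtein distance over phoneme tokens
    m, n = len(pred), len(gold)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == gold[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[m][n]

def evaluate(predictions, references):
    # predictions/references: lists of SAMPA token lists, e.g. ['b', 'i', 'n', '_', 'k', 'a', 'N', '@']
    exact = sum(1 for p, g in zip(predictions, references) if p == g)
    total = sum(edit_distance(p, g) for p, g in zip(predictions, references))
    return {"exact_matches": exact, "mean_edit_distance": total / len(references)}

Returning to the discussion of mismatches caused by ambiguous standard German words: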
The same is the case for standard German frage, which means either (a) 'I ask' or (b) 'the question'. Swiss German fröge, froge, fregu (a) and fraag, froog (b) were mixed. (For both examples, see table 1.) The second case of missing straightforward correspondence is distance between standard German and Swiss German. For one, lexical preferences in Swiss German differ from those in standard German. To express that food is 'tasty' in standard German, the word lecker is used. This is also possible in Swiss German, yet the word fein is much more common. Another example is that the standard German word rasch ('swiftly') is uncommon in Swiss German – synonyms of the word are preferred. Both of these tendencies show in the variety of options our annotators chose for those words (see table 1). Also, the same standard German word may have several dialectal versions in Swiss German. For example, there are a short and a long version for the standard German word grossvater, namely grospi and grossvatter. A second aim was to represent the way Swiss German-speaking people write spontaneously. However, as our annotators wrote the spontaneous GSWs mostly while looking at standard German words, our GSWs might be biased towards standard German orthography. Yet, there is potentially also a standard German influence in the way Swiss German is actually written. We partly revised our dictionary in order to adapt to everyday writing: we introduced explicit boundary marking into our SAMPAs, inserting an _ in the SAMPA where there would usually be a space in writing. An example where people would conventionally add a space are the forms corresponding to standard German preterite forms such as 'ging'. The corresponding Swiss German past participle constructions – here isch gange – would (most often) be written as separate words. So entries like b i n k a N @ in table 1 were changed to b i n _ k a N @. Conclusion In this work we introduced the first Swiss German dictionary. Through its dual nature - both spontaneous written forms in multiple dialects and accompanying phonetic representations - we believe it will become a valuable resource for multiple tasks, including automated speech recognition (ASR). This resource was created using a combination of manual and automated work, in a collaboration between linguists and data scientists that leverages the best of two worlds - domain knowledge and a data-driven focus on likely character combinations. Through the combination of complementary skills we overcame the difficulty posed by the important variations in written Swiss German and generated a resource that adds value to downstream tasks. We show that the SAMPA-to-written-Swiss-German conversion is useful in speech recognition and can replace the previous state of the art. Moreover, the written-form-to-SAMPA conversion is promising and has applications in areas like text-to-speech. We make the dictionary freely available for researchers to expand and use. Acknowledgements We would like to thank our collaborators Alina Mächler and Raphael Tandler for their valuable contribution.
Yes
ce14b87dacfd5206d2a5af7c0ed1cfeb7b181922
ce14b87dacfd5206d2a5af7c0ed1cfeb7b181922_0
Q: How does the QuaSP+Zero model work? Text: Introduction Many natural language tasks require recognizing and reasoning with qualitative relationships. For example, we may read about temperatures rising (climate science), a drug dose being increased (medicine), or the supply of goods being reduced (economics), and want to reason about the effects. Qualitative story problems, of the kind found in elementary exams (e.g., Figure FIGREF1 ), form a natural example of many of these linguistic and reasoning challenges, and is the target of this work. Understanding and answering such questions is particularly challenging. Corpus-based methods perform poorly in this setting, as the questions ask about novel scenarios rather than facts that can be looked up. Similarly, word association methods struggle, as a single word change (e.g., “more” to “less”) can flip the answer. Rather, the task appears to require knowledge of the underlying qualitative relations (e.g., “more friction implies less speed”). Qualitative modeling BIBREF0 , BIBREF1 , BIBREF2 provides a means for encoding and reasoning about such relationships. Relationships are expressed in a natural, qualitative way (e.g., if X increases, then so will Y), rather than requiring numeric equations, and inference allows complex questions to be answered. However, the semantic parsing task of mapping real world questions into these models is formidable and presents unique challenges. These challenges must be solved if natural questions involving qualitative relationships are to be reliably answered. We make three contributions: (1) a simple and flexible conceptual framework for formally representing these kinds of questions, in particular ones that express qualitative comparisons between two scenarios; (2) a challenging new dataset (QuaRel), including logical forms, exemplifying the parsing challenges; and (3) two novel models that extend type-constrained semantic parsing to address these challenges. Our first model, QuaSP+, addresses the problem of tracking different “worlds” in questions, resulting in significantly higher scores than with off-the-shelf tools (Section SECREF36 ). The second model, QuaSP+Zero, demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships on unseen properties, without requiring additional training data, something not possible with previous models (Section SECREF44 ). Together these contributions make inroads into answering complex, qualitative questions by linking language and reasoning, and offer a new dataset and models to spur further progress by the community. Related Work There has been rapid progress in question-answering (QA), spanning a wide variety of tasks and phenomena, including factoid QA BIBREF3 , entailment BIBREF4 , sentiment BIBREF5 , and ellipsis and coreference BIBREF6 . Our contribution here is the first dataset specifically targeted at qualitative relationships, an important category of language that has been less explored. While questions requiring reasoning about qualitative relations sometimes appear in other datasets, e.g., BIBREF7 , our dataset specifically focuses on them so their challenges can be studied. For answering such questions, we treat the problem as mapping language to a structured formalism (semantic parsing) where simple qualitative reasoning can occur. 
Semantic parsing has a long history BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , using datasets about geography BIBREF8 , travel booking BIBREF12 , factoid QA over knowledge bases BIBREF10 , Wikipedia tables BIBREF13 , and many more. Our contributions to this line of research are: a dataset that features phenomena under-represented in prior datasets, namely (1) highly diverse language describing open-domain qualitative problems, and (2) the need to reason over entities that have no explicit formal representation; and methods for adapting existing semantic parsers to address these phenomena. For the target formalism itself, we draw on the extensive body of work on qualitative reasoning BIBREF0 , BIBREF1 , BIBREF2 to create a logical form language that can express the required qualitative knowledge, yet is sufficiently constrained that parsing into it is feasible, described in more detail in Section SECREF3 . There has been some work connecting language with qualitative reasoning, although mainly focused on extracting qualitative models themselves from text rather than question interpretation, e.g., BIBREF14 , BIBREF15 . Recent work by BIBREF16 crouse2018learning also includes interpreting questions that require identifying qualitative processes in text, in constrast to our setting of interpreting NL story questions that involve qualitative comparisons. Answering story problems has received attention in the domain of arithmetic, where simple algebra story questions (e.g., “Sue had 5 cookies, then gave 2 to Joe...”) are mapped to a system of equations, e.g., BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . This task is loosely analogous to ours (we instead map to qualitative relations) except that in arithmetic the entities to relate are often identifiable (namely, the numbers). Our qualitative story questions lack this structure, adding an extra challenge. The QuaRel dataset shares some structure with the Winograd Schema Challenge BIBREF21 , being 2-way multiple choice questions invoking both commonsense and coreference. However, they test different aspects of commonsense: Winograd uses coreference resolution to test commonsense understanding of scenarios, while QuaRel tests reasoning about qualitative relationships requiring tracking of coreferent “worlds.” Finally, crowdsourcing datasets has become a driving force in AI, producing significant progress, e.g., BIBREF3 , BIBREF22 , BIBREF23 . However, for semantic parsing tasks, one obstacle has been the difficulty in crowdsourcing target logical forms for questions. Here, we show how those logical forms can be obtained indirectly from workers without training the workers in the formalism, loosely similar to BIBREF24 . Knowledge Representation We first describe our framework for representing questions and the knowledge to answer them. Our dataset, described later, includes logical forms expressed in this language. Qualitative Background Knowledge We use a simple representation of qualitative relationships, leveraging prior work in qualitative reasoning BIBREF0 . Let INLINEFORM0 be the set of properties relevant to the question set's domain (e.g., smoothness, friction, speed). Let INLINEFORM1 be a set of qualitative values for property INLINEFORM2 (e.g., fast, slow). 
For the background knowledge about the domain itself (a qualitative model), following BIBREF0 Forbus1984QualitativePT, we use the following predicates: [vskip=1mm,leftmargin=5mm] q+(property1, property2) q-(property1, property2) q+ denotes that property1 and property2 are qualitatively proportional, e.g., if property1 goes up, property2 will too, while q- denotes inverse proportionality, e.g., [vskip=1mm,leftmargin=5mm] # If friction goes up, speed goes down. q-(friction, speed). We also introduce the predicate: [vskip=1mm,leftmargin=5mm] higher-than( INLINEFORM0 , INLINEFORM1 , property INLINEFORM2 ) where INLINEFORM3 , allowing an ordering of property values to be specified, e.g., higher-than(fast, slow, speed). For our purposes here, we simplify to use just two property values, low and high, for all properties. (The parser learns mappings from words to these values, described later). Given these primitives, compact theories can be authored for a particular domain by choosing relevant properties INLINEFORM0 , and specifying qualitative relationships (q+,q-) and ordinal values (higher-than) for them. For example, a simple theory about friction is sketched graphically in Figure FIGREF3 . Our observation is that these theories are relatively small, simple, and easy to author. Rather, the primary challenge is in mapping the complex and varied language of questions into a form that interfaces with this representation. This language can be extended to include additional primitives from qualitative modeling, e.g., i+(x,y) (“the rate of change of x is qualitatively proportional to y”). That is, the techniques we present are not specific to our particular qualitative modeling subset. The only requirement is that, given a set of absolute values or qualitative relationships from a question, the theory can compute an answer. Representing Questions A key feature of our representation is the conceptualization of questions as describing events happening in two worlds, world1 and world2, that are being compared. That comparison may be between two different entities, or the same entity at different time points. E.g., in Figure FIGREF1 the two worlds being compared are the car on wood, and the car on carpet. The tags world1 and world2 denote these different situations, and semantic parsing (Section SECREF5 ) requires learning to correctly associate these tags with parts of the question describing those situations. This abstracts away irrelevant details of the worlds, while still keeping track of which world is which. We define the following two predicates to express qualitative information in questions: [vskip=1mm,leftmargin=5mm] qrel(property, direction, world) qval(property, value, world) where property ( INLINEFORM0 ) INLINEFORM1 P, value INLINEFORM2 INLINEFORM3 , direction INLINEFORM4 {higher, lower}, and world INLINEFORM5 {world1, world2}. qrel() denotes the relative assertion that property is higher/lower in world compared with the other world, which is left implicit, e.g., from Figure FIGREF1 : [vskip=1mm,leftmargin=5mm] # The car rolls further on wood. qrel(distance, higher, world1) where world1 is a tag for the “car on wood” situation (hence world2 becomes a tag for the opposite “car on carpet” situation). qval() denotes that property has an absolute value in world, e.g., [vskip=1mm,leftmargin=5mm] # The car's speed is slow on carpet. 
qval(speed, low, world2) Logical Forms for Questions Despite the wide variation in language, the space of logical forms (LFs) for the questions that we consider is relatively compact. In each question, the question body establishes a scenario and each answer option then probes an implication. We thus express a question's LF as a tuple: [vskip=1mm,leftmargin=5mm] (setup, answer-A, answer-B) where setup is the predicate(s) describing the scenario, and answer-* are the predicate(s) being queried for. If answer-A follows from setup, as inferred by the reasoner, then the answer is (A); similarly for (B). For readability we will write this as [vskip=1mm,leftmargin=5mm] setup INLINEFORM0 answer-A ; answer-B We consider two styles of LF, covering a large range of questions. The first is: [vskip=1mm,leftmargin=5mm] (1) qrel( INLINEFORM1 ) INLINEFORM2 qrel( INLINEFORM0 ) ; qrel( INLINEFORM1 ) which deals with relative values of properties between worlds, and applies when the question setup includes a comparative. An example of this is in Figure FIGREF1 . The second is: [vskip=1mm,leftmargin=5mm] (2) qval( INLINEFORM2 ), qval( INLINEFORM3 ) INLINEFORM4 qrel( INLINEFORM0 ) ; qrel( INLINEFORM1 ) which deals with absolute values of properties, and applies when the setup uses absolute terms instead of comparatives. An example is the first question in Figure FIGREF4 , shown simplified below, whose LF looks as follows (colors showing approximate correspondences): [vskip=1mm,leftmargin=5mm] # Does a bar stool orangeslide faster along the redbar surface with tealdecorative raised bumps or the magentasmooth wooden bluefloor? (A) redbar (B) bluefloor qval(tealsmoothness, low, redworld1), qval(magentasmoothness, high, blueworld2) INLINEFORM0 qrel(orangespeed, higher, redworld1) ; qrel(orangespeed, higher, blueworld2) Inference A small set of rules for qualitative reasoning connects these predicates together. For example, (in logic) if the value of P is higher in world1 than the value of P in world2 and q+(P,Q) then the value of Q will be higher in world1 than the value of Q in world2. Given a question's logical form, a qualitative model, and these rules, a Prolog-style inference engine determines which answer option follows from the premise. The QuaRel Dataset QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms. The size of the dataset is similar to several other datasets with annotated logical forms used for semantic parsing BIBREF8 , BIBREF25 , BIBREF24 . As the space of LFs is constrained, the dataset is sufficient for a rich exploration of this space. We crowdsourced multiple-choice questions in two parts, encouraging workers to be imaginative and varied in their use of language. First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare. They then created a question, guided by a large number of examples, and were encouraged to be imaginative and use their own words. The results are a remarkable variety of situations and phrasings (Figure FIGREF4 ). Second, the LFs were elicited using a novel technique of reverse-engineering them from a set of follow-up questions, without exposing workers to the underlying formalism. This is possible because of the constrained space of LFs. 
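The inference step described above can be made concrete with a minimal Python sketch. This is an assumed stand-in for the Prolog-style engine, not the released QuaRel code; the relation set is an illustrative subset of the friction theory (which relates friction, heat, distance, speed and smoothness), and it handles LFs of form (1): a single qrel() setup and two candidate qrel() answers.

# Minimal sketch (assumed) of qualitative inference for LFs of form (1).
QPLUS = {("speed", "distance"), ("friction", "heat")}         # q+ relations (illustrative subset)
QMINUS = {("friction", "speed"), ("smoothness", "friction")}  # q- relations

def neighbors(prop):
    """Properties qualitatively linked to prop, with +1 for q+ and -1 for q-.
    Links are treated as usable in both directions for comparative reasoning."""
    for a, b in QPLUS | QMINUS:
        sign = 1 if (a, b) in QPLUS else -1
        if a == prop:
            yield b, sign
        if b == prop:
            yield a, sign

def entailed_facts(setup):
    """Close a qrel(property, direction, world) setup under the qualitative relations."""
    flip_dir = {"higher": "lower", "lower": "higher"}
    other = {"world1": "world2", "world2": "world1"}
    prop, direction, world = setup
    facts = {(prop, direction, world)}
    frontier = [(prop, direction)]
    while frontier:
        p, d = frontier.pop()
        for p2, sign in neighbors(p):
            d2 = d if sign == 1 else flip_dir[d]
            if (p2, d2, world) not in facts:
                facts.add((p2, d2, world))
                frontier.append((p2, d2))
    # a property that is higher in one world is lower in the other
    facts |= {(p, flip_dir[d], other[w]) for p, d, w in facts}
    return facts

# Figure-1-style question: the car rolls further (higher distance) on world1 (wood);
# which surface has more friction?  (A) qrel(friction, higher, world2)  (B) ... world1
facts = entailed_facts(("distance", "higher", "world1"))
print("A" if ("friction", "higher", "world2") in facts else "B")   # -> A

Returning to the follow-up questions used to elicit the logical forms: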
Referring to LF templates (1) and (2) earlier (Section SECREF13 ), these questions are as follows: From this information, we can deduce the target LF ( INLINEFORM0 is the complement of INLINEFORM1 , INLINEFORM2 , we arbitrarily set INLINEFORM3 =world1, hence all other variables can be inferred). Three independent workers answer these follow-up questions to ensure reliable results. We also had a human answer the questions in the dev partition (in principle, they should all be answerable). The human scored 96.4%, the few failures caused by occasional annotation errors or ambiguities in the question set itself, suggesting high fidelity of the content. About half of the dataset are questions about friction, relating five different properties (friction, heat, distance, speed, smoothness). These questions form a meaningful, connected subset of the dataset which we denote QuaRel INLINEFORM0 . The remaining questions involve a wide variety of 14 additional properties and their relations, such as “exercise intensity vs. sweat” or “distance vs. brightness”. Figure FIGREF4 shows typical examples of questions in QuaRel, and Table TABREF26 provides summary statistics. In particular, the vocabulary is highly varied (5226 unique words), given the dataset size. Figure FIGREF27 shows some examples of the varied phrases used to describe smoothness. Baseline Systems We use four systems to evaluate the difficulty of this dataset. (We subsequently present two new models, extending the baseline neural semantic parser, in Sections SECREF36 and SECREF44 ). The first two are an information retrieval system and a word-association method, following the designs of BIBREF26 Clark2016CombiningRS. These are naive baselines that do not parse the question, but nevertheless may find some signal in a large corpus of text that helps guess the correct answer. The third is a CCG-style rule-based semantic parser written specifically for friction questions (the QuaRel INLINEFORM0 subset), but prior to data being collected. The last is a state-of-the-art neural semantic parser. We briefly describe each in turn. Baseline Experiments We ran the above systems on the QuaRel dataset. QuaSP was trained on the training set, using the model with highest parse accuracy on the dev set (similarly BiLSTM used highest answer accuracy on the dev set) . The results are shown in Table TABREF34 . The 95% confidence interval is +/- 4% on the full test set. The human score is the sanity check on the dev set (Section SECREF4 ). As Table TABREF34 shows, the QuaSP model performs better than other baseline approaches which are only slightly above random. QuaSP scores 56.1% (61.7% on the friction subset), indicating the challenges of this dataset. For the rule-based system, we observe that it is unable to parse the majority (66%) of questions (hence scoring 0.5 for those questions, reflecting a random guess), due to the varied and unexpected vocabulary present in the dataset. For example, Figure FIGREF27 shows some of the ways that the notion of “smoother/rougher” is expressed in questions, many of which are not covered by the hand-written CCG grammar. This reflects the typical brittleness of hand-built systems. For QuaSP, we also analyzed the parse accuracies, shown in Table TABREF35 , the score reflecting the percentage of times it produced exactly the right logical form. The random baseline for parse accuracy is near zero given the large space of logical forms, while the model parse accuracies are relatively high, much better than a random baseline. 
Further analysis of the predicted LFs indicates that the neural model does well at predicting the properties ( INLINEFORM0 25% of errors on dev set), but struggles to predict the worlds in the LFs reliably ( INLINEFORM1 70% of errors on dev set). This helps explain why non-trivial parse accuracy does not necessarily translate into correspondingly higher answer accuracy: If only the world assignment is wrong, the answer will flip and give a score of zero, rather than the average 0.5. New Models We now present two new models, both extensions of the neural baseline QuaSP. The first, QuaSP+, addresses the leading cause of failure just described, namely the problem of identifying the two worlds being compared, and significantly outperforms all the baseline systems. The second, QuaSP+Zero, addresses the scaling problem, namely the costly requirement of needing many training examples each time a new qualitative property is introduced. It does this by instead using only a small amount of lexical information about the new property, thus achieving “zero shot” performance, i.e., handling properties unseen in the training examples BIBREF34 , a capability not present in the baseline systems. We present the models and results for each. QuaSP+: A Model Incorporating World Tracking We define the world tracking problem as identifying and tracking references to different “worlds” being compared in text, i.e., correctly mapping phrases to world identifiers, a critical aspect of the semantic parsing task. There are three reasons why this is challenging. First, unlike properties, the worlds being compared in questions are distinct in almost every question, and thus there is no obvious, learnable mapping from phrases to worlds. For example, while a property (like speed) has learnable ways to refer to it (“faster”, “moves rapidly”, “speeds”, “barely moves”), worlds are different in each question (e.g., “on a road”, “countertop”, “while cutting grass”) and thus learning to identify them is hard. Second, different phrases may be used to refer to the same world in the same question (see Figure FIGREF43 ), further complicating the task. Finally, even if the model could learn to identify worlds in other ways, e.g., by syntactic position in the question, there is the problem of selecting world1 or world2 consistently throughout the parse, so that the equivalent phrasings are assigned the same world. This problem of mapping phrases to world identifiers is similar to the task of entity linking BIBREF35 . In prior semantic parsing work, entity linking is relatively straightforward: simple string-matching heuristics are often sufficient BIBREF36 , BIBREF37 , or an external entity linking system can be used BIBREF38 , BIBREF39 . In QuaRel, however, because the phrases denoting world1 and world2 are different in almost every question, and the word “world” is never used, such methods cannot be applied. To address this, we have developed QuaSP+, a new model that extends QuaSP by adding an extra initial step to identify and delexicalize world references in the question. In this delexicalization process, potentially new linguistic descriptions of worlds are replaced by canonical tokens, creating the opportunity for the model to generalize across questions. 
For example, the world mentions in the question: [vskip=1mm,leftmargin=5mm] “A ball rolls further on wood than carpet because the (A) carpet is smoother (B) wood is smoother” are delexicalized to: [vskip=1mm,leftmargin=5mm] “A ball rolls further on World1 than World2 because the (A) World2 is smoother (B) World1 is smoother” This approach is analogous to BIBREF40 Herzig2018DecouplingSA, who delexicalized words to POS tags to avoid memorization. Similar delexicalized features have also been employed in Open Information Extraction BIBREF41 , so the Open IE system could learn a general model of how relations are expressed. In our case, however, delexicalizing to World1 and World2 is itself a significant challenge, because identifying phrases referring to worlds is substantially more complex than (say) identifying parts of speech. To perform this delexicalization step, we use the world annotations included as part of the training dataset (Section SECREF4 ) to train a separate tagger to identify “world mentions” (text spans) in the question using BIO tags (BiLSTM encoder followed by a CRF). The spans are then sorted into World1 and World2 using the following algorithm: If one span is a substring of another, they are are grouped together. Remaining spans are singleton groups. The two groups containing the longest spans are labeled as the two worlds being compared. Any additional spans are assigned to one of these two groups based on closest edit distance (or ignored if zero overlap). The group appearing first in the question is labeled World1, the other World2. The result is a question in which world mentions are canonicalized. The semantic parser QuaSP is then trained using these questions. We call the combined system (delexicalization plus semantic parser) QuaSP+. The results for QuaSP+ are included in Table TABREF34 . Most importantly, QuaSP+ significantly outperforms the baselines by over 12% absolute. Similarly, the parse accuracies are significantly improved from 32.2% to 43.8% (Table TABREF35 ). This suggests that this delexicalization technique is an effective way of making progress on this dataset, and more generally on problems where multiple situations are being compared, a common characteristic of qualitative problems. QuaSP+Zero: A Model for the Zero-Shot Task While our delexicalization procedure demonstrates a way of addressing the world tracking problem, the approach still relies on annotated data; if we were to add new qualitative relations, new training data would be needed, which is a significant scalability obstacle. To address this, we define the zero-shot problem as being able to answer questions involving a new predicate p given training data only about other predicates P different from p. For example, if we add a new property (e.g., heat) to the qualitative model (e.g., adding q+(friction, heat); “more friction implies more heat”), we want to answer questions involving heat without creating new annotated training questions, and instead only use minimal extra information about the new property. A parser that achieved good zero-shot performance, i.e., worked well for new properties unseen at training time, would be a substantial advance, allowing a new qualitative model to link to questions with minimal effort. QuaRel provides an environment in which methods for this zero-shot theory extension can be devised and evaluated. 
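The world-tracking delexicalization step of QuaSP+ described above can be sketched as follows. This is an assumed illustration rather than the released QuaSP+ code: the BIO tagger is taken as given, and difflib's SequenceMatcher similarity stands in for the edit-distance criterion used to attach leftover spans.

# Minimal sketch (assumed) of world delexicalization given tagged world-mention spans.
from difflib import SequenceMatcher

def group_spans(spans):
    """Group spans; a span that is a substring of another joins that span's group."""
    groups = []
    for s in sorted(spans, key=len, reverse=True):
        for g in groups:
            if any(s in t for t in g):
                g.append(s)
                break
        else:
            groups.append([s])
    return groups

def delexicalize(question, spans):
    groups = group_spans(spans)
    # the two groups containing the longest spans are the worlds being compared
    main = sorted(groups, key=lambda g: max(len(s) for s in g), reverse=True)[:2]
    # attach any leftover group to the closer main group (string similarity as a
    # stand-in for the edit-distance criterion)
    for g in groups:
        if g not in main:
            best = max(main, key=lambda m: SequenceMatcher(None, g[0], m[0]).ratio())
            best.extend(g)
    # the group whose mention appears first in the question becomes World1
    main.sort(key=lambda g: min(question.find(s) for s in g))
    for world, g in zip(("World1", "World2"), main):
        for s in sorted(g, key=len, reverse=True):
            question = question.replace(s, world)
    return question

q = "A ball rolls further on wood than carpet because the (A) carpet is smoother (B) wood is smoother"
print(delexicalize(q, ["wood", "carpet"]))
# -> "A ball rolls further on World1 than World2 because the (A) World2 is smoother (B) World1 is smoother"

Returning to the zero-shot extension introduced above: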
To do this, we consider the following experimental setting: All questions mentioning a particular property are removed, the parser is trained on the remainder, and then tested on those withheld questions, i.e., questions mentioning a property unseen in the training data. We present and evaluate a model that we have developed for this, called QuaSP+Zero, that modifies the QuaSP+ parser as follows: During decoding, at points where the parser is selecting which property to include in the LF (e.g., Figure FIGREF31 ), it does not just consider the question tokens, but also the relationship between those tokens and the properties INLINEFORM0 used in the qualitative model. For example, a question token such as “longer” can act as a cue for (the property) length, even if unseen in the training data, because “longer” and a lexical form of length (e.g.,“length”) are similar. This approach follows the entity-linking approach used by BIBREF11 Krishnamurthy2017NeuralSP, where the similarity between question tokens and (words associated with) entities - called the entity linking score - help decide which entities to include in the LF during parsing. Here, we modify their entity linking score INLINEFORM1 , linking question tokens INLINEFORM2 and property “entities” INLINEFORM3 , to be: INLINEFORM4 where INLINEFORM0 is a diagonal matrix connecting the embedding of the question token INLINEFORM1 and words INLINEFORM2 associated with the property INLINEFORM3 . For INLINEFORM4 , we provide a small list of words for each property (such as “speed”, “velocity”, and “fast” for the speed property), a small-cost requirement. The results with QuaSP+Zero are in Table TABREF45 , shown in detail on the QuaRel INLINEFORM0 subset and (due to space constraints) summarized for the full QuaRel. We can measure overall performance of QuaSP+Zero by averaging each of the zero-shot test sets (weighted by the number of questions in each set), resulting in an overall parse accuracy of 38.9% and answer accuracy 61.0% on QuaRel INLINEFORM1 , and 25.7% (parse) and 59.5% (answer) on QuaRel, both significantly better than random. These initial results are encouraging, suggesting that it may be possible to parse into modified qualitative models that include new relations, with minimal annotation effort, significantly opening up qualitative reasoning methods for QA. Summary and Conclusion Our goal is to answer questions that involve qualitative relationships, an important genre of task that involves both language and knowledge, but also one that presents significant challenges for semantic parsing. To this end we have developed a simple and flexible formalism for representing these questions; constructed QuaRel, the first dataset of qualitative story questions that exemplifies these challenges; and presented two new models that adapt existing parsing techniques to this task. The first model, QuaSP+, illustrates how delexicalization can help with world tracking (identifying different “worlds” in questions), resulting in state-of-the-art performance on QuaRel. The second model, QuaSP+Zero, illustrates how zero-shot learning can be achieved (i.e., adding new qualitative relationships without requiring new training examples) by using an entity-linking approach applied to properties - a capability not present in previous models. There are several directions in which this work can be expanded. First, quantitative property values (e.g., “10 mph”) are currently not handled well, as their mapping to “low” or “high” is context-dependent. 
Second, some questions do not fit our two question templates (Section SECREF13 ), e.g., where two property values are a single answer option (e.g., “....(A) one floor is smooth and the other floor is rough”). Finally, some questions include an additional level of indirection, requiring an inference step to map to qualitative relations. For example, “Which surface would be best for a race? (A) gravel (B) blacktop” requires the additional commonsense inference that “best for a race” implies “higher speed”. Given the ubiquity of qualitative comparisons in natural text, recognizing and reasoning with qualitative relationships is likely to remain an important task for AI. This work makes inroads into this task, and contributes a dataset and models to encourage progress by others. The dataset and models are publicly available at http://data.allenai.org/quarel.
does not just consider the question tokens, but also the relationship between those tokens and the properties
709a4993927187514701fe3cc491ac3030da1215
709a4993927187514701fe3cc491ac3030da1215_0
Q: Which off-the-shelf tools do they use on QuaRel? Text: Introduction Many natural language tasks require recognizing and reasoning with qualitative relationships. For example, we may read about temperatures rising (climate science), a drug dose being increased (medicine), or the supply of goods being reduced (economics), and want to reason about the effects. Qualitative story problems, of the kind found in elementary exams (e.g., Figure FIGREF1 ), form a natural example of many of these linguistic and reasoning challenges, and is the target of this work. Understanding and answering such questions is particularly challenging. Corpus-based methods perform poorly in this setting, as the questions ask about novel scenarios rather than facts that can be looked up. Similarly, word association methods struggle, as a single word change (e.g., “more” to “less”) can flip the answer. Rather, the task appears to require knowledge of the underlying qualitative relations (e.g., “more friction implies less speed”). Qualitative modeling BIBREF0 , BIBREF1 , BIBREF2 provides a means for encoding and reasoning about such relationships. Relationships are expressed in a natural, qualitative way (e.g., if X increases, then so will Y), rather than requiring numeric equations, and inference allows complex questions to be answered. However, the semantic parsing task of mapping real world questions into these models is formidable and presents unique challenges. These challenges must be solved if natural questions involving qualitative relationships are to be reliably answered. We make three contributions: (1) a simple and flexible conceptual framework for formally representing these kinds of questions, in particular ones that express qualitative comparisons between two scenarios; (2) a challenging new dataset (QuaRel), including logical forms, exemplifying the parsing challenges; and (3) two novel models that extend type-constrained semantic parsing to address these challenges. Our first model, QuaSP+, addresses the problem of tracking different “worlds” in questions, resulting in significantly higher scores than with off-the-shelf tools (Section SECREF36 ). The second model, QuaSP+Zero, demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships on unseen properties, without requiring additional training data, something not possible with previous models (Section SECREF44 ). Together these contributions make inroads into answering complex, qualitative questions by linking language and reasoning, and offer a new dataset and models to spur further progress by the community. Related Work There has been rapid progress in question-answering (QA), spanning a wide variety of tasks and phenomena, including factoid QA BIBREF3 , entailment BIBREF4 , sentiment BIBREF5 , and ellipsis and coreference BIBREF6 . Our contribution here is the first dataset specifically targeted at qualitative relationships, an important category of language that has been less explored. While questions requiring reasoning about qualitative relations sometimes appear in other datasets, e.g., BIBREF7 , our dataset specifically focuses on them so their challenges can be studied. For answering such questions, we treat the problem as mapping language to a structured formalism (semantic parsing) where simple qualitative reasoning can occur. 
Semantic parsing has a long history BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , using datasets about geography BIBREF8 , travel booking BIBREF12 , factoid QA over knowledge bases BIBREF10 , Wikipedia tables BIBREF13 , and many more. Our contributions to this line of research are: a dataset that features phenomena under-represented in prior datasets, namely (1) highly diverse language describing open-domain qualitative problems, and (2) the need to reason over entities that have no explicit formal representation; and methods for adapting existing semantic parsers to address these phenomena. For the target formalism itself, we draw on the extensive body of work on qualitative reasoning BIBREF0 , BIBREF1 , BIBREF2 to create a logical form language that can express the required qualitative knowledge, yet is sufficiently constrained that parsing into it is feasible, described in more detail in Section SECREF3 . There has been some work connecting language with qualitative reasoning, although mainly focused on extracting qualitative models themselves from text rather than question interpretation, e.g., BIBREF14 , BIBREF15 . Recent work by BIBREF16 crouse2018learning also includes interpreting questions that require identifying qualitative processes in text, in constrast to our setting of interpreting NL story questions that involve qualitative comparisons. Answering story problems has received attention in the domain of arithmetic, where simple algebra story questions (e.g., “Sue had 5 cookies, then gave 2 to Joe...”) are mapped to a system of equations, e.g., BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . This task is loosely analogous to ours (we instead map to qualitative relations) except that in arithmetic the entities to relate are often identifiable (namely, the numbers). Our qualitative story questions lack this structure, adding an extra challenge. The QuaRel dataset shares some structure with the Winograd Schema Challenge BIBREF21 , being 2-way multiple choice questions invoking both commonsense and coreference. However, they test different aspects of commonsense: Winograd uses coreference resolution to test commonsense understanding of scenarios, while QuaRel tests reasoning about qualitative relationships requiring tracking of coreferent “worlds.” Finally, crowdsourcing datasets has become a driving force in AI, producing significant progress, e.g., BIBREF3 , BIBREF22 , BIBREF23 . However, for semantic parsing tasks, one obstacle has been the difficulty in crowdsourcing target logical forms for questions. Here, we show how those logical forms can be obtained indirectly from workers without training the workers in the formalism, loosely similar to BIBREF24 . Knowledge Representation We first describe our framework for representing questions and the knowledge to answer them. Our dataset, described later, includes logical forms expressed in this language. Qualitative Background Knowledge We use a simple representation of qualitative relationships, leveraging prior work in qualitative reasoning BIBREF0 . Let INLINEFORM0 be the set of properties relevant to the question set's domain (e.g., smoothness, friction, speed). Let INLINEFORM1 be a set of qualitative values for property INLINEFORM2 (e.g., fast, slow). 
For the background knowledge about the domain itself (a qualitative model), following BIBREF0 Forbus1984QualitativePT, we use the following predicates: [vskip=1mm,leftmargin=5mm] q+(property1, property2) q-(property1, property2) q+ denotes that property1 and property2 are qualitatively proportional, e.g., if property1 goes up, property2 will too, while q- denotes inverse proportionality, e.g., [vskip=1mm,leftmargin=5mm] # If friction goes up, speed goes down. q-(friction, speed). We also introduce the predicate: [vskip=1mm,leftmargin=5mm] higher-than( INLINEFORM0 , INLINEFORM1 , property INLINEFORM2 ) where INLINEFORM3 , allowing an ordering of property values to be specified, e.g., higher-than(fast, slow, speed). For our purposes here, we simplify to use just two property values, low and high, for all properties. (The parser learns mappings from words to these values, described later). Given these primitives, compact theories can be authored for a particular domain by choosing relevant properties INLINEFORM0 , and specifying qualitative relationships (q+,q-) and ordinal values (higher-than) for them. For example, a simple theory about friction is sketched graphically in Figure FIGREF3 . Our observation is that these theories are relatively small, simple, and easy to author. Rather, the primary challenge is in mapping the complex and varied language of questions into a form that interfaces with this representation. This language can be extended to include additional primitives from qualitative modeling, e.g., i+(x,y) (“the rate of change of x is qualitatively proportional to y”). That is, the techniques we present are not specific to our particular qualitative modeling subset. The only requirement is that, given a set of absolute values or qualitative relationships from a question, the theory can compute an answer. Representing Questions A key feature of our representation is the conceptualization of questions as describing events happening in two worlds, world1 and world2, that are being compared. That comparison may be between two different entities, or the same entity at different time points. E.g., in Figure FIGREF1 the two worlds being compared are the car on wood, and the car on carpet. The tags world1 and world2 denote these different situations, and semantic parsing (Section SECREF5 ) requires learning to correctly associate these tags with parts of the question describing those situations. This abstracts away irrelevant details of the worlds, while still keeping track of which world is which. We define the following two predicates to express qualitative information in questions: [vskip=1mm,leftmargin=5mm] qrel(property, direction, world) qval(property, value, world) where property ( INLINEFORM0 ) INLINEFORM1 P, value INLINEFORM2 INLINEFORM3 , direction INLINEFORM4 {higher, lower}, and world INLINEFORM5 {world1, world2}. qrel() denotes the relative assertion that property is higher/lower in world compared with the other world, which is left implicit, e.g., from Figure FIGREF1 : [vskip=1mm,leftmargin=5mm] # The car rolls further on wood. qrel(distance, higher, world1) where world1 is a tag for the “car on wood” situation (hence world2 becomes a tag for the opposite “car on carpet” situation). qval() denotes that property has an absolute value in world, e.g., [vskip=1mm,leftmargin=5mm] # The car's speed is slow on carpet. 
qval(speed, low, world2) Logical Forms for Questions Despite the wide variation in language, the space of logical forms (LFs) for the questions that we consider is relatively compact. In each question, the question body establishes a scenario and each answer option then probes an implication. We thus express a question's LF as a tuple: [vskip=1mm,leftmargin=5mm] (setup, answer-A, answer-B) where setup is the predicate(s) describing the scenario, and answer-* are the predicate(s) being queried for. If answer-A follows from setup, as inferred by the reasoner, then the answer is (A); similarly for (B). For readability we will write this as [vskip=1mm,leftmargin=5mm] setup INLINEFORM0 answer-A ; answer-B We consider two styles of LF, covering a large range of questions. The first is: [vskip=1mm,leftmargin=5mm] (1) qrel( INLINEFORM1 ) INLINEFORM2 qrel( INLINEFORM0 ) ; qrel( INLINEFORM1 ) which deals with relative values of properties between worlds, and applies when the question setup includes a comparative. An example of this is in Figure FIGREF1 . The second is: [vskip=1mm,leftmargin=5mm] (2) qval( INLINEFORM2 ), qval( INLINEFORM3 ) INLINEFORM4 qrel( INLINEFORM0 ) ; qrel( INLINEFORM1 ) which deals with absolute values of properties, and applies when the setup uses absolute terms instead of comparatives. An example is the first question in Figure FIGREF4 , shown simplified below, whose LF looks as follows (colors showing approximate correspondences): [vskip=1mm,leftmargin=5mm] # Does a bar stool orangeslide faster along the redbar surface with tealdecorative raised bumps or the magentasmooth wooden bluefloor? (A) redbar (B) bluefloor qval(tealsmoothness, low, redworld1), qval(magentasmoothness, high, blueworld2) INLINEFORM0 qrel(orangespeed, higher, redworld1) ; qrel(orangespeed, higher, blueworld2) Inference A small set of rules for qualitative reasoning connects these predicates together. For example, (in logic) if the value of P is higher in world1 than the value of P in world2 and q+(P,Q) then the value of Q will be higher in world1 than the value of Q in world2. Given a question's logical form, a qualitative model, and these rules, a Prolog-style inference engine determines which answer option follows from the premise. The QuaRel Dataset QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms. The size of the dataset is similar to several other datasets with annotated logical forms used for semantic parsing BIBREF8 , BIBREF25 , BIBREF24 . As the space of LFs is constrained, the dataset is sufficient for a rich exploration of this space. We crowdsourced multiple-choice questions in two parts, encouraging workers to be imaginative and varied in their use of language. First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare. They then created a question, guided by a large number of examples, and were encouraged to be imaginative and use their own words. The results are a remarkable variety of situations and phrasings (Figure FIGREF4 ). Second, the LFs were elicited using a novel technique of reverse-engineering them from a set of follow-up questions, without exposing workers to the underlying formalism. This is possible because of the constrained space of LFs. 
Referring to LF templates (1) and (2) earlier (Section SECREF13 ), these questions are as follows: From this information, we can deduce the target LF ( INLINEFORM0 is the complement of INLINEFORM1 , INLINEFORM2 , we arbitrarily set INLINEFORM3 =world1, hence all other variables can be inferred). Three independent workers answer these follow-up questions to ensure reliable results. We also had a human answer the questions in the dev partition (in principle, they should all be answerable). The human scored 96.4%, the few failures caused by occasional annotation errors or ambiguities in the question set itself, suggesting high fidelity of the content. About half of the dataset are questions about friction, relating five different properties (friction, heat, distance, speed, smoothness). These questions form a meaningful, connected subset of the dataset which we denote QuaRel INLINEFORM0 . The remaining questions involve a wide variety of 14 additional properties and their relations, such as “exercise intensity vs. sweat” or “distance vs. brightness”. Figure FIGREF4 shows typical examples of questions in QuaRel, and Table TABREF26 provides summary statistics. In particular, the vocabulary is highly varied (5226 unique words), given the dataset size. Figure FIGREF27 shows some examples of the varied phrases used to describe smoothness. Baseline Systems We use four systems to evaluate the difficulty of this dataset. (We subsequently present two new models, extending the baseline neural semantic parser, in Sections SECREF36 and SECREF44 ). The first two are an information retrieval system and a word-association method, following the designs of BIBREF26 Clark2016CombiningRS. These are naive baselines that do not parse the question, but nevertheless may find some signal in a large corpus of text that helps guess the correct answer. The third is a CCG-style rule-based semantic parser written specifically for friction questions (the QuaRel INLINEFORM0 subset), but prior to data being collected. The last is a state-of-the-art neural semantic parser. We briefly describe each in turn. Baseline Experiments We ran the above systems on the QuaRel dataset. QuaSP was trained on the training set, using the model with highest parse accuracy on the dev set (similarly BiLSTM used highest answer accuracy on the dev set) . The results are shown in Table TABREF34 . The 95% confidence interval is +/- 4% on the full test set. The human score is the sanity check on the dev set (Section SECREF4 ). As Table TABREF34 shows, the QuaSP model performs better than other baseline approaches which are only slightly above random. QuaSP scores 56.1% (61.7% on the friction subset), indicating the challenges of this dataset. For the rule-based system, we observe that it is unable to parse the majority (66%) of questions (hence scoring 0.5 for those questions, reflecting a random guess), due to the varied and unexpected vocabulary present in the dataset. For example, Figure FIGREF27 shows some of the ways that the notion of “smoother/rougher” is expressed in questions, many of which are not covered by the hand-written CCG grammar. This reflects the typical brittleness of hand-built systems. For QuaSP, we also analyzed the parse accuracies, shown in Table TABREF35 , the score reflecting the percentage of times it produced exactly the right logical form. The random baseline for parse accuracy is near zero given the large space of logical forms, while the model parse accuracies are relatively high, much better than a random baseline. 
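One number in the rule-based result above can be made explicit with a small worked decomposition. Since unparsed questions are scored 0.5 (a random guess on a 2-way choice), and 66% of questions go unparsed, the expected overall score is

  score ≈ 0.66 × 0.5 + 0.34 × a = 0.33 + 0.34 a,

where a is the system's answer accuracy on the questions it does parse (a is not reported in this excerpt, so the expression is left symbolic). Even perfect accuracy on parsed questions would cap the overall score at roughly 0.67, so coverage, not parsing precision, is what limits the rule-based baseline.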
Further analysis of the predicted LFs indicates that the neural model does well at predicting the properties ( INLINEFORM0 25% of errors on dev set), but struggles to predict the worlds in the LFs reliably ( INLINEFORM1 70% of errors on dev set). This helps explain why non-trivial parse accuracy does not necessarily translate into correspondingly higher answer accuracy: If only the world assignment is wrong, the answer will flip and give a score of zero, rather than the average 0.5. New Models We now present two new models, both extensions of the neural baseline QuaSP. The first, QuaSP+, addresses the leading cause of failure just described, namely the problem of identifying the two worlds being compared, and significantly outperforms all the baseline systems. The second, QuaSP+Zero, addresses the scaling problem, namely the costly requirement of needing many training examples each time a new qualitative property is introduced. It does this by instead using only a small amount of lexical information about the new property, thus achieving “zero shot” performance, i.e., handling properties unseen in the training examples BIBREF34 , a capability not present in the baseline systems. We present the models and results for each. QuaSP+: A Model Incorporating World Tracking We define the world tracking problem as identifying and tracking references to different “worlds” being compared in text, i.e., correctly mapping phrases to world identifiers, a critical aspect of the semantic parsing task. There are three reasons why this is challenging. First, unlike properties, the worlds being compared in questions are distinct in almost every question, and thus there is no obvious, learnable mapping from phrases to worlds. For example, while a property (like speed) has learnable ways to refer to it (“faster”, “moves rapidly”, “speeds”, “barely moves”), worlds are different in each question (e.g., “on a road”, “countertop”, “while cutting grass”) and thus learning to identify them is hard. Second, different phrases may be used to refer to the same world in the same question (see Figure FIGREF43 ), further complicating the task. Finally, even if the model could learn to identify worlds in other ways, e.g., by syntactic position in the question, there is the problem of selecting world1 or world2 consistently throughout the parse, so that the equivalent phrasings are assigned the same world. This problem of mapping phrases to world identifiers is similar to the task of entity linking BIBREF35 . In prior semantic parsing work, entity linking is relatively straightforward: simple string-matching heuristics are often sufficient BIBREF36 , BIBREF37 , or an external entity linking system can be used BIBREF38 , BIBREF39 . In QuaRel, however, because the phrases denoting world1 and world2 are different in almost every question, and the word “world” is never used, such methods cannot be applied. To address this, we have developed QuaSP+, a new model that extends QuaSP by adding an extra initial step to identify and delexicalize world references in the question. In this delexicalization process, potentially new linguistic descriptions of worlds are replaced by canonical tokens, creating the opportunity for the model to generalize across questions. 
For example, the world mentions in the question: [vskip=1mm,leftmargin=5mm] “A ball rolls further on wood than carpet because the (A) carpet is smoother (B) wood is smoother” are delexicalized to: [vskip=1mm,leftmargin=5mm] “A ball rolls further on World1 than World2 because the (A) World2 is smoother (B) World1 is smoother” This approach is analogous to BIBREF40 Herzig2018DecouplingSA, who delexicalized words to POS tags to avoid memorization. Similar delexicalized features have also been employed in Open Information Extraction BIBREF41 , so the Open IE system could learn a general model of how relations are expressed. In our case, however, delexicalizing to World1 and World2 is itself a significant challenge, because identifying phrases referring to worlds is substantially more complex than (say) identifying parts of speech. To perform this delexicalization step, we use the world annotations included as part of the training dataset (Section SECREF4 ) to train a separate tagger to identify “world mentions” (text spans) in the question using BIO tags (BiLSTM encoder followed by a CRF). The spans are then sorted into World1 and World2 using the following algorithm: If one span is a substring of another, they are are grouped together. Remaining spans are singleton groups. The two groups containing the longest spans are labeled as the two worlds being compared. Any additional spans are assigned to one of these two groups based on closest edit distance (or ignored if zero overlap). The group appearing first in the question is labeled World1, the other World2. The result is a question in which world mentions are canonicalized. The semantic parser QuaSP is then trained using these questions. We call the combined system (delexicalization plus semantic parser) QuaSP+. The results for QuaSP+ are included in Table TABREF34 . Most importantly, QuaSP+ significantly outperforms the baselines by over 12% absolute. Similarly, the parse accuracies are significantly improved from 32.2% to 43.8% (Table TABREF35 ). This suggests that this delexicalization technique is an effective way of making progress on this dataset, and more generally on problems where multiple situations are being compared, a common characteristic of qualitative problems. QuaSP+Zero: A Model for the Zero-Shot Task While our delexicalization procedure demonstrates a way of addressing the world tracking problem, the approach still relies on annotated data; if we were to add new qualitative relations, new training data would be needed, which is a significant scalability obstacle. To address this, we define the zero-shot problem as being able to answer questions involving a new predicate p given training data only about other predicates P different from p. For example, if we add a new property (e.g., heat) to the qualitative model (e.g., adding q+(friction, heat); “more friction implies more heat”), we want to answer questions involving heat without creating new annotated training questions, and instead only use minimal extra information about the new property. A parser that achieved good zero-shot performance, i.e., worked well for new properties unseen at training time, would be a substantial advance, allowing a new qualitative model to link to questions with minimal effort. QuaRel provides an environment in which methods for this zero-shot theory extension can be devised and evaluated. 
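The span-grouping step used to sort tagged world mentions into World1 and World2 is concrete enough to sketch. The following Python is an illustration, not the released code: the BiLSTM+CRF tagger is omitted and spans are assumed to arrive as (character offset, text) pairs, and "zero overlap" is interpreted here as zero word overlap with the chosen group.

# Sketch of the world-mention grouping heuristic described above (illustrative).

def edit_distance(a, b):
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def group_world_mentions(spans):
    """spans: list of (start_offset, text) from the tagger.
    Returns a mapping from mention text to 'World1' / 'World2'."""
    # 1. Spans where one is a substring of another are grouped; the rest are singletons.
    groups = []
    for start, text in sorted(spans, key=lambda s: -len(s[1])):
        for g in groups:
            if any(text in t or t in text for _, t in g):
                g.append((start, text))
                break
        else:
            groups.append([(start, text)])
    # 2. The two groups containing the longest spans are the worlds being compared
    #    (at least two distinct groups are assumed).
    groups.sort(key=lambda g: -max(len(t) for _, t in g))
    worlds, extra = groups[:2], groups[2:]
    # 3. Remaining spans go to the closer world by edit distance, or are ignored
    #    if they share no words with it.
    for g in extra:
        for start, text in g:
            dists = [min(edit_distance(text, t) for _, t in w) for w in worlds]
            best = 0 if dists[0] <= dists[1] else 1
            if any(set(text.split()) & set(t.split()) for _, t in worlds[best]):
                worlds[best].append((start, text))
    # 4. The group mentioned first in the question is World1, the other World2.
    worlds.sort(key=lambda w: min(s for s, _ in w))
    return {t: "World" + str(i + 1) for i, w in enumerate(worlds) for _, t in w}

# "A ball rolls further on wood than carpet because the
#  (A) carpet is smoother (B) wood is smoother"
# (offsets are approximate; only their relative order matters here)
spans = [(24, "wood"), (34, "carpet"), (58, "carpet"), (81, "wood")]
print(group_world_mentions(spans))  # {'wood': 'World1', 'carpet': 'World2'}

Note that the tagger behind this step is trained on the crowdsourced world annotations, so the heuristic does not remove the need for annotated data; that is the gap the zero-shot setting discussed above is meant to address.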
To do this, we consider the following experimental setting: All questions mentioning a particular property are removed, the parser is trained on the remainder, and then tested on those withheld questions, i.e., questions mentioning a property unseen in the training data. We present and evaluate a model that we have developed for this, called QuaSP+Zero, that modifies the QuaSP+ parser as follows: During decoding, at points where the parser is selecting which property to include in the LF (e.g., Figure FIGREF31 ), it does not just consider the question tokens, but also the relationship between those tokens and the properties INLINEFORM0 used in the qualitative model. For example, a question token such as “longer” can act as a cue for (the property) length, even if unseen in the training data, because “longer” and a lexical form of length (e.g.,“length”) are similar. This approach follows the entity-linking approach used by BIBREF11 Krishnamurthy2017NeuralSP, where the similarity between question tokens and (words associated with) entities - called the entity linking score - help decide which entities to include in the LF during parsing. Here, we modify their entity linking score INLINEFORM1 , linking question tokens INLINEFORM2 and property “entities” INLINEFORM3 , to be: INLINEFORM4 where INLINEFORM0 is a diagonal matrix connecting the embedding of the question token INLINEFORM1 and words INLINEFORM2 associated with the property INLINEFORM3 . For INLINEFORM4 , we provide a small list of words for each property (such as “speed”, “velocity”, and “fast” for the speed property), a small-cost requirement. The results with QuaSP+Zero are in Table TABREF45 , shown in detail on the QuaRel INLINEFORM0 subset and (due to space constraints) summarized for the full QuaRel. We can measure overall performance of QuaSP+Zero by averaging each of the zero-shot test sets (weighted by the number of questions in each set), resulting in an overall parse accuracy of 38.9% and answer accuracy 61.0% on QuaRel INLINEFORM1 , and 25.7% (parse) and 59.5% (answer) on QuaRel, both significantly better than random. These initial results are encouraging, suggesting that it may be possible to parse into modified qualitative models that include new relations, with minimal annotation effort, significantly opening up qualitative reasoning methods for QA. Summary and Conclusion Our goal is to answer questions that involve qualitative relationships, an important genre of task that involves both language and knowledge, but also one that presents significant challenges for semantic parsing. To this end we have developed a simple and flexible formalism for representing these questions; constructed QuaRel, the first dataset of qualitative story questions that exemplifies these challenges; and presented two new models that adapt existing parsing techniques to this task. The first model, QuaSP+, illustrates how delexicalization can help with world tracking (identifying different “worlds” in questions), resulting in state-of-the-art performance on QuaRel. The second model, QuaSP+Zero, illustrates how zero-shot learning can be achieved (i.e., adding new qualitative relationships without requiring new training examples) by using an entity-linking approach applied to properties - a capability not present in previous models. There are several directions in which this work can be expanded. First, quantitative property values (e.g., “10 mph”) are currently not handled well, as their mapping to “low” or “high” is context-dependent. 
Second, some questions do not fit our two question templates (the “Logical Forms for Questions” section), e.g., where two property values appear together in a single answer option (e.g., “....(A) one floor is smooth and the other floor is rough”). Finally, some questions include an additional level of indirection, requiring an inference step to map to qualitative relations. For example, “Which surface would be best for a race? (A) gravel (B) blacktop” requires the additional commonsense inference that “best for a race” implies “higher speed”. Given the ubiquity of qualitative comparisons in natural text, recognizing and reasoning with qualitative relationships is likely to remain an important task for AI. This work makes inroads into this task and contributes a dataset and models to encourage progress by others. The dataset and models are publicly available at http://data.allenai.org/quarel.
information retrieval system, word-association method, CCG-style rule-based semantic parser written specifically for friction questions, state-of-the-art neural semantic parser
a3c6acf900126bc9bd9c50ce99041ea00761da6a
a3c6acf900126bc9bd9c50ce99041ea00761da6a_0
Q: How do they obtain the logical forms of their questions in their dataset? Text: Introduction Many natural language tasks require recognizing and reasoning with qualitative relationships. For example, we may read about temperatures rising (climate science), a drug dose being increased (medicine), or the supply of goods being reduced (economics), and want to reason about the effects. Qualitative story problems, of the kind found in elementary exams (e.g., Figure FIGREF1 ), form a natural example of many of these linguistic and reasoning challenges, and is the target of this work. Understanding and answering such questions is particularly challenging. Corpus-based methods perform poorly in this setting, as the questions ask about novel scenarios rather than facts that can be looked up. Similarly, word association methods struggle, as a single word change (e.g., “more” to “less”) can flip the answer. Rather, the task appears to require knowledge of the underlying qualitative relations (e.g., “more friction implies less speed”). Qualitative modeling BIBREF0 , BIBREF1 , BIBREF2 provides a means for encoding and reasoning about such relationships. Relationships are expressed in a natural, qualitative way (e.g., if X increases, then so will Y), rather than requiring numeric equations, and inference allows complex questions to be answered. However, the semantic parsing task of mapping real world questions into these models is formidable and presents unique challenges. These challenges must be solved if natural questions involving qualitative relationships are to be reliably answered. We make three contributions: (1) a simple and flexible conceptual framework for formally representing these kinds of questions, in particular ones that express qualitative comparisons between two scenarios; (2) a challenging new dataset (QuaRel), including logical forms, exemplifying the parsing challenges; and (3) two novel models that extend type-constrained semantic parsing to address these challenges. Our first model, QuaSP+, addresses the problem of tracking different “worlds” in questions, resulting in significantly higher scores than with off-the-shelf tools (Section SECREF36 ). The second model, QuaSP+Zero, demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships on unseen properties, without requiring additional training data, something not possible with previous models (Section SECREF44 ). Together these contributions make inroads into answering complex, qualitative questions by linking language and reasoning, and offer a new dataset and models to spur further progress by the community. Related Work There has been rapid progress in question-answering (QA), spanning a wide variety of tasks and phenomena, including factoid QA BIBREF3 , entailment BIBREF4 , sentiment BIBREF5 , and ellipsis and coreference BIBREF6 . Our contribution here is the first dataset specifically targeted at qualitative relationships, an important category of language that has been less explored. While questions requiring reasoning about qualitative relations sometimes appear in other datasets, e.g., BIBREF7 , our dataset specifically focuses on them so their challenges can be studied. For answering such questions, we treat the problem as mapping language to a structured formalism (semantic parsing) where simple qualitative reasoning can occur. 
Semantic parsing has a long history BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , using datasets about geography BIBREF8 , travel booking BIBREF12 , factoid QA over knowledge bases BIBREF10 , Wikipedia tables BIBREF13 , and many more. Our contributions to this line of research are: a dataset that features phenomena under-represented in prior datasets, namely (1) highly diverse language describing open-domain qualitative problems, and (2) the need to reason over entities that have no explicit formal representation; and methods for adapting existing semantic parsers to address these phenomena. For the target formalism itself, we draw on the extensive body of work on qualitative reasoning BIBREF0 , BIBREF1 , BIBREF2 to create a logical form language that can express the required qualitative knowledge, yet is sufficiently constrained that parsing into it is feasible, described in more detail in Section SECREF3 . There has been some work connecting language with qualitative reasoning, although mainly focused on extracting qualitative models themselves from text rather than question interpretation, e.g., BIBREF14 , BIBREF15 . Recent work by BIBREF16 crouse2018learning also includes interpreting questions that require identifying qualitative processes in text, in constrast to our setting of interpreting NL story questions that involve qualitative comparisons. Answering story problems has received attention in the domain of arithmetic, where simple algebra story questions (e.g., “Sue had 5 cookies, then gave 2 to Joe...”) are mapped to a system of equations, e.g., BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . This task is loosely analogous to ours (we instead map to qualitative relations) except that in arithmetic the entities to relate are often identifiable (namely, the numbers). Our qualitative story questions lack this structure, adding an extra challenge. The QuaRel dataset shares some structure with the Winograd Schema Challenge BIBREF21 , being 2-way multiple choice questions invoking both commonsense and coreference. However, they test different aspects of commonsense: Winograd uses coreference resolution to test commonsense understanding of scenarios, while QuaRel tests reasoning about qualitative relationships requiring tracking of coreferent “worlds.” Finally, crowdsourcing datasets has become a driving force in AI, producing significant progress, e.g., BIBREF3 , BIBREF22 , BIBREF23 . However, for semantic parsing tasks, one obstacle has been the difficulty in crowdsourcing target logical forms for questions. Here, we show how those logical forms can be obtained indirectly from workers without training the workers in the formalism, loosely similar to BIBREF24 . Knowledge Representation We first describe our framework for representing questions and the knowledge to answer them. Our dataset, described later, includes logical forms expressed in this language. Qualitative Background Knowledge We use a simple representation of qualitative relationships, leveraging prior work in qualitative reasoning BIBREF0 . Let INLINEFORM0 be the set of properties relevant to the question set's domain (e.g., smoothness, friction, speed). Let INLINEFORM1 be a set of qualitative values for property INLINEFORM2 (e.g., fast, slow). 
For the background knowledge about the domain itself (a qualitative model), following BIBREF0 Forbus1984QualitativePT, we use the following predicates: [vskip=1mm,leftmargin=5mm] q+(property1, property2) q-(property1, property2) q+ denotes that property1 and property2 are qualitatively proportional, e.g., if property1 goes up, property2 will too, while q- denotes inverse proportionality, e.g., [vskip=1mm,leftmargin=5mm] # If friction goes up, speed goes down. q-(friction, speed). We also introduce the predicate: [vskip=1mm,leftmargin=5mm] higher-than( INLINEFORM0 , INLINEFORM1 , property INLINEFORM2 ) where INLINEFORM3 , allowing an ordering of property values to be specified, e.g., higher-than(fast, slow, speed). For our purposes here, we simplify to use just two property values, low and high, for all properties. (The parser learns mappings from words to these values, described later). Given these primitives, compact theories can be authored for a particular domain by choosing relevant properties INLINEFORM0 , and specifying qualitative relationships (q+,q-) and ordinal values (higher-than) for them. For example, a simple theory about friction is sketched graphically in Figure FIGREF3 . Our observation is that these theories are relatively small, simple, and easy to author. Rather, the primary challenge is in mapping the complex and varied language of questions into a form that interfaces with this representation. This language can be extended to include additional primitives from qualitative modeling, e.g., i+(x,y) (“the rate of change of x is qualitatively proportional to y”). That is, the techniques we present are not specific to our particular qualitative modeling subset. The only requirement is that, given a set of absolute values or qualitative relationships from a question, the theory can compute an answer. Representing Questions A key feature of our representation is the conceptualization of questions as describing events happening in two worlds, world1 and world2, that are being compared. That comparison may be between two different entities, or the same entity at different time points. E.g., in Figure FIGREF1 the two worlds being compared are the car on wood, and the car on carpet. The tags world1 and world2 denote these different situations, and semantic parsing (Section SECREF5 ) requires learning to correctly associate these tags with parts of the question describing those situations. This abstracts away irrelevant details of the worlds, while still keeping track of which world is which. We define the following two predicates to express qualitative information in questions: [vskip=1mm,leftmargin=5mm] qrel(property, direction, world) qval(property, value, world) where property ( INLINEFORM0 ) INLINEFORM1 P, value INLINEFORM2 INLINEFORM3 , direction INLINEFORM4 {higher, lower}, and world INLINEFORM5 {world1, world2}. qrel() denotes the relative assertion that property is higher/lower in world compared with the other world, which is left implicit, e.g., from Figure FIGREF1 : [vskip=1mm,leftmargin=5mm] # The car rolls further on wood. qrel(distance, higher, world1) where world1 is a tag for the “car on wood” situation (hence world2 becomes a tag for the opposite “car on carpet” situation). qval() denotes that property has an absolute value in world, e.g., [vskip=1mm,leftmargin=5mm] # The car's speed is slow on carpet. 
qval(speed, low, world2) Logical Forms for Questions Despite the wide variation in language, the space of logical forms (LFs) for the questions that we consider is relatively compact. In each question, the question body establishes a scenario and each answer option then probes an implication. We thus express a question's LF as a tuple: [vskip=1mm,leftmargin=5mm] (setup, answer-A, answer-B) where setup is the predicate(s) describing the scenario, and answer-* are the predicate(s) being queried for. If answer-A follows from setup, as inferred by the reasoner, then the answer is (A); similarly for (B). For readability we will write this as [vskip=1mm,leftmargin=5mm] setup INLINEFORM0 answer-A ; answer-B We consider two styles of LF, covering a large range of questions. The first is: [vskip=1mm,leftmargin=5mm] (1) qrel( INLINEFORM1 ) INLINEFORM2 qrel( INLINEFORM0 ) ; qrel( INLINEFORM1 ) which deals with relative values of properties between worlds, and applies when the question setup includes a comparative. An example of this is in Figure FIGREF1 . The second is: [vskip=1mm,leftmargin=5mm] (2) qval( INLINEFORM2 ), qval( INLINEFORM3 ) INLINEFORM4 qrel( INLINEFORM0 ) ; qrel( INLINEFORM1 ) which deals with absolute values of properties, and applies when the setup uses absolute terms instead of comparatives. An example is the first question in Figure FIGREF4 , shown simplified below, whose LF looks as follows (colors showing approximate correspondences): [vskip=1mm,leftmargin=5mm] # Does a bar stool orangeslide faster along the redbar surface with tealdecorative raised bumps or the magentasmooth wooden bluefloor? (A) redbar (B) bluefloor qval(tealsmoothness, low, redworld1), qval(magentasmoothness, high, blueworld2) INLINEFORM0 qrel(orangespeed, higher, redworld1) ; qrel(orangespeed, higher, blueworld2) Inference A small set of rules for qualitative reasoning connects these predicates together. For example, (in logic) if the value of P is higher in world1 than the value of P in world2 and q+(P,Q) then the value of Q will be higher in world1 than the value of Q in world2. Given a question's logical form, a qualitative model, and these rules, a Prolog-style inference engine determines which answer option follows from the premise. The QuaRel Dataset QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms. The size of the dataset is similar to several other datasets with annotated logical forms used for semantic parsing BIBREF8 , BIBREF25 , BIBREF24 . As the space of LFs is constrained, the dataset is sufficient for a rich exploration of this space. We crowdsourced multiple-choice questions in two parts, encouraging workers to be imaginative and varied in their use of language. First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare. They then created a question, guided by a large number of examples, and were encouraged to be imaginative and use their own words. The results are a remarkable variety of situations and phrasings (Figure FIGREF4 ). Second, the LFs were elicited using a novel technique of reverse-engineering them from a set of follow-up questions, without exposing workers to the underlying formalism. This is possible because of the constrained space of LFs. 
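Before returning to the follow-up questions used for elicitation, it may help to see how a template (2) setup reduces to the relative case. The sketch below is an illustration only, not the authors' reasoner; the higher-than table is simplified to the two values used in the paper.

# Reducing an absolute-value setup (LF template (2)) to a qrel (illustrative).

HIGHER_THAN = {("high", "low")}  # higher-than(high, low, property), simplified

def qvals_to_qrel(qval1, qval2):
    """qval*: (property, value, world) for the same property in the two worlds.
    Returns the implied qrel (property, direction, world-of-qval1), or None."""
    (p1, v1, w1), (p2, v2, w2) = qval1, qval2
    assert p1 == p2 and w1 != w2
    if (v1, v2) in HIGHER_THAN:
        return (p1, "higher", w1)
    if (v2, v1) in HIGHER_THAN:
        return (p1, "lower", w1)
    return None  # equal values: no relative assertion follows

# "Does a bar stool slide faster along the bar with raised bumps (world1)
#  or the smooth wooden floor (world2)?"
setup = qvals_to_qrel(("smoothness", "low", "world1"),
                      ("smoothness", "high", "world2"))
print(setup)  # ('smoothness', 'lower', 'world1')

Propagating this qrel through q-(smoothness, friction) and q-(friction, speed) — the former again an assumed relation for the friction domain — yields qrel(speed, higher, world2), i.e., the smooth floor, matching answer (B) in the example above.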
Referring to LF templates (1) and (2) earlier (Section SECREF13 ), these questions are as follows: From this information, we can deduce the target LF ( INLINEFORM0 is the complement of INLINEFORM1 , INLINEFORM2 , we arbitrarily set INLINEFORM3 =world1, hence all other variables can be inferred). Three independent workers answer these follow-up questions to ensure reliable results. We also had a human answer the questions in the dev partition (in principle, they should all be answerable). The human scored 96.4%, the few failures caused by occasional annotation errors or ambiguities in the question set itself, suggesting high fidelity of the content. About half of the dataset are questions about friction, relating five different properties (friction, heat, distance, speed, smoothness). These questions form a meaningful, connected subset of the dataset which we denote QuaRel INLINEFORM0 . The remaining questions involve a wide variety of 14 additional properties and their relations, such as “exercise intensity vs. sweat” or “distance vs. brightness”. Figure FIGREF4 shows typical examples of questions in QuaRel, and Table TABREF26 provides summary statistics. In particular, the vocabulary is highly varied (5226 unique words), given the dataset size. Figure FIGREF27 shows some examples of the varied phrases used to describe smoothness. Baseline Systems We use four systems to evaluate the difficulty of this dataset. (We subsequently present two new models, extending the baseline neural semantic parser, in Sections SECREF36 and SECREF44 ). The first two are an information retrieval system and a word-association method, following the designs of BIBREF26 Clark2016CombiningRS. These are naive baselines that do not parse the question, but nevertheless may find some signal in a large corpus of text that helps guess the correct answer. The third is a CCG-style rule-based semantic parser written specifically for friction questions (the QuaRel INLINEFORM0 subset), but prior to data being collected. The last is a state-of-the-art neural semantic parser. We briefly describe each in turn. Baseline Experiments We ran the above systems on the QuaRel dataset. QuaSP was trained on the training set, using the model with highest parse accuracy on the dev set (similarly BiLSTM used highest answer accuracy on the dev set) . The results are shown in Table TABREF34 . The 95% confidence interval is +/- 4% on the full test set. The human score is the sanity check on the dev set (Section SECREF4 ). As Table TABREF34 shows, the QuaSP model performs better than other baseline approaches which are only slightly above random. QuaSP scores 56.1% (61.7% on the friction subset), indicating the challenges of this dataset. For the rule-based system, we observe that it is unable to parse the majority (66%) of questions (hence scoring 0.5 for those questions, reflecting a random guess), due to the varied and unexpected vocabulary present in the dataset. For example, Figure FIGREF27 shows some of the ways that the notion of “smoother/rougher” is expressed in questions, many of which are not covered by the hand-written CCG grammar. This reflects the typical brittleness of hand-built systems. For QuaSP, we also analyzed the parse accuracies, shown in Table TABREF35 , the score reflecting the percentage of times it produced exactly the right logical form. The random baseline for parse accuracy is near zero given the large space of logical forms, while the model parse accuracies are relatively high, much better than a random baseline. 
Further analysis of the predicted LFs indicates that the neural model does well at predicting the properties ( INLINEFORM0 25% of errors on dev set), but struggles to predict the worlds in the LFs reliably ( INLINEFORM1 70% of errors on dev set). This helps explain why non-trivial parse accuracy does not necessarily translate into correspondingly higher answer accuracy: If only the world assignment is wrong, the answer will flip and give a score of zero, rather than the average 0.5. New Models We now present two new models, both extensions of the neural baseline QuaSP. The first, QuaSP+, addresses the leading cause of failure just described, namely the problem of identifying the two worlds being compared, and significantly outperforms all the baseline systems. The second, QuaSP+Zero, addresses the scaling problem, namely the costly requirement of needing many training examples each time a new qualitative property is introduced. It does this by instead using only a small amount of lexical information about the new property, thus achieving “zero shot” performance, i.e., handling properties unseen in the training examples BIBREF34 , a capability not present in the baseline systems. We present the models and results for each. QuaSP+: A Model Incorporating World Tracking We define the world tracking problem as identifying and tracking references to different “worlds” being compared in text, i.e., correctly mapping phrases to world identifiers, a critical aspect of the semantic parsing task. There are three reasons why this is challenging. First, unlike properties, the worlds being compared in questions are distinct in almost every question, and thus there is no obvious, learnable mapping from phrases to worlds. For example, while a property (like speed) has learnable ways to refer to it (“faster”, “moves rapidly”, “speeds”, “barely moves”), worlds are different in each question (e.g., “on a road”, “countertop”, “while cutting grass”) and thus learning to identify them is hard. Second, different phrases may be used to refer to the same world in the same question (see Figure FIGREF43 ), further complicating the task. Finally, even if the model could learn to identify worlds in other ways, e.g., by syntactic position in the question, there is the problem of selecting world1 or world2 consistently throughout the parse, so that the equivalent phrasings are assigned the same world. This problem of mapping phrases to world identifiers is similar to the task of entity linking BIBREF35 . In prior semantic parsing work, entity linking is relatively straightforward: simple string-matching heuristics are often sufficient BIBREF36 , BIBREF37 , or an external entity linking system can be used BIBREF38 , BIBREF39 . In QuaRel, however, because the phrases denoting world1 and world2 are different in almost every question, and the word “world” is never used, such methods cannot be applied. To address this, we have developed QuaSP+, a new model that extends QuaSP by adding an extra initial step to identify and delexicalize world references in the question. In this delexicalization process, potentially new linguistic descriptions of worlds are replaced by canonical tokens, creating the opportunity for the model to generalize across questions. 
For example, the world mentions in the question: [vskip=1mm,leftmargin=5mm] “A ball rolls further on wood than carpet because the (A) carpet is smoother (B) wood is smoother” are delexicalized to: [vskip=1mm,leftmargin=5mm] “A ball rolls further on World1 than World2 because the (A) World2 is smoother (B) World1 is smoother” This approach is analogous to BIBREF40 Herzig2018DecouplingSA, who delexicalized words to POS tags to avoid memorization. Similar delexicalized features have also been employed in Open Information Extraction BIBREF41 , so the Open IE system could learn a general model of how relations are expressed. In our case, however, delexicalizing to World1 and World2 is itself a significant challenge, because identifying phrases referring to worlds is substantially more complex than (say) identifying parts of speech. To perform this delexicalization step, we use the world annotations included as part of the training dataset (Section SECREF4 ) to train a separate tagger to identify “world mentions” (text spans) in the question using BIO tags (BiLSTM encoder followed by a CRF). The spans are then sorted into World1 and World2 using the following algorithm: If one span is a substring of another, they are are grouped together. Remaining spans are singleton groups. The two groups containing the longest spans are labeled as the two worlds being compared. Any additional spans are assigned to one of these two groups based on closest edit distance (or ignored if zero overlap). The group appearing first in the question is labeled World1, the other World2. The result is a question in which world mentions are canonicalized. The semantic parser QuaSP is then trained using these questions. We call the combined system (delexicalization plus semantic parser) QuaSP+. The results for QuaSP+ are included in Table TABREF34 . Most importantly, QuaSP+ significantly outperforms the baselines by over 12% absolute. Similarly, the parse accuracies are significantly improved from 32.2% to 43.8% (Table TABREF35 ). This suggests that this delexicalization technique is an effective way of making progress on this dataset, and more generally on problems where multiple situations are being compared, a common characteristic of qualitative problems. QuaSP+Zero: A Model for the Zero-Shot Task While our delexicalization procedure demonstrates a way of addressing the world tracking problem, the approach still relies on annotated data; if we were to add new qualitative relations, new training data would be needed, which is a significant scalability obstacle. To address this, we define the zero-shot problem as being able to answer questions involving a new predicate p given training data only about other predicates P different from p. For example, if we add a new property (e.g., heat) to the qualitative model (e.g., adding q+(friction, heat); “more friction implies more heat”), we want to answer questions involving heat without creating new annotated training questions, and instead only use minimal extra information about the new property. A parser that achieved good zero-shot performance, i.e., worked well for new properties unseen at training time, would be a substantial advance, allowing a new qualitative model to link to questions with minimal effort. QuaRel provides an environment in which methods for this zero-shot theory extension can be devised and evaluated. 
To do this, we consider the following experimental setting: All questions mentioning a particular property are removed, the parser is trained on the remainder, and then tested on those withheld questions, i.e., questions mentioning a property unseen in the training data. We present and evaluate a model that we have developed for this, called QuaSP+Zero, that modifies the QuaSP+ parser as follows: During decoding, at points where the parser is selecting which property to include in the LF (e.g., Figure FIGREF31 ), it does not just consider the question tokens, but also the relationship between those tokens and the properties INLINEFORM0 used in the qualitative model. For example, a question token such as “longer” can act as a cue for (the property) length, even if unseen in the training data, because “longer” and a lexical form of length (e.g.,“length”) are similar. This approach follows the entity-linking approach used by BIBREF11 Krishnamurthy2017NeuralSP, where the similarity between question tokens and (words associated with) entities - called the entity linking score - help decide which entities to include in the LF during parsing. Here, we modify their entity linking score INLINEFORM1 , linking question tokens INLINEFORM2 and property “entities” INLINEFORM3 , to be: INLINEFORM4 where INLINEFORM0 is a diagonal matrix connecting the embedding of the question token INLINEFORM1 and words INLINEFORM2 associated with the property INLINEFORM3 . For INLINEFORM4 , we provide a small list of words for each property (such as “speed”, “velocity”, and “fast” for the speed property), a small-cost requirement. The results with QuaSP+Zero are in Table TABREF45 , shown in detail on the QuaRel INLINEFORM0 subset and (due to space constraints) summarized for the full QuaRel. We can measure overall performance of QuaSP+Zero by averaging each of the zero-shot test sets (weighted by the number of questions in each set), resulting in an overall parse accuracy of 38.9% and answer accuracy 61.0% on QuaRel INLINEFORM1 , and 25.7% (parse) and 59.5% (answer) on QuaRel, both significantly better than random. These initial results are encouraging, suggesting that it may be possible to parse into modified qualitative models that include new relations, with minimal annotation effort, significantly opening up qualitative reasoning methods for QA. Summary and Conclusion Our goal is to answer questions that involve qualitative relationships, an important genre of task that involves both language and knowledge, but also one that presents significant challenges for semantic parsing. To this end we have developed a simple and flexible formalism for representing these questions; constructed QuaRel, the first dataset of qualitative story questions that exemplifies these challenges; and presented two new models that adapt existing parsing techniques to this task. The first model, QuaSP+, illustrates how delexicalization can help with world tracking (identifying different “worlds” in questions), resulting in state-of-the-art performance on QuaRel. The second model, QuaSP+Zero, illustrates how zero-shot learning can be achieved (i.e., adding new qualitative relationships without requiring new training examples) by using an entity-linking approach applied to properties - a capability not present in previous models. There are several directions in which this work can be expanded. First, quantitative property values (e.g., “10 mph”) are currently not handled well, as their mapping to “low” or “high” is context-dependent. 
Second, some questions do not fit our two question templates (the “Logical Forms for Questions” section), e.g., where two property values appear together in a single answer option (e.g., “....(A) one floor is smooth and the other floor is rough”). Finally, some questions include an additional level of indirection, requiring an inference step to map to qualitative relations. For example, “Which surface would be best for a race? (A) gravel (B) blacktop” requires the additional commonsense inference that “best for a race” implies “higher speed”. Given the ubiquity of qualitative comparisons in natural text, recognizing and reasoning with qualitative relationships is likely to remain an important task for AI. This work makes inroads into this task and contributes a dataset and models to encourage progress by others. The dataset and models are publicly available at http://data.allenai.org/quarel.
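One component of QuaSP+Zero described above, the property-linking score, is easy to illustrate numerically. The sketch below is assumption-heavy: the 3-dimensional toy embeddings, the lexicon entries, and the max-aggregation over a property's lexicon words are illustrative choices, since the excerpt only specifies that a diagonally-weighted similarity between question tokens and a small per-property word list is used.

# Illustrative property-linking score for zero-shot property selection.

EMB = {  # toy 3-d word embeddings; in practice these would be pretrained vectors
    "longer":   [0.90, 0.10, 0.00],
    "length":   [0.80, 0.20, 0.10],
    "long":     [0.85, 0.10, 0.05],
    "speed":    [0.00, 0.80, 0.10],
    "velocity": [0.10, 0.90, 0.10],
    "fast":     [0.10, 0.90, 0.00],
}
LEXICON = {  # the small, cheap-to-author word list per property
    "length": ["length", "long"],
    "speed":  ["speed", "velocity", "fast"],
}
DIAG = [1.0, 1.0, 0.5]  # the diagonal weighting, fixed here for the demo

def link_score(token, prop):
    """Similarity between a question token and a property's lexicon words
    (taking the max over the lexicon is an assumed aggregation)."""
    q = EMB[token]
    return max(sum(d * a * b for d, a, b in zip(DIAG, q, EMB[w]))
               for w in LEXICON[prop])

# A cue word unseen at training time, "longer", still links to length:
for prop in LEXICON:
    print(prop, round(link_score("longer", prop), 3))
# 'length' scores highest, so the decoder is biased toward selecting it.

During decoding this score is added when the parser chooses which property to place in the LF, which is what allows a property unseen in the training questions to be selected from its lexical cues alone.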
workers were given a seed qualitative relation and asked to enter two objects, people, or situations to compare; they then created a question, guided by a large number of examples; the LFs were elicited using a novel technique of reverse-engineering them from a set of follow-up questions
31b631a8634f6180b20a72477040046d1e085494
31b631a8634f6180b20a72477040046d1e085494_0
Q: Do all questions in the dataset allow the answers to pick from 2 options? Text: Introduction Many natural language tasks require recognizing and reasoning with qualitative relationships. For example, we may read about temperatures rising (climate science), a drug dose being increased (medicine), or the supply of goods being reduced (economics), and want to reason about the effects. Qualitative story problems, of the kind found in elementary exams (e.g., Figure FIGREF1 ), form a natural example of many of these linguistic and reasoning challenges, and is the target of this work. Understanding and answering such questions is particularly challenging. Corpus-based methods perform poorly in this setting, as the questions ask about novel scenarios rather than facts that can be looked up. Similarly, word association methods struggle, as a single word change (e.g., “more” to “less”) can flip the answer. Rather, the task appears to require knowledge of the underlying qualitative relations (e.g., “more friction implies less speed”). Qualitative modeling BIBREF0 , BIBREF1 , BIBREF2 provides a means for encoding and reasoning about such relationships. Relationships are expressed in a natural, qualitative way (e.g., if X increases, then so will Y), rather than requiring numeric equations, and inference allows complex questions to be answered. However, the semantic parsing task of mapping real world questions into these models is formidable and presents unique challenges. These challenges must be solved if natural questions involving qualitative relationships are to be reliably answered. We make three contributions: (1) a simple and flexible conceptual framework for formally representing these kinds of questions, in particular ones that express qualitative comparisons between two scenarios; (2) a challenging new dataset (QuaRel), including logical forms, exemplifying the parsing challenges; and (3) two novel models that extend type-constrained semantic parsing to address these challenges. Our first model, QuaSP+, addresses the problem of tracking different “worlds” in questions, resulting in significantly higher scores than with off-the-shelf tools (Section SECREF36 ). The second model, QuaSP+Zero, demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships on unseen properties, without requiring additional training data, something not possible with previous models (Section SECREF44 ). Together these contributions make inroads into answering complex, qualitative questions by linking language and reasoning, and offer a new dataset and models to spur further progress by the community. Related Work There has been rapid progress in question-answering (QA), spanning a wide variety of tasks and phenomena, including factoid QA BIBREF3 , entailment BIBREF4 , sentiment BIBREF5 , and ellipsis and coreference BIBREF6 . Our contribution here is the first dataset specifically targeted at qualitative relationships, an important category of language that has been less explored. While questions requiring reasoning about qualitative relations sometimes appear in other datasets, e.g., BIBREF7 , our dataset specifically focuses on them so their challenges can be studied. For answering such questions, we treat the problem as mapping language to a structured formalism (semantic parsing) where simple qualitative reasoning can occur. 
Semantic parsing has a long history BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , using datasets about geography BIBREF8 , travel booking BIBREF12 , factoid QA over knowledge bases BIBREF10 , Wikipedia tables BIBREF13 , and many more. Our contributions to this line of research are: a dataset that features phenomena under-represented in prior datasets, namely (1) highly diverse language describing open-domain qualitative problems, and (2) the need to reason over entities that have no explicit formal representation; and methods for adapting existing semantic parsers to address these phenomena. For the target formalism itself, we draw on the extensive body of work on qualitative reasoning BIBREF0 , BIBREF1 , BIBREF2 to create a logical form language that can express the required qualitative knowledge, yet is sufficiently constrained that parsing into it is feasible, described in more detail in Section SECREF3 . There has been some work connecting language with qualitative reasoning, although mainly focused on extracting qualitative models themselves from text rather than question interpretation, e.g., BIBREF14 , BIBREF15 . Recent work by BIBREF16 crouse2018learning also includes interpreting questions that require identifying qualitative processes in text, in constrast to our setting of interpreting NL story questions that involve qualitative comparisons. Answering story problems has received attention in the domain of arithmetic, where simple algebra story questions (e.g., “Sue had 5 cookies, then gave 2 to Joe...”) are mapped to a system of equations, e.g., BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . This task is loosely analogous to ours (we instead map to qualitative relations) except that in arithmetic the entities to relate are often identifiable (namely, the numbers). Our qualitative story questions lack this structure, adding an extra challenge. The QuaRel dataset shares some structure with the Winograd Schema Challenge BIBREF21 , being 2-way multiple choice questions invoking both commonsense and coreference. However, they test different aspects of commonsense: Winograd uses coreference resolution to test commonsense understanding of scenarios, while QuaRel tests reasoning about qualitative relationships requiring tracking of coreferent “worlds.” Finally, crowdsourcing datasets has become a driving force in AI, producing significant progress, e.g., BIBREF3 , BIBREF22 , BIBREF23 . However, for semantic parsing tasks, one obstacle has been the difficulty in crowdsourcing target logical forms for questions. Here, we show how those logical forms can be obtained indirectly from workers without training the workers in the formalism, loosely similar to BIBREF24 . Knowledge Representation We first describe our framework for representing questions and the knowledge to answer them. Our dataset, described later, includes logical forms expressed in this language. Qualitative Background Knowledge We use a simple representation of qualitative relationships, leveraging prior work in qualitative reasoning BIBREF0 . Let INLINEFORM0 be the set of properties relevant to the question set's domain (e.g., smoothness, friction, speed). Let INLINEFORM1 be a set of qualitative values for property INLINEFORM2 (e.g., fast, slow). 
For the background knowledge about the domain itself (a qualitative model), following BIBREF0 Forbus1984QualitativePT, we use the following predicates: [vskip=1mm,leftmargin=5mm] q+(property1, property2) q-(property1, property2) q+ denotes that property1 and property2 are qualitatively proportional, e.g., if property1 goes up, property2 will too, while q- denotes inverse proportionality, e.g., [vskip=1mm,leftmargin=5mm] # If friction goes up, speed goes down. q-(friction, speed). We also introduce the predicate: [vskip=1mm,leftmargin=5mm] higher-than( INLINEFORM0 , INLINEFORM1 , property INLINEFORM2 ) where INLINEFORM3 , allowing an ordering of property values to be specified, e.g., higher-than(fast, slow, speed). For our purposes here, we simplify to use just two property values, low and high, for all properties. (The parser learns mappings from words to these values, described later). Given these primitives, compact theories can be authored for a particular domain by choosing relevant properties INLINEFORM0 , and specifying qualitative relationships (q+,q-) and ordinal values (higher-than) for them. For example, a simple theory about friction is sketched graphically in Figure FIGREF3 . Our observation is that these theories are relatively small, simple, and easy to author. Rather, the primary challenge is in mapping the complex and varied language of questions into a form that interfaces with this representation. This language can be extended to include additional primitives from qualitative modeling, e.g., i+(x,y) (“the rate of change of x is qualitatively proportional to y”). That is, the techniques we present are not specific to our particular qualitative modeling subset. The only requirement is that, given a set of absolute values or qualitative relationships from a question, the theory can compute an answer. Representing Questions A key feature of our representation is the conceptualization of questions as describing events happening in two worlds, world1 and world2, that are being compared. That comparison may be between two different entities, or the same entity at different time points. E.g., in Figure FIGREF1 the two worlds being compared are the car on wood, and the car on carpet. The tags world1 and world2 denote these different situations, and semantic parsing (Section SECREF5 ) requires learning to correctly associate these tags with parts of the question describing those situations. This abstracts away irrelevant details of the worlds, while still keeping track of which world is which. We define the following two predicates to express qualitative information in questions: [vskip=1mm,leftmargin=5mm] qrel(property, direction, world) qval(property, value, world) where property ( INLINEFORM0 ) INLINEFORM1 P, value INLINEFORM2 INLINEFORM3 , direction INLINEFORM4 {higher, lower}, and world INLINEFORM5 {world1, world2}. qrel() denotes the relative assertion that property is higher/lower in world compared with the other world, which is left implicit, e.g., from Figure FIGREF1 : [vskip=1mm,leftmargin=5mm] # The car rolls further on wood. qrel(distance, higher, world1) where world1 is a tag for the “car on wood” situation (hence world2 becomes a tag for the opposite “car on carpet” situation). qval() denotes that property has an absolute value in world, e.g., [vskip=1mm,leftmargin=5mm] # The car's speed is slow on carpet. 
qval(speed, low, world2) Logical Forms for Questions Despite the wide variation in language, the space of logical forms (LFs) for the questions that we consider is relatively compact. In each question, the question body establishes a scenario and each answer option then probes an implication. We thus express a question's LF as a tuple: [vskip=1mm,leftmargin=5mm] (setup, answer-A, answer-B) where setup is the predicate(s) describing the scenario, and answer-* are the predicate(s) being queried for. If answer-A follows from setup, as inferred by the reasoner, then the answer is (A); similarly for (B). For readability we will write this as [vskip=1mm,leftmargin=5mm] setup INLINEFORM0 answer-A ; answer-B We consider two styles of LF, covering a large range of questions. The first is: [vskip=1mm,leftmargin=5mm] (1) qrel( INLINEFORM1 ) INLINEFORM2 qrel( INLINEFORM0 ) ; qrel( INLINEFORM1 ) which deals with relative values of properties between worlds, and applies when the question setup includes a comparative. An example of this is in Figure FIGREF1 . The second is: [vskip=1mm,leftmargin=5mm] (2) qval( INLINEFORM2 ), qval( INLINEFORM3 ) INLINEFORM4 qrel( INLINEFORM0 ) ; qrel( INLINEFORM1 ) which deals with absolute values of properties, and applies when the setup uses absolute terms instead of comparatives. An example is the first question in Figure FIGREF4 , shown simplified below, whose LF looks as follows (colors showing approximate correspondences): [vskip=1mm,leftmargin=5mm] # Does a bar stool orangeslide faster along the redbar surface with tealdecorative raised bumps or the magentasmooth wooden bluefloor? (A) redbar (B) bluefloor qval(tealsmoothness, low, redworld1), qval(magentasmoothness, high, blueworld2) INLINEFORM0 qrel(orangespeed, higher, redworld1) ; qrel(orangespeed, higher, blueworld2) Inference A small set of rules for qualitative reasoning connects these predicates together. For example, (in logic) if the value of P is higher in world1 than the value of P in world2 and q+(P,Q) then the value of Q will be higher in world1 than the value of Q in world2. Given a question's logical form, a qualitative model, and these rules, a Prolog-style inference engine determines which answer option follows from the premise. The QuaRel Dataset QuaRel is a crowdsourced dataset of 2771 multiple-choice story questions, including their logical forms. The size of the dataset is similar to several other datasets with annotated logical forms used for semantic parsing BIBREF8 , BIBREF25 , BIBREF24 . As the space of LFs is constrained, the dataset is sufficient for a rich exploration of this space. We crowdsourced multiple-choice questions in two parts, encouraging workers to be imaginative and varied in their use of language. First, workers were given a seed qualitative relation q+/-( INLINEFORM0 ) in the domain, expressed in English (e.g., “If a surface has more friction, then an object will travel slower”), and asked to enter two objects, people, or situations to compare. They then created a question, guided by a large number of examples, and were encouraged to be imaginative and use their own words. The results are a remarkable variety of situations and phrasings (Figure FIGREF4 ). Second, the LFs were elicited using a novel technique of reverse-engineering them from a set of follow-up questions, without exposing workers to the underlying formalism. This is possible because of the constrained space of LFs. 
Referring to LF templates (1) and (2) earlier (Section SECREF13 ), these questions are as follows: From this information, we can deduce the target LF ( INLINEFORM0 is the complement of INLINEFORM1 , INLINEFORM2 , we arbitrarily set INLINEFORM3 =world1, hence all other variables can be inferred). Three independent workers answer these follow-up questions to ensure reliable results. We also had a human answer the questions in the dev partition (in principle, they should all be answerable). The human scored 96.4%, the few failures caused by occasional annotation errors or ambiguities in the question set itself, suggesting high fidelity of the content. About half of the dataset are questions about friction, relating five different properties (friction, heat, distance, speed, smoothness). These questions form a meaningful, connected subset of the dataset which we denote QuaRel INLINEFORM0 . The remaining questions involve a wide variety of 14 additional properties and their relations, such as “exercise intensity vs. sweat” or “distance vs. brightness”. Figure FIGREF4 shows typical examples of questions in QuaRel, and Table TABREF26 provides summary statistics. In particular, the vocabulary is highly varied (5226 unique words), given the dataset size. Figure FIGREF27 shows some examples of the varied phrases used to describe smoothness. Baseline Systems We use four systems to evaluate the difficulty of this dataset. (We subsequently present two new models, extending the baseline neural semantic parser, in Sections SECREF36 and SECREF44 ). The first two are an information retrieval system and a word-association method, following the designs of BIBREF26 Clark2016CombiningRS. These are naive baselines that do not parse the question, but nevertheless may find some signal in a large corpus of text that helps guess the correct answer. The third is a CCG-style rule-based semantic parser written specifically for friction questions (the QuaRel INLINEFORM0 subset), but prior to data being collected. The last is a state-of-the-art neural semantic parser. We briefly describe each in turn. Baseline Experiments We ran the above systems on the QuaRel dataset. QuaSP was trained on the training set, using the model with highest parse accuracy on the dev set (similarly BiLSTM used highest answer accuracy on the dev set) . The results are shown in Table TABREF34 . The 95% confidence interval is +/- 4% on the full test set. The human score is the sanity check on the dev set (Section SECREF4 ). As Table TABREF34 shows, the QuaSP model performs better than other baseline approaches which are only slightly above random. QuaSP scores 56.1% (61.7% on the friction subset), indicating the challenges of this dataset. For the rule-based system, we observe that it is unable to parse the majority (66%) of questions (hence scoring 0.5 for those questions, reflecting a random guess), due to the varied and unexpected vocabulary present in the dataset. For example, Figure FIGREF27 shows some of the ways that the notion of “smoother/rougher” is expressed in questions, many of which are not covered by the hand-written CCG grammar. This reflects the typical brittleness of hand-built systems. For QuaSP, we also analyzed the parse accuracies, shown in Table TABREF35 , the score reflecting the percentage of times it produced exactly the right logical form. The random baseline for parse accuracy is near zero given the large space of logical forms, while the model parse accuracies are relatively high, much better than a random baseline. 
Further analysis of the predicted LFs indicates that the neural model does well at predicting the properties ( INLINEFORM0 25% of errors on dev set), but struggles to predict the worlds in the LFs reliably ( INLINEFORM1 70% of errors on dev set). This helps explain why non-trivial parse accuracy does not necessarily translate into correspondingly higher answer accuracy: If only the world assignment is wrong, the answer will flip and give a score of zero, rather than the average 0.5. New Models We now present two new models, both extensions of the neural baseline QuaSP. The first, QuaSP+, addresses the leading cause of failure just described, namely the problem of identifying the two worlds being compared, and significantly outperforms all the baseline systems. The second, QuaSP+Zero, addresses the scaling problem, namely the costly requirement of needing many training examples each time a new qualitative property is introduced. It does this by instead using only a small amount of lexical information about the new property, thus achieving “zero shot” performance, i.e., handling properties unseen in the training examples BIBREF34 , a capability not present in the baseline systems. We present the models and results for each. QuaSP+: A Model Incorporating World Tracking We define the world tracking problem as identifying and tracking references to different “worlds” being compared in text, i.e., correctly mapping phrases to world identifiers, a critical aspect of the semantic parsing task. There are three reasons why this is challenging. First, unlike properties, the worlds being compared in questions are distinct in almost every question, and thus there is no obvious, learnable mapping from phrases to worlds. For example, while a property (like speed) has learnable ways to refer to it (“faster”, “moves rapidly”, “speeds”, “barely moves”), worlds are different in each question (e.g., “on a road”, “countertop”, “while cutting grass”) and thus learning to identify them is hard. Second, different phrases may be used to refer to the same world in the same question (see Figure FIGREF43 ), further complicating the task. Finally, even if the model could learn to identify worlds in other ways, e.g., by syntactic position in the question, there is the problem of selecting world1 or world2 consistently throughout the parse, so that the equivalent phrasings are assigned the same world. This problem of mapping phrases to world identifiers is similar to the task of entity linking BIBREF35 . In prior semantic parsing work, entity linking is relatively straightforward: simple string-matching heuristics are often sufficient BIBREF36 , BIBREF37 , or an external entity linking system can be used BIBREF38 , BIBREF39 . In QuaRel, however, because the phrases denoting world1 and world2 are different in almost every question, and the word “world” is never used, such methods cannot be applied. To address this, we have developed QuaSP+, a new model that extends QuaSP by adding an extra initial step to identify and delexicalize world references in the question. In this delexicalization process, potentially new linguistic descriptions of worlds are replaced by canonical tokens, creating the opportunity for the model to generalize across questions. 
For example, the world mentions in the question: [vskip=1mm,leftmargin=5mm] “A ball rolls further on wood than carpet because the (A) carpet is smoother (B) wood is smoother” are delexicalized to: [vskip=1mm,leftmargin=5mm] “A ball rolls further on World1 than World2 because the (A) World2 is smoother (B) World1 is smoother” This approach is analogous to BIBREF40 Herzig2018DecouplingSA, who delexicalized words to POS tags to avoid memorization. Similar delexicalized features have also been employed in Open Information Extraction BIBREF41 , so the Open IE system could learn a general model of how relations are expressed. In our case, however, delexicalizing to World1 and World2 is itself a significant challenge, because identifying phrases referring to worlds is substantially more complex than (say) identifying parts of speech. To perform this delexicalization step, we use the world annotations included as part of the training dataset (Section SECREF4 ) to train a separate tagger to identify “world mentions” (text spans) in the question using BIO tags (BiLSTM encoder followed by a CRF). The spans are then sorted into World1 and World2 using the following algorithm: If one span is a substring of another, they are are grouped together. Remaining spans are singleton groups. The two groups containing the longest spans are labeled as the two worlds being compared. Any additional spans are assigned to one of these two groups based on closest edit distance (or ignored if zero overlap). The group appearing first in the question is labeled World1, the other World2. The result is a question in which world mentions are canonicalized. The semantic parser QuaSP is then trained using these questions. We call the combined system (delexicalization plus semantic parser) QuaSP+. The results for QuaSP+ are included in Table TABREF34 . Most importantly, QuaSP+ significantly outperforms the baselines by over 12% absolute. Similarly, the parse accuracies are significantly improved from 32.2% to 43.8% (Table TABREF35 ). This suggests that this delexicalization technique is an effective way of making progress on this dataset, and more generally on problems where multiple situations are being compared, a common characteristic of qualitative problems. QuaSP+Zero: A Model for the Zero-Shot Task While our delexicalization procedure demonstrates a way of addressing the world tracking problem, the approach still relies on annotated data; if we were to add new qualitative relations, new training data would be needed, which is a significant scalability obstacle. To address this, we define the zero-shot problem as being able to answer questions involving a new predicate p given training data only about other predicates P different from p. For example, if we add a new property (e.g., heat) to the qualitative model (e.g., adding q+(friction, heat); “more friction implies more heat”), we want to answer questions involving heat without creating new annotated training questions, and instead only use minimal extra information about the new property. A parser that achieved good zero-shot performance, i.e., worked well for new properties unseen at training time, would be a substantial advance, allowing a new qualitative model to link to questions with minimal effort. QuaRel provides an environment in which methods for this zero-shot theory extension can be devised and evaluated. 
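Stepping back to the delexicalization step: below is a rough sketch of the span-grouping heuristic described above, assuming the BIO tagger has already produced the candidate world-mention spans and that at least two distinct groups are found. The exact edit-distance and overlap tests are not specified in the text, so difflib similarity and a shared-token check stand in for them here, and the function name is mine.

```python
from difflib import SequenceMatcher

def group_world_mentions(spans):
    """Sort tagged world-mention spans into World1/World2 following the heuristic
    above: merge substring spans, keep the two groups with the longest spans as the
    compared worlds, attach leftovers to the closest world, and label the group
    mentioned first in the question as World1. Sketch only."""
    # 1. Merge spans where one is a substring of another; the rest stay singletons.
    groups = []
    for text, pos in sorted(spans, key=lambda s: s[1]):
        for g in groups:
            if any(text in t or t in text for t, _ in g):
                g.append((text, pos))
                break
        else:
            groups.append([(text, pos)])

    # 2. The two groups holding the longest spans become the compared worlds.
    groups.sort(key=lambda g: max(len(t) for t, _ in g), reverse=True)
    worlds, extras = groups[:2], groups[2:]

    # 3. Assign leftover spans to the more similar world, ignoring spans that
    #    share no tokens with it (a stand-in for the "zero overlap" test).
    for g in extras:
        for text, pos in g:
            best = max(worlds, key=lambda w: SequenceMatcher(None, text, w[0][0]).ratio())
            if set(text.lower().split()) & set(best[0][0].lower().split()):
                best.append((text, pos))

    # 4. The group appearing first in the question is labeled World1.
    worlds.sort(key=lambda g: min(p for _, p in g))
    return {"World1": [t for t, _ in worlds[0]], "World2": [t for t, _ in worlds[1]]}

spans = [("wood", 24), ("carpet", 37), ("the carpet", 58), ("wood", 76)]
print(group_world_mentions(spans))
# {'World1': ['wood', 'wood'], 'World2': ['carpet', 'the carpet']}
```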
To do this, we consider the following experimental setting: All questions mentioning a particular property are removed, the parser is trained on the remainder, and then tested on those withheld questions, i.e., questions mentioning a property unseen in the training data. We present and evaluate a model that we have developed for this, called QuaSP+Zero, that modifies the QuaSP+ parser as follows: During decoding, at points where the parser is selecting which property to include in the LF (e.g., Figure FIGREF31 ), it does not just consider the question tokens, but also the relationship between those tokens and the properties INLINEFORM0 used in the qualitative model. For example, a question token such as “longer” can act as a cue for (the property) length, even if unseen in the training data, because “longer” and a lexical form of length (e.g.,“length”) are similar. This approach follows the entity-linking approach used by BIBREF11 Krishnamurthy2017NeuralSP, where the similarity between question tokens and (words associated with) entities - called the entity linking score - help decide which entities to include in the LF during parsing. Here, we modify their entity linking score INLINEFORM1 , linking question tokens INLINEFORM2 and property “entities” INLINEFORM3 , to be: INLINEFORM4 where INLINEFORM0 is a diagonal matrix connecting the embedding of the question token INLINEFORM1 and words INLINEFORM2 associated with the property INLINEFORM3 . For INLINEFORM4 , we provide a small list of words for each property (such as “speed”, “velocity”, and “fast” for the speed property), a small-cost requirement. The results with QuaSP+Zero are in Table TABREF45 , shown in detail on the QuaRel INLINEFORM0 subset and (due to space constraints) summarized for the full QuaRel. We can measure overall performance of QuaSP+Zero by averaging each of the zero-shot test sets (weighted by the number of questions in each set), resulting in an overall parse accuracy of 38.9% and answer accuracy 61.0% on QuaRel INLINEFORM1 , and 25.7% (parse) and 59.5% (answer) on QuaRel, both significantly better than random. These initial results are encouraging, suggesting that it may be possible to parse into modified qualitative models that include new relations, with minimal annotation effort, significantly opening up qualitative reasoning methods for QA. Summary and Conclusion Our goal is to answer questions that involve qualitative relationships, an important genre of task that involves both language and knowledge, but also one that presents significant challenges for semantic parsing. To this end we have developed a simple and flexible formalism for representing these questions; constructed QuaRel, the first dataset of qualitative story questions that exemplifies these challenges; and presented two new models that adapt existing parsing techniques to this task. The first model, QuaSP+, illustrates how delexicalization can help with world tracking (identifying different “worlds” in questions), resulting in state-of-the-art performance on QuaRel. The second model, QuaSP+Zero, illustrates how zero-shot learning can be achieved (i.e., adding new qualitative relationships without requiring new training examples) by using an entity-linking approach applied to properties - a capability not present in previous models. There are several directions in which this work can be expanded. First, quantitative property values (e.g., “10 mph”) are currently not handled well, as their mapping to “low” or “high” is context-dependent. 
Second, some questions do not fit our two question templates (Section SECREF13 ), e.g., where two property values are a single answer option (e.g., “....(A) one floor is smooth and the other floor is rough”). Finally, some questions include an additional level of indirection, requiring an inference step to map to qualitative relations. For example, “Which surface would be best for a race? (A) gravel (B) blacktop” requires the additional commonsense inference that “best for a race” implies “higher speed”. Given the ubiquity of qualitative comparisons in natural text, recognizing and reasoning with qualitative relationships is likely to remain an important task for AI. This work makes inroads into this task, and contributes a dataset and models to encourage progress by others. The dataset and models are publicly available at http://data.allenai.org/quarel.
Yes
ab78f066144936444ecd164dc695bec1cb356762
ab78f066144936444ecd164dc695bec1cb356762_0
Q: What is shared in the joint model? Text: Introduction One of the exciting yet challenging areas of research in Intelligent Transportation Systems is developing context-awareness technologies that can enable autonomous vehicles to interact with their passengers, understand passenger context and situations, and take appropriate actions accordingly. To this end, building multi-modal dialogue understanding capabilities situated in the in-cabin context is crucial to enhance passenger comfort and gain user confidence in AV interaction systems. Among many components of such systems, intent recognition and slot filling modules are one of the core building blocks towards carrying out successful dialogue with passengers. As an initial attempt to tackle some of those challenges, this study introduce in-cabin intent detection and slot filling models to identify passengers' intent and extract semantic frames from the natural language utterances in AV. The proposed models are developed by leveraging User Experience (UX) grounded realistic (ecologically valid) in-cabin dataset. This dataset is generated with naturalistic passenger behaviors, multiple passenger interactions, and with presence of a Wizard-of-Oz (WoZ) agent in moving vehicles with noisy road conditions. Background Long Short-Term Memory (LSTM) networks BIBREF0 are widely-used for temporal sequence learning or time-series modeling in Natural Language Processing (NLP). These neural networks are commonly employed for sequence-to-sequence (seq2seq) and sequence-to-one (seq2one) modeling problems, including slot filling tasks BIBREF1 and utterance-level intent classification BIBREF2 , BIBREF3 which are well-studied for various application domains. Bidirectional LSTMs (Bi-LSTMs) BIBREF4 are extensions of traditional LSTMs which are proposed to improve model performance on sequence classification problems even further. Jointly modeling slot extraction and intent recognition BIBREF2 , BIBREF5 is also explored in several architectures for task-specific applications in NLP. Using Attention mechanism BIBREF6 , BIBREF7 on top of RNNs is yet another recent break-through to elevate the model performance by attending inherently crucial sub-modules of given input. There exist various architectures to build hierarchical learning models BIBREF8 , BIBREF9 , BIBREF10 for document-to-sentence level, and sentence-to-word level classification tasks, which are highly domain-dependent and task-specific. Automatic Speech Recognition (ASR) technology has recently achieved human-level accuracy in many fields BIBREF11 , BIBREF12 . For spoken language understanding (SLU), it is shown that training SLU models on true text input (i.e., human transcriptions) versus noisy speech input (i.e., ASR outputs) can achieve varying results BIBREF13 . Even greater performance degradations are expected in more challenging and realistic setups with noisy environments, such as moving vehicles in actual traffic conditions. As an example, a recent work BIBREF14 attempts to classify sentences as navigation-related or not using the DARPA supported CU-Move in-vehicle speech corpus BIBREF15 , a relatively old and large corpus focusing on route navigation. For this binary intent classification task, the authors observed that detection performances are largely affected by high ASR error rates due to background noise and multi-speakers in CU-Move dataset (not publicly available). 
For in-cabin dialogue between car assistants and driver/passengers, recent studies explored creating a public dataset using a WoZ approach BIBREF16 , and improving ASR for passenger speech recognition BIBREF17 . A preliminary report on research designed to collect data for human-agent interactions in a moving vehicle is presented in a previous study BIBREF18 , with qualitative analysis on initial observations and user interviews. Our current study is focused on the quantitative analysis of natural language interactions found in this in-vehicle dataset BIBREF19 , where we address intent detection and slot extraction tasks for passengers interacting with the AMIE in-cabin agent. In this study, we propose a UX grounded realistic intent recognition and slot filling models with naturalistic passenger-vehicle interactions in moving vehicles. Based on observed interactions, we defined in-vehicle intent types and refined their relevant slots through a data driven process. After exploring existing approaches for jointly training intents and slots, we applied certain variations of these models that perform best on our dataset to support various natural commands for interacting with the car-agent. The main differences in our proposed models can be summarized as follows: (1) Using the extracted intent keywords in addition to the slots to jointly model them with utterance-level intents (where most of the previous work BIBREF8 , BIBREF9 only join slots and utterance-level intents, ignoring the intent keywords); (2) The 2-level hierarchy we defined by word-level detection/extraction for slots and intent keywords first, and then filtering-out predicted non-slot and non-intent keywords instead of feeding them into the upper levels of the network (i.e., instead of using stacked RNNs with multiple recurrent hidden layers for the full utterance BIBREF9 , BIBREF10 , which are computationally costly for long utterances with many non-slot & non-intent-related words), and finally using only the predicted valid-slots and intent-related keywords as an input to the second level of the hierarchy; (3) Extending joint models BIBREF2 , BIBREF5 to include both beginning-of-utterance and end-of-utterance tokens to leverage Bi-LSTMs (after observing that we achieved better results by doing so). We compared our intent detection and slot filling results with the results obtained from Dialogflow, a commercially available intent-based dialogue system by Google, and showed that our proposed models perform better for both tasks on the same dataset. We also conducted initial speech-to-text explorations by comparing models trained and tested (10-fold CV) on human transcriptions versus noisy ASR outputs (via Cloud Speech-to-Text). Finally, we compared the results with single passenger rides versus the rides with multiple passengers. Data Collection and Annotation Our AV in-cabin dataset includes around 30 hours of multi-modal data collected from 30 passengers (15 female, 15 male) in a total of 20 rides/sessions. In 10 sessions, single passenger was present (i.e., singletons), whereas the remaining 10 sessions include two passengers (i.e., dyads) interacting with the vehicle. The data is collected "in the wild" on the streets of Richmond, British Columbia, Canada. Each ride lasted about 1 hour or more. The vehicle is modified to hide the operator and the human acting as in-cabin agent from the passengers, using a variation of WoZ approach BIBREF20 . 
Participants sit in the back of the car and are separated by a semi-sound proof and translucent screen from the human driver and the WoZ AMIE agent at the front. In each session, the participants were playing a scavenger hunt game by receiving instructions over the phone from the Game Master. Passengers treat the car as AV and communicate with the WoZ AMIE agent via speech commands. Game objectives require passengers to interact naturally with the agent to go to certain destinations, update routes, stop the vehicle, give specific directions regarding where to pull over or park (sometimes with gesture), find landmarks, change speed, get in and out of the vehicle, etc. Further details of the data collection design and scavenger hunt protocol can be found in the preliminary study BIBREF18 . See Fig. FIGREF6 for the vehicle instrumentation to enhance multi-modal data collection setup. Our study is the initial work on this multi-modal dataset to develop intent detection and slot filling models, where we leveraged from the back-driver video/audio stream recorded by an RGB camera (facing towards the passengers) for manual transcription and annotation of in-cabin utterances. In addition, we used the audio data recorded by Lapel 1 Audio and Lapel 2 Audio (Fig. FIGREF6 ) as our input resources for the ASR. For in-cabin intent understanding, we described 4 groups of usages to support various natural commands for interacting with the vehicle: (1) Set/Change Destination/Route (including turn-by-turn instructions), (2) Set/Change Driving Behavior/Speed, (3) Finishing the Trip Use-cases, and (4) Others (open/close door/window/trunk, turn music/radio on/off, change AC/temperature, show map, etc.). According to those scenarios, 10 types of passenger intents are identified and annotated as follows: SetDestination, SetRoute, GoFaster, GoSlower, Stop, Park, PullOver, DropOff, OpenDoor, and Other. For slot filling task, relevant slots are identified and annotated as: Location, Position/Direction, Object, Time Guidance, Person, Gesture/Gaze (e.g., `this', `that', `over there', etc.), and None/O. In addition to utterance-level intents and slots, word-level intent related keywords are annotated as Intent. We obtained 1331 utterances having commands to AMIE agent from our in-cabin dataset. We expanded this dataset via the creation of similar tasks on Amazon Mechanical Turk BIBREF21 and reached 3418 utterances with intents in total. Intent and slot annotations are obtained on the transcribed utterances by majority voting of 3 annotators. Those annotation results for utterance-level intent types, slots and intent keywords can be found in Table TABREF7 and Table TABREF8 as a summary of dataset statistics. Detecting Utterance-level Intent Types As a baseline system, we implemented term-frequency and rule-based mapping mechanisms from word-level intent keywords extraction to utterance-level intent recognition. To further improve the utterance-level performance, we explored various RNN architectures and developed a hierarchical (2-level) models to recognize passenger intents along with relevant entities/slots in utterances. Our hierarchical model has the following 2-levels: Level-1: Word-level extraction (to automatically detect/predict and eliminate non-slot & non-intent keywords first, as they would not carry much information for understanding the utterance-level intent-type). 
Level-2: Utterance-level recognition (to detect final intent-types from given utterances, by using valid slots and intent keywords as inputs only, which are detected at Level-1). In this study, we employed an RNN architecture with LSTM cells that are designed to exploit long range dependencies in sequential data. LSTM has memory cell state to store relevant information and various gates, which can mitigate the vanishing gradient problem BIBREF0 . Given the input INLINEFORM0 at time INLINEFORM1 , and hidden state from the previous time step INLINEFORM2 , the hidden and output layers for the current time step are computed. The LSTM architecture is specified by the following equations: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 denote the weight matrices and bias terms, respectively. The sigmoid ( INLINEFORM2 ) and INLINEFORM3 are activation functions applied element-wise, and INLINEFORM4 denotes the element-wise vector product. LSTM has a memory vector INLINEFORM5 to read/write or reset using a gating mechanism and activation functions. Here, input gate INLINEFORM6 scales down the input, the forget gate INLINEFORM7 scales down the memory vector INLINEFORM8 , and the output gate INLINEFORM9 scales down the output to achieve final INLINEFORM10 , which is used to predict INLINEFORM11 (through a INLINEFORM12 activation). Similar to LSTMs, GRUs BIBREF22 are proposed as a simpler and faster alternative, having only reset and update gates. For Bi-LSTM BIBREF4 , BIBREF2 , two LSTM architectures are traversed in forward and backward directions, where their hidden layers are concatenated to compute the output. For slot filling and intent keywords extraction, we experimented with various configurations of seq2seq LSTMs BIBREF3 and GRUs BIBREF22 , as well as Bi-LSTMs BIBREF4 . A sample network architecture can be seen in Fig. FIGREF15 where we jointly trained slots and intent keywords. The passenger utterance is fed into LSTM/GRU network via an embedding layer as a sequence of words, which are transformed into word vectors. We also experimented with GloVe BIBREF23 , word2vec BIBREF24 , BIBREF25 , and fastText BIBREF26 as pre-trained word embeddings. To prevent overfitting, we used a dropout layer with 0.5 rate for regularization. Best performing results are obtained with Bi-LSTMs and GloVe embeddings (6B tokens, 400K vocabulary size, vector dimension 100). For utterance-level intent detection, we mainly experimented with 5 groups of models: (1) Hybrid: RNN + Rule-based, (2) Separate: Seq2one Bi-LSTM with Attention, (3) Joint: Seq2seq Bi-LSTM for slots/intent keywords & utterance-level intents, (4) Hierarchical & Separate, (5) Hierarchical & Joint. For (1), we detect/extract intent keywords and slots (via RNN) and map them into utterance-level intent-types (rule-based). For (2), we feed the whole utterance as input sequence and intent-type as single target into Bi-LSTM network with Attention mechanism. For (3), we jointly train word-level intent keywords/slots and utterance-level intents (by adding <BOU>/<EOU> terms to the beginning/end of utterances with intent-types as their labels). For (4) and (5), we detect/extract intent keywords/slots first, and then only feed the predicted keywords/slots as a sequence into (2) and (3), respectively. Utterance-Level Intent Detection Experiments The details of 5 groups of models and their variations that we experimented with for utterance-level intent recognition are summarized in this section. 
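The LSTM equations referenced above did not survive extraction (the DISPLAYFORM0 placeholder). For completeness, the standard formulation that matches the surrounding description of the input, forget and output gates is:

```latex
\begin{align}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{align}
```

Here $i_t$, $f_t$, $o_t$ are the input, forget and output gates, $\tilde{c}_t$ the candidate memory, $c_t$ the memory vector, and $\odot$ the element-wise product; $h_t$ then feeds a softmax to predict $y_t$, as described above. This is the common formulation, not necessarily the exact parameterization used in the paper.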
Instead of purely relying on machine learning (ML) or deep learning (DL) system, hybrid models leverage both ML/DL and rule-based systems. In this model, we defined our hybrid approach as using RNNs first for detecting/extracting intent keywords and slots; then applying rule-based mapping mechanisms to identify utterance-level intents (using the predicted intent keywords and slots). A sample network architecture can be seen in Fig. FIGREF18 where we leveraged seq2seq Bi-LSTM networks for word-level extraction before the rule-based mapping to utterance-level intent classes. The model variations are defined based on varying mapping mechanisms and networks as follows: Hybrid-0: RNN (Seq2seq LSTM for intent keywords extraction) + Rule-based (mapping extracted intent keywords to utterance-level intents) Hybrid-1: RNN (Seq2seq Bi-LSTM for intent keywords extraction) + Rule-based (mapping extracted intent keywords to utterance-level intents) Hybrid-2: RNN (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Rule-based (mapping extracted intent keywords & ‘Position/Direction’ slots to utterance-level intents) Hybrid-3: RNN (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Rule-based (mapping extracted intent keywords & all slots to utterance-level intents) This approach is based on separately training sequence-to-one RNNs for utterance-level intents only. These are called separate models as we do not leverage any information from the slot or intent keyword tags (i.e., utterance-level intents are not jointly trained with slots/intent keywords). Note that in seq2one models, we feed the utterance as an input sequence and the LSTM layer will only return the hidden state output at the last time step. This single output (or concatenated output of last hidden states from the forward and backward LSTMs in Bi-LSTM case) will be used to classify the intent type of the given utterance. The idea behind is that the last hidden state of the sequence will contain a latent semantic representation of the whole input utterance, which can be utilized for utterance-level intent prediction. See Fig. FIGREF24 (a) for sample network architecture of the seq2one Bi-LSTM network. Note that in the Bi-LSTM implementation for seq2one learning (i.e., when not returning sequences), the outputs of backward/reverse LSTM is actually ordered in reverse time steps ( INLINEFORM0 ... INLINEFORM1 ). Thus, as illustrated in Fig. FIGREF24 (a), we actually concatenate the hidden state outputs of forward LSTM at last time step and backward LSTM at first time step (i.e., first word in a given utterance), and then feed this merged result to the dense layer. Fig. FIGREF24 (b) depicts the seq2one Bi-LSTM network with Attention mechanism applied on top of Bi-LSTM layers. For the Attention case, the hidden state outputs of all time steps are fed into the Attention mechanism that will allow to point at specific words in a sequence when computing a single output BIBREF6 . Another variation of Attention mechanism we examined is the AttentionWithContext, which incorporates a context/query vector jointly learned during the training process to assist the attention BIBREF7 . 
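As a minimal sketch of the seq2one Bi-LSTM intent classifier just described (Separate-1), the snippet below uses tf.keras; the framework, hidden size, vocabulary size and padded length are assumptions, since the text only fixes the GloVe-100d embeddings and the 0.5 dropout rate. With return_sequences=False, Keras concatenates the forward output at the last step with the backward output at the first step, which is exactly the merge discussed above; Separate-2/3 would instead keep the full output sequence and insert an Attention or AttentionWithContext layer before the dense classifier.

```python
from tensorflow.keras import layers, models

# Assumed sizes (not reported in the text): vocabulary, padded length, LSTM width.
VOCAB_SIZE, MAX_LEN, EMBED_DIM, HIDDEN, N_INTENTS = 10000, 40, 100, 128, 10

def build_seq2one_intent_model():
    """Separate-1 style utterance-level intent classifier:
    embedding -> dropout(0.5) -> Bi-LSTM (single merged output) -> softmax over
    the 10 AMIE intent types. Sketch only, assuming a tf.keras implementation."""
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(inputs)
    x = layers.Dropout(0.5)(x)
    # return_sequences=False: concatenation of the forward state at the last step
    # and the backward state at the first step, as discussed above.
    x = layers.Bidirectional(layers.LSTM(HIDDEN, return_sequences=False))(x)
    outputs = layers.Dense(N_INTENTS, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_seq2one_intent_model().summary()
```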
All seq2one model variations we experimented with can be summarized as follows: Separate-0: Seq2one LSTM for utterance-level intents Separate-1: Seq2one Bi-LSTM for utterance-level intents Separate-2: Seq2one Bi-LSTM with Attention BIBREF6 for utterance-level intents Separate-3: Seq2one Bi-LSTM with AttentionWithContext BIBREF7 for utterance-level intents Using sequence-to-sequence networks, the approach here is jointly training annotated utterance-level intents and slots/intent keywords by adding <BOU>/ <EOU> tokens to the beginning/end of each utterance, with utterance-level intent-type as labels of such tokens. Our approach is an extension of BIBREF2 , in which only an <EOS> term is added with intent-type tags associated to this sentence final token, both for LSTM and Bi-LSTM cases. However, we experimented with adding both <BOU> and <EOU> terms as Bi-LSTMs will be used for seq2seq learning, and we observed that slightly better results can be achieved by doing so. The idea behind is that, since this is a seq2seq learning problem, at the last time step (i.e., prediction at <EOU>) the reverse pass in Bi-LSTM would be incomplete (refer to Fig. FIGREF24 (a) to observe the last Bi-LSTM cell). Therefore, adding <BOU> token and leveraging the backward LSTM output at first time step (i.e., prediction at <BOU>) would potentially help for joint seq2seq learning. An overall network architecture can be found in Fig. FIGREF30 for our joint models. We will report the experimental results on two variations (with and without intent keywords) as follows: Joint-1: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots) Joint-2: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots & intent keywords) Proposed hierarchical models are detecting/extracting intent keywords & slots using sequence-to-sequence networks first (i.e., level-1), and then feeding only the words that are predicted as intent keywords & valid slots (i.e., not the ones that are predicted as ‘None/O’) as an input sequence to various separate sequence-to-one models (described above) to recognize final utterance-level intents (i.e., level-2). A sample network architecture is given in Fig. FIGREF34 (a). The idea behind filtering out non-slot and non-intent keywords here resembles providing a summary of input sequence to the upper levels of the network hierarchy, where we actually learn this summarized sequence of keywords using another RNN layer. This would potentially result in focusing the utterance-level classification problem on the most salient words of the input sequences (i.e., intent keywords & slots) and also effectively reducing the length of input sequences (i.e., improving the long-term dependency issues observed in longer sequences). Note that according to our dataset statistics given in Table TABREF8 , 45% of the words found in transcribed utterances with passenger intents are annotated as non-slot and non-intent keywords (e.g., 'please', 'okay', 'can', 'could', incomplete/interrupted words, filler sounds like 'uh'/'um', certain stop words, punctuation, and many others that are not related to intent/slots). Therefore, the proposed approach would result in reducing the sequence length nearly by half at the input layer of level-2 for utterance-level recognition. 
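To make the <BOU>/<EOU> augmentation and the level-1 filtering concrete, here is a small data-preparation sketch. The tag names follow the annotation scheme described earlier (Intent, Location, with None/O written as 'O'), while the example utterance and the helper names are invented for illustration.

```python
BOU, EOU = "<BOU>", "<EOU>"

def add_joint_tokens(tokens, word_tags, utterance_intent):
    """Joint-2 style augmentation: wrap the utterance with <BOU>/<EOU> tokens whose
    word-level labels are the utterance-level intent, so one seq2seq Bi-LSTM can be
    trained on slots, intent keywords and the utterance intent at the same time."""
    return ([BOU] + list(tokens) + [EOU],
            [utterance_intent] + list(word_tags) + [utterance_intent])

def filter_for_level2(tokens, predicted_tags):
    """Hierarchical models: keep only tokens predicted as valid slots or intent
    keywords (drop the 'None/O' tag), which shortens the sequence fed into the
    level-2 network."""
    kept = [(tok, tag) for tok, tag in zip(tokens, predicted_tags) if tag != "O"]
    return [tok for tok, _ in kept], [tag for _, tag in kept]

# Invented example utterance; tag names follow the annotation scheme in the text.
tokens = ["could", "you", "please", "pull", "over", "near", "the", "cafe"]
tags   = ["O", "O", "O", "Intent", "Intent", "O", "O", "Location"]

lvl2_tokens, lvl2_tags = filter_for_level2(tokens, tags)
print(add_joint_tokens(lvl2_tokens, lvl2_tags, "PullOver"))
# (['<BOU>', 'pull', 'over', 'cafe', '<EOU>'],
#  ['PullOver', 'Intent', 'Intent', 'Location', 'PullOver'])
```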
For hierarchical & separate models, we experimented with 4 variations based on which separate model used at the second level of the hierarchy, and these are summarized as follows: Hierarchical & Separate-0: Level-1 (Seq2seq LSTM for intent keywords & slots extraction) + Level-2 (Separate-0: Seq2one LSTM for utterance-level intent detection) Hierarchical & Separate-1: Level-1 (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Level-2 (Separate-1: Seq2one Bi-LSTM for utterance-level intent detection) Hierarchical & Separate-2: Level-1 (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Level-2 (Separate-2: Seq2one Bi-LSTM + Attention for utterance-level intent detection) Hierarchical & Separate-3: Level-1 (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Level-2 (Separate-3: Seq2one Bi-LSTM + AttentionWithContext for utterance-level intent detection) Proposed hierarchical models detect/extract intent keywords & slots using sequence-to-sequence networks first, and then only the words that are predicted as intent keywords & valid slots (i.e., not the ones that are predicted as ‘None/O’) are fed as input to the joint sequence-to-sequence models (described above). See Fig. FIGREF34 (b) for sample network architecture. After the filtering or summarization of sequence at level-1, <BOU> and <EOU> tokens are appended to the shorter input sequence before level-2 for joint learning. Note that in this case, using Joint-1 model (jointly training annotated slots & utterance-level intents) for the second level of the hierarchy would not make much sense (without intent keywords). Hence, Joint-2 model is used for the second level as described below: Hierarchical & Joint-2: Level-1 (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Level-2 (Joint-2 Seq2seq models with slots & intent keywords & utterance-level intents) Table TABREF42 summarizes the results of various approaches we investigated for utterance-level intent understanding. We achieved 0.91 overall F1-score with our best-performing model, namely Hierarchical & Joint-2. All model results are obtained via 10-fold cross-validation (10-fold CV) on the same dataset. For our AMIE scenarios, Table TABREF43 shows the intent-wise detection results with the initial (Hybrid-0) and currently best performing (H-Joint-2) intent recognizers. With our best model (H-Joint-2), relatively problematic SetDestination and SetRoute intents’ detection performances in baseline model (Hybrid-0) jumped from 0.78 to 0.89 and 0.75 to 0.88, respectively. We compared our intent detection results with the Dialogflow's Detect Intent API. The same AMIE dataset is used to train and test (10-fold CV) Dialogflow's intent detection and slot filling modules, using the recommended hybrid mode (rule-based and ML). As shown in Table TABREF43 , an overall F1-score of 0.89 is achieved with Dialogflow for the same task. As you can see, our Hierarchical & Joint models obtained higher results than the Dialogflow for 8 out of 10 intent types. Slot Filling and Intent Keyword Extraction Experiments Slot filling and intent keyword extraction results are given in Table TABREF44 and Table TABREF46 , respectively. For slot extraction, we reached 0.96 overall F1-score using seq2seq Bi-LSTM model, which is slightly better than using LSTM model. 
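For reference, a minimal tf.keras sketch of the seq2seq Bi-LSTM tagger used for this slot and intent-keyword extraction step is shown below; the remaining hyperparameters (hidden size, vocabulary size, padded length) and the framework itself are assumptions, since the text only reports the GloVe-100d embeddings and the 0.5 dropout rate.

```python
from tensorflow.keras import layers, models

# Assumed sizes: vocabulary, padded length and LSTM width are not reported.
# N_TAGS = 7 slot labels (incl. None/O) + the Intent keyword label.
VOCAB_SIZE, MAX_LEN, EMBED_DIM, HIDDEN, N_TAGS = 10000, 40, 100, 128, 8

def build_word_level_tagger():
    """Seq2seq Bi-LSTM tagger for slot and intent-keyword extraction (level-1):
    embedding -> dropout(0.5) -> Bi-LSTM over the full sequence -> per-token
    softmax. In practice the embedding would be initialized from GloVe-100d as
    reported earlier; this sketch keeps a randomly initialized one."""
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    x = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(inputs)
    x = layers.Dropout(0.5)(x)
    x = layers.Bidirectional(layers.LSTM(HIDDEN, return_sequences=True))(x)
    outputs = layers.TimeDistributed(layers.Dense(N_TAGS, activation="softmax"))(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

build_word_level_tagger().summary()
```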
Although the overall performance is slightly improved with Bi-LSTM model, relatively problematic Object, Time Guidance, Gesture/Gaze slots’ F1-score performances increased from 0.80 to 0.89, 0.80 to 0.85, and 0.87 to 0.92, respectively. Note that with Dialogflow, we reached 0.92 overall F1-score for the entity/slot filling task on the same dataset. As you can see, our models reached significantly higher F1-scores than the Dialogflow for 6 out of 7 slot types (except Time Guidance). Speech-to-Text Experiments for AMIE: Training and Testing Models on ASR Outputs For transcriptions, utterance-level audio clips were extracted from the passenger-facing video stream, which was the single source used for human transcriptions of all utterances from passengers, AMIE agent and the game master. Since our transcriptions-based intent/slot models assumed perfect (at least close to human-level) ASR in the previous sections, we experimented with more realistic scenario of using ASR outputs for intent/slot modeling. We employed Cloud Speech-to-Text API to obtain ASR outputs on audio clips with passenger utterances, which were segmented using transcription time-stamps. We observed an overall word error rate (WER) of 13.6% in ASR outputs for all 20 sessions of AMIE. Considering that a generic ASR is used with no domain-specific acoustic models for this moving vehicle environment with in-cabin noise, the initial results were quite promising for us to move on with the model training on ASR outputs. For initial explorations, we created a new dataset having utterances with commands using ASR outputs of the in-cabin data (20 sessions with 1331 spoken utterances). Human transcriptions version of this set is also created. Although the dataset size is limited, both slot/intent keyword extraction models and utterance-level intent recognition models are not severely affected when trained and tested (10-fold CV) on ASR outputs instead of manual transcriptions. See Table TABREF48 for the overall F1-scores of the compared models. After the ASR pipeline described above is completed for all 20 sessions of AMIE in-cabin dataset (ALL with 1331 utterances), we repeated all our experiments with the subsets for 10 sessions having single passenger (Singletons with 600 utterances) and remaining 10 sessions having two passengers (Dyads with 731 utterances). We observed overall WER of 13.5% and 13.7% for Singletons and Dyads, respectively. The overlapping speech cases with slightly more conversations going on (longer transcriptions) in Dyad sessions compared to the Singleton sessions may affect the ASR performance, which may also affect the intent/slots models performances. As shown in Table TABREF48 , although we have more samples with Dyads, the performance drops between the models trained on transcriptions vs. ASR outputs are slightly higher for the Dyads compared to the Singletons, as expected. Discussion and Conclusion We introduced AMIE, the intelligent in-cabin car agent responsible for handling certain AV-passenger interactions. We develop hierarchical and joint models to extract various passenger intents along with relevant slots for actions to be performed in AV, achieving F1-scores of 0.91 for intent recognition and 0.96 for slot extraction. We show that even using the generic ASR with noisy outputs, our models are still capable of achieving comparable results with models trained on human transcriptions. We believe that the ASR can be improved by collecting more in-domain data to obtain domain-specific acoustic models. 
These initial models will allow us to collect more speech data by bootstrapping with the intent-based dialogue application we have built, and the hierarchy we defined will help eliminate costly annotation efforts in the future, especially for word-level slots and intent keywords. Once enough domain-specific multi-modal data is collected, our future work is to explore training end-to-end dialogue agents for our in-cabin use-cases. We also plan to exploit other modalities for improved understanding of the in-cabin dialogue.
jointly trained with slots
e659ceb184777015c12db2da5ae396635192f0b0
e659ceb184777015c12db2da5ae396635192f0b0_0
Q: Are the intent labels imbalanced in the dataset? Text: Introduction One of the exciting yet challenging areas of research in Intelligent Transportation Systems is developing context-awareness technologies that can enable autonomous vehicles to interact with their passengers, understand passenger context and situations, and take appropriate actions accordingly. To this end, building multi-modal dialogue understanding capabilities situated in the in-cabin context is crucial to enhance passenger comfort and gain user confidence in AV interaction systems. Among many components of such systems, intent recognition and slot filling modules are one of the core building blocks towards carrying out successful dialogue with passengers. As an initial attempt to tackle some of those challenges, this study introduce in-cabin intent detection and slot filling models to identify passengers' intent and extract semantic frames from the natural language utterances in AV. The proposed models are developed by leveraging User Experience (UX) grounded realistic (ecologically valid) in-cabin dataset. This dataset is generated with naturalistic passenger behaviors, multiple passenger interactions, and with presence of a Wizard-of-Oz (WoZ) agent in moving vehicles with noisy road conditions. Background Long Short-Term Memory (LSTM) networks BIBREF0 are widely-used for temporal sequence learning or time-series modeling in Natural Language Processing (NLP). These neural networks are commonly employed for sequence-to-sequence (seq2seq) and sequence-to-one (seq2one) modeling problems, including slot filling tasks BIBREF1 and utterance-level intent classification BIBREF2 , BIBREF3 which are well-studied for various application domains. Bidirectional LSTMs (Bi-LSTMs) BIBREF4 are extensions of traditional LSTMs which are proposed to improve model performance on sequence classification problems even further. Jointly modeling slot extraction and intent recognition BIBREF2 , BIBREF5 is also explored in several architectures for task-specific applications in NLP. Using Attention mechanism BIBREF6 , BIBREF7 on top of RNNs is yet another recent break-through to elevate the model performance by attending inherently crucial sub-modules of given input. There exist various architectures to build hierarchical learning models BIBREF8 , BIBREF9 , BIBREF10 for document-to-sentence level, and sentence-to-word level classification tasks, which are highly domain-dependent and task-specific. Automatic Speech Recognition (ASR) technology has recently achieved human-level accuracy in many fields BIBREF11 , BIBREF12 . For spoken language understanding (SLU), it is shown that training SLU models on true text input (i.e., human transcriptions) versus noisy speech input (i.e., ASR outputs) can achieve varying results BIBREF13 . Even greater performance degradations are expected in more challenging and realistic setups with noisy environments, such as moving vehicles in actual traffic conditions. As an example, a recent work BIBREF14 attempts to classify sentences as navigation-related or not using the DARPA supported CU-Move in-vehicle speech corpus BIBREF15 , a relatively old and large corpus focusing on route navigation. For this binary intent classification task, the authors observed that detection performances are largely affected by high ASR error rates due to background noise and multi-speakers in CU-Move dataset (not publicly available). 
For in-cabin dialogue between car assistants and driver/passengers, recent studies explored creating a public dataset using a WoZ approach BIBREF16 , and improving ASR for passenger speech recognition BIBREF17 . A preliminary report on research designed to collect data for human-agent interactions in a moving vehicle is presented in a previous study BIBREF18 , with qualitative analysis on initial observations and user interviews. Our current study is focused on the quantitative analysis of natural language interactions found in this in-vehicle dataset BIBREF19 , where we address intent detection and slot extraction tasks for passengers interacting with the AMIE in-cabin agent. In this study, we propose a UX grounded realistic intent recognition and slot filling models with naturalistic passenger-vehicle interactions in moving vehicles. Based on observed interactions, we defined in-vehicle intent types and refined their relevant slots through a data driven process. After exploring existing approaches for jointly training intents and slots, we applied certain variations of these models that perform best on our dataset to support various natural commands for interacting with the car-agent. The main differences in our proposed models can be summarized as follows: (1) Using the extracted intent keywords in addition to the slots to jointly model them with utterance-level intents (where most of the previous work BIBREF8 , BIBREF9 only join slots and utterance-level intents, ignoring the intent keywords); (2) The 2-level hierarchy we defined by word-level detection/extraction for slots and intent keywords first, and then filtering-out predicted non-slot and non-intent keywords instead of feeding them into the upper levels of the network (i.e., instead of using stacked RNNs with multiple recurrent hidden layers for the full utterance BIBREF9 , BIBREF10 , which are computationally costly for long utterances with many non-slot & non-intent-related words), and finally using only the predicted valid-slots and intent-related keywords as an input to the second level of the hierarchy; (3) Extending joint models BIBREF2 , BIBREF5 to include both beginning-of-utterance and end-of-utterance tokens to leverage Bi-LSTMs (after observing that we achieved better results by doing so). We compared our intent detection and slot filling results with the results obtained from Dialogflow, a commercially available intent-based dialogue system by Google, and showed that our proposed models perform better for both tasks on the same dataset. We also conducted initial speech-to-text explorations by comparing models trained and tested (10-fold CV) on human transcriptions versus noisy ASR outputs (via Cloud Speech-to-Text). Finally, we compared the results with single passenger rides versus the rides with multiple passengers. Data Collection and Annotation Our AV in-cabin dataset includes around 30 hours of multi-modal data collected from 30 passengers (15 female, 15 male) in a total of 20 rides/sessions. In 10 sessions, single passenger was present (i.e., singletons), whereas the remaining 10 sessions include two passengers (i.e., dyads) interacting with the vehicle. The data is collected "in the wild" on the streets of Richmond, British Columbia, Canada. Each ride lasted about 1 hour or more. The vehicle is modified to hide the operator and the human acting as in-cabin agent from the passengers, using a variation of WoZ approach BIBREF20 . 
Participants sit in the back of the car and are separated by a semi-sound proof and translucent screen from the human driver and the WoZ AMIE agent at the front. In each session, the participants were playing a scavenger hunt game by receiving instructions over the phone from the Game Master. Passengers treat the car as AV and communicate with the WoZ AMIE agent via speech commands. Game objectives require passengers to interact naturally with the agent to go to certain destinations, update routes, stop the vehicle, give specific directions regarding where to pull over or park (sometimes with gesture), find landmarks, change speed, get in and out of the vehicle, etc. Further details of the data collection design and scavenger hunt protocol can be found in the preliminary study BIBREF18 . See Fig. FIGREF6 for the vehicle instrumentation to enhance multi-modal data collection setup. Our study is the initial work on this multi-modal dataset to develop intent detection and slot filling models, where we leveraged from the back-driver video/audio stream recorded by an RGB camera (facing towards the passengers) for manual transcription and annotation of in-cabin utterances. In addition, we used the audio data recorded by Lapel 1 Audio and Lapel 2 Audio (Fig. FIGREF6 ) as our input resources for the ASR. For in-cabin intent understanding, we described 4 groups of usages to support various natural commands for interacting with the vehicle: (1) Set/Change Destination/Route (including turn-by-turn instructions), (2) Set/Change Driving Behavior/Speed, (3) Finishing the Trip Use-cases, and (4) Others (open/close door/window/trunk, turn music/radio on/off, change AC/temperature, show map, etc.). According to those scenarios, 10 types of passenger intents are identified and annotated as follows: SetDestination, SetRoute, GoFaster, GoSlower, Stop, Park, PullOver, DropOff, OpenDoor, and Other. For slot filling task, relevant slots are identified and annotated as: Location, Position/Direction, Object, Time Guidance, Person, Gesture/Gaze (e.g., `this', `that', `over there', etc.), and None/O. In addition to utterance-level intents and slots, word-level intent related keywords are annotated as Intent. We obtained 1331 utterances having commands to AMIE agent from our in-cabin dataset. We expanded this dataset via the creation of similar tasks on Amazon Mechanical Turk BIBREF21 and reached 3418 utterances with intents in total. Intent and slot annotations are obtained on the transcribed utterances by majority voting of 3 annotators. Those annotation results for utterance-level intent types, slots and intent keywords can be found in Table TABREF7 and Table TABREF8 as a summary of dataset statistics. Detecting Utterance-level Intent Types As a baseline system, we implemented term-frequency and rule-based mapping mechanisms from word-level intent keywords extraction to utterance-level intent recognition. To further improve the utterance-level performance, we explored various RNN architectures and developed a hierarchical (2-level) models to recognize passenger intents along with relevant entities/slots in utterances. Our hierarchical model has the following 2-levels: Level-1: Word-level extraction (to automatically detect/predict and eliminate non-slot & non-intent keywords first, as they would not carry much information for understanding the utterance-level intent-type). 
Level-2: Utterance-level recognition (to detect final intent-types from given utterances, by using valid slots and intent keywords as inputs only, which are detected at Level-1). In this study, we employed an RNN architecture with LSTM cells that are designed to exploit long range dependencies in sequential data. LSTM has memory cell state to store relevant information and various gates, which can mitigate the vanishing gradient problem BIBREF0 . Given the input INLINEFORM0 at time INLINEFORM1 , and hidden state from the previous time step INLINEFORM2 , the hidden and output layers for the current time step are computed. The LSTM architecture is specified by the following equations: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 denote the weight matrices and bias terms, respectively. The sigmoid ( INLINEFORM2 ) and INLINEFORM3 are activation functions applied element-wise, and INLINEFORM4 denotes the element-wise vector product. LSTM has a memory vector INLINEFORM5 to read/write or reset using a gating mechanism and activation functions. Here, input gate INLINEFORM6 scales down the input, the forget gate INLINEFORM7 scales down the memory vector INLINEFORM8 , and the output gate INLINEFORM9 scales down the output to achieve final INLINEFORM10 , which is used to predict INLINEFORM11 (through a INLINEFORM12 activation). Similar to LSTMs, GRUs BIBREF22 are proposed as a simpler and faster alternative, having only reset and update gates. For Bi-LSTM BIBREF4 , BIBREF2 , two LSTM architectures are traversed in forward and backward directions, where their hidden layers are concatenated to compute the output. For slot filling and intent keywords extraction, we experimented with various configurations of seq2seq LSTMs BIBREF3 and GRUs BIBREF22 , as well as Bi-LSTMs BIBREF4 . A sample network architecture can be seen in Fig. FIGREF15 where we jointly trained slots and intent keywords. The passenger utterance is fed into LSTM/GRU network via an embedding layer as a sequence of words, which are transformed into word vectors. We also experimented with GloVe BIBREF23 , word2vec BIBREF24 , BIBREF25 , and fastText BIBREF26 as pre-trained word embeddings. To prevent overfitting, we used a dropout layer with 0.5 rate for regularization. Best performing results are obtained with Bi-LSTMs and GloVe embeddings (6B tokens, 400K vocabulary size, vector dimension 100). For utterance-level intent detection, we mainly experimented with 5 groups of models: (1) Hybrid: RNN + Rule-based, (2) Separate: Seq2one Bi-LSTM with Attention, (3) Joint: Seq2seq Bi-LSTM for slots/intent keywords & utterance-level intents, (4) Hierarchical & Separate, (5) Hierarchical & Joint. For (1), we detect/extract intent keywords and slots (via RNN) and map them into utterance-level intent-types (rule-based). For (2), we feed the whole utterance as input sequence and intent-type as single target into Bi-LSTM network with Attention mechanism. For (3), we jointly train word-level intent keywords/slots and utterance-level intents (by adding <BOU>/<EOU> terms to the beginning/end of utterances with intent-types as their labels). For (4) and (5), we detect/extract intent keywords/slots first, and then only feed the predicted keywords/slots as a sequence into (2) and (3), respectively. Utterance-Level Intent Detection Experiments The details of 5 groups of models and their variations that we experimented with for utterance-level intent recognition are summarized in this section. 
Instead of purely relying on machine learning (ML) or deep learning (DL) system, hybrid models leverage both ML/DL and rule-based systems. In this model, we defined our hybrid approach as using RNNs first for detecting/extracting intent keywords and slots; then applying rule-based mapping mechanisms to identify utterance-level intents (using the predicted intent keywords and slots). A sample network architecture can be seen in Fig. FIGREF18 where we leveraged seq2seq Bi-LSTM networks for word-level extraction before the rule-based mapping to utterance-level intent classes. The model variations are defined based on varying mapping mechanisms and networks as follows: Hybrid-0: RNN (Seq2seq LSTM for intent keywords extraction) + Rule-based (mapping extracted intent keywords to utterance-level intents) Hybrid-1: RNN (Seq2seq Bi-LSTM for intent keywords extraction) + Rule-based (mapping extracted intent keywords to utterance-level intents) Hybrid-2: RNN (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Rule-based (mapping extracted intent keywords & ‘Position/Direction’ slots to utterance-level intents) Hybrid-3: RNN (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Rule-based (mapping extracted intent keywords & all slots to utterance-level intents) This approach is based on separately training sequence-to-one RNNs for utterance-level intents only. These are called separate models as we do not leverage any information from the slot or intent keyword tags (i.e., utterance-level intents are not jointly trained with slots/intent keywords). Note that in seq2one models, we feed the utterance as an input sequence and the LSTM layer will only return the hidden state output at the last time step. This single output (or concatenated output of last hidden states from the forward and backward LSTMs in Bi-LSTM case) will be used to classify the intent type of the given utterance. The idea behind is that the last hidden state of the sequence will contain a latent semantic representation of the whole input utterance, which can be utilized for utterance-level intent prediction. See Fig. FIGREF24 (a) for sample network architecture of the seq2one Bi-LSTM network. Note that in the Bi-LSTM implementation for seq2one learning (i.e., when not returning sequences), the outputs of backward/reverse LSTM is actually ordered in reverse time steps ( INLINEFORM0 ... INLINEFORM1 ). Thus, as illustrated in Fig. FIGREF24 (a), we actually concatenate the hidden state outputs of forward LSTM at last time step and backward LSTM at first time step (i.e., first word in a given utterance), and then feed this merged result to the dense layer. Fig. FIGREF24 (b) depicts the seq2one Bi-LSTM network with Attention mechanism applied on top of Bi-LSTM layers. For the Attention case, the hidden state outputs of all time steps are fed into the Attention mechanism that will allow to point at specific words in a sequence when computing a single output BIBREF6 . Another variation of Attention mechanism we examined is the AttentionWithContext, which incorporates a context/query vector jointly learned during the training process to assist the attention BIBREF7 . 
All seq2one model variations we experimented with can be summarized as follows: Separate-0: Seq2one LSTM for utterance-level intents Separate-1: Seq2one Bi-LSTM for utterance-level intents Separate-2: Seq2one Bi-LSTM with Attention BIBREF6 for utterance-level intents Separate-3: Seq2one Bi-LSTM with AttentionWithContext BIBREF7 for utterance-level intents Using sequence-to-sequence networks, the approach here is jointly training annotated utterance-level intents and slots/intent keywords by adding <BOU>/ <EOU> tokens to the beginning/end of each utterance, with utterance-level intent-type as labels of such tokens. Our approach is an extension of BIBREF2 , in which only an <EOS> term is added with intent-type tags associated to this sentence final token, both for LSTM and Bi-LSTM cases. However, we experimented with adding both <BOU> and <EOU> terms as Bi-LSTMs will be used for seq2seq learning, and we observed that slightly better results can be achieved by doing so. The idea behind is that, since this is a seq2seq learning problem, at the last time step (i.e., prediction at <EOU>) the reverse pass in Bi-LSTM would be incomplete (refer to Fig. FIGREF24 (a) to observe the last Bi-LSTM cell). Therefore, adding <BOU> token and leveraging the backward LSTM output at first time step (i.e., prediction at <BOU>) would potentially help for joint seq2seq learning. An overall network architecture can be found in Fig. FIGREF30 for our joint models. We will report the experimental results on two variations (with and without intent keywords) as follows: Joint-1: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots) Joint-2: Seq2seq Bi-LSTM for utterance-level intent detection (jointly trained with slots & intent keywords) Proposed hierarchical models are detecting/extracting intent keywords & slots using sequence-to-sequence networks first (i.e., level-1), and then feeding only the words that are predicted as intent keywords & valid slots (i.e., not the ones that are predicted as ‘None/O’) as an input sequence to various separate sequence-to-one models (described above) to recognize final utterance-level intents (i.e., level-2). A sample network architecture is given in Fig. FIGREF34 (a). The idea behind filtering out non-slot and non-intent keywords here resembles providing a summary of input sequence to the upper levels of the network hierarchy, where we actually learn this summarized sequence of keywords using another RNN layer. This would potentially result in focusing the utterance-level classification problem on the most salient words of the input sequences (i.e., intent keywords & slots) and also effectively reducing the length of input sequences (i.e., improving the long-term dependency issues observed in longer sequences). Note that according to our dataset statistics given in Table TABREF8 , 45% of the words found in transcribed utterances with passenger intents are annotated as non-slot and non-intent keywords (e.g., 'please', 'okay', 'can', 'could', incomplete/interrupted words, filler sounds like 'uh'/'um', certain stop words, punctuation, and many others that are not related to intent/slots). Therefore, the proposed approach would result in reducing the sequence length nearly by half at the input layer of level-2 for utterance-level recognition. 
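As a small illustration of the preprocessing implied above, the helper below drops the words tagged as 'None/O' at level-1 and wraps the shortened sequence with <BOU>/<EOU> tokens whose targets carry the utterance-level intent, as required by the joint level-2 models. The example words, tags and labels are illustrative only.

```python
BOU, EOU = "<BOU>", "<EOU>"

def prepare_level2_input(words, level1_tags, utterance_intent):
    """Keep only predicted intent keywords / valid slots, then add <BOU>/<EOU>."""
    kept = [(w, t) for w, t in zip(words, level1_tags) if t != "O"]   # drop 'None/O' words
    inputs = [BOU] + [w for w, _ in kept] + [EOU]
    targets = [utterance_intent] + [t for _, t in kept] + [utterance_intent]
    return inputs, targets

words = ["could", "you", "please", "take", "me", "to", "the", "airport"]
tags = ["O", "O", "O", "Intent", "O", "O", "O", "Position/Direction"]  # toy level-1 output
print(prepare_level2_input(words, tags, "SetDestination"))
# (['<BOU>', 'take', 'airport', '<EOU>'],
#  ['SetDestination', 'Intent', 'Position/Direction', 'SetDestination'])
```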
For hierarchical & separate models, we experimented with 4 variations based on which separate model used at the second level of the hierarchy, and these are summarized as follows: Hierarchical & Separate-0: Level-1 (Seq2seq LSTM for intent keywords & slots extraction) + Level-2 (Separate-0: Seq2one LSTM for utterance-level intent detection) Hierarchical & Separate-1: Level-1 (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Level-2 (Separate-1: Seq2one Bi-LSTM for utterance-level intent detection) Hierarchical & Separate-2: Level-1 (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Level-2 (Separate-2: Seq2one Bi-LSTM + Attention for utterance-level intent detection) Hierarchical & Separate-3: Level-1 (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Level-2 (Separate-3: Seq2one Bi-LSTM + AttentionWithContext for utterance-level intent detection) Proposed hierarchical models detect/extract intent keywords & slots using sequence-to-sequence networks first, and then only the words that are predicted as intent keywords & valid slots (i.e., not the ones that are predicted as ‘None/O’) are fed as input to the joint sequence-to-sequence models (described above). See Fig. FIGREF34 (b) for sample network architecture. After the filtering or summarization of sequence at level-1, <BOU> and <EOU> tokens are appended to the shorter input sequence before level-2 for joint learning. Note that in this case, using Joint-1 model (jointly training annotated slots & utterance-level intents) for the second level of the hierarchy would not make much sense (without intent keywords). Hence, Joint-2 model is used for the second level as described below: Hierarchical & Joint-2: Level-1 (Seq2seq Bi-LSTM for intent keywords & slots extraction) + Level-2 (Joint-2 Seq2seq models with slots & intent keywords & utterance-level intents) Table TABREF42 summarizes the results of various approaches we investigated for utterance-level intent understanding. We achieved 0.91 overall F1-score with our best-performing model, namely Hierarchical & Joint-2. All model results are obtained via 10-fold cross-validation (10-fold CV) on the same dataset. For our AMIE scenarios, Table TABREF43 shows the intent-wise detection results with the initial (Hybrid-0) and currently best performing (H-Joint-2) intent recognizers. With our best model (H-Joint-2), relatively problematic SetDestination and SetRoute intents’ detection performances in baseline model (Hybrid-0) jumped from 0.78 to 0.89 and 0.75 to 0.88, respectively. We compared our intent detection results with the Dialogflow's Detect Intent API. The same AMIE dataset is used to train and test (10-fold CV) Dialogflow's intent detection and slot filling modules, using the recommended hybrid mode (rule-based and ML). As shown in Table TABREF43 , an overall F1-score of 0.89 is achieved with Dialogflow for the same task. As you can see, our Hierarchical & Joint models obtained higher results than the Dialogflow for 8 out of 10 intent types. Slot Filling and Intent Keyword Extraction Experiments Slot filling and intent keyword extraction results are given in Table TABREF44 and Table TABREF46 , respectively. For slot extraction, we reached 0.96 overall F1-score using seq2seq Bi-LSTM model, which is slightly better than using LSTM model. 
Although the overall performance is slightly improved with Bi-LSTM model, relatively problematic Object, Time Guidance, Gesture/Gaze slots’ F1-score performances increased from 0.80 to 0.89, 0.80 to 0.85, and 0.87 to 0.92, respectively. Note that with Dialogflow, we reached 0.92 overall F1-score for the entity/slot filling task on the same dataset. As you can see, our models reached significantly higher F1-scores than the Dialogflow for 6 out of 7 slot types (except Time Guidance). Speech-to-Text Experiments for AMIE: Training and Testing Models on ASR Outputs For transcriptions, utterance-level audio clips were extracted from the passenger-facing video stream, which was the single source used for human transcriptions of all utterances from passengers, AMIE agent and the game master. Since our transcriptions-based intent/slot models assumed perfect (at least close to human-level) ASR in the previous sections, we experimented with more realistic scenario of using ASR outputs for intent/slot modeling. We employed Cloud Speech-to-Text API to obtain ASR outputs on audio clips with passenger utterances, which were segmented using transcription time-stamps. We observed an overall word error rate (WER) of 13.6% in ASR outputs for all 20 sessions of AMIE. Considering that a generic ASR is used with no domain-specific acoustic models for this moving vehicle environment with in-cabin noise, the initial results were quite promising for us to move on with the model training on ASR outputs. For initial explorations, we created a new dataset having utterances with commands using ASR outputs of the in-cabin data (20 sessions with 1331 spoken utterances). Human transcriptions version of this set is also created. Although the dataset size is limited, both slot/intent keyword extraction models and utterance-level intent recognition models are not severely affected when trained and tested (10-fold CV) on ASR outputs instead of manual transcriptions. See Table TABREF48 for the overall F1-scores of the compared models. After the ASR pipeline described above is completed for all 20 sessions of AMIE in-cabin dataset (ALL with 1331 utterances), we repeated all our experiments with the subsets for 10 sessions having single passenger (Singletons with 600 utterances) and remaining 10 sessions having two passengers (Dyads with 731 utterances). We observed overall WER of 13.5% and 13.7% for Singletons and Dyads, respectively. The overlapping speech cases with slightly more conversations going on (longer transcriptions) in Dyad sessions compared to the Singleton sessions may affect the ASR performance, which may also affect the intent/slots models performances. As shown in Table TABREF48 , although we have more samples with Dyads, the performance drops between the models trained on transcriptions vs. ASR outputs are slightly higher for the Dyads compared to the Singletons, as expected. Discussion and Conclusion We introduced AMIE, the intelligent in-cabin car agent responsible for handling certain AV-passenger interactions. We develop hierarchical and joint models to extract various passenger intents along with relevant slots for actions to be performed in AV, achieving F1-scores of 0.91 for intent recognition and 0.96 for slot extraction. We show that even using the generic ASR with noisy outputs, our models are still capable of achieving comparable results with models trained on human transcriptions. We believe that the ASR can be improved by collecting more in-domain data to obtain domain-specific acoustic models. 
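For reference, the word error rate cited above is the word-level Levenshtein distance between hypothesis and reference, normalised by the reference length; the sketch below is a standard textbook implementation rather than the scoring tool actually used.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)   # sub / del / ins
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("stop at the next corner please", "stop at next corner please"))  # one deletion -> 1/6
```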
These initial models will allow us to collect more speech data via bootstrapping with the intent-based dialogue application we have built, and the hierarchy we defined will help eliminate costly annotation efforts in the future, especially for the word-level slots and intent keywords. Once enough domain-specific multimodal data is collected, our future work is to explore training end-to-end dialogue agents for our in-cabin use cases. We also plan to exploit other modalities for improved understanding of the in-cabin dialogue.
Yes
b512ab8de26874ee240cffdb3c65d9ac8d6023d9
b512ab8de26874ee240cffdb3c65d9ac8d6023d9_0
Q: What kernels are used in the support vector machines? Text: Introduction The evolution of scientific ideas happens when old ideas are replaced by new ones. Researchers usually conduct scientific experiments based on the previous publications. They either take use of others work as a solution to solve their specific problem, or they improve the results documented in the previous publications by introducing new solutions. I refer to the former as positive citation and the later negative citation. Citation sentence examples with different sentiment polarity are shown in Table TABREF2 . Sentiment analysis of citations plays an important role in plotting scientific idea flow. I can see from Table TABREF2 , one of the ideas introduced in paper A0 is Hidden Markov Model (HMM) based part-of-speech (POS) tagging, which has been referenced positively in paper A1. In paper A2, however, a better approach was brought up making the idea (HMM based POS) in paper A0 negative. This citation sentiment analysis could lead to future-works in such a way that new approaches (mentioned in paper A2) are recommended to other papers which cited A0 positively . Analyzing citation sentences during literature review is time consuming. Recently, researchers developed algorithms to automatically analyze citation sentiment. For example, BIBREF0 extracted several features for citation purpose and polarity classification, such as reference count, contrary expression and dependency relations. Jochim et al. tried to improve the result by using unigram and bigram features BIBREF1 . BIBREF2 used word level features, contextual polarity features, and sentence structure based features to detect sentiment citations. Although they generated good results using the combination of features, it required a lot of engineering work and big amount of annotated data to obtain the features. Further more, capturing accurate features relies on other NLP techniques, such as part-of-speech tagging (POS) and sentence parsing. Therefore, it is necessary to explore other techniques that are free from hand-crafted features. With the development of neural networks and deep learning, it is possible to learn the representations of concepts from unlabeled text corpus automatically. These representations can be treated as concept features for classification. An important advance in this area is the development of the word2vec technique BIBREF3 , which has proved to be an effective approach in Twitter sentiment classification BIBREF4 . In this work, the word2vec technique on sentiment analysis of citations was explored. Word embeddings trained from different corpora were compared. Related Work Mikolov et al. introduced word2vec technique BIBREF3 that can obtain word vectors by training text corpus. The idea of word2vec (word embeddings) originated from the concept of distributed representation of words BIBREF5 . The common method to derive the vectors is using neural probabilistic language model BIBREF6 . Word embeddings proved to be effective representations in the tasks of sentiment analysis BIBREF4 , BIBREF7 , BIBREF8 and text classification BIBREF9 . Sadeghian and Sharafat BIBREF10 extended word embeddings to sentence embeddings by averaging the word vectors in a sentiment review statement. Their results showed that word embeddings outperformed the bag-of-words model in sentiment classification. In this work, I are aiming at evaluating word embeddings for sentiment analysis of citations. 
The research questions are: Pre-processing The SentenceModel provided by LingPipe was used to segment raw text into its constituent sentences . The data I used to train the vectors has noise. For example, there are incomplete sentences mistakenly detected (e.g. Publication Year.). To address this issue, I eliminated sentences with less than three words. Overall Sent2vec Training In the work, I constructed sentence embeddings based on word embeddings. I simply averaged the vectors of the words in one sentence to obtain sentence embeddings (sent2vec). The main process in this step is to learn the word embedding matrix INLINEFORM0 : INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 (1) where INLINEFORM0 ( INLINEFORM1 ) is the word embedding for word INLINEFORM2 , which could be learned by the classical word2vec algorithm BIBREF3 . The parameters that I used to train the word embeddings are the same as in the work of Sadeghian and Sharafat Polarity-Specific Word Representation Training To improve sentiment citation classification results, I trained polarity specific word embeddings (PS-Embeddings), which were inspired by the Sentiment-Specific Word Embedding BIBREF4 . After obtaining the PS-Embeddings, I used the same scheme to average the vectors in one sentence according to the sent2vec model. Training Dataset The ACL-Embeddings (300 and 100 dimensions) from ACL collection were trained . ACL Anthology Reference Corpus contains the canonical 10,921 computational linguistics papers, from which I have generated 622,144 sentences after filtering out sentences with lower quality. For training polarity specific word embeddings (PS-Embeddings, 100 dimensions), I selected 17,538 sentences (8,769 positive and 8,769 negative) from ACL collection, by comparing sentences with the polar phrases . The pre-trained Brown-Embeddings (100 dimensions) learned from Brown corpus was also used as a comparison. Test Dataset To evaluate the sent2vec performance on citation sentiment detection, I conducted experiments on three datasets. The first one (dataset-basic) was originally taken from ACL Anthology BIBREF11 . Athar and Awais BIBREF2 manually annotated 8,736 citations from 310 publications in the ACL Anthology. I used all of the labeled sentences (830 positive, 280 negative and 7,626 objective) for testing. The second dataset (dataset-implicit) was used for evaluating implicit citation classification, containing 200,222 excluded (x), 282 positive (p), 419 negative (n) and 2,880 objective (o) annotated sentences. Every sentence which does not contain any direct or indirect mention of the citation is labeled as being excluded (x) . The third dataset (dataset-pn) is a subset of dataset-basic, containing 828 positive and 280 negative citations. Dataset-pn was used for the purposes of (1) evaluating binary classification (positive versus negative) performance using sent2vec; (2) Comparing the sentiment classification ability of PS-Embeddings with other embeddings. Evaluation Strategy One-Vs-The-Rest strategy was adopted for the task of multi-class classification and I reported F-score, micro-F, macro-F and weighted-F scores using 10-fold cross-validation. The F1 score is a weighted average of the precision and recall. In the multi-class case, this is the weighted average of the F1 score of each class. There are several types of averaging performed on the data: Micro-F calculates metrics globally by counting the total true positives, false negatives and false positives. 
Macro-F calculates metrics for each label, and find their unweighted mean. Macro-F does not take label imbalance into account. Weighted-F calculates metrics for each label, and find their average, weighted by support (the number of true instances for each label). Weighted-F alters macro-F to account for label imbalance. Results The performances of citation sentiment classification on dataset-basic and dataset-implicit were shown in Table TABREF25 and Table TABREF26 respectively. The result of classifying positive and negative citations was shown in Table TABREF27 . To compare with the outcomes in the work of BIBREF2 , I selected two records from their results: the best one (based on features n-gram + dependencies + negation) and the baseline (based on 1-3 grams). From Table TABREF25 I can see that the features extracted by BIBREF2 performed far better than word embeddings, in terms of macro-F (their best macro-F is 0.90, the one in this work is 0.33). However, the higher micro-F score (The highest micro-F in this work is 0.88, theirs is 0.78) and the weighted-F scores indicated that this method may achieve better performances if the evaluations are conducted on a balanced dataset. Among the embeddings, ACL-Embeddings performed better than Brown corpus in terms of macro-F and weighted-F measurements . To compare the dimensionality of word embeddings, ACL300 gave a higher micro-F score than ACL100, but there is no difference between 300 and 100 dimensional ACL-embeddings when look at the macro-F and weighted-F scores. Table TABREF26 showed the sent2vec performance on classifying implicit citations with four categories: objective, negative, positive and excluded. The method in this experiment had a poor performance on detecting positive citations, but it was comparable with both the baseline and sentence structure method BIBREF12 for the category of objective citations. With respect to classifying negative citations, this method was not as good as sentence structure features but it outperformed the baseline. The results of classifying category X from the rest showed that the performances of this method and the sentence structure method are fairly equal. Table TABREF27 showed the results of classifying positive and negative citations using different word embeddings. The macro-F score 0.85 and the weighted-F score 0.86 proved that word2vec is effective on classifying positive and negative citations. However, unlike the outcomes in the paper of BIBREF4 , where they concluded that sentiment specific word embeddings performed best, integrating polarity information did not improve the result in this experiment. Discussion and Conclusion In this paper, I reported the citation sentiment classification results based on word embeddings. The binary classification results in Table TABREF27 showed that word2vec is a promising tool for distinguishing positive and negative citations. From Table TABREF27 I can see that there are no big differences among the scores generated by ACL100 and Brown100, despite they have different vocabulary sizes (ACL100 has 14,325 words, Brown100 has 56,057 words). The polarity specific word embeddings did not show its strength in the task of binary classification. 
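As a point of reference, the pipeline evaluated above — averaging word vectors into sentence embeddings (sent2vec), training a One-Vs-The-Rest classifier on them, and reporting micro-, macro- and weighted-F1 under 10-fold cross-validation — can be sketched as follows. gensim, scikit-learn, the classifier choice and all hyperparameters are assumptions; the paper does not name the exact tooling or classifier.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import f1_score

# toy corpus of tokenised citation sentences with sentiment labels
corpus = [
    (["this", "approach", "outperforms", "previous", "work"], "positive"),
    (["their", "model", "fails", "on", "rare", "words"], "negative"),
    (["we", "use", "the", "tagger", "of", "previous", "work"], "objective"),
] * 40  # repeated so that 10-fold CV has enough samples per class

w2v = Word2Vec([toks for toks, _ in corpus], vector_size=100, min_count=1, epochs=20)

def sent2vec(tokens):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.wv.vector_size)

X = np.stack([sent2vec(toks) for toks, _ in corpus])
y = np.array([label for _, label in corpus])

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
pred = cross_val_predict(clf, X, y, cv=cv)
for avg in ("micro", "macro", "weighted"):
    print(avg, round(f1_score(y, pred, average=avg), 3))
```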
For the task of classifying implicit citations (Table TABREF26 ), in general, sent2vec (macro-F 0.44) was comparable with the baseline (macro-F 0.47) and it was effective for detecting objective sentences (F-score 0.84) as well as separating X sentences from the rest (F-score 0.997), but it did not work well on distinguishing positive citations from the rest. For the overall classification (Table TABREF25 ), however, this method was not as good as hand-crafted features, such as n-grams and sentence structure features. I may conclude from this experiment that word2vec technique has the potential to capture sentiment information in the citations, but hand-crafted features have better performance.
Unanswerable
4e4d377b140c149338446ba69737ea191c4328d9
4e4d377b140c149338446ba69737ea191c4328d9_0
Q: What dataset is used? Text: Introduction The evolution of scientific ideas happens when old ideas are replaced by new ones. Researchers usually conduct scientific experiments based on the previous publications. They either take use of others work as a solution to solve their specific problem, or they improve the results documented in the previous publications by introducing new solutions. I refer to the former as positive citation and the later negative citation. Citation sentence examples with different sentiment polarity are shown in Table TABREF2 . Sentiment analysis of citations plays an important role in plotting scientific idea flow. I can see from Table TABREF2 , one of the ideas introduced in paper A0 is Hidden Markov Model (HMM) based part-of-speech (POS) tagging, which has been referenced positively in paper A1. In paper A2, however, a better approach was brought up making the idea (HMM based POS) in paper A0 negative. This citation sentiment analysis could lead to future-works in such a way that new approaches (mentioned in paper A2) are recommended to other papers which cited A0 positively . Analyzing citation sentences during literature review is time consuming. Recently, researchers developed algorithms to automatically analyze citation sentiment. For example, BIBREF0 extracted several features for citation purpose and polarity classification, such as reference count, contrary expression and dependency relations. Jochim et al. tried to improve the result by using unigram and bigram features BIBREF1 . BIBREF2 used word level features, contextual polarity features, and sentence structure based features to detect sentiment citations. Although they generated good results using the combination of features, it required a lot of engineering work and big amount of annotated data to obtain the features. Further more, capturing accurate features relies on other NLP techniques, such as part-of-speech tagging (POS) and sentence parsing. Therefore, it is necessary to explore other techniques that are free from hand-crafted features. With the development of neural networks and deep learning, it is possible to learn the representations of concepts from unlabeled text corpus automatically. These representations can be treated as concept features for classification. An important advance in this area is the development of the word2vec technique BIBREF3 , which has proved to be an effective approach in Twitter sentiment classification BIBREF4 . In this work, the word2vec technique on sentiment analysis of citations was explored. Word embeddings trained from different corpora were compared. Related Work Mikolov et al. introduced word2vec technique BIBREF3 that can obtain word vectors by training text corpus. The idea of word2vec (word embeddings) originated from the concept of distributed representation of words BIBREF5 . The common method to derive the vectors is using neural probabilistic language model BIBREF6 . Word embeddings proved to be effective representations in the tasks of sentiment analysis BIBREF4 , BIBREF7 , BIBREF8 and text classification BIBREF9 . Sadeghian and Sharafat BIBREF10 extended word embeddings to sentence embeddings by averaging the word vectors in a sentiment review statement. Their results showed that word embeddings outperformed the bag-of-words model in sentiment classification. In this work, I are aiming at evaluating word embeddings for sentiment analysis of citations. 
The research questions are: Pre-processing The SentenceModel provided by LingPipe was used to segment raw text into its constituent sentences . The data I used to train the vectors has noise. For example, there are incomplete sentences mistakenly detected (e.g. Publication Year.). To address this issue, I eliminated sentences with less than three words. Overall Sent2vec Training In the work, I constructed sentence embeddings based on word embeddings. I simply averaged the vectors of the words in one sentence to obtain sentence embeddings (sent2vec). The main process in this step is to learn the word embedding matrix INLINEFORM0 : INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 (1) where INLINEFORM0 ( INLINEFORM1 ) is the word embedding for word INLINEFORM2 , which could be learned by the classical word2vec algorithm BIBREF3 . The parameters that I used to train the word embeddings are the same as in the work of Sadeghian and Sharafat Polarity-Specific Word Representation Training To improve sentiment citation classification results, I trained polarity specific word embeddings (PS-Embeddings), which were inspired by the Sentiment-Specific Word Embedding BIBREF4 . After obtaining the PS-Embeddings, I used the same scheme to average the vectors in one sentence according to the sent2vec model. Training Dataset The ACL-Embeddings (300 and 100 dimensions) from ACL collection were trained . ACL Anthology Reference Corpus contains the canonical 10,921 computational linguistics papers, from which I have generated 622,144 sentences after filtering out sentences with lower quality. For training polarity specific word embeddings (PS-Embeddings, 100 dimensions), I selected 17,538 sentences (8,769 positive and 8,769 negative) from ACL collection, by comparing sentences with the polar phrases . The pre-trained Brown-Embeddings (100 dimensions) learned from Brown corpus was also used as a comparison. Test Dataset To evaluate the sent2vec performance on citation sentiment detection, I conducted experiments on three datasets. The first one (dataset-basic) was originally taken from ACL Anthology BIBREF11 . Athar and Awais BIBREF2 manually annotated 8,736 citations from 310 publications in the ACL Anthology. I used all of the labeled sentences (830 positive, 280 negative and 7,626 objective) for testing. The second dataset (dataset-implicit) was used for evaluating implicit citation classification, containing 200,222 excluded (x), 282 positive (p), 419 negative (n) and 2,880 objective (o) annotated sentences. Every sentence which does not contain any direct or indirect mention of the citation is labeled as being excluded (x) . The third dataset (dataset-pn) is a subset of dataset-basic, containing 828 positive and 280 negative citations. Dataset-pn was used for the purposes of (1) evaluating binary classification (positive versus negative) performance using sent2vec; (2) Comparing the sentiment classification ability of PS-Embeddings with other embeddings. Evaluation Strategy One-Vs-The-Rest strategy was adopted for the task of multi-class classification and I reported F-score, micro-F, macro-F and weighted-F scores using 10-fold cross-validation. The F1 score is a weighted average of the precision and recall. In the multi-class case, this is the weighted average of the F1 score of each class. There are several types of averaging performed on the data: Micro-F calculates metrics globally by counting the total true positives, false negatives and false positives. 
Macro-F calculates metrics for each label, and find their unweighted mean. Macro-F does not take label imbalance into account. Weighted-F calculates metrics for each label, and find their average, weighted by support (the number of true instances for each label). Weighted-F alters macro-F to account for label imbalance. Results The performances of citation sentiment classification on dataset-basic and dataset-implicit were shown in Table TABREF25 and Table TABREF26 respectively. The result of classifying positive and negative citations was shown in Table TABREF27 . To compare with the outcomes in the work of BIBREF2 , I selected two records from their results: the best one (based on features n-gram + dependencies + negation) and the baseline (based on 1-3 grams). From Table TABREF25 I can see that the features extracted by BIBREF2 performed far better than word embeddings, in terms of macro-F (their best macro-F is 0.90, the one in this work is 0.33). However, the higher micro-F score (The highest micro-F in this work is 0.88, theirs is 0.78) and the weighted-F scores indicated that this method may achieve better performances if the evaluations are conducted on a balanced dataset. Among the embeddings, ACL-Embeddings performed better than Brown corpus in terms of macro-F and weighted-F measurements . To compare the dimensionality of word embeddings, ACL300 gave a higher micro-F score than ACL100, but there is no difference between 300 and 100 dimensional ACL-embeddings when look at the macro-F and weighted-F scores. Table TABREF26 showed the sent2vec performance on classifying implicit citations with four categories: objective, negative, positive and excluded. The method in this experiment had a poor performance on detecting positive citations, but it was comparable with both the baseline and sentence structure method BIBREF12 for the category of objective citations. With respect to classifying negative citations, this method was not as good as sentence structure features but it outperformed the baseline. The results of classifying category X from the rest showed that the performances of this method and the sentence structure method are fairly equal. Table TABREF27 showed the results of classifying positive and negative citations using different word embeddings. The macro-F score 0.85 and the weighted-F score 0.86 proved that word2vec is effective on classifying positive and negative citations. However, unlike the outcomes in the paper of BIBREF4 , where they concluded that sentiment specific word embeddings performed best, integrating polarity information did not improve the result in this experiment. Discussion and Conclusion In this paper, I reported the citation sentiment classification results based on word embeddings. The binary classification results in Table TABREF27 showed that word2vec is a promising tool for distinguishing positive and negative citations. From Table TABREF27 I can see that there are no big differences among the scores generated by ACL100 and Brown100, despite they have different vocabulary sizes (ACL100 has 14,325 words, Brown100 has 56,057 words). The polarity specific word embeddings did not show its strength in the task of binary classification. 
For the task of classifying implicit citations (Table TABREF26 ), in general, sent2vec (macro-F 0.44) was comparable with the baseline (macro-F 0.47) and it was effective for detecting objective sentences (F-score 0.84) as well as separating X sentences from the rest (F-score 0.997), but it did not work well on distinguishing positive citations from the rest. For the overall classification (Table TABREF25 ), however, this method was not as good as hand-crafted features, such as n-grams and sentence structure features. I may conclude from this experiment that word2vec technique has the potential to capture sentiment information in the citations, but hand-crafted features have better performance.
ACL Anthology Reference Corpus
828ce5faed7783297cf9ce202364f999b8d4a1f6
828ce5faed7783297cf9ce202364f999b8d4a1f6_0
Q: What metrics are considered? Text: Introduction The evolution of scientific ideas happens when old ideas are replaced by new ones. Researchers usually conduct scientific experiments based on the previous publications. They either take use of others work as a solution to solve their specific problem, or they improve the results documented in the previous publications by introducing new solutions. I refer to the former as positive citation and the later negative citation. Citation sentence examples with different sentiment polarity are shown in Table TABREF2 . Sentiment analysis of citations plays an important role in plotting scientific idea flow. I can see from Table TABREF2 , one of the ideas introduced in paper A0 is Hidden Markov Model (HMM) based part-of-speech (POS) tagging, which has been referenced positively in paper A1. In paper A2, however, a better approach was brought up making the idea (HMM based POS) in paper A0 negative. This citation sentiment analysis could lead to future-works in such a way that new approaches (mentioned in paper A2) are recommended to other papers which cited A0 positively . Analyzing citation sentences during literature review is time consuming. Recently, researchers developed algorithms to automatically analyze citation sentiment. For example, BIBREF0 extracted several features for citation purpose and polarity classification, such as reference count, contrary expression and dependency relations. Jochim et al. tried to improve the result by using unigram and bigram features BIBREF1 . BIBREF2 used word level features, contextual polarity features, and sentence structure based features to detect sentiment citations. Although they generated good results using the combination of features, it required a lot of engineering work and big amount of annotated data to obtain the features. Further more, capturing accurate features relies on other NLP techniques, such as part-of-speech tagging (POS) and sentence parsing. Therefore, it is necessary to explore other techniques that are free from hand-crafted features. With the development of neural networks and deep learning, it is possible to learn the representations of concepts from unlabeled text corpus automatically. These representations can be treated as concept features for classification. An important advance in this area is the development of the word2vec technique BIBREF3 , which has proved to be an effective approach in Twitter sentiment classification BIBREF4 . In this work, the word2vec technique on sentiment analysis of citations was explored. Word embeddings trained from different corpora were compared. Related Work Mikolov et al. introduced word2vec technique BIBREF3 that can obtain word vectors by training text corpus. The idea of word2vec (word embeddings) originated from the concept of distributed representation of words BIBREF5 . The common method to derive the vectors is using neural probabilistic language model BIBREF6 . Word embeddings proved to be effective representations in the tasks of sentiment analysis BIBREF4 , BIBREF7 , BIBREF8 and text classification BIBREF9 . Sadeghian and Sharafat BIBREF10 extended word embeddings to sentence embeddings by averaging the word vectors in a sentiment review statement. Their results showed that word embeddings outperformed the bag-of-words model in sentiment classification. In this work, I are aiming at evaluating word embeddings for sentiment analysis of citations. 
The research questions are: Pre-processing The SentenceModel provided by LingPipe was used to segment raw text into its constituent sentences . The data I used to train the vectors has noise. For example, there are incomplete sentences mistakenly detected (e.g. Publication Year.). To address this issue, I eliminated sentences with less than three words. Overall Sent2vec Training In the work, I constructed sentence embeddings based on word embeddings. I simply averaged the vectors of the words in one sentence to obtain sentence embeddings (sent2vec). The main process in this step is to learn the word embedding matrix INLINEFORM0 : INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM3 (1) where INLINEFORM0 ( INLINEFORM1 ) is the word embedding for word INLINEFORM2 , which could be learned by the classical word2vec algorithm BIBREF3 . The parameters that I used to train the word embeddings are the same as in the work of Sadeghian and Sharafat Polarity-Specific Word Representation Training To improve sentiment citation classification results, I trained polarity specific word embeddings (PS-Embeddings), which were inspired by the Sentiment-Specific Word Embedding BIBREF4 . After obtaining the PS-Embeddings, I used the same scheme to average the vectors in one sentence according to the sent2vec model. Training Dataset The ACL-Embeddings (300 and 100 dimensions) from ACL collection were trained . ACL Anthology Reference Corpus contains the canonical 10,921 computational linguistics papers, from which I have generated 622,144 sentences after filtering out sentences with lower quality. For training polarity specific word embeddings (PS-Embeddings, 100 dimensions), I selected 17,538 sentences (8,769 positive and 8,769 negative) from ACL collection, by comparing sentences with the polar phrases . The pre-trained Brown-Embeddings (100 dimensions) learned from Brown corpus was also used as a comparison. Test Dataset To evaluate the sent2vec performance on citation sentiment detection, I conducted experiments on three datasets. The first one (dataset-basic) was originally taken from ACL Anthology BIBREF11 . Athar and Awais BIBREF2 manually annotated 8,736 citations from 310 publications in the ACL Anthology. I used all of the labeled sentences (830 positive, 280 negative and 7,626 objective) for testing. The second dataset (dataset-implicit) was used for evaluating implicit citation classification, containing 200,222 excluded (x), 282 positive (p), 419 negative (n) and 2,880 objective (o) annotated sentences. Every sentence which does not contain any direct or indirect mention of the citation is labeled as being excluded (x) . The third dataset (dataset-pn) is a subset of dataset-basic, containing 828 positive and 280 negative citations. Dataset-pn was used for the purposes of (1) evaluating binary classification (positive versus negative) performance using sent2vec; (2) Comparing the sentiment classification ability of PS-Embeddings with other embeddings. Evaluation Strategy One-Vs-The-Rest strategy was adopted for the task of multi-class classification and I reported F-score, micro-F, macro-F and weighted-F scores using 10-fold cross-validation. The F1 score is a weighted average of the precision and recall. In the multi-class case, this is the weighted average of the F1 score of each class. There are several types of averaging performed on the data: Micro-F calculates metrics globally by counting the total true positives, false negatives and false positives. 
Macro-F calculates metrics for each label, and find their unweighted mean. Macro-F does not take label imbalance into account. Weighted-F calculates metrics for each label, and find their average, weighted by support (the number of true instances for each label). Weighted-F alters macro-F to account for label imbalance. Results The performances of citation sentiment classification on dataset-basic and dataset-implicit were shown in Table TABREF25 and Table TABREF26 respectively. The result of classifying positive and negative citations was shown in Table TABREF27 . To compare with the outcomes in the work of BIBREF2 , I selected two records from their results: the best one (based on features n-gram + dependencies + negation) and the baseline (based on 1-3 grams). From Table TABREF25 I can see that the features extracted by BIBREF2 performed far better than word embeddings, in terms of macro-F (their best macro-F is 0.90, the one in this work is 0.33). However, the higher micro-F score (The highest micro-F in this work is 0.88, theirs is 0.78) and the weighted-F scores indicated that this method may achieve better performances if the evaluations are conducted on a balanced dataset. Among the embeddings, ACL-Embeddings performed better than Brown corpus in terms of macro-F and weighted-F measurements . To compare the dimensionality of word embeddings, ACL300 gave a higher micro-F score than ACL100, but there is no difference between 300 and 100 dimensional ACL-embeddings when look at the macro-F and weighted-F scores. Table TABREF26 showed the sent2vec performance on classifying implicit citations with four categories: objective, negative, positive and excluded. The method in this experiment had a poor performance on detecting positive citations, but it was comparable with both the baseline and sentence structure method BIBREF12 for the category of objective citations. With respect to classifying negative citations, this method was not as good as sentence structure features but it outperformed the baseline. The results of classifying category X from the rest showed that the performances of this method and the sentence structure method are fairly equal. Table TABREF27 showed the results of classifying positive and negative citations using different word embeddings. The macro-F score 0.85 and the weighted-F score 0.86 proved that word2vec is effective on classifying positive and negative citations. However, unlike the outcomes in the paper of BIBREF4 , where they concluded that sentiment specific word embeddings performed best, integrating polarity information did not improve the result in this experiment. Discussion and Conclusion In this paper, I reported the citation sentiment classification results based on word embeddings. The binary classification results in Table TABREF27 showed that word2vec is a promising tool for distinguishing positive and negative citations. From Table TABREF27 I can see that there are no big differences among the scores generated by ACL100 and Brown100, despite they have different vocabulary sizes (ACL100 has 14,325 words, Brown100 has 56,057 words). The polarity specific word embeddings did not show its strength in the task of binary classification. 
For the task of classifying implicit citations (Table TABREF26 ), in general, sent2vec (macro-F 0.44) was comparable with the baseline (macro-F 0.47) and it was effective for detecting objective sentences (F-score 0.84) as well as separating X sentences from the rest (F-score 0.997), but it did not work well on distinguishing positive citations from the rest. For the overall classification (Table TABREF25 ), however, this method was not as good as hand-crafted features, such as n-grams and sentence structure features. I may conclude from this experiment that word2vec technique has the potential to capture sentiment information in the citations, but hand-crafted features have better performance.
F-score, micro-F, macro-F, weighted-F
9d016eb3913b41f7a18c6fa865897c12b5fe0212
9d016eb3913b41f7a18c6fa865897c12b5fe0212_0
Q: Did the authors evaluate their system output for coherence? Text: Related Work In the literature, several cache-based translation models have been proposed for conventional statistical machine translation, besides traditional n-gram language models and neural language models. In this section, we will first introduce related work in cache-based language models and then in translation models. For traditional n-gram language models, Kuhn1990A propose a cache-based language model, which mixes a large global language model with a small local model estimated from recent items in the history of the input stream for speech recongnition. della1992adaptive introduce a MaxEnt-based cache model by integrating a cache into a smoothed trigram language model, reporting reduction in both perplexity and word error rates. chueh2010topic present a new topic cache model for speech recongnition based on latent Dirichlet language model by incorporating a large-span topic cache into the generation of topic mixtures. For neural language models, huang2014cache propose a cache-based RNN inference scheme, which avoids repeated computation of identical LM calls and caches previously computed scores and useful intermediate results and thus reduce the computational expense of RNNLM. Grave2016Improving extend the neural network language model with a neural cache model, which stores recent hidden activations to be used as contextual representations. Our caches significantly differ from these two caches in that we store linguistic items in the cache rather than scores or activations. For neural machine translation, wangexploiting propose a cross-sentence context-aware approach and employ a hierarchy of Recurrent Neural Networks (RNNs) to summarize the cross-sentence context from source-side previous sentences. jean2017does propose a novel larger-context neural machine translation model based on the recent works on larger-context language modelling BIBREF11 and employ the method to model the surrounding text in addition to the source sentence. For cache-based translation models, nepveu2004adaptive propose a dynamic adaptive translation model using cache-based implementation for interactive machine translation, and develop a monolingual dynamic adaptive model and a bilingual dynamic adaptive model. tiedemann2010context propose a cache-based translation model, filling the cache with bilingual phrase pairs from the best translation hypotheses of previous sentences in a document. gong2011cache further propose a cache-based approach to document-level translation, which includes three caches, a dynamic cache, a static cache and a topic cache, to capture various document-level information. bertoldi2013cache describe a cache mechanism to implement online learning in phrase-based SMT and use a repetition rate measure to predict the utility of cached items expected to be useful for the current translation. Our caches are similar to those used by gong2011cache who incorporate these caches into statistical machine translation. We adapt them to neural machine translation with a neural cache model. It is worthwhile to emphasize that such adaptation is nontrivial as shown below because the two translation philosophies and frameworks are significantly different. Attention-based NMT In this section, we briefly describe the NMT model taken as a baseline. Without loss of generality, we adopt the NMT architecture proposed by bahdanau2015neural, with an encoder-decoder neural network. 
Encoder The encoder uses bidirectional recurrent neural networks (Bi-RNN) to encode a source sentence with a forward and a backward RNN. The forward RNN takes as input a source sentence $x = (x_1, x_2, ..., x_T)$ from left to right and outputs a hidden state sequence $(\overrightarrow{h_1},\overrightarrow{h_2}, ..., \overrightarrow{h_T})$ while the backward RNN reads the sentence in an inverse direction and outputs a backward hidden state sequence $(\overleftarrow{h_1},\overleftarrow{h_2}, ..., \overleftarrow{h_T})$ . The context-dependent word representations of the source sentence $h_j$ (also known as word annotation vectors) are the concatenation of hidden states $\overrightarrow{h_j}$ and $\overleftarrow{h_j}$ in the two directions. Decoder The decoder is an RNN that predicts target words $y_t$ via a multi-layer perceptron (MLP) neural network. The prediction is based on the decoder RNN hidden state $s_t$ , the previous predicted word $y_{t-1}$ and a source-side context vector $c_t$ . The hidden state $s_t$ of the decoder at time $t$ and the conditional probability of the next word $y_t$ are computed as follows: $$s_t = f(s_{t-1}, y_{t-1}, c_t)$$ (Eq. 3) $$p(y_t|y_{<t};x) = g(y_{t-1}, s_t, c_t)$$ (Eq. 4) Attention Model In the attention model, the context vector $c_t$ is calculated as a weighted sum over source annotation vectors $(h_1, h_2, ..., h_T)$ : $$c_t = \sum _{j=1}^{T_x} \alpha _{tj}h_j$$ (Eq. 6) $$\alpha _{tj} = \frac{exp(e_{tj})}{\sum _{k=1}^{T} exp(e_{tk})}$$ (Eq. 7) where $\alpha _{tj}$ is the attention weight of each hidden state $h_j$ computed by the attention model, and $a$ is a feed forward neural network with a single hidden layer. The dl4mt tutorial presents an improved implementation of the attention-based NMT system, which feeds the previous word $y_{t-1}$ to the attention model. We use the dl4mt tutorial implementation as our baseline, which we will refer to as RNNSearch*. The proposed cache-based neural approach is implemented on the top of RNNSearch* system, where the encoder-decoder NMT framework is trained to optimize the sum of the conditional log probabilities of correct translations of all source sentences on a parallel corpus as normal. The Cache-based Neural Model In this section, we elaborate on the proposed cache-based neural model and how we integrate it into neural machine translation, Figure 1 shows the entire architecture of our NMT with the cache-based neural model. Dynamic Cache and Topic Cache The aim of cache is to incorporate document-level constraints and therefore to improve the consistency and coherence of document translations. In this section, we introduce our proposed dynamic cache and topic cache in detail. In order to build the dynamic cache, we dynamically extract words from recently translated sentences and the partial translation of current sentence being translated as words of dynamic cache. We apply the following rules to build the dynamic cache. The max size of the dynamic cache is set to $|c_d|$ . According to the first-in-first-out rule, when the dynamic cache is full and a new word is inserted into the cache, the oldest word in the cache will be removed. Duplicate entries into the dynamic cache are not allowed when a word has been already in the cache. It is worth noting that we also maintain a stop word list, and we added English punctuations and “UNK” into our stop word list. Words in the stop word list would not be inserted into the dynamic cache. So the common words like “a” and “the” cannot appear in the cache. 
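The dynamic-cache bookkeeping just described can be sketched as a tiny data structure with a fixed capacity, first-in-first-out eviction, a no-duplicates rule and a stop-word filter. Plain words are stored here for readability; in the model the cache actually holds the corresponding target word embeddings, and the capacity |c_d| is a hyperparameter.

```python
from collections import deque

class DynamicCache:
    """Fixed-size FIFO cache of recently generated target words (illustrative sketch)."""

    def __init__(self, max_size=100, stop_words=None):
        self.max_size = max_size
        self.stop_words = set(stop_words or [])
        self.items = deque()

    def add(self, word):
        if word in self.stop_words or word in self.items:
            return                        # stop words and duplicates are never inserted
        if len(self.items) >= self.max_size:
            self.items.popleft()          # first-in-first-out: evict the oldest word
        self.items.append(word)

    def __contains__(self, word):
        return word in self.items

cache = DynamicCache(max_size=3, stop_words={"a", "the", "UNK"})
for w in ["the", "military", "actions", "UNK", "government", "actions", "ceasefire"]:
    cache.add(w)
print(list(cache.items))   # ['actions', 'government', 'ceasefire'] -- 'military' was evicted
```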
All words in the dynamic cache can be found in the target-side vocabulary of RNNSearch*. In order to build the topic cache, we first use an off-the-shelf LDA topic tool to learn topic distributions of source- and target-side documents separately. Then we estimate a topic projection distribution over all target-side topics $p(z_t|z_s)$ for each source topic $z_s$ by collecting events and accumulating counts of $(z_s, z_t)$ from aligned document pairs. Notice that $z_s/z_t$ is the topic with the highest topic probability $p(z_.|d)$ on the source/target side. Then we can use the topic cache as follows: During the training process of NMT, the learned target-side topic model is used to infer the topic distribution for each target document. For a target document d in the training data, we select the topic $z$ with the highest probability $p(z|d)$ as the topic for the document. The $|c_t|$ most probable topical words in topic $z$ are extracted to fill the topic cache for the document $d$ . In the NMT testing process, we first infer the topic distribution for a source document in question with the learned source-side topic model. From the topic distribution, we choose the topic with the highest probability as the topic for the source document. Then we use the learned topic projection function to map the source topic onto a target topic with the highest projection probability, as illustrated in Figure 2. After that, we use the $|c_t|$ most probable topical words in the projected target topic to fill the topic cache. The words of topic cache and dynamic cache together form the final cache model. In practice, the cache stores word embeddings, as shown in Figure 3. As we do not want to introduce extra embedding parameters, we let the cache share the same target word embedding matrix with the NMT model. In this case, if a word is not in the target-side vocabulary of NMT, we discard the word from the cache. The Model The cache-based neural model is to evaluate the probabilities of words occurring in the cache and to provide the evaluation results for the decoder via a gating mechanism. When the decoder generates the next target word $y_t$ , we hope that the cache can provide helpful information to judge whether $y_t$ is appropriate from the perspective of the document-level cache if $y_t$ occurs in the cache.To achieve this goal, we should appropriately evaluate the word entries in the cache. In this paper, we build a new neural network layer as the scorer for the cache. At each decoding step $t$ , we use the scorer to score $y_t$ if $y_t$ is in the cache. The inputs to the scorer are the current hidden state $s_t$ of the decoder, previous word $y_{t-1}$ , context vector $c_t$ , and the word $y_t$ from the cache. The score of $y_t$ is calculated as follows: $$score(y_t|y_{<t},x) = g_{cache}(s_t,c_t,y_{t-1},y_t)$$ (Eq. 22) where $g_{cache}$ is a non-linear function. This score is further used to estimate the cache probability of $y_t$ as follows: $$p_{cache}(y_t|y_{<t},x) = softmax(score(y_t|y_{<t},x))$$ (Eq. 23) Since we have two prediction probabilities for the next target word $y_t$ , one from the cache-based neural model $p_{cache}$ , the other originally estimated by the NMT decoder $p_{nmt}$ , how do we integrate these two probabilities? Here, we introduce a gating mechanism to combine them, and word prediction probabilities on the vocabulary of NMT are updated by combining the two probabilities through linear interpolation between the NMT probability and cache-based neural model probability. 
The final word prediction probability for $y_t$ is calculated as follows: $$p(y_t|y_{<t},x) = (1 - \alpha _t)p_{cache}(y_t|y_{<t},x) + \alpha _tp_{nmt}(y_t|y_{<t},x)$$ (Eq. 26) Notice that if $y_t$ is not in the cache, we set $p_{cache}(y_t|y_{<t},x) = 0$ , where $\alpha _t$ is the gate and computed as follows: $$\alpha _t = g_{gate}(f_{gate}(s_t,c_t,y_{t-1}))$$ (Eq. 27) where $f_{gate}$ is a non-linear function and $g_{gate}$ is sigmoid function. We use the contextual elements of $s_t, c_t, y_{t-1}$ to score the current target word occurring in the cache (Eq. (6)) and to estimate the gate (Eq. (9)). If the target word is consistent with the context and in the cache at the same time, the probability of the target word will be high. Finally, we train the proposed cache model jointly with the NMT model towards minimizing the negative log-likelihood on the training corpus. The cost function is computed as follows: $$L(\theta ) = -\sum _{i=1}^N \sum _{t=1}^Tlogp(y_t|y_{<t},x)$$ (Eq. 28) where $\theta $ are all parameters in the cache-based NMT model. Decoding Process Our cache-based NMT system works as follows: When the decoder shifts to a new test document, clear the topic and dynamic cache. Obtain target topical words for the new test document as described in Section 4.1 and fill them in the topic cache. Clear the dynamic cache when translating the first sentence of the test document. For each sentence in the new test document, translate it with the proposed cache-based NMT and continuously expands the dynamic cache with newly generated target words and target words obtained from the best translation hypothesis of previous sentences. In this way, the topic cache can provide useful global information at the beginning of the translation process while the dynamic cache is growing with the progress of translation. Experimentation We evaluated the effectiveness of the proposed cache-based neural model for neural machine translation on NIST Chinese-English translation tasks. Experimental Setting We selected corpora LDC2003E14, LDC2004T07, LDC2005T06, LDC2005T10 and a portion of data from the corpus LDC2004T08 (Hong Kong Hansards/Laws/News) as our bilingual training data, where document boundaries are explicitly kept. In total, our training data contain 103,236 documents and 2.80M sentences. On average, each document consists of 28.4 sentences. We chose NIST05 dataset (1082 sentence pairs) as our development set, and NIST02, NIST04, NIST06 (878, 1788, 1664 sentence pairs. respectively) as our test sets. We compared our proposed model against the following two systems: Moses BIBREF12 : an off-the-shelf phrase-based translation system with its default setting. RNNSearch*: our in-house attention-based NMT system which adopts the feedback attention as described in Section 3 . For Moses, we used the full training data to train the model. We ran GIZA++ BIBREF13 on the training data in both directions, and merged alignments in two directions with “grow-diag-final” refinement rule BIBREF14 to obtain final word alignments. We trained a 5-gram language model on the Xinhua portion of GIGA-WORD corpus using SRILM Toolkit with a modified Kneser-Ney smoothing. For RNNSearch, we used the parallel corpus to train the attention-based NMT model. The encoder of RNNSearch consists of a forward and backward recurrent neural network. The word embedding dimension is 620 and the size of a hidden layer is 1000. 
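Returning to the cache integration for a moment, the gate-based mixing of Eqs. (26)–(27) can be illustrated with the toy numpy sketch below, which interpolates a cache distribution (zero for words outside the cache) with the decoder distribution. For brevity the gate uses a single linear layer, whereas the model described above uses feedforward networks with two hidden layers for $f_{gate}$ and $g_{cache}$; all dimensions are illustrative.

```python
import numpy as np

def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def softmax(x): e = np.exp(x - x.max()); return e / e.sum()

vocab_size, hid = 8, 16                       # illustrative sizes
rng = np.random.default_rng(1)

s_t, c_t, y_prev = (rng.normal(size=hid) for _ in range(3))
context = np.concatenate([s_t, c_t, y_prev])  # inputs to the gate: s_t, c_t, y_{t-1}

W_gate = rng.normal(size=context.size)
alpha_t = sigmoid(W_gate @ context)           # Eq. (27), with a linear f_gate for brevity

p_nmt = softmax(rng.normal(size=vocab_size))  # decoder prediction over the NMT vocabulary
p_cache = np.zeros(vocab_size)                # zero probability for words not in the cache
cache_ids = [2, 5]                            # vocabulary ids of the cached words
p_cache[cache_ids] = softmax(rng.normal(size=len(cache_ids)))

p_final = (1 - alpha_t) * p_cache + alpha_t * p_nmt   # Eq. (26)
print(round(float(alpha_t), 3), round(float(p_final.sum()), 3))  # the mixture still sums to 1
```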
The maximum length of sentences that we used to train RNNSearch* in our experiments was set to 50 on both the Chinese and English sides. We used the most frequent 30K words for both Chinese and English, and replaced rare words with a special token “UNK”. Dropout was applied only on the output layer and the dropout rate was set to 0.5. All the other settings were the same as those in BIBREF1. Once the NMT model was trained, we adopted beam search to find possible translations with high probabilities, with the beam width set to 10.

For the proposed cache-based NMT model, we implemented it on top of RNNSearch*. We set the sizes of the dynamic and topic cache, $|c_d|$ and $|c_t|$, to 100 and 200, respectively. For the dynamic cache, we only kept the most recently-visited items. For the LDA tool, we set the number of topics considered in the model to 100 and the number of topic words used to fill the topic cache to 200. The parameters $\alpha$ and $\beta$ of LDA were set to 0.5 and 0.1, respectively. We used a feedforward neural network with two hidden layers to define $g_{cache}$ (Eq. 22) and $f_{gate}$ (Eq. 27). For $f_{gate}$, the numbers of units in the two hidden layers were set to 500 and 200, respectively. For $g_{cache}$, the numbers of units in the two hidden layers were set to 1000 and 500, respectively. We used a pre-training strategy that has been widely used in the literature to train our proposed model: we first trained the regular attention-based NMT model using our implementation of RNNSearch*, and then used its parameters to initialize the parameters of the proposed model, except for those related to the operations of the proposed cache model. We used the stochastic gradient descent algorithm with mini-batches and Adadelta to train the NMT models. The mini-batch size was set to 80 sentences, and the decay rates $\rho$ and $\epsilon$ of Adadelta were set to 0.95 and $10^{-6}$.

Experimental Results
Table 1 shows the results of the different models measured in terms of BLEU score. From the table, we can find that our implementation RNNSearch*, using the feedback attention and dropout, outperforms Moses by 3.23 BLEU points. The proposed model $RNNSearch*_{+Cd}$ achieves an average gain of 1.01 BLEU points over RNNSearch* on all test sets. Further, the model $RNNSearch*_{+Cd, Ct}$ achieves an average gain of 1.60 BLEU points over RNNSearch*, and it outperforms Moses by 4.83 BLEU points. These results strongly suggest that the dynamic and topic cache are very helpful and able to improve translation quality in document translation.

Effect of the Gating Mechanism
In order to validate the effectiveness of the gating mechanism used in the cache-based neural model, we set a fixed gate value for $RNNSearch*_{+Cd,Ct}$; in other words, we use a mixture of probabilities with fixed proportions to replace the gating mechanism that automatically learns the weights of the probability mixture. Table 2 displays the result. When we set the gate $\alpha$ to a fixed value of 0.3, the performance declines markedly compared with that of $RNNSearch*_{+Cd,Ct}$ in terms of BLEU score, and is even worse than RNNSearch* by 10.11 BLEU points. Therefore, without a good gating mechanism, the cache-based neural model cannot be appropriately integrated into NMT. This shows that the gating mechanism plays an important role in $RNNSearch*_{+Cd,Ct}$.

Effect of the Topic Cache
When the NMT decoder translates the first sentence of a document, the dynamic cache is empty.
In this case, we hope that the topic cache will provide document-level information for translating the first sentence. We therefore further investigate how the topic cache influences the translation of the first sentence in a document. We calculate the average number of words that appear both in the translations of the beginning sentences of documents and in the topic cache. The statistical results are shown in Table 3. Even without using the cache model, RNNSearch* generates translations that contain words from the topic cache, as these topic words are tightly related to the documents being translated. With the topic cache, our neural cache model enables the translations of the first sentences to be more relevant to the global topics of the documents being translated, as these translations contain more words from the topic cache that describes these documents. As the dynamic cache is empty when the decoder translates the beginning sentences, the topic cache is complementary to such a cold cache at the start. Comparing the numbers for translations generated by our model with those for human translations (Reference in Table 3), we find that, with the help of the topic cache, translations of the first sentences of documents become closer to human translations.

Analysis on the Cache-based Neural Model
As shown above, the topic cache is able to influence both the translations of beginning sentences and those of subsequent sentences, while the dynamic cache built from translations of preceding sentences has an impact on the translations of subsequent sentences. We further study what roles the dynamic and topic cache play in the translation process. To this end, we calculate the average number of words in translations generated by $RNNSearch*_{+Cd,Ct}$ that are also in the caches. During the counting process, stop words and “UNK” are removed from sentence and document translations. Table 4 shows the results. If only the topic cache is used ([$document \in [Ct]$, $sentence \in (Ct)$] in Table 4), the cache can still provide useful information to help NMT translate sentences and documents: 28.3 words per document and 2.39 words per sentence come from the topic cache. When both the dynamic and topic cache are used ([$document \in [Ct,Cd]$, $sentence \in (Ct,Cd)$] in Table 4), the numbers of words that occur both in sentence/document translations and in the two caches sharply increase from 2.61/30.27 to 6.73/81.16. The reason is that words that appear in preceding sentences have a high probability of appearing in subsequent sentences. This shows that the dynamic cache plays an important role in keeping document translations consistent by reusing words from preceding sentences. We also provide two translation examples in Table 5. We can find that RNNSearch* generates different translations, “operations” and “actions”, for the same Chinese word “行动(xingdong)”, while our proposed model produces the same translation “actions”.

Analysis on Translation Coherence
We want to further study how the proposed cache-based neural model influences coherence in document translation. For this, we follow Lapata2005Automatic to measure coherence as sentence similarity. First, each sentence is represented by the mean of the distributed vectors of its words. Second, the similarity between two sentences is determined by the cosine of their means: $$sim(S_1,S_2) = cos(\mu (\vec{S_1}),\mu (\vec{S_2}))$$ (Eq. 46) where $\mu (\vec{S_i})=\frac{1}{|S_i|}\sum _{\vec{w} \in S_i}\vec{w}$, and $\vec{w}$ is the vector for word $w$.
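As a concrete illustration of this coherence measure, the short sketch below builds sentence vectors as the mean of word vectors and averages the cosine similarity of adjacent sentences. The whitespace tokenisation, the skipping of out-of-vocabulary words and the function names are simplifying assumptions of ours.

import numpy as np

def sentence_vector(tokens, word_vectors, dim=200):
    # mean of the distributed vectors of the sentence's words; unknown words are skipped here
    vecs = [word_vectors[w] for w in tokens if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom > 0 else 0.0

def document_coherence(sentences, word_vectors, dim=200):
    """Average cosine similarity of adjacent sentences; each sentence is a raw string."""
    vectors = [sentence_vector(s.split(), word_vectors, dim) for s in sentences]
    sims = [cosine(vectors[i], vectors[i + 1]) for i in range(len(vectors) - 1)]
    return sum(sims) / len(sims) if sims else 0.0

Applied to the document translations produced by RNNSearch* and $RNNSearch*_{+Cd,Ct}$, this is the quantity averaged over the test sets in Table 6.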
We use Word2Vec to obtain the distributed vectors of words, with the English Gigaword Fourth Edition as the training data. We consider that Word2Vec embeddings trained on a large monolingual corpus encode the semantic information of words well. We set the dimensionality of the word embeddings to 200. Table 6 shows the average cosine similarity of adjacent sentences on all test sets. From the table, we can find that the $RNNSearch*_{+Cd,Ct}$ model produces better coherence in document translation than RNNSearch* in terms of cosine similarity.

Conclusion
In this paper, we have presented a novel cache-based neural model for NMT to capture global topic information and inter-sentence cohesion dependencies. We use a gating mechanism to integrate both the topic and dynamic cache into the proposed neural cache model. Experimental results show that the cache-based neural model achieves consistent and significant improvements in translation quality over several state-of-the-art NMT and SMT baselines. Further analysis reveals that the topic cache and dynamic cache are complementary to each other and that both are able to guide the NMT decoder to use topical words and to reuse words from recently translated sentences as next-word predictions.

Acknowledgments
The present research was supported by the National Natural Science Foundation of China (Grant No. 61622209). We would like to thank the three anonymous reviewers for their insightful comments.
Yes
c1c611409b5659a1fd4a870b6cc41f042e2e9889
c1c611409b5659a1fd4a870b6cc41f042e2e9889_0
Q: What evaluations did the authors use on their system? Text:
Related Work
In the literature, several cache-based translation models have been proposed for conventional statistical machine translation, besides traditional n-gram language models and neural language models. In this section, we first introduce related work in cache-based language models and then in translation models.

For traditional n-gram language models, Kuhn1990A propose a cache-based language model for speech recognition, which mixes a large global language model with a small local model estimated from recent items in the history of the input stream. della1992adaptive introduce a MaxEnt-based cache model by integrating a cache into a smoothed trigram language model, reporting reductions in both perplexity and word error rates. chueh2010topic present a new topic cache model for speech recognition based on the latent Dirichlet language model, incorporating a large-span topic cache into the generation of topic mixtures. For neural language models, huang2014cache propose a cache-based RNN inference scheme, which avoids repeated computation of identical LM calls and caches previously computed scores and useful intermediate results, thus reducing the computational expense of the RNNLM. Grave2016Improving extend the neural network language model with a neural cache model, which stores recent hidden activations to be used as contextual representations. Our caches significantly differ from these two caches in that we store linguistic items in the cache rather than scores or activations.

For neural machine translation, wangexploiting propose a cross-sentence context-aware approach and employ a hierarchy of Recurrent Neural Networks (RNNs) to summarize the cross-sentence context from source-side previous sentences. jean2017does propose a novel larger-context neural machine translation model based on recent work on larger-context language modelling BIBREF11 and employ the method to model the surrounding text in addition to the source sentence.

For cache-based translation models, nepveu2004adaptive propose a dynamic adaptive translation model using a cache-based implementation for interactive machine translation, and develop a monolingual dynamic adaptive model and a bilingual dynamic adaptive model. tiedemann2010context propose a cache-based translation model, filling the cache with bilingual phrase pairs from the best translation hypotheses of previous sentences in a document. gong2011cache further propose a cache-based approach to document-level translation, which includes three caches, a dynamic cache, a static cache and a topic cache, to capture various kinds of document-level information. bertoldi2013cache describe a cache mechanism to implement online learning in phrase-based SMT and use a repetition rate measure to predict the utility of cached items expected to be useful for the current translation. Our caches are similar to those used by gong2011cache, who incorporate these caches into statistical machine translation. We adapt them to neural machine translation with a neural cache model. It is worthwhile to emphasize that such adaptation is nontrivial, as shown below, because the two translation philosophies and frameworks are significantly different.

Attention-based NMT
In this section, we briefly describe the NMT model taken as a baseline. Without loss of generality, we adopt the NMT architecture proposed by bahdanau2015neural, with an encoder-decoder neural network.
Encoder
The encoder uses a bidirectional recurrent neural network (Bi-RNN) to encode a source sentence with a forward and a backward RNN. The forward RNN reads a source sentence $x = (x_1, x_2, ..., x_T)$ from left to right and outputs a hidden state sequence $(\overrightarrow{h_1},\overrightarrow{h_2}, ..., \overrightarrow{h_T})$, while the backward RNN reads the sentence in the inverse direction and outputs a backward hidden state sequence $(\overleftarrow{h_1},\overleftarrow{h_2}, ..., \overleftarrow{h_T})$. The context-dependent word representations of the source sentence $h_j$ (also known as word annotation vectors) are the concatenation of the hidden states $\overrightarrow{h_j}$ and $\overleftarrow{h_j}$ in the two directions.

Decoder
The decoder is an RNN that predicts target words $y_t$ via a multi-layer perceptron (MLP) neural network. The prediction is based on the decoder RNN hidden state $s_t$, the previously predicted word $y_{t-1}$ and a source-side context vector $c_t$. The hidden state $s_t$ of the decoder at time $t$ and the conditional probability of the next word $y_t$ are computed as follows: $$s_t = f(s_{t-1}, y_{t-1}, c_t)$$ (Eq. 3) $$p(y_t|y_{<t};x) = g(y_{t-1}, s_t, c_t)$$ (Eq. 4)

Attention Model
In the attention model, the context vector $c_t$ is calculated as a weighted sum over the source annotation vectors $(h_1, h_2, ..., h_T)$: $$c_t = \sum _{j=1}^{T_x} \alpha _{tj}h_j$$ (Eq. 6) $$\alpha _{tj} = \frac{exp(e_{tj})}{\sum _{k=1}^{T} exp(e_{tk})}$$ (Eq. 7) where $\alpha _{tj}$ is the attention weight of each hidden state $h_j$ computed by the attention model, $e_{tj}$ is the corresponding alignment score, and $a$ is a feed-forward neural network with a single hidden layer that produces these scores. The dl4mt tutorial presents an improved implementation of the attention-based NMT system, which feeds the previous word $y_{t-1}$ to the attention model. We use the dl4mt tutorial implementation as our baseline, which we refer to as RNNSearch*. The proposed cache-based neural approach is implemented on top of the RNNSearch* system, where the encoder-decoder NMT framework is trained, as usual, to optimize the sum of the conditional log probabilities of correct translations of all source sentences on a parallel corpus.

The Cache-based Neural Model
In this section, we elaborate on the proposed cache-based neural model and how we integrate it into neural machine translation. Figure 1 shows the entire architecture of our NMT with the cache-based neural model.

Dynamic Cache and Topic Cache
The aim of the cache is to incorporate document-level constraints and thereby improve the consistency and coherence of document translations. In this section, we introduce our proposed dynamic cache and topic cache in detail. In order to build the dynamic cache, we dynamically extract words from recently translated sentences and from the partial translation of the current sentence being translated as words of the dynamic cache. We apply the following rules to build the dynamic cache. The maximum size of the dynamic cache is set to $|c_d|$. Following the first-in-first-out rule, when the dynamic cache is full and a new word is inserted, the oldest word in the cache is removed. Duplicate entries are not allowed: a word that is already in the cache is not inserted again. It is worth noting that we also maintain a stop word list, to which we add English punctuation marks and “UNK”; words in the stop word list are not inserted into the dynamic cache, so common words like “a” and “the” cannot appear in the cache.
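A minimal sketch of the two caches described above follows; the class and function names are ours, the stop-word set is only a small illustrative subset, and the topical words are assumed to be supplied by the LDA-based procedure described earlier in this document.

from collections import OrderedDict

STOP_WORDS = {"a", "the", "UNK", ",", ".", "?", "!", ";", ":"}   # illustrative subset only

class DynamicCache:
    """Bounded, first-in-first-out word cache with no duplicates and stop-word filtering."""

    def __init__(self, max_size=100):
        self.max_size = max_size
        self.words = OrderedDict()                 # insertion order doubles as word age

    def add(self, word):
        if word in STOP_WORDS or word in self.words:
            return                                 # stop words and duplicates are skipped
        if len(self.words) >= self.max_size:
            self.words.popitem(last=False)         # evict the oldest entry (FIFO)
        self.words[word] = True

    def extend(self, words):
        for w in words:
            self.add(w)

    def clear(self):
        self.words.clear()

def start_new_document(topical_words, topic_cache_size=200, dynamic_cache_size=100):
    # topic cache: the most probable topical words of the document's (projected) topic;
    # dynamic cache: empty at the first sentence and grown as translation proceeds
    topic_cache = list(topical_words)[:topic_cache_size]
    return topic_cache, DynamicCache(max_size=dynamic_cache_size)

During decoding, the dynamic cache would be extended with the words of the best translation hypothesis after each sentence, mirroring the decoding procedure described earlier.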
BLEU scores, exact matches of words in both translations and topic cache, and cosine similarities of adjacent sentences for coherence.
79bb1a1b71a1149e33e8b51ffdb83124c18f3e9c
79bb1a1b71a1149e33e8b51ffdb83124c18f3e9c_0
Q: What accuracy does CNN model achieve? Text: Introduction The collection and analysis of historical document images is a key component in the preservation of culture and heritage. Given its importance, a number of active research efforts exist across the world BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. In this paper, we focus on palm-leaf and early paper documents from the Indian sub-continent. In contrast with modern or recent era documents, such manuscripts are considerably more fragile, prone to degradation from elements of nature and tend to have a short shelf life BIBREF6, BIBREF7, BIBREF8. More worryingly, the domain experts who can decipher such content are small in number and dwindling. Therefore, it is essential to access the content within these documents before it is lost forever. Surprisingly, no large-scale annotated Indic manuscript image datasets exist for the benefit of researchers in the community. In this paper, we take a significant step to address this gap by creating such a dataset. Given the large diversity in language, script and non-textual regional elements in these manuscripts, spatial layout parsing is crucial in enabling downstream applications such as OCR, word-spotting, style-and-content based retrieval and clustering. For this reason, we first tackle the problem of creating a diverse, annotated spatial layout dataset. This has the immediate advantage of bypassing the hurdle of language and script familiarity for annotators since layout annotation does not require any special expertise unlike text annotation. In general, manuscripts from Indian subcontinent pose many unique challenges (Figure FIGREF1). To begin with, the documents exhibit a large multiplicity of languages. This is further magnified by variations in intra-language script systems. Along with text, manuscripts may contain pictures, tables, non-pictorial decorative elements in non-standard layouts. A unique aspect of Indic and South-East Asian manuscripts is the frequent presence of holes punched in the document for the purpose of binding BIBREF8, BIBREF9, BIBREF6. These holes cause unnatural gaps within text lines. The physical dimensions of the manuscripts are typically smaller compared to other historical documents, resulting in a dense content layout. Sometimes, multiple manuscript pages are present in a single image. Moreover, imaging-related factors such as varying scan quality play a role as well. Given all of these challenges, it is important to develop robust and scalable approaches for the problem of layout parsing. In addition, given the typical non-technical nature of domain experts who study manuscripts, it is also important to develop easy-to-use graphical interfaces for annotation, post-annotation visualization and analytics. We make the following contributions: We introduce Indiscapes, the first ever historical Indic manuscript dataset with detailed spatial layout annotations (Section SECREF3). We adapt a deep neural network architecture for instance-level spatial layout parsing of historical manuscript images (Section SECREF16). We also introduce a lightweight web-based GUI for annotation and dashboard-style analytics keeping in mind the non-technical domain experts and the unique layout-level challenges of Indic manuscripts (Section SECREF11). 
Related Work A number of research groups have invested significant efforts in the creation and maintenance of annotated, publicly available historical manuscript image datasets BIBREF10, BIBREF11, BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF12. Other collections contain character-level and word-level spatial annotations for South-East Asian palm-leaf manuscripts BIBREF9, BIBREF4, BIBREF13. In these latter set of works, annotations for lines are obtained by considering the polygonal region formed by union of character bounding boxes as a line. While studies on Indic palm-leaf and paper-based manuscripts exist, these are typically conducted on small and often, private collections of documents BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. No publicly available large-scale, annotated dataset of historical Indic manuscripts exists to the best of our knowledge. In contrast with existing collections, our proposed dataset contains a much larger diversity in terms of document type (palm-leaf and early paper), scripts and annotated layout elements (see Tables TABREF5,TABREF8). An additional level of complexity arises from the presence of multiple manuscript pages within a single image (see Fig. FIGREF1). A number of contributions can also be found for the task of historical document layout parsing BIBREF21, BIBREF22, BIBREF23, BIBREF24. Wei et al. BIBREF22 explore the effect of using a hybrid feature selection method while using autoencoders for semantic segmentation in five historical English and Medieval European manuscript datasets. Chen et al. BIBREF24 explore the use of Fully Convolutional Networks (FCN) for the same datasets. Barakat et al. BIBREF25 propose a FCN for segmenting closely spaced, arbitrarily oriented text lines from an Arabic manuscript dataset. The mentioned approaches, coupled with efforts to conduct competitions on various aspects of historical document layout analysis have aided progress in this area BIBREF26, BIBREF27, BIBREF28. A variety of layout parsing approaches, including those employing the modern paradigm of deep learning, have been proposed for Indic BIBREF17, BIBREF19, BIBREF29, BIBREF20 and South-East Asian BIBREF23, BIBREF30, BIBREF13, BIBREF31, BIBREF32 palm-leaf and paper manuscript images. However, existing approaches typically employ brittle hand-crafted features or demonstrate performance on datasets which are limited in terms of layout diversity. Similar to many recent works, we employ Fully Convolutional Networks in our approach. However, a crucial distinction lies in our formulation of layout parsing as an instance segmentation problem, rather than just a semantic segmentation problem. This avoids the problem of closely spaced layout regions (e.g. lines) being perceived as contiguous blobs. The ready availability of annotation and analysis tools has facilitated progress in creation and analysis of historical document manuscripts BIBREF33, BIBREF34, BIBREF35. The tool we propose in the paper contains many of the features found in existing annotation systems. However, some of these systems are primarily oriented towards single-user, offline annotation and do not enable a unified management of annotation process and monitoring of annotator performance. In contrast, our web-based system addresses these aspects and provides additional capabilities. 
Many of the additional features in our system are tailored for annotation and examining annotation analytics for documents with dense and irregular layout elements, especially those found in Indic manuscripts. In this respect, our annotation system is closer to the recent trend of collaborative, cloud/web-based annotation systems and services BIBREF36, BIBREF37, BIBREF38. Indiscapes: The Indic manuscript dataset The Indic manuscript document images in our dataset are obtained from two sources. The first source is the publicly available Indic manuscript collection from University of Pennsylvania's Rare Book and Manuscript Library BIBREF39, also referred to as Penn-in-Hand (PIH). From the $2{,}880$ Indic manuscript book-sets, we carefully curated 193 manuscript images for annotation. Our curated selection aims to maximize the diversity of the dataset in terms of various attributes such as the extent of document degradation, script language, presence of non-textual elements (e.g. pictures, tables) and number of lines. Some images contain multiple manuscript pages stacked vertically or horizontally (see bottom-left image in Figure FIGREF1). The second source for manuscript images in our dataset is Bhoomi, an assorted collection of 315 images sourced from multiple Oriental Research Institutes and libraries across India. As with the first collection, we chose a subset intended to maximize the overall diversity of the dataset. However, this latter set of images are characterized by a relatively inferior document quality, presence of multiple languages and from a layout point of view, predominantly contain long, closely and irregularly spaced text lines, binding holes and degradations (Figure FIGREF1). Though some document images contain multiple manuscripts, we do not attempt to split the image into multiple pages. While this poses a challenge for annotation and automatic image parsing, retaining such images in the dataset eliminates manual/semi-automatic intervention. As our results show, our approach can successfully handle such multi-page documents, thereby making it truly an end-to-end system. Overall, our dataset contains 508 annotated Indic manuscripts. Some salient aspects of the dataset can be viewed in Table TABREF5 and a pictorial illustration of layout regions can be viewed in Figure FIGREF13. Note that multiple regions can overlap, unlike existing historical document datasets which typically contain disjoint region annotations. For the rest of the section, we discuss the challenges associated with annotating Indic manuscripts (Section SECREF9) and our web-based annotation tool (Section SECREF11). Indiscapes: The Indic manuscript dataset ::: Annotation Challenges A variety of unique challenges exist in the context of annotating Indic manuscript layouts. The challenges arise from three major sources. Content: The documents are written in a large variety of Indic languages. Some languages even exhibit intra-language script variations. A large pool of annotators familiar with the languages and scripts present in the corpus is required to ensure proper annotation of lines and character components. Layout: Unlike some of the existing datasets, Indic manuscripts contain non-textual elements such as color pictures, tables and document decorations. These elements are frequently interspersed with text in non-standard layouts. In many cases, the manuscripts contain one or more physical holes, designed for a thread-like material to pass through and bind the leaves together as a book. 
Such holes vary in terms of spatial location, count and hole diameter. When the holes are present in the middle of the document, they cause a break in the contiguity of lines. In some documents, the line contiguity is broken by a `virtual' hole-like gap, possibly intended for creation of the punched hole at a future time. In many cases, the separation between lines is extremely small. The handwritten nature of these documents and the surface material result in extremely uneven lines, necessitating meticulous and slow annotation. If multiple manuscript pages are present, the stacking order could be horizontal or vertical. Overall, the sheer variety in layout elements poses a significant challenge, not only for annotation, but also for automated layout parsing. Degradations: Historical Indic manuscripts tend to be inherently fragile and prone to damage due to various sources – wood-and-leaf-boring insects, humidity seepage, improper storage and handling etc. While some degradations cause the edges of the document to become frayed, others manifest as irregularly shaped perforations in the document interior. It may be important to identify such degradations before attempting lexically-focused tasks such as OCR or word-spotting. Indiscapes: The Indic manuscript dataset ::: Annotation Tool Keeping the aforementioned challenges in mind, we introduce a new browser-based annotation tool (see Figure FIGREF10). The tool is designed to operate both stand-alone and as a web-service. The web-service mode enables features such as distributed parallel sessions by registered annotators, dashboard-based live session monitoring and a wide variety of annotation-related analytics. On the front-end, a freehand region option is provided alongside the usual rectangle and polygon to enable maximum annotation flexibility. The web-service version also features a `Correction-mode' which enables annotators to correct existing annotations from previous annotators. Additionally, the tool has been designed to enable lexical (text) annotations in future. Indic Manuscript Layout Parsing To succeed at layout parsing of manuscripts, we require a system which can accurately localize various types of regions (e.g. text lines, isolated character components, physical degradation, pictures, holes). More importantly, we require a system which can isolate individual instances of each region (e.g. multiple text lines) in the manuscript image. Also, in our case, the annotation regions for manuscripts are not disjoint and can overlap (e.g. The annotation region for a text line can overlap with the annotation region of a hole (see Figure FIGREF13)). Therefore, we require a system which can accommodate such overlaps. To meet all of these requirements, we model our problem as one of semantic instance-level segmentation and employ the Mask R-CNN BIBREF40 architecture which has proven to be very effective at the task of object-instance segmentation in photos. Next, we briefly describe the Mask R-CNN architecture and our modifications of the same. Subsequently, we provide details related to implementation (Section SECREF17), model training (Section SECREF18) and inference (Section SECREF19). Indic Manuscript Layout Parsing ::: Network Architecture The Mask-RCNN architecture contains three stages as described below (see Figure FIGREF12). Backbone: The first stage, referred to as the backbone, is used to extract features from the input image. 
It consists of a convolutional network combined with a feature-pyramid network BIBREF41, thereby enabling multi-scale features to be extracted. We use the first four blocks of ResNet-50 BIBREF42 as the convolutional network. Region Proposal Network (RPN): This is a convolutional network which scans the pyramid feature map generated by the backbone network and generates rectangular regions commonly called `object proposals' which are likely to contain objects of interest. For each level of the feature pyramid and for each spatial location at a given level, a set of level-specific bounding boxes called anchors are generated. The anchors typically span a range of aspect ratios (e.g. $1:2, 1:1, 2:1$) for flexibility in detection. For each anchor, the RPN network predicts (i) the probability of an object being present (`objectness score') (ii) offset coordinates of a bounding box relative to location of the anchor. The generated bounding boxes are first filtered according to the `objectness score'. From boxes which survive the filtering, those that overlap with the underlying object above a certain threshold are chosen. After applying non-maximal suppression to remove overlapping boxes with relatively smaller objectness scores, the final set of boxes which remain are termed `object proposals' or Regions-of-Interest (RoI). Multi-Task Branch Networks: The RoIs obtained from RPN are warped into fixed dimensions and overlaid on feature maps extracted from the backbone to obtain RoI-specific features. These features are fed to three parallel task sub-networks. The first sub-network maps these features to region labels (e.g. Hole,Character-Line-Segment) while the second sub-network maps the RoI features to bounding boxes. The third sub-network is fully convolutional and maps the features to the pixel mask of the underlying region. Note that the ability of the architecture to predict masks independently for each RoI plays a crucial role in obtaining instance segmentations. Another advantage is that it naturally addresses situations where annotations or predictions overlap. Indic Manuscript Layout Parsing ::: Implementation Details The dataset splits used for training, validation and test phases can be seen in Table TABREF6. All manuscript images are adaptively resized to ensure the width does not exceed 1024 pixels. The images are padded with zeros such that the input to the deep network has spatial dimensions of $1024 \times 1024$. The ground truth region masks are initially subjected to a similar resizing procedure. Subsequently, they are downsized to $28 \times 28$ in order to match output dimensions of the mask sub-network. Indic Manuscript Layout Parsing ::: Implementation Details ::: Training The network is initialized with weights obtained from a Mask R-CNN trained on the MS-COCO BIBREF43 dataset with a ResNet-50 backbone. We found that this results in faster convergence and stabler training compared to using weights from a Mask-RCNN trained on ImageNet BIBREF44 or training from scratch. Within the RPN network, we use custom-designed anchors of 5 different scales and with 3 different aspect ratios. Specifically, we use the following aspect ratios – 1:1,1:3,1:10 – keeping in mind the typical spatial extents of the various region classes. We also limit the number of RoIs (`object proposals') to 512. We use categorical cross entropy loss $\mathcal {L}_{RPN}$ for RPN classification network. 
Within the task branches, we use categorical cross entropy loss $\mathcal {L}_{r}$ for region classification branch, smooth L1 loss BIBREF45 ($\mathcal {L}_{bb}$) for final bounding box prediction and per-pixel binary cross entropy loss $\mathcal {L}_{mask}$ for mask prediction. The total loss is a convex combination of these losses, i.e. $\mathcal {L} = \lambda _{RPN} \mathcal {L}_{RPN} + \lambda _{r} \mathcal {L}_{r} + \lambda _{bb} \mathcal {L}_{bb} + \lambda _{mask} \mathcal {L}_{mask}$. The weighting factors ($\lambda $s) are set to 1. However, to ensure priority for our task of interest namely mask prediction, we set $\lambda _{mask}=2$. For optimization, we use Stochastic Gradient Descent (SGD) optimizer with a gradient norm clipping value of $0.5$. The batch size, momentum and weight decay are set to 1, $0.9$ and $10^{-3}$ respectively. Given the relatively smaller size of our manuscript dataset compared to the photo dataset (MS-COCO) used to originally train the base Mask R-CNN, we adopt a multi-stage training strategy. For the first stage (30 epochs), we train only the task branch sub-networks using a learning rate of $10^{-3}$ while freezing weights in the rest of the overall network. This ensures that the task branches are fine-tuned for the types of regions contained in manuscript images. For the second stage (20 epochs), we additionally train stage-4 and up of the backbone ResNet-50. This enables extraction of appropriate semantic features from manuscript images. The omission of the initial 3 stages in the backbone for training is due to the fact that they provide generic, re-usable low-level features. To ensure priority coverage of hard-to-localize regions, we use focal loss BIBREF46 for mask generation. For the final stage (15 epochs), we train the entire network using a learning rate of $10^{-4}$. Indic Manuscript Layout Parsing ::: Implementation Details ::: Inference During inference, the images are rescaled and processed using the procedure described at the beginning of the subsection. The number of RoIs retained after non-maximal suppression (NMS) from the RPN is set to 1000. From these, we choose the top 100 region detections with objectness score exceeding $0.5$ and feed the corresponding RoIs to the mask branch sub-network for mask generation. It is important to note that this strategy is different from the parallel generation of outputs and use of the task sub-networks during training. The generated masks are then binarized using an empirically chosen threshold of $0.4$ and rescaled to their original size using bilinear interpolation. On these generated masks, NMS with a threshold value of $0.5$ is applied to obtain the final set of predicted masks. Indic Manuscript Layout Parsing ::: Evaluation For quantitative evaluation, we compute Average Precision (AP) for a particular IoU threshold, a measure widely reported in instance segmentation literature BIBREF47, BIBREF43. We specifically report $AP_{50}$ and $AP_{75}$, corresponding to AP at IoU thresholds 50 and 75 respectively BIBREF40. In addition, we report an overall score by averaging AP at different IoU thresholds ranging from $0.5$ to $0.95$ in steps of $0.05$. The AP measure characterizes performance at document level. To characterize performance for each region type, we report two additional measures BIBREF24 – average class-wise IoU (cwIoU) and average class-wise per-pixel accuracy (cwAcc). Consider a fixed test document $k$. 
Suppose there are $r_i$ regions of class $i$ in document $k$, and let ${IoU}_r$ denote the IoU score for one such region $r$, i.e. $1 \leqslant r \leqslant r_i$. The per-class IoU score for class $i$ and document $k$ is computed as ${cwIoU}^k_i = \frac{\sum _r {IoU}_r}{r_i}$. Suppose there are $N_i$ documents containing at least a single region of class $i$ in the ground truth. The overall per-class IoU score for class $i$ is computed as ${cwIoU}_i = \frac{\sum _k {cwIoU}^k_i}{N_i}$. In a similar manner, we define the class-wise pixel accuracy ${pwAcc}^k_i$ at the document level and average it across all the documents containing class $i$, i.e. ${cwAcc}_i = \frac{\sum _k {pwAcc}^k_i}{N_i}$. Note that our approach for computing class-wise scores prevents documents with a relatively larger number of class instances from dominating the score and, in this sense, differs from existing approaches BIBREF24.

Results
We report quantitative results using the measures described in Section SECREF20. Table TABREF14 reports Average Precision and Table TABREF15 reports class-wise average IoUs and per-pixel accuracies. Qualitative results can be viewed in Figure FIGREF13. Despite the challenges posed by manuscripts, our model performs reasonably well across a variety of classes. As the qualitative results indicate, the model predicts accurate masks for almost all the regions. The results also indicate that our model handles overlap between Holes and Character line segments well. From ablative experiments, we found that our choice of focal loss was crucial in obtaining accurate mask boundaries. Unlike traditional semantic segmentation, which would have produced a single blob-like region for line segments, our instance-based approach isolates each text line separately. Additionally, the clear demarcation between Page-Boundary and background indicates that our system identifies semantically relevant regions for downstream analysis. As the result at the bottom of Figure FIGREF13 shows, our system can even handle images with multiple pages, thus removing the need for any pre-processing related to isolation of individual pages. From the quantitative results, we observe that Holes, Character line segments, Page boundary and Pictures are parsed the best, while Physical degradations are difficult to parse due to their relatively small footprint and inconsistent patterns. The results show that performance for Penn in Hand (PIH) documents is better than that for Bhoomi manuscripts. We conjecture that the presence of closely spaced and unevenly written lines in the latter is the cause. In our approach, two (or more) objects may share the same bounding box in terms of overlap, and it is not possible to determine which box to choose during mask prediction. Consequently, an underlying line's boundary may either end up not being detected or the predicted mask might be poorly localized. However, this is not a systemic problem since our model achieves good performance even for very dense Bhoomi document line layouts.
As a significant contribution, we have also adapted a deep-network-based instance segmentation framework, custom-modified for fully automatic layout parsing. Given the general nature of our framework, advances in instance segmentation approaches can be leveraged, thereby improving performance over time. Our proposed web-based annotator system, although designed for Indic manuscripts, is flexible and could be reused for similar manuscripts from the Asian subcontinent. We intend to expand the capabilities of our annotator system in many useful ways. For instance, the layout estimated by our deep network could be provided to annotators for correction, thus reducing annotation effort. Finally, we plan to make our dataset, instance segmentation system and annotator system publicly available. This would enable large-scale data collection and automated analysis efforts for Indic as well as other historical Asian manuscripts. The repositories related to the systems presented in this paper and the Indiscapes dataset can be accessed at https://ihdia.iiit.ac.in.

Acknowledgment
We would like to thank Dr. Sai Susarla for enabling access to the Bhoomi document collection. We also thank Poreddy Mourya Kumar Reddy and Gollapudi Sai Vamsi Krishna for their contributions related to the dashboard, and the various annotators for their labelling efforts.
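To make the class-wise IoU aggregation used in the evaluation concrete, here is a small sketch. It assumes that predicted regions have already been matched to ground-truth regions so that each region's IoU score is available; the input format and function name are assumptions of ours.

from collections import defaultdict

def class_wise_iou(per_document_ious):
    """
    per_document_ious: one dict per document, mapping a region class to the list of
    IoU scores of that class's regions in the document, e.g.
    [{"Character-Line-Segment": [0.8, 0.7], "Hole": [0.9]}, ...]
    """
    per_doc_scores = defaultdict(list)          # class -> cwIoU of each document containing it
    for doc in per_document_ious:
        for cls, ious in doc.items():
            if ious:                            # only documents that contain the class count
                per_doc_scores[cls].append(sum(ious) / len(ious))
    # average the per-document scores over the N_i documents containing class i
    return {cls: sum(vals) / len(vals) for cls, vals in per_doc_scores.items()}

# Toy usage:
docs = [
    {"Hole": [0.9, 0.8], "Character-Line-Segment": [0.7]},
    {"Character-Line-Segment": [0.6, 0.5, 0.9]},
]
print(class_wise_iou(docs))   # roughly {'Hole': 0.85, 'Character-Line-Segment': 0.68}

The class-wise pixel accuracy cwAcc follows the same document-level averaging, with the per-region IoU replaced by per-pixel accuracy.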
Combined per-pixel accuracy for character line segments is 74.79
26faad6f42b6d628f341c8d4ce5a08a591eea8c2
26faad6f42b6d628f341c8d4ce5a08a591eea8c2_0
Q: How many documents are in the Indiscapes dataset? Text: Introduction The collection and analysis of historical document images is a key component in the preservation of culture and heritage. Given its importance, a number of active research efforts exist across the world BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. In this paper, we focus on palm-leaf and early paper documents from the Indian sub-continent. In contrast with modern or recent era documents, such manuscripts are considerably more fragile, prone to degradation from elements of nature and tend to have a short shelf life BIBREF6, BIBREF7, BIBREF8. More worryingly, the domain experts who can decipher such content are small in number and dwindling. Therefore, it is essential to access the content within these documents before it is lost forever. Surprisingly, no large-scale annotated Indic manuscript image datasets exist for the benefit of researchers in the community. In this paper, we take a significant step to address this gap by creating such a dataset. Given the large diversity in language, script and non-textual regional elements in these manuscripts, spatial layout parsing is crucial in enabling downstream applications such as OCR, word-spotting, style-and-content based retrieval and clustering. For this reason, we first tackle the problem of creating a diverse, annotated spatial layout dataset. This has the immediate advantage of bypassing the hurdle of language and script familiarity for annotators since layout annotation does not require any special expertise unlike text annotation. In general, manuscripts from Indian subcontinent pose many unique challenges (Figure FIGREF1). To begin with, the documents exhibit a large multiplicity of languages. This is further magnified by variations in intra-language script systems. Along with text, manuscripts may contain pictures, tables, non-pictorial decorative elements in non-standard layouts. A unique aspect of Indic and South-East Asian manuscripts is the frequent presence of holes punched in the document for the purpose of binding BIBREF8, BIBREF9, BIBREF6. These holes cause unnatural gaps within text lines. The physical dimensions of the manuscripts are typically smaller compared to other historical documents, resulting in a dense content layout. Sometimes, multiple manuscript pages are present in a single image. Moreover, imaging-related factors such as varying scan quality play a role as well. Given all of these challenges, it is important to develop robust and scalable approaches for the problem of layout parsing. In addition, given the typical non-technical nature of domain experts who study manuscripts, it is also important to develop easy-to-use graphical interfaces for annotation, post-annotation visualization and analytics. We make the following contributions: We introduce Indiscapes, the first ever historical Indic manuscript dataset with detailed spatial layout annotations (Section SECREF3). We adapt a deep neural network architecture for instance-level spatial layout parsing of historical manuscript images (Section SECREF16). We also introduce a lightweight web-based GUI for annotation and dashboard-style analytics keeping in mind the non-technical domain experts and the unique layout-level challenges of Indic manuscripts (Section SECREF11). 
Related Work A number of research groups have invested significant efforts in the creation and maintenance of annotated, publicly available historical manuscript image datasets BIBREF10, BIBREF11, BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF12. Other collections contain character-level and word-level spatial annotations for South-East Asian palm-leaf manuscripts BIBREF9, BIBREF4, BIBREF13. In these latter set of works, annotations for lines are obtained by considering the polygonal region formed by union of character bounding boxes as a line. While studies on Indic palm-leaf and paper-based manuscripts exist, these are typically conducted on small and often, private collections of documents BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. No publicly available large-scale, annotated dataset of historical Indic manuscripts exists to the best of our knowledge. In contrast with existing collections, our proposed dataset contains a much larger diversity in terms of document type (palm-leaf and early paper), scripts and annotated layout elements (see Tables TABREF5,TABREF8). An additional level of complexity arises from the presence of multiple manuscript pages within a single image (see Fig. FIGREF1). A number of contributions can also be found for the task of historical document layout parsing BIBREF21, BIBREF22, BIBREF23, BIBREF24. Wei et al. BIBREF22 explore the effect of using a hybrid feature selection method while using autoencoders for semantic segmentation in five historical English and Medieval European manuscript datasets. Chen et al. BIBREF24 explore the use of Fully Convolutional Networks (FCN) for the same datasets. Barakat et al. BIBREF25 propose a FCN for segmenting closely spaced, arbitrarily oriented text lines from an Arabic manuscript dataset. The mentioned approaches, coupled with efforts to conduct competitions on various aspects of historical document layout analysis have aided progress in this area BIBREF26, BIBREF27, BIBREF28. A variety of layout parsing approaches, including those employing the modern paradigm of deep learning, have been proposed for Indic BIBREF17, BIBREF19, BIBREF29, BIBREF20 and South-East Asian BIBREF23, BIBREF30, BIBREF13, BIBREF31, BIBREF32 palm-leaf and paper manuscript images. However, existing approaches typically employ brittle hand-crafted features or demonstrate performance on datasets which are limited in terms of layout diversity. Similar to many recent works, we employ Fully Convolutional Networks in our approach. However, a crucial distinction lies in our formulation of layout parsing as an instance segmentation problem, rather than just a semantic segmentation problem. This avoids the problem of closely spaced layout regions (e.g. lines) being perceived as contiguous blobs. The ready availability of annotation and analysis tools has facilitated progress in creation and analysis of historical document manuscripts BIBREF33, BIBREF34, BIBREF35. The tool we propose in the paper contains many of the features found in existing annotation systems. However, some of these systems are primarily oriented towards single-user, offline annotation and do not enable a unified management of annotation process and monitoring of annotator performance. In contrast, our web-based system addresses these aspects and provides additional capabilities. 
Many of the additional features in our system are tailored for annotation and examining annotation analytics for documents with dense and irregular layout elements, especially those found in Indic manuscripts. In this respect, our annotation system is closer to the recent trend of collaborative, cloud/web-based annotation systems and services BIBREF36, BIBREF37, BIBREF38. Indiscapes: The Indic manuscript dataset The Indic manuscript document images in our dataset are obtained from two sources. The first source is the publicly available Indic manuscript collection from University of Pennsylvania's Rare Book and Manuscript Library BIBREF39, also referred to as Penn-in-Hand (PIH). From the $2{,}880$ Indic manuscript book-sets, we carefully curated 193 manuscript images for annotation. Our curated selection aims to maximize the diversity of the dataset in terms of various attributes such as the extent of document degradation, script language, presence of non-textual elements (e.g. pictures, tables) and number of lines. Some images contain multiple manuscript pages stacked vertically or horizontally (see bottom-left image in Figure FIGREF1). The second source for manuscript images in our dataset is Bhoomi, an assorted collection of 315 images sourced from multiple Oriental Research Institutes and libraries across India. As with the first collection, we chose a subset intended to maximize the overall diversity of the dataset. However, this latter set of images are characterized by a relatively inferior document quality, presence of multiple languages and from a layout point of view, predominantly contain long, closely and irregularly spaced text lines, binding holes and degradations (Figure FIGREF1). Though some document images contain multiple manuscripts, we do not attempt to split the image into multiple pages. While this poses a challenge for annotation and automatic image parsing, retaining such images in the dataset eliminates manual/semi-automatic intervention. As our results show, our approach can successfully handle such multi-page documents, thereby making it truly an end-to-end system. Overall, our dataset contains 508 annotated Indic manuscripts. Some salient aspects of the dataset can be viewed in Table TABREF5 and a pictorial illustration of layout regions can be viewed in Figure FIGREF13. Note that multiple regions can overlap, unlike existing historical document datasets which typically contain disjoint region annotations. For the rest of the section, we discuss the challenges associated with annotating Indic manuscripts (Section SECREF9) and our web-based annotation tool (Section SECREF11). Indiscapes: The Indic manuscript dataset ::: Annotation Challenges A variety of unique challenges exist in the context of annotating Indic manuscript layouts. The challenges arise from three major sources. Content: The documents are written in a large variety of Indic languages. Some languages even exhibit intra-language script variations. A large pool of annotators familiar with the languages and scripts present in the corpus is required to ensure proper annotation of lines and character components. Layout: Unlike some of the existing datasets, Indic manuscripts contain non-textual elements such as color pictures, tables and document decorations. These elements are frequently interspersed with text in non-standard layouts. In many cases, the manuscripts contain one or more physical holes, designed for a thread-like material to pass through and bind the leaves together as a book. 
Such holes vary in terms of spatial location, count and hole diameter. When the holes are present in the middle of the document, they cause a break in the contiguity of lines. In some documents, the line contiguity is broken by a `virtual' hole-like gap, possibly intended for creation of the punched hole at a future time. In many cases, the separation between lines is extremely small. The handwritten nature of these documents and the surface material result in extremely uneven lines, necessitating meticulous and slow annotation. If multiple manuscript pages are present, the stacking order could be horizontal or vertical. Overall, the sheer variety in layout elements poses a significant challenge, not only for annotation, but also for automated layout parsing. Degradations: Historical Indic manuscripts tend to be inherently fragile and prone to damage due to various sources – wood-and-leaf-boring insects, humidity seepage, improper storage and handling etc. While some degradations cause the edges of the document to become frayed, others manifest as irregularly shaped perforations in the document interior. It may be important to identify such degradations before attempting lexically-focused tasks such as OCR or word-spotting. Indiscapes: The Indic manuscript dataset ::: Annotation Tool Keeping the aforementioned challenges in mind, we introduce a new browser-based annotation tool (see Figure FIGREF10). The tool is designed to operate both stand-alone and as a web-service. The web-service mode enables features such as distributed parallel sessions by registered annotators, dashboard-based live session monitoring and a wide variety of annotation-related analytics. On the front-end, a freehand region option is provided alongside the usual rectangle and polygon to enable maximum annotation flexibility. The web-service version also features a `Correction-mode' which enables annotators to correct existing annotations from previous annotators. Additionally, the tool has been designed to enable lexical (text) annotations in future. Indic Manuscript Layout Parsing To succeed at layout parsing of manuscripts, we require a system which can accurately localize various types of regions (e.g. text lines, isolated character components, physical degradation, pictures, holes). More importantly, we require a system which can isolate individual instances of each region (e.g. multiple text lines) in the manuscript image. Also, in our case, the annotation regions for manuscripts are not disjoint and can overlap (e.g. The annotation region for a text line can overlap with the annotation region of a hole (see Figure FIGREF13)). Therefore, we require a system which can accommodate such overlaps. To meet all of these requirements, we model our problem as one of semantic instance-level segmentation and employ the Mask R-CNN BIBREF40 architecture which has proven to be very effective at the task of object-instance segmentation in photos. Next, we briefly describe the Mask R-CNN architecture and our modifications of the same. Subsequently, we provide details related to implementation (Section SECREF17), model training (Section SECREF18) and inference (Section SECREF19). Indic Manuscript Layout Parsing ::: Network Architecture The Mask-RCNN architecture contains three stages as described below (see Figure FIGREF12). Backbone: The first stage, referred to as the backbone, is used to extract features from the input image. 
It consists of a convolutional network combined with a feature-pyramid network BIBREF41, thereby enabling multi-scale features to be extracted. We use the first four blocks of ResNet-50 BIBREF42 as the convolutional network. Region Proposal Network (RPN): This is a convolutional network which scans the pyramid feature map generated by the backbone network and generates rectangular regions commonly called `object proposals' which are likely to contain objects of interest. For each level of the feature pyramid and for each spatial location at a given level, a set of level-specific bounding boxes called anchors are generated. The anchors typically span a range of aspect ratios (e.g. $1:2, 1:1, 2:1$) for flexibility in detection. For each anchor, the RPN network predicts (i) the probability of an object being present (`objectness score') (ii) offset coordinates of a bounding box relative to location of the anchor. The generated bounding boxes are first filtered according to the `objectness score'. From boxes which survive the filtering, those that overlap with the underlying object above a certain threshold are chosen. After applying non-maximal suppression to remove overlapping boxes with relatively smaller objectness scores, the final set of boxes which remain are termed `object proposals' or Regions-of-Interest (RoI). Multi-Task Branch Networks: The RoIs obtained from RPN are warped into fixed dimensions and overlaid on feature maps extracted from the backbone to obtain RoI-specific features. These features are fed to three parallel task sub-networks. The first sub-network maps these features to region labels (e.g. Hole,Character-Line-Segment) while the second sub-network maps the RoI features to bounding boxes. The third sub-network is fully convolutional and maps the features to the pixel mask of the underlying region. Note that the ability of the architecture to predict masks independently for each RoI plays a crucial role in obtaining instance segmentations. Another advantage is that it naturally addresses situations where annotations or predictions overlap. Indic Manuscript Layout Parsing ::: Implementation Details The dataset splits used for training, validation and test phases can be seen in Table TABREF6. All manuscript images are adaptively resized to ensure the width does not exceed 1024 pixels. The images are padded with zeros such that the input to the deep network has spatial dimensions of $1024 \times 1024$. The ground truth region masks are initially subjected to a similar resizing procedure. Subsequently, they are downsized to $28 \times 28$ in order to match output dimensions of the mask sub-network. Indic Manuscript Layout Parsing ::: Implementation Details ::: Training The network is initialized with weights obtained from a Mask R-CNN trained on the MS-COCO BIBREF43 dataset with a ResNet-50 backbone. We found that this results in faster convergence and stabler training compared to using weights from a Mask-RCNN trained on ImageNet BIBREF44 or training from scratch. Within the RPN network, we use custom-designed anchors of 5 different scales and with 3 different aspect ratios. Specifically, we use the following aspect ratios – 1:1,1:3,1:10 – keeping in mind the typical spatial extents of the various region classes. We also limit the number of RoIs (`object proposals') to 512. We use categorical cross entropy loss $\mathcal {L}_{RPN}$ for RPN classification network. 
Within the task branches, we use categorical cross entropy loss $\mathcal {L}_{r}$ for region classification branch, smooth L1 loss BIBREF45 ($\mathcal {L}_{bb}$) for final bounding box prediction and per-pixel binary cross entropy loss $\mathcal {L}_{mask}$ for mask prediction. The total loss is a convex combination of these losses, i.e. $\mathcal {L} = \lambda _{RPN} \mathcal {L}_{RPN} + \lambda _{r} \mathcal {L}_{r} + \lambda _{bb} \mathcal {L}_{bb} + \lambda _{mask} \mathcal {L}_{mask}$. The weighting factors ($\lambda $s) are set to 1. However, to ensure priority for our task of interest namely mask prediction, we set $\lambda _{mask}=2$. For optimization, we use Stochastic Gradient Descent (SGD) optimizer with a gradient norm clipping value of $0.5$. The batch size, momentum and weight decay are set to 1, $0.9$ and $10^{-3}$ respectively. Given the relatively smaller size of our manuscript dataset compared to the photo dataset (MS-COCO) used to originally train the base Mask R-CNN, we adopt a multi-stage training strategy. For the first stage (30 epochs), we train only the task branch sub-networks using a learning rate of $10^{-3}$ while freezing weights in the rest of the overall network. This ensures that the task branches are fine-tuned for the types of regions contained in manuscript images. For the second stage (20 epochs), we additionally train stage-4 and up of the backbone ResNet-50. This enables extraction of appropriate semantic features from manuscript images. The omission of the initial 3 stages in the backbone for training is due to the fact that they provide generic, re-usable low-level features. To ensure priority coverage of hard-to-localize regions, we use focal loss BIBREF46 for mask generation. For the final stage (15 epochs), we train the entire network using a learning rate of $10^{-4}$. Indic Manuscript Layout Parsing ::: Implementation Details ::: Inference During inference, the images are rescaled and processed using the procedure described at the beginning of the subsection. The number of RoIs retained after non-maximal suppression (NMS) from the RPN is set to 1000. From these, we choose the top 100 region detections with objectness score exceeding $0.5$ and feed the corresponding RoIs to the mask branch sub-network for mask generation. It is important to note that this strategy is different from the parallel generation of outputs and use of the task sub-networks during training. The generated masks are then binarized using an empirically chosen threshold of $0.4$ and rescaled to their original size using bilinear interpolation. On these generated masks, NMS with a threshold value of $0.5$ is applied to obtain the final set of predicted masks. Indic Manuscript Layout Parsing ::: Evaluation For quantitative evaluation, we compute Average Precision (AP) for a particular IoU threshold, a measure widely reported in instance segmentation literature BIBREF47, BIBREF43. We specifically report $AP_{50}$ and $AP_{75}$, corresponding to AP at IoU thresholds 50 and 75 respectively BIBREF40. In addition, we report an overall score by averaging AP at different IoU thresholds ranging from $0.5$ to $0.95$ in steps of $0.05$. The AP measure characterizes performance at document level. To characterize performance for each region type, we report two additional measures BIBREF24 – average class-wise IoU (cwIoU) and average class-wise per-pixel accuracy (cwAcc). Consider a fixed test document $k$. 
Suppose there are $r_i$ regions of class $i$ and let ${IoU}_r$ denote the IoU score for one such region $r$, i.e. $1 \leqslant r \leqslant r_i$. The per-class IoU score for class $i$ and document $k$ is computed as ${cwIoU}^k_i = \frac{\sum _r {IoU}_r}{r_i}$. Suppose there are $N_i$ documents containing at least a single region of class $i$ in ground-truth. The overall per-class IoU score for class $i$ is computed as ${cwIoU}_i = \frac{\sum _k {cwIoU}^k_i}{N_i}$. In a similar manner, we define class-wise pixel accuracy ${pwAcc}^k_i$ at the document level and average it across all the documents containing class $i$, i.e. ${cwAcc}_i = \frac{\sum _k {pwAcc}^k_i}{N_i}$. Note that our approach for computing class-wise scores prevents documents with a relatively larger number of class instances from dominating the score and, in this sense, differs from existing approaches BIBREF24. Results We report quantitative results using the measures described in Section SECREF20. Table TABREF14 reports Average Precision and Table TABREF15 reports class-wise average IoUs and per-pixel accuracies. Qualitative results can be viewed in Figure FIGREF13. Despite the challenges posed by manuscripts, our model performs reasonably well across a variety of classes. As the qualitative results indicate, the model predicts accurate masks for almost all the regions. The results also indicate that our model handles overlap between Holes and Character line segments well. From ablative experiments, we found that our choice of focal loss was crucial in obtaining accurate mask boundaries. Unlike traditional semantic segmentation, which would have produced a single blob-like region for line segments, our instance-based approach isolates each text line separately. Additionally, the clear demarcation between Page-Boundary and background indicates that our system identifies semantically relevant regions for downstream analysis. As the result at the bottom of Figure FIGREF13 shows, our system can even handle images with multiple pages, thus removing the need for any pre-processing related to isolation of individual pages. From the quantitative results, we observe that Holes, Character line segments, Page boundary and Pictures are parsed the best, while Physical degradations are difficult to parse due to their relatively small footprint and inconsistent patterns. The results show that performance for Penn in Hand (PIH) documents is better than for Bhoomi manuscripts. We conjecture that the presence of closely spaced and unevenly written lines in the latter is the cause. In our approach, two (or more) objects may effectively share the same bounding box due to overlap, and it is then not possible to determine which box to choose during mask prediction. Consequently, an underlying line's boundary may either end up not being detected or the predicted mask might be poorly localized. However, this is not a systemic problem, since our model achieves good performance even for very dense Bhoomi document line layouts. Conclusion In this paper, we propose Indiscapes, the first dataset with layout annotations for historical Indic manuscripts. We believe that the availability of layout annotations will play a crucial role in reducing the overall complexity of OCR and other tasks such as word-spotting and style-and-content based retrieval. In the long term, we intend to expand the dataset, not only numerically but also in terms of layout, script and language diversity. 
As a significant contribution, we have also adapted a deep-network-based instance segmentation framework, custom-modified for fully automatic layout parsing. Given the general nature of our framework, advances in instance segmentation approaches can be leveraged, thereby improving performance over time. Our proposed web-based annotator system, although designed for Indic manuscripts, is flexible, and could be reused for similar manuscripts from the Asian subcontinent. We intend to expand the capabilities of our annotator system in many useful ways. For instance, the layout estimated by our deep network could be provided to annotators for correction, thus reducing annotation efforts. Finally, we plan to make our dataset, instance segmentation system and annotator system publicly available. This would enable large-scale data collection and automated analysis efforts for Indic as well as other historical Asian manuscripts. The repositories related to the systems presented in this paper and the Indiscapes dataset can be accessed at https://ihdia.iiit.ac.in. Acknowledgment We would like to thank Dr. Sai Susarla for enabling access to the Bhoomi document collection. We also thank Poreddy Mourya Kumar Reddy and Gollapudi Sai Vamsi Krishna for their contributions related to the dashboard, and the various annotators for their labelling efforts.
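As a small illustration of the loss weighting described in the Training subsection above, the sketch below combines the four loss terms with the stated weights (all set to 1, except the mask weight of 2). The individual loss values are assumed to be scalars produced elsewhere in the training loop; the function name and signature are illustrative and do not come from the authors' code.

def total_loss(loss_rpn, loss_cls, loss_bbox, loss_mask,
               lambda_rpn=1.0, lambda_cls=1.0, lambda_bbox=1.0, lambda_mask=2.0):
    """Weighted combination of the Mask R-CNN losses described above.

    Defaults mirror the paper's choice: unit weights everywhere except
    lambda_mask = 2, which prioritises mask prediction.
    """
    return (lambda_rpn * loss_rpn
            + lambda_cls * loss_cls
            + lambda_bbox * loss_bbox
            + lambda_mask * loss_mask)

With this weighting, a batch with losses (0.6, 0.4, 0.3, 0.5) for the RPN, classification, bounding-box and mask terms would contribute 0.6 + 0.4 + 0.3 + 2 x 0.5 = 2.3 to the total objective.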
508
20be7a776dfda0d3c9dc10270699061cb9bc8297
20be7a776dfda0d3c9dc10270699061cb9bc8297_0
Q: What language(s) are the manuscripts written in? Text: Introduction The collection and analysis of historical document images is a key component in the preservation of culture and heritage. Given its importance, a number of active research efforts exist across the world BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. In this paper, we focus on palm-leaf and early paper documents from the Indian sub-continent. In contrast with modern or recent era documents, such manuscripts are considerably more fragile, prone to degradation from elements of nature and tend to have a short shelf life BIBREF6, BIBREF7, BIBREF8. More worryingly, the domain experts who can decipher such content are small in number and dwindling. Therefore, it is essential to access the content within these documents before it is lost forever. Surprisingly, no large-scale annotated Indic manuscript image datasets exist for the benefit of researchers in the community. In this paper, we take a significant step to address this gap by creating such a dataset. Given the large diversity in language, script and non-textual regional elements in these manuscripts, spatial layout parsing is crucial in enabling downstream applications such as OCR, word-spotting, style-and-content based retrieval and clustering. For this reason, we first tackle the problem of creating a diverse, annotated spatial layout dataset. This has the immediate advantage of bypassing the hurdle of language and script familiarity for annotators since layout annotation does not require any special expertise unlike text annotation. In general, manuscripts from Indian subcontinent pose many unique challenges (Figure FIGREF1). To begin with, the documents exhibit a large multiplicity of languages. This is further magnified by variations in intra-language script systems. Along with text, manuscripts may contain pictures, tables, non-pictorial decorative elements in non-standard layouts. A unique aspect of Indic and South-East Asian manuscripts is the frequent presence of holes punched in the document for the purpose of binding BIBREF8, BIBREF9, BIBREF6. These holes cause unnatural gaps within text lines. The physical dimensions of the manuscripts are typically smaller compared to other historical documents, resulting in a dense content layout. Sometimes, multiple manuscript pages are present in a single image. Moreover, imaging-related factors such as varying scan quality play a role as well. Given all of these challenges, it is important to develop robust and scalable approaches for the problem of layout parsing. In addition, given the typical non-technical nature of domain experts who study manuscripts, it is also important to develop easy-to-use graphical interfaces for annotation, post-annotation visualization and analytics. We make the following contributions: We introduce Indiscapes, the first ever historical Indic manuscript dataset with detailed spatial layout annotations (Section SECREF3). We adapt a deep neural network architecture for instance-level spatial layout parsing of historical manuscript images (Section SECREF16). We also introduce a lightweight web-based GUI for annotation and dashboard-style analytics keeping in mind the non-technical domain experts and the unique layout-level challenges of Indic manuscripts (Section SECREF11). 
Related Work A number of research groups have invested significant efforts in the creation and maintenance of annotated, publicly available historical manuscript image datasets BIBREF10, BIBREF11, BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF12. Other collections contain character-level and word-level spatial annotations for South-East Asian palm-leaf manuscripts BIBREF9, BIBREF4, BIBREF13. In these latter set of works, annotations for lines are obtained by considering the polygonal region formed by union of character bounding boxes as a line. While studies on Indic palm-leaf and paper-based manuscripts exist, these are typically conducted on small and often, private collections of documents BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20. No publicly available large-scale, annotated dataset of historical Indic manuscripts exists to the best of our knowledge. In contrast with existing collections, our proposed dataset contains a much larger diversity in terms of document type (palm-leaf and early paper), scripts and annotated layout elements (see Tables TABREF5,TABREF8). An additional level of complexity arises from the presence of multiple manuscript pages within a single image (see Fig. FIGREF1). A number of contributions can also be found for the task of historical document layout parsing BIBREF21, BIBREF22, BIBREF23, BIBREF24. Wei et al. BIBREF22 explore the effect of using a hybrid feature selection method while using autoencoders for semantic segmentation in five historical English and Medieval European manuscript datasets. Chen et al. BIBREF24 explore the use of Fully Convolutional Networks (FCN) for the same datasets. Barakat et al. BIBREF25 propose a FCN for segmenting closely spaced, arbitrarily oriented text lines from an Arabic manuscript dataset. The mentioned approaches, coupled with efforts to conduct competitions on various aspects of historical document layout analysis have aided progress in this area BIBREF26, BIBREF27, BIBREF28. A variety of layout parsing approaches, including those employing the modern paradigm of deep learning, have been proposed for Indic BIBREF17, BIBREF19, BIBREF29, BIBREF20 and South-East Asian BIBREF23, BIBREF30, BIBREF13, BIBREF31, BIBREF32 palm-leaf and paper manuscript images. However, existing approaches typically employ brittle hand-crafted features or demonstrate performance on datasets which are limited in terms of layout diversity. Similar to many recent works, we employ Fully Convolutional Networks in our approach. However, a crucial distinction lies in our formulation of layout parsing as an instance segmentation problem, rather than just a semantic segmentation problem. This avoids the problem of closely spaced layout regions (e.g. lines) being perceived as contiguous blobs. The ready availability of annotation and analysis tools has facilitated progress in creation and analysis of historical document manuscripts BIBREF33, BIBREF34, BIBREF35. The tool we propose in the paper contains many of the features found in existing annotation systems. However, some of these systems are primarily oriented towards single-user, offline annotation and do not enable a unified management of annotation process and monitoring of annotator performance. In contrast, our web-based system addresses these aspects and provides additional capabilities. 
Many of the additional features in our system are tailored for annotation and examining annotation analytics for documents with dense and irregular layout elements, especially those found in Indic manuscripts. In this respect, our annotation system is closer to the recent trend of collaborative, cloud/web-based annotation systems and services BIBREF36, BIBREF37, BIBREF38. Indiscapes: The Indic manuscript dataset The Indic manuscript document images in our dataset are obtained from two sources. The first source is the publicly available Indic manuscript collection from University of Pennsylvania's Rare Book and Manuscript Library BIBREF39, also referred to as Penn-in-Hand (PIH). From the $2{,}880$ Indic manuscript book-sets, we carefully curated 193 manuscript images for annotation. Our curated selection aims to maximize the diversity of the dataset in terms of various attributes such as the extent of document degradation, script language, presence of non-textual elements (e.g. pictures, tables) and number of lines. Some images contain multiple manuscript pages stacked vertically or horizontally (see bottom-left image in Figure FIGREF1). The second source for manuscript images in our dataset is Bhoomi, an assorted collection of 315 images sourced from multiple Oriental Research Institutes and libraries across India. As with the first collection, we chose a subset intended to maximize the overall diversity of the dataset. However, this latter set of images are characterized by a relatively inferior document quality, presence of multiple languages and from a layout point of view, predominantly contain long, closely and irregularly spaced text lines, binding holes and degradations (Figure FIGREF1). Though some document images contain multiple manuscripts, we do not attempt to split the image into multiple pages. While this poses a challenge for annotation and automatic image parsing, retaining such images in the dataset eliminates manual/semi-automatic intervention. As our results show, our approach can successfully handle such multi-page documents, thereby making it truly an end-to-end system. Overall, our dataset contains 508 annotated Indic manuscripts. Some salient aspects of the dataset can be viewed in Table TABREF5 and a pictorial illustration of layout regions can be viewed in Figure FIGREF13. Note that multiple regions can overlap, unlike existing historical document datasets which typically contain disjoint region annotations. For the rest of the section, we discuss the challenges associated with annotating Indic manuscripts (Section SECREF9) and our web-based annotation tool (Section SECREF11). Indiscapes: The Indic manuscript dataset ::: Annotation Challenges A variety of unique challenges exist in the context of annotating Indic manuscript layouts. The challenges arise from three major sources. Content: The documents are written in a large variety of Indic languages. Some languages even exhibit intra-language script variations. A large pool of annotators familiar with the languages and scripts present in the corpus is required to ensure proper annotation of lines and character components. Layout: Unlike some of the existing datasets, Indic manuscripts contain non-textual elements such as color pictures, tables and document decorations. These elements are frequently interspersed with text in non-standard layouts. In many cases, the manuscripts contain one or more physical holes, designed for a thread-like material to pass through and bind the leaves together as a book. 
Such holes vary in terms of spatial location, count and hole diameter. When the holes are present in the middle of the document, they cause a break in the contiguity of lines. In some documents, the line contiguity is broken by a `virtual' hole-like gap, possibly intended for creation of the punched hole at a future time. In many cases, the separation between lines is extremely small. The handwritten nature of these documents and the surface material result in extremely uneven lines, necessitating meticulous and slow annotation. If multiple manuscript pages are present, the stacking order could be horizontal or vertical. Overall, the sheer variety in layout elements poses a significant challenge, not only for annotation, but also for automated layout parsing. Degradations: Historical Indic manuscripts tend to be inherently fragile and prone to damage due to various sources – wood-and-leaf-boring insects, humidity seepage, improper storage and handling etc. While some degradations cause the edges of the document to become frayed, others manifest as irregularly shaped perforations in the document interior. It may be important to identify such degradations before attempting lexically-focused tasks such as OCR or word-spotting. Indiscapes: The Indic manuscript dataset ::: Annotation Tool Keeping the aforementioned challenges in mind, we introduce a new browser-based annotation tool (see Figure FIGREF10). The tool is designed to operate both stand-alone and as a web-service. The web-service mode enables features such as distributed parallel sessions by registered annotators, dashboard-based live session monitoring and a wide variety of annotation-related analytics. On the front-end, a freehand region option is provided alongside the usual rectangle and polygon to enable maximum annotation flexibility. The web-service version also features a `Correction-mode' which enables annotators to correct existing annotations from previous annotators. Additionally, the tool has been designed to enable lexical (text) annotations in future. Indic Manuscript Layout Parsing To succeed at layout parsing of manuscripts, we require a system which can accurately localize various types of regions (e.g. text lines, isolated character components, physical degradation, pictures, holes). More importantly, we require a system which can isolate individual instances of each region (e.g. multiple text lines) in the manuscript image. Also, in our case, the annotation regions for manuscripts are not disjoint and can overlap (e.g. The annotation region for a text line can overlap with the annotation region of a hole (see Figure FIGREF13)). Therefore, we require a system which can accommodate such overlaps. To meet all of these requirements, we model our problem as one of semantic instance-level segmentation and employ the Mask R-CNN BIBREF40 architecture which has proven to be very effective at the task of object-instance segmentation in photos. Next, we briefly describe the Mask R-CNN architecture and our modifications of the same. Subsequently, we provide details related to implementation (Section SECREF17), model training (Section SECREF18) and inference (Section SECREF19). Indic Manuscript Layout Parsing ::: Network Architecture The Mask-RCNN architecture contains three stages as described below (see Figure FIGREF12). Backbone: The first stage, referred to as the backbone, is used to extract features from the input image. 
It consists of a convolutional network combined with a feature-pyramid network BIBREF41, thereby enabling multi-scale features to be extracted. We use the first four blocks of ResNet-50 BIBREF42 as the convolutional network. Region Proposal Network (RPN): This is a convolutional network which scans the pyramid feature map generated by the backbone network and generates rectangular regions commonly called `object proposals' which are likely to contain objects of interest. For each level of the feature pyramid and for each spatial location at a given level, a set of level-specific bounding boxes called anchors are generated. The anchors typically span a range of aspect ratios (e.g. $1:2, 1:1, 2:1$) for flexibility in detection. For each anchor, the RPN network predicts (i) the probability of an object being present (`objectness score') (ii) offset coordinates of a bounding box relative to location of the anchor. The generated bounding boxes are first filtered according to the `objectness score'. From boxes which survive the filtering, those that overlap with the underlying object above a certain threshold are chosen. After applying non-maximal suppression to remove overlapping boxes with relatively smaller objectness scores, the final set of boxes which remain are termed `object proposals' or Regions-of-Interest (RoI). Multi-Task Branch Networks: The RoIs obtained from RPN are warped into fixed dimensions and overlaid on feature maps extracted from the backbone to obtain RoI-specific features. These features are fed to three parallel task sub-networks. The first sub-network maps these features to region labels (e.g. Hole,Character-Line-Segment) while the second sub-network maps the RoI features to bounding boxes. The third sub-network is fully convolutional and maps the features to the pixel mask of the underlying region. Note that the ability of the architecture to predict masks independently for each RoI plays a crucial role in obtaining instance segmentations. Another advantage is that it naturally addresses situations where annotations or predictions overlap. Indic Manuscript Layout Parsing ::: Implementation Details The dataset splits used for training, validation and test phases can be seen in Table TABREF6. All manuscript images are adaptively resized to ensure the width does not exceed 1024 pixels. The images are padded with zeros such that the input to the deep network has spatial dimensions of $1024 \times 1024$. The ground truth region masks are initially subjected to a similar resizing procedure. Subsequently, they are downsized to $28 \times 28$ in order to match output dimensions of the mask sub-network. Indic Manuscript Layout Parsing ::: Implementation Details ::: Training The network is initialized with weights obtained from a Mask R-CNN trained on the MS-COCO BIBREF43 dataset with a ResNet-50 backbone. We found that this results in faster convergence and stabler training compared to using weights from a Mask-RCNN trained on ImageNet BIBREF44 or training from scratch. Within the RPN network, we use custom-designed anchors of 5 different scales and with 3 different aspect ratios. Specifically, we use the following aspect ratios – 1:1,1:3,1:10 – keeping in mind the typical spatial extents of the various region classes. We also limit the number of RoIs (`object proposals') to 512. We use categorical cross entropy loss $\mathcal {L}_{RPN}$ for RPN classification network. 
Within the task branches, we use categorical cross entropy loss $\mathcal {L}_{r}$ for region classification branch, smooth L1 loss BIBREF45 ($\mathcal {L}_{bb}$) for final bounding box prediction and per-pixel binary cross entropy loss $\mathcal {L}_{mask}$ for mask prediction. The total loss is a convex combination of these losses, i.e. $\mathcal {L} = \lambda _{RPN} \mathcal {L}_{RPN} + \lambda _{r} \mathcal {L}_{r} + \lambda _{bb} \mathcal {L}_{bb} + \lambda _{mask} \mathcal {L}_{mask}$. The weighting factors ($\lambda $s) are set to 1. However, to ensure priority for our task of interest namely mask prediction, we set $\lambda _{mask}=2$. For optimization, we use Stochastic Gradient Descent (SGD) optimizer with a gradient norm clipping value of $0.5$. The batch size, momentum and weight decay are set to 1, $0.9$ and $10^{-3}$ respectively. Given the relatively smaller size of our manuscript dataset compared to the photo dataset (MS-COCO) used to originally train the base Mask R-CNN, we adopt a multi-stage training strategy. For the first stage (30 epochs), we train only the task branch sub-networks using a learning rate of $10^{-3}$ while freezing weights in the rest of the overall network. This ensures that the task branches are fine-tuned for the types of regions contained in manuscript images. For the second stage (20 epochs), we additionally train stage-4 and up of the backbone ResNet-50. This enables extraction of appropriate semantic features from manuscript images. The omission of the initial 3 stages in the backbone for training is due to the fact that they provide generic, re-usable low-level features. To ensure priority coverage of hard-to-localize regions, we use focal loss BIBREF46 for mask generation. For the final stage (15 epochs), we train the entire network using a learning rate of $10^{-4}$. Indic Manuscript Layout Parsing ::: Implementation Details ::: Inference During inference, the images are rescaled and processed using the procedure described at the beginning of the subsection. The number of RoIs retained after non-maximal suppression (NMS) from the RPN is set to 1000. From these, we choose the top 100 region detections with objectness score exceeding $0.5$ and feed the corresponding RoIs to the mask branch sub-network for mask generation. It is important to note that this strategy is different from the parallel generation of outputs and use of the task sub-networks during training. The generated masks are then binarized using an empirically chosen threshold of $0.4$ and rescaled to their original size using bilinear interpolation. On these generated masks, NMS with a threshold value of $0.5$ is applied to obtain the final set of predicted masks. Indic Manuscript Layout Parsing ::: Evaluation For quantitative evaluation, we compute Average Precision (AP) for a particular IoU threshold, a measure widely reported in instance segmentation literature BIBREF47, BIBREF43. We specifically report $AP_{50}$ and $AP_{75}$, corresponding to AP at IoU thresholds 50 and 75 respectively BIBREF40. In addition, we report an overall score by averaging AP at different IoU thresholds ranging from $0.5$ to $0.95$ in steps of $0.05$. The AP measure characterizes performance at document level. To characterize performance for each region type, we report two additional measures BIBREF24 – average class-wise IoU (cwIoU) and average class-wise per-pixel accuracy (cwAcc). Consider a fixed test document $k$. 
Suppose there are $r_i$ regions of class $i$ and let ${IoU}_r$ denote the IoU score for one such region $r$, i.e. $1 \leqslant r \leqslant r_i$. The per-class IoU score for class $i$ and document $k$ is computed as ${cwIoU}^k_i = \frac{\sum _r {IoU}_r}{r_i}$. Suppose there are $N_i$ documents containing at least a single region of class $i$ in ground-truth. The overall per-class IoU score for class $i$ is computed as ${cwIoU}_i = \frac{\sum _k {cwIoU}^k_i}{N_i}$. In a similar manner, we define class-wise pixel accuracy ${pwAcc}^k_i$ at the document level and average it across all the documents containing class $i$, i.e. ${cwAcc}_i = \frac{\sum _k {pwAcc}^k_i}{N_i}$. Note that our approach for computing class-wise scores prevents documents with a relatively larger number of class instances from dominating the score and, in this sense, differs from existing approaches BIBREF24. Results We report quantitative results using the measures described in Section SECREF20. Table TABREF14 reports Average Precision and Table TABREF15 reports class-wise average IoUs and per-pixel accuracies. Qualitative results can be viewed in Figure FIGREF13. Despite the challenges posed by manuscripts, our model performs reasonably well across a variety of classes. As the qualitative results indicate, the model predicts accurate masks for almost all the regions. The results also indicate that our model handles overlap between Holes and Character line segments well. From ablative experiments, we found that our choice of focal loss was crucial in obtaining accurate mask boundaries. Unlike traditional semantic segmentation, which would have produced a single blob-like region for line segments, our instance-based approach isolates each text line separately. Additionally, the clear demarcation between Page-Boundary and background indicates that our system identifies semantically relevant regions for downstream analysis. As the result at the bottom of Figure FIGREF13 shows, our system can even handle images with multiple pages, thus removing the need for any pre-processing related to isolation of individual pages. From the quantitative results, we observe that Holes, Character line segments, Page boundary and Pictures are parsed the best, while Physical degradations are difficult to parse due to their relatively small footprint and inconsistent patterns. The results show that performance for Penn in Hand (PIH) documents is better than for Bhoomi manuscripts. We conjecture that the presence of closely spaced and unevenly written lines in the latter is the cause. In our approach, two (or more) objects may effectively share the same bounding box due to overlap, and it is then not possible to determine which box to choose during mask prediction. Consequently, an underlying line's boundary may either end up not being detected or the predicted mask might be poorly localized. However, this is not a systemic problem, since our model achieves good performance even for very dense Bhoomi document line layouts. Conclusion In this paper, we propose Indiscapes, the first dataset with layout annotations for historical Indic manuscripts. We believe that the availability of layout annotations will play a crucial role in reducing the overall complexity of OCR and other tasks such as word-spotting and style-and-content based retrieval. In the long term, we intend to expand the dataset, not only numerically but also in terms of layout, script and language diversity. 
As a significant contribution, we have also adapted a deep-network-based instance segmentation framework, custom-modified for fully automatic layout parsing. Given the general nature of our framework, advances in instance segmentation approaches can be leveraged, thereby improving performance over time. Our proposed web-based annotator system, although designed for Indic manuscripts, is flexible, and could be reused for similar manuscripts from the Asian subcontinent. We intend to expand the capabilities of our annotator system in many useful ways. For instance, the layout estimated by our deep network could be provided to annotators for correction, thus reducing annotation efforts. Finally, we plan to make our dataset, instance segmentation system and annotator system publicly available. This would enable large-scale data collection and automated analysis efforts for Indic as well as other historical Asian manuscripts. The repositories related to the systems presented in this paper and the Indiscapes dataset can be accessed at https://ihdia.iiit.ac.in. Acknowledgment We would like to thank Dr. Sai Susarla for enabling access to the Bhoomi document collection. We also thank Poreddy Mourya Kumar Reddy and Gollapudi Sai Vamsi Krishna for their contributions related to the dashboard, and the various annotators for their labelling efforts.
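The inference recipe in the Implementation Details above (keep the top 100 detections with objectness above 0.5, binarize the soft masks at 0.4, then apply NMS with a 0.5 threshold) can be sketched as follows. This is only an illustrative reconstruction under the assumption that the final NMS operates on mask IoU; the names and exact structure below are not the authors' implementation.

import numpy as np

def postprocess(detections, soft_masks, score_thresh=0.5, mask_thresh=0.4,
                nms_thresh=0.5, top_k=100):
    """Post-process RoI detections following the inference recipe above.

    detections: numpy array of shape (N, 5) -- [x1, y1, x2, y2, objectness].
    soft_masks: list of N float masks in [0, 1], already rescaled to the
        original image resolution via bilinear interpolation (assumed done).
    """
    order = np.argsort(-detections[:, 4])  # sort by descending objectness
    order = [i for i in order if detections[i, 4] > score_thresh][:top_k]

    binary_masks = [soft_masks[i] > mask_thresh for i in order]

    # Greedy mask-level NMS: drop masks overlapping an already-kept mask
    kept = []
    for mask in binary_masks:
        if all(_mask_iou(mask, k) <= nms_thresh for k in kept):
            kept.append(mask)
    return kept

def _mask_iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0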
Unanswerable
3bfb8c12f151dada259fbd511358914c4b4e1b0e
3bfb8c12f151dada259fbd511358914c4b4e1b0e_0
Q: What metrics are used to evaluate revision detection? Text: Introduction It is a common habit for people to keep several versions of documents, which creates duplicate data. A scholarly article is normally revised several times before being published. An academic paper may be listed on personal websites, digital conference libraries, Google Scholar, etc. In major corporations, a document typically goes through several revisions involving multiple editors and authors. Users would benefit from visualizing the entire history of a document. It is worthwhile to develop a system that is able to intelligently identify, manage and represent revisions. Given a collection of text documents, our study identifies revision relationships in a completely unsupervised way. For each document in a corpus we only use its content and the last modified timestamp. We assume that a document can be revised by many users, but that the documents are not merged together. We consider collaborative editing as revising documents one by one. The two research problems that are most relevant to document revision detection are plagiarism detection and revision provenance. In a plagiarism detection system, every incoming document is compared with all registered non-plagiarized documents BIBREF0, BIBREF1, BIBREF2, BIBREF3. The system returns true if an original copy is found in the database; otherwise, the system returns false and adds the document to the database. Thus, it is a 1-to-n problem. Revision provenance is a 1-to-1 problem as it keeps track of detailed updates of one document BIBREF4, BIBREF5. Real-world applications include GitHub, version control in Microsoft Word and Wikipedia version trees BIBREF6. In contrast, our system solves an n-to-n problem on a large scale. Our potential target data sources, such as the entire web or internal corpora in corporations, contain numerous original documents and their revisions. The aim is to find all revision document pairs within a reasonable time. Document revision detection, plagiarism detection and revision provenance all rely on comparing the content of two documents and assessing a distance/similarity score. The classic document similarity measure, especially for plagiarism detection, is fingerprinting BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. Fixed-length fingerprints are created using hash functions to represent document features and are then used to measure document similarities. However, the main purpose of fingerprinting is to reduce computation instead of improving accuracy, and it cannot capture word semantics. Another widely used approach is computing the sentence-to-sentence Levenshtein distance and assigning an overall score for every document pair BIBREF13. Nevertheless, due to the large number of existing documents, as well as the large number of sentences in each document, the Levenshtein distance is not computationally friendly. Although alternatives such as the vector space model (VSM) can largely reduce the computation time, their effectiveness is low. More importantly, none of the above approaches can capture the semantic meanings of words, which heavily limits the performance of these approaches. For instance, from a semantic perspective, "I went to the bank" is expected to be similar to "I withdrew some money" rather than "I went hiking." 
Our document distance measures are inspired by the weaknesses of current document distance/similarity measures and recently proposed models for word representations such as word2vec BIBREF14 and Paragraph Vector (PV) BIBREF15 . Replacing words with distributed vector embeddings makes it feasible to measure semantic distances using advanced algorithms, e.g., Dynamic Time Warping (DTW) BIBREF16 , BIBREF17 , BIBREF18 and Tree Edit Distance (TED) BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . Although calculating text distance using DTW BIBREF27 , TED BIBREF28 or Word Mover's Distance (WMV) BIBREF29 has been attempted in the past, these measures are not ideal for large-scale document distance calculation. The first two algorithms were designed for sentence distances instead of document distances. The third measure computes the distance of two documents by solving a transshipment problem between words in the two documents and uses word2vec embeddings to calculate semantic distances of words. The biggest limitation of WMV is its long computation time. We show in Section SECREF54 that our wDTW and wTED measures yield more precise distance scores with much shorter running time than WMV. We recast the problem of detecting document revisions as a network optimization problem (see Section SECREF2 ) and consequently as a set of document distance problems (see Section SECREF4 ). We use trained word vectors to represent words, concatenate the word vectors to represent documents and combine word2vec with DTW or TED. Meanwhile, in order to guarantee reasonable computation time in large data sets, we calculate document distances at the paragraph level with Apache Spark. A distance score is computed by feeding paragraph representations to DTW or TED. Our code and data are publicly available. The primary contributions of this work are as follows. The rest of this paper is organized in five parts. In Section 2, we clarify related terms and explain the methodology for document revision detection. In Section 3, we provide a brief background on existing document similarity measures and present our wDTW and wTED algorithms as well as the overall process flow. In Section 4, we demonstrate our revision detection results on Wikipedia revision dumps and six simulated data sets. Finally, in Section 5, we summarize some concluding remarks and discuss avenues for future work and improvements. Revision Network The two requirements for a document INLINEFORM0 being a revision of another document INLINEFORM1 are that INLINEFORM2 has been created later than INLINEFORM3 and that the content of INLINEFORM4 is similar to (has been modified from) that of INLINEFORM5 . More specifically, given a corpus INLINEFORM6 , for any two documents INLINEFORM7 , we want to find out the yes/no revision relationship of INLINEFORM8 and INLINEFORM9 , and then output all such revision pairs. We assume that each document has a creation date (the last modified timestamp) which is readily available from the meta data of the document. In this section we also assume that we have a INLINEFORM0 method and a cut-off threshold INLINEFORM1 . We represent a corpus as network INLINEFORM2 , for example Figure FIGREF5 , in which a vertex corresponds to a document. There is an arc INLINEFORM3 if and only if INLINEFORM4 and the creation date of INLINEFORM5 is before the creation date of INLINEFORM6 . In other words, INLINEFORM7 is a revision candidate for INLINEFORM8 . By construction, INLINEFORM9 is acyclic. 
For instance, INLINEFORM10 is a revision candidate for INLINEFORM11 and INLINEFORM12. Note that we allow one document to be the original document of several revised documents. As we only need to focus on revision candidates, we reduce INLINEFORM13 to INLINEFORM14, shown in Figure FIGREF5, by removing isolated vertices. We define the weight of an arc as the distance score between the two vertices. Recall the assumption that a document can be a revision of at most one document. In other words, documents cannot be merged. Due to this assumption, all revision pairs form a branching in INLINEFORM15. (A branching is a subgraph where each vertex has an in-degree of at most 1.) The document revision problem is to find a minimum cost branching in INLINEFORM16 (see Figure FIGREF5).

The minimum branching problem was earlier solved by BIBREF30 edmonds1967optimum and BIBREF31 velardi2013ontolearn. The algorithm first selects, for every vertex, the incoming arc with the lowest weight; if the selected arcs form cycles, a second step contracts each cycle and repeats the selection. In our case, INLINEFORM0 is acyclic and, therefore, the second step never occurs. For this reason, Algorithm SECREF2 solves the document revision problem.

Find minimum branching INLINEFORM0 for network INLINEFORM1
Input: INLINEFORM0
INLINEFORM1
For every vertex INLINEFORM0: Set INLINEFORM1 to correspond to all arcs with head INLINEFORM2. Select INLINEFORM3 such that INLINEFORM4 is minimum. INLINEFORM5
Output: INLINEFORM0

The essential part of determining the minimum branching INLINEFORM0 is extracting the arcs with the lowest distance scores. This is equivalent to finding the most similar document from the revision candidates for every original document.

Distance/similarity Measures In this section, we first introduce the classic VSM model, the word2vec model, DTW and TED. We next demonstrate how to combine the above components to construct our semantic document distance measures: wDTW and wTED. We also discuss the implementation of our revision detection system.

Background VSM represents a set of documents as vectors of identifiers. The identifier of a word used in this work is the tf-idf weight. We represent documents as tf-idf vectors, and thus the similarity of two documents can be described by the cosine distance between their vectors. VSM has low algorithm complexity but cannot represent the semantics of words since it is based on the bag-of-words assumption.

Word2vec produces semantic embeddings for words using a two-layer neural network. Specifically, word2vec relies on a skip-gram model that uses the current word to predict context words in a surrounding window to maximize the average log probability. Words with similar meanings tend to have similar embeddings.

DTW was developed originally for speech recognition in time series analysis and has been widely used to measure the distance between two sequences of vectors. Given two sequences of feature vectors INLINEFORM0 and INLINEFORM1, DTW finds the optimal alignment for INLINEFORM2 and INLINEFORM3 by first constructing an INLINEFORM4 matrix in which the INLINEFORM5 element is the alignment cost of INLINEFORM6 and INLINEFORM7, and then retrieving the path through the matrix, from one corner to the diagonally opposite one, that has the minimal cumulative distance. This algorithm is described by the following formula. DISPLAYFORM0

TED was initially defined to calculate the minimal cost of node edit operations for transforming one labeled tree into another. The node edit operations are defined as follows.
Deletion: Delete a node and connect its children to its parent, maintaining the order.
Insertion: Insert a node between an existing node and a subsequence of consecutive children of this node.
Substitution: Rename the label of a node.

Let INLINEFORM0 and INLINEFORM1 be two labeled trees, and INLINEFORM2 be the INLINEFORM3 node in INLINEFORM4. INLINEFORM5 corresponds to a mapping from INLINEFORM6 to INLINEFORM7. TED finds the mapping INLINEFORM8 with the minimal edit cost based on INLINEFORM9 where INLINEFORM0 means transferring INLINEFORM1 to INLINEFORM2 based on INLINEFORM3, and INLINEFORM4 represents an empty node.

Semantic Distance between Paragraphs According to the description of DTW in Section UID14, the distance between two documents can be calculated using DTW by replacing each element in the feature vectors INLINEFORM0 and INLINEFORM1 with a word vector. However, computing the DTW distance between two documents at the word level is essentially as expensive as calculating the Levenshtein distance. Thus in this section we propose an improved algorithm that is more appropriate for document distance calculation.

In order to obtain semantic representations for documents and maintain a reasonable algorithm complexity, we use word2vec to train word vectors and represent each paragraph as a sequence of vectors. Note that in both wDTW and wTED we take document titles and section titles as paragraphs. Although a more recently proposed model, PV, can directly train vector representations for short texts such as movie reviews BIBREF15, our experiments in Section SECREF54 show that PV is not appropriate for standard paragraphs in general documents. Therefore, we use word2vec in our work.

Algorithm SECREF20 describes how we compute the distance between two paragraphs based on DTW and word vectors. The distance between one paragraph in a document and one paragraph in another document can be pre-calculated in parallel using Spark to provide faster computation for wDTW and wTED.

DistPara
Replace the words in paragraphs INLINEFORM0 and INLINEFORM1 with word2vec embeddings: INLINEFORM2 and INLINEFORM3
Input: INLINEFORM4 and INLINEFORM5
Initialize the first row and the first column of the INLINEFORM6 matrix INLINEFORM7: INLINEFORM8, INLINEFORM9
For INLINEFORM10 in range INLINEFORM11, for INLINEFORM12 in range INLINEFORM13: INLINEFORM14; calculate INLINEFORM15
Return: INLINEFORM16

Word Vector-based Dynamic Time Warping As a document can be considered as a sequence of paragraphs, wDTW returns the distance between two documents by applying another DTW on top of paragraphs. The cost function is exactly the DistPara distance of two paragraphs given in Algorithm SECREF20. Algorithm SECREF21 and Figure FIGREF22 describe our wDTW measure. wDTW draws semantic information from word vectors, which is fundamentally different from the word distance calculated from hierarchies among words in the algorithm proposed by BIBREF27 liu2007sentence. The shortcomings of their work are that it is difficult to learn the semantic taxonomy of all words and that their DTW algorithm can only be applied to sentences, not documents.
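A minimal Python sketch of the DistPara paragraph distance just described (an illustration rather than the authors' Spark implementation; it assumes numpy, a trained word-vector lookup kv such as gensim KeyedVectors, and uses the Euclidean norm as the word-to-word alignment cost, which is one plausible choice):

import numpy as np

def dist_para(words_a, words_b, kv):
    """DTW distance between two paragraphs, each given as a list of words,
    with every word replaced by its word2vec embedding."""
    A = [kv[w] for w in words_a if w in kv]
    B = [kv[w] for w in words_b if w in kv]
    n, m = len(A), len(B)
    if n == 0 or m == 0:                      # e.g. comparing against an empty node
        return float(sum(np.linalg.norm(v) for v in A + B))
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])        # alignment cost of two words
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])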
wDTW
Represent documents INLINEFORM0 and INLINEFORM1 with vectors of paragraphs: INLINEFORM2 and INLINEFORM3
Input: INLINEFORM4 and INLINEFORM5
Initialize the first row and the first column of the INLINEFORM6 matrix INLINEFORM7: INLINEFORM8, INLINEFORM9
For INLINEFORM10 in range INLINEFORM11, for INLINEFORM12 in range INLINEFORM13: INLINEFORM14 DistPara INLINEFORM15; calculate INLINEFORM16
Return: INLINEFORM17

Word Vector-based Tree Edit Distance TED is reasonable for measuring document distances as documents can be easily transformed into the tree structures visualized in Figure FIGREF24. The document tree concept was originally proposed by BIBREF0 si1997check. A document can be viewed at multiple abstraction levels that include the document title, its sections, subsections, etc. Thus for each document we can build a tree-like structure with title INLINEFORM0 sections INLINEFORM1 subsections INLINEFORM2 ... INLINEFORM3 paragraphs being paths from the root to the leaves. Child nodes are ordered from left to right as they appear in the document. We represent the labels in a document tree as the vector sequences of titles, sections, subsections and paragraphs with word2vec embeddings. wTED converts documents to tree structures and then uses DistPara distances. More formally, the distance between two nodes is computed as follows.
The cost of substitution is the DistPara value of the two nodes.
The cost of insertion is the DistPara value of an empty sequence and the label of the inserted node. This essentially means that the cost is the sum of the L2-norms of the word vectors in that node.
The cost of deletion is the same as the cost of insertion.

Compared to the algorithm proposed by BIBREF28 sidorov2015computing, wTED provides different edit cost functions and uses document tree structures instead of syntactic n-grams, and thus wTED yields more meaningful distance scores for long documents. Algorithm SECREF23 and Figure FIGREF28 describe how we calculate the edit cost between two document trees.

wTED
Convert documents INLINEFORM0 and INLINEFORM1 to trees INLINEFORM2 and INLINEFORM3
Input: INLINEFORM4 and INLINEFORM5
Initialize tree edit distance INLINEFORM0
For node label INLINEFORM1 and node label INLINEFORM2: Update the TED mapping cost INLINEFORM3 using INLINEFORM4 DistPara INLINEFORM5, INLINEFORM6 DistPara INLINEFORM7, INLINEFORM8 DistPara INLINEFORM9
Return: INLINEFORM0

Process Flow Our system is a boosting learner that is composed of four modules: weak filter, strong filter, revision network and optimal subnetwork. First of all, we sort all documents by timestamps and pair up documents so that we only compare each document with documents that have been created earlier. In the first module, we calculate the VSM similarity scores for all pairs and eliminate those with scores that are lower than an empirical threshold (INLINEFORM0). This is what we call the weak filter. After that, we apply the strong filter wDTW or wTED on the available pairs and filter out document pairs having distances higher than a threshold INLINEFORM1. For VSM in Section SECREF32, we directly filter out document pairs having similarity scores lower than a threshold INLINEFORM2. The cut-off threshold estimation is explained in Section SECREF30. The remaining document pairs from the strong filter are then sent to the revision network module. In the end, we output the optimal revision pairs following the minimum branching strategy.
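Building on the dist_para sketch above, wDTW can be sketched as a second DTW over the two documents' paragraph sequences (again illustrative only; the real system precomputes the paragraph-pair distances in parallel with Spark, and wTED plugs the same dist_para cost into a tree-edit-distance routine instead of the outer DTW):

import numpy as np

def w_dtw(paras_a, paras_b, kv):
    """wDTW: DTW over paragraph sequences, using dist_para as the alignment cost.
    paras_a, paras_b: lists of paragraphs, each paragraph a list of words."""
    n, m = len(paras_a), len(paras_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist_para(paras_a[i - 1], paras_b[j - 1], kv)   # paragraph-level cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])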
Estimating the Cut-off Threshold Hyperparameter INLINEFORM0 is calibrated by calculating the absolute extreme based on an initial set of documents, i.e., all processed documents since the moment the system was put in use. Based on this set, we calculate all distance/similarity scores and create a histogram, see Figure FIGREF31. The figure shows the correlation between the number of document pairs and the similarity scores in the training process of one simulated corpus using VSM. The optimal INLINEFORM1 in this example is around 0.6, where the number of document pairs noticeably drops. As the system continues running, new documents become available and INLINEFORM0 can be periodically updated by using the same method.

Numerical Experiments This section reports the results of the experiments conducted on two data sets for evaluating the performance of wDTW and wTED against other baseline methods.

Distance/Similarity Measures We denote the following distance/similarity measures.
wDTW: Our semantic distance measure explained in Section SECREF21.
wTED: Our semantic distance measure explained in Section SECREF23.
WMD: The Word Mover's Distance introduced in Section SECREF1. WMD adapts the earth mover's distance to the space of documents.
VSM: The similarity measure introduced in Section UID12.
PV-DTW: PV-DTW is the same as Algorithm SECREF21 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0, where INLINEFORM1 is the PV embedding of paragraph INLINEFORM2.
PV-TED: PV-TED is the same as Algorithm SECREF23 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0.

Our experiments were conducted on an Apache Spark cluster with 32 cores and 320 GB of total memory. We implemented wDTW, wTED, WMD, VSM, PV-DTW and PV-TED in Java Spark. The paragraph vectors for PV-DTW and PV-TED were trained with gensim.

Data Sets In this section, we introduce the two data sets we used for our revision detection experiments: Wikipedia revision dumps and a document revision data set generated by a computer simulation. The two data sets differ in that the Wikipedia revision dumps only contain linear revision chains, while the simulated data sets also contain tree-structured revision chains, which can be very common in real-world data.

The Wikipedia revision dumps, which were previously introduced by Leskovec et al. leskovec2010governance, contain eight GB (compressed size) of revision edits with metadata. We pre-processed the Wikipedia revision dumps using the JWPL Revision Machine BIBREF32 and produced a data set that contains 62,234 documents with 46,354 revisions. As we noticed that short documents just contributed noise (graffiti) to the data, we eliminated documents that have fewer than three paragraphs and fewer than 300 words. We removed empty lines in the documents and trained word2vec embeddings on the entire corpus. We used the documents occurring in the first INLINEFORM0 of the revision period for INLINEFORM1 calibration, and the remaining documents for testing.

The generation process of the simulated data sets is designed to mimic the real world. Users open some existing documents in a file system, make some changes (e.g. addition, deletion or replacement), and save them as separate documents. These documents become revisions of the original documents. We started from an initial corpus that did not have revisions, and kept adding new documents and revising existing documents.
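Before describing the simulated corpora further, the threshold calibration above can be sketched as follows (a heuristic reading of the procedure: the text only states that the cut-off sits where the histogram of pairwise scores drops noticeably, so the drop-detection rule below is an assumption):

import numpy as np

def estimate_cutoff(scores, bins=50):
    """Place the cut-off at the right edge of the bin where the count of
    document pairs falls most sharply after the histogram's peak."""
    counts, edges = np.histogram(scores, bins=bins)
    peak = int(np.argmax(counts))
    drops = counts[peak:-1] - counts[peak + 1:]
    if drops.size == 0:                       # degenerate histogram: keep everything
        return float(edges[-1])
    k = peak + int(np.argmax(drops))
    return float(edges[k + 1])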
Similar to a file system, at any moment new documents could be added and/or some of the current documents could be revised. The revision operations we used were deletion, addition and replacement of words, sentences, paragraphs, section names and document titles. The added words, ..., section names and new documents were pulled from the Wikipedia abstracts. This corpus generation process had five time periods INLINEFORM0. Figure FIGREF42 illustrates this simulation. We set a Poisson distribution with rate INLINEFORM1 (the number of documents in the initial corpus) to control the number of new documents added in each time period, and a Poisson distribution with rate INLINEFORM2 to control the number of documents revised in each time period.

We generated six data sets using different random seeds, and each data set contained six corpora (Corpus 0 - 5). Table TABREF48 summarizes the first data set. In each data set, we name the initial corpus Corpus 0, and define INLINEFORM0 as the timestamp when we started this simulation process. We set INLINEFORM1, INLINEFORM2. Corpus INLINEFORM3 corresponds to documents generated before timestamp INLINEFORM4. We extracted document revisions from Corpus INLINEFORM5 and compared the revisions generated in (Corpus INLINEFORM6 - Corpus INLINEFORM7) with the ground truths in Table TABREF48. Hence, we ran four experiments on this data set in total. In every experiment, INLINEFORM8 is calibrated based on Corpus INLINEFORM9. For instance, the training set of the first experiment was Corpus 1. We trained INLINEFORM10 from Corpus 1. We extracted all revisions in Corpus 2, and compared the revisions generated in the test set (Corpus 2 - Corpus 1) with the ground truth: 258 revised documents. The word2vec model shared by the four experiments was trained on Corpus 5.

Results We use precision, recall and F-measure to evaluate the detected revisions. A true positive case is a correctly identified revision. A false positive case is an incorrectly identified revision. A false negative case is a missed revision record.

We illustrate the performances of wDTW, wTED, WMD, VSM, PV-DTW and PV-TED on the Wikipedia revision dumps in Figure FIGREF43. wDTW and wTED have the highest F-measure scores compared to the other four measures, and wDTW also has the highest precision and recall scores. Figure FIGREF49 shows the average evaluation results on the simulated data sets. From left to right, the corpus size increases and the revision chains become longer, thus it becomes more challenging to detect document revisions. Overall, wDTW consistently performs the best. WMD is slightly better than wTED. In particular, when the corpus size increases, the performances of WMD, VSM, PV-DTW and PV-TED drop faster than those of wDTW and wTED. Because the revision operations were randomly selected in each corpus, it is possible that there are non-monotone points in the series.

wDTW and wTED perform better than WMD especially when the corpus is large, because they use dynamic programming to find the global optimal alignment for documents. In contrast, WMD relies on a greedy algorithm that sums up the minimal cost for every word. wDTW and wTED perform better than PV-DTW and PV-TED, which indicates that our DistPara distance in Algorithm SECREF20 is more accurate than the Euclidean distance between paragraph vectors trained by PV. We show in Table TABREF53 the average running time of the six distance/similarity measures.
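The evaluation protocol at the start of this Results section is straightforward to express in code (a simple sketch; it assumes the detected and ground-truth revisions are both given as sets of (original_id, revision_id) pairs):

def evaluate(detected, ground_truth):
    """Precision, recall and F-measure over revision pairs."""
    tp = len(detected & ground_truth)        # correctly identified revisions
    fp = len(detected - ground_truth)        # incorrectly identified revisions
    fn = len(ground_truth - detected)        # missed revision records
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure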
In all the experiments, VSM is the fastest, wTED is faster than wDTW, and WMD is the slowest. Running WMD is extremely expensive because WMD needs to solve an INLINEFORM0 sequential transshipment problem for every pair of documents, where INLINEFORM1 is the average number of words in a document. In contrast, by splitting this heavy computation into several smaller problems (finding the distance between any two paragraphs), which can be run in parallel, wDTW and wTED scale much better.

Combining Figure FIGREF43, Figure FIGREF49 and Table TABREF53, we conclude that wDTW yields the most accurate results using marginally more time than VSM, PV-TED and PV-DTW, but much less running time than WMD. wTED returns satisfactory results in less time than wDTW.

Conclusion This paper has explored how DTW and TED can be extended with word2vec to construct the semantic document distance measures wDTW and wTED. By representing paragraphs with concatenations of word vectors, wDTW and wTED are able to capture the semantics of the words and thus give more accurate distance scores. In order to detect revisions, we have used minimum branching on an appropriately developed network with document distance scores serving as arc weights. We have also assessed the efficiency of the method of retrieving an optimal revision subnetwork by finding the minimum branching.

Furthermore, we have compared wDTW and wTED with several distance measures for revision detection tasks. Our results demonstrate the effectiveness and robustness of wDTW and wTED on the Wikipedia revision dumps and our simulated data sets. In order to reduce the computation time, we have computed document distances at the paragraph level and implemented a boosting learning system using Apache Spark. Although we have demonstrated the superiority of our semantic measures only in the revision detection experiments, wDTW and wTED can also be used as semantic distance measures in many clustering and classification tasks. Our revision detection system can be enhanced with richer features such as author information, writing styles, and the exact changes in revision pairs. Another interesting aspect we would like to explore in the future is reducing the complexity of calculating the distance between two paragraphs.

Acknowledgments This work was supported in part by Intel Corporation and the Semiconductor Research Corporation (SRC).
precision, recall, F-measure
3f85cc5be84479ba668db6d9f614fedbff6d77f1
3f85cc5be84479ba668db6d9f614fedbff6d77f1_0
Q: How large is the Wikipedia revision dump dataset?
eight GB
126e8112e26ebf8c19ca7ff3dd06691732118e90
126e8112e26ebf8c19ca7ff3dd06691732118e90_0
Q: How are the simulated datasets collected?
Our document distance measures are inspired by the weaknesses of current document distance/similarity measures and recently proposed models for word representations such as word2vec BIBREF14 and Paragraph Vector (PV) BIBREF15 . Replacing words with distributed vector embeddings makes it feasible to measure semantic distances using advanced algorithms, e.g., Dynamic Time Warping (DTW) BIBREF16 , BIBREF17 , BIBREF18 and Tree Edit Distance (TED) BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . Although calculating text distance using DTW BIBREF27 , TED BIBREF28 or Word Mover's Distance (WMV) BIBREF29 has been attempted in the past, these measures are not ideal for large-scale document distance calculation. The first two algorithms were designed for sentence distances instead of document distances. The third measure computes the distance of two documents by solving a transshipment problem between words in the two documents and uses word2vec embeddings to calculate semantic distances of words. The biggest limitation of WMV is its long computation time. We show in Section SECREF54 that our wDTW and wTED measures yield more precise distance scores with much shorter running time than WMV. We recast the problem of detecting document revisions as a network optimization problem (see Section SECREF2 ) and consequently as a set of document distance problems (see Section SECREF4 ). We use trained word vectors to represent words, concatenate the word vectors to represent documents and combine word2vec with DTW or TED. Meanwhile, in order to guarantee reasonable computation time in large data sets, we calculate document distances at the paragraph level with Apache Spark. A distance score is computed by feeding paragraph representations to DTW or TED. Our code and data are publicly available. The primary contributions of this work are as follows. The rest of this paper is organized in five parts. In Section 2, we clarify related terms and explain the methodology for document revision detection. In Section 3, we provide a brief background on existing document similarity measures and present our wDTW and wTED algorithms as well as the overall process flow. In Section 4, we demonstrate our revision detection results on Wikipedia revision dumps and six simulated data sets. Finally, in Section 5, we summarize some concluding remarks and discuss avenues for future work and improvements. Revision Network The two requirements for a document INLINEFORM0 being a revision of another document INLINEFORM1 are that INLINEFORM2 has been created later than INLINEFORM3 and that the content of INLINEFORM4 is similar to (has been modified from) that of INLINEFORM5 . More specifically, given a corpus INLINEFORM6 , for any two documents INLINEFORM7 , we want to find out the yes/no revision relationship of INLINEFORM8 and INLINEFORM9 , and then output all such revision pairs. We assume that each document has a creation date (the last modified timestamp) which is readily available from the meta data of the document. In this section we also assume that we have a INLINEFORM0 method and a cut-off threshold INLINEFORM1 . We represent a corpus as network INLINEFORM2 , for example Figure FIGREF5 , in which a vertex corresponds to a document. There is an arc INLINEFORM3 if and only if INLINEFORM4 and the creation date of INLINEFORM5 is before the creation date of INLINEFORM6 . In other words, INLINEFORM7 is a revision candidate for INLINEFORM8 . By construction, INLINEFORM9 is acyclic. 
For instance, INLINEFORM10 is a revision candidate for INLINEFORM11 and INLINEFORM12 . Note that we allow one document to be the original document of several revised documents. As we only need to focus on revision candidates, we reduce INLINEFORM13 to INLINEFORM14 , shown in Figure FIGREF5 , by removing isolated vertices. We define the weight of an arc as the distance score between the two vertices. Recall the assumption that a document can be a revision of at most one document. In other words, documents cannot be merged. Due to this assumption, all revision pairs form a branching in INLINEFORM15 . (A branching is a subgraph where each vertex has an in-degree of at most 1.) The document revision problem is to find a minimum cost branching in INLINEFORM16 (see Fig FIGREF5 ). The minimum branching problem was earlier solved by BIBREF30 edmonds1967optimum and BIBREF31 velardi2013ontolearn. The details of his algorithm are as follows. In our case, INLINEFORM0 is acyclic and, therefore, the second step never occurs. For this reason, Algorithm SECREF2 solves the document revision problem. Find minimum branching INLINEFORM0 for network INLINEFORM1 [1] Input: INLINEFORM0 INLINEFORM1 every vertex INLINEFORM0 Set INLINEFORM1 to correspond to all arcs with head INLINEFORM2 Select INLINEFORM3 such that INLINEFORM4 is minimum INLINEFORM5 Output: INLINEFORM0 The essential part of determining the minimum branching INLINEFORM0 is extracting arcs with the lowest distance scores. This is equivalent to finding the most similar document from the revision candidates for every original document. Distance/similarity Measures In this section, we first introduce the classic VSM model, the word2vec model, DTW and TED. We next demonstrate how to combine the above components to construct our semantic document distance measures: wDTW and wTED. We also discuss the implementation of our revision detection system. Background VSM represents a set of documents as vectors of identifiers. The identifier of a word used in this work is the tf-idf weight. We represent documents as tf-idf vectors, and thus the similarity of two documents can be described by the cosine distance between their vectors. VSM has low algorithm complexity but cannot represent the semantics of words since it is based on the bag-of-words assumption. Word2vec produces semantic embeddings for words using a two-layer neural network. Specifically, word2vec relies on a skip-gram model that uses the current word to predict context words in a surrounding window to maximize the average log probability. Words with similar meanings tend to have similar embeddings. DTW was developed originally for speech recognition in time series analysis and has been widely used to measure the distance between two sequences of vectors. Given two sequences of feature vectors: INLINEFORM0 and INLINEFORM1 , DTW finds the optimal alignment for INLINEFORM2 and INLINEFORM3 by first constructing an INLINEFORM4 matrix in which the INLINEFORM5 element is the alignment cost of INLINEFORM6 and INLINEFORM7 , and then retrieving the path from one corner to the diagonal one through the matrix that has the minimal cumulative distance. This algorithm is described by the following formula. DISPLAYFORM0 TED was initially defined to calculate the minimal cost of node edit operations for transforming one labeled tree into another. The node edit operations are defined as follows. Deletion Delete a node and connect its children to its parent maintaining the order. 
Insertion Insert a node between an existing node and a subsequence of consecutive children of this node. Substitution Rename the label of a node. Let INLINEFORM0 and INLINEFORM1 be two labeled trees, and INLINEFORM2 be the INLINEFORM3 node in INLINEFORM4 . INLINEFORM5 corresponds to a mapping from INLINEFORM6 to INLINEFORM7 . TED finds mapping INLINEFORM8 with the minimal edit cost based on INLINEFORM9 where INLINEFORM0 means transferring INLINEFORM1 to INLINEFORM2 based on INLINEFORM3 , and INLINEFORM4 represents an empty node. Semantic Distance between Paragraphs According to the description of DTW in Section UID14 , the distance between two documents can be calculated using DTW by replacing each element in the feature vectors INLINEFORM0 and INLINEFORM1 with a word vector. However, computing the DTW distance between two documents at the word level is basically as expensive as calculating the Levenshtein distance. Thus in this section we propose an improved algorithm that is more appropriate for document distance calculation. In order to receive semantic representations for documents and maintain a reasonable algorithm complexity, we use word2vec to train word vectors and represent each paragraph as a sequence of vectors. Note that in both wDTW and wTED we take document titles and section titles as paragraphs. Although a more recently proposed model PV can directly train vector representations for short texts such as movie reviews BIBREF15 , our experiments in Section SECREF54 show that PV is not appropriate for standard paragraphs in general documents. Therefore, we use word2vec in our work. Algorithm SECREF20 describes how we compute the distance between two paragraphs based on DTW and word vectors. The distance between one paragraph in a document and one paragraph in another document can be pre-calculated in parallel using Spark to provide faster computation for wDTW and wTED. DistPara [h] Replace the words in paragraphs INLINEFORM0 and INLINEFORM1 with word2vec embeddings: INLINEFORM2 and INLINEFORM3 Input: INLINEFORM4 and INLINEFORM5 Initialize the first row and the first column of INLINEFORM6 matrix INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 in range INLINEFORM11 INLINEFORM12 in range INLINEFORM13 INLINEFORM14 calculate INLINEFORM15 Return: INLINEFORM16 Word Vector-based Dynamic Time Warping As a document can be considered as a sequence of paragraphs, wDTW returns the distance between two documents by applying another DTW on top of paragraphs. The cost function is exactly the DistPara distance of two paragraphs given in Algorithm SECREF20 . Algorithm SECREF21 and Figure FIGREF22 describe our wDTW measure. wDTW observes semantic information from word vectors, which is fundamentally different from the word distance calculated from hierarchies among words in the algorithm proposed by BIBREF27 liu2007sentence. The shortcomings of their work are that it is difficult to learn semantic taxonomy of all words and that their DTW algorithm can only be applied to sentences not documents. 
wDTW [h] Represent documents INLINEFORM0 and INLINEFORM1 with vectors of paragraphs: INLINEFORM2 and INLINEFORM3 Input: INLINEFORM4 and INLINEFORM5 Initialize the first row and the first column of INLINEFORM6 matrix INLINEFORM7 INLINEFORM8 INLINEFORM9 INLINEFORM10 in range INLINEFORM11 INLINEFORM12 in range INLINEFORM13 INLINEFORM14 DistPara INLINEFORM15 calculate INLINEFORM16 Return: INLINEFORM17 Word Vector-based Tree Edit Distance TED is reasonable for measuring document distances as documents can be easily transformed to tree structures visualized in Figure FIGREF24 . The document tree concept was originally proposed by BIBREF0 si1997check. A document can be viewed at multiple abstraction levels that include the document title, its sections, subsections, etc. Thus for each document we can build a tree-like structure with title INLINEFORM0 sections INLINEFORM1 subsections INLINEFORM2 ... INLINEFORM3 paragraphs being paths from the root to leaves. Child nodes are ordered from left to right as they appear in the document. We represent labels in a document tree as the vector sequences of titles, sections, subsections and paragraphs with word2vec embeddings. wTED converts documents to tree structures and then uses DistPara distances. More formally, the distance between two nodes is computed as follows. The cost of substitution is the DistPara value of the two nodes. The cost of insertion is the DistPara value of an empty sequence and the label of the inserted node. This essentially means that the cost is the sum of the L2-norms of the word vectors in that node. The cost of deletion is the same as the cost of insertion. Compared to the algorithm proposed by BIBREF28 sidorov2015computing, wTED provides different edit cost functions and uses document tree structures instead of syntactic n-grams, and thus wTED yields more meaningful distance scores for long documents. Algorithm SECREF23 and Figure FIGREF28 describe how we calculate the edit cost between two document trees. wTED [1] Convert documents INLINEFORM0 and INLINEFORM1 to trees INLINEFORM2 and INLINEFORM3 Input: INLINEFORM4 and INLINEFORM5 Initialize tree edit distance INLINEFORM0 node label INLINEFORM1 node label INLINEFORM2 Update TED mapping cost INLINEFORM3 using INLINEFORM4 DistPara INLINEFORM5 INLINEFORM6 DistPara INLINEFORM7 INLINEFORM8 DistPara INLINEFORM9 Return: INLINEFORM0 Process Flow Our system is a boosting learner that is composed of four modules: weak filter, strong filter, revision network and optimal subnetwork. First of all, we sort all documents by timestamps and pair up documents so that we only compare each document with documents that have been created earlier. In the first module, we calculate the VSM similarity scores for all pairs and eliminate those with scores that are lower than an empirical threshold ( INLINEFORM0 ). This is what we call the weak filter. After that, we apply the strong filter wDTW or wTED on the available pairs and filter out document pairs having distances higher than a threshold INLINEFORM1 . For VSM in Section SECREF32 , we directly filter out document pairs having similarity scores lower than a threshold INLINEFORM2 . The cut-off threshold estimation is explained in Section SECREF30 . The remaining document pairs from the strong filter are then sent to the revision network module. In the end, we output the optimal revision pairs following the minimum branching strategy. 
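Building on the sketch above, wDTW amounts to the same DTW recurrence applied one level up, over the paragraph sequences of two documents, with a paragraph-to-paragraph cost plugged in. The following is an illustrative sketch rather than the authors' Java Spark implementation; the dist_para function from the previous sketch can be passed as para_dist.

import numpy as np

def w_dtw(doc_a, doc_b, para_dist):
    # wDTW sketch: DTW over paragraph sequences; doc_a and doc_b are lists of
    # paragraphs, para_dist is the paragraph-to-paragraph cost (e.g., dist_para).
    n, m = len(doc_a), len(doc_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = para_dist(doc_a[i - 1], doc_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])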
Estimating the Cut-off Threshold Hyperprameter INLINEFORM0 is calibrated by calculating the absolute extreme based on an initial set of documents, i.e., all processed documents since the moment the system was put in use. Based on this set, we calculate all distance/similarity scores and create a histogram, see Figure FIGREF31 . The figure shows the correlation between the number of document pairs and the similarity scores in the training process of one simulated corpus using VSM. The optimal INLINEFORM1 in this example is around 0.6 where the number of document pairs noticeably drops. As the system continues running, new documents become available and INLINEFORM0 can be periodically updated by using the same method. Numerical Experiments This section reports the results of the experiments conducted on two data sets for evaluating the performances of wDTW and wTED against other baseline methods. Distance/Similarity Measures We denote the following distance/similarity measures. wDTW: Our semantic distance measure explained in Section SECREF21 . wTED: Our semantic distance measure explained in Section SECREF23 . WMD: The Word Mover's Distance introduced in Section SECREF1 . WMD adapts the earth mover's distance to the space of documents. VSM: The similarity measure introduced in Section UID12 . PV-DTW: PV-DTW is the same as Algorithm SECREF21 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 where INLINEFORM1 is the PV embedding of paragraph INLINEFORM2 . PV-TED: PV-TED is the same as Algorithm SECREF23 except that the distance between two paragraphs is not based on Algorithm SECREF20 but rather computed as INLINEFORM0 . Our experiments were conducted on an Apache Spark cluster with 32 cores and 320 GB total memory. We implemented wDTW, wTED, WMD, VSM, PV-DTW and PV-TED in Java Spark. The paragraph vectors for PV-DTW and PV-TED were trained by gensim. Data Sets In this section, we introduce the two data sets we used for our revision detection experiments: Wikipedia revision dumps and a document revision data set generated by a computer simulation. The two data sets differ in that the Wikipedia revision dumps only contain linear revision chains, while the simulated data sets also contains tree-structured revision chains, which can be very common in real-world data. The Wikipedia revision dumps that were previously introduced by Leskovec et al. leskovec2010governance contain eight GB (compressed size) revision edits with meta data. We pre-processed the Wikipedia revision dumps using the JWPL Revision Machine BIBREF32 and produced a data set that contains 62,234 documents with 46,354 revisions. As we noticed that short documents just contributed to noise (graffiti) in the data, we eliminated documents that have fewer than three paragraphs and fewer than 300 words. We removed empty lines in the documents and trained word2vec embeddings on the entire corpus. We used the documents occurring in the first INLINEFORM0 of the revision period for INLINEFORM1 calibration, and the remaining documents for test. The generation process of the simulated data sets is designed to mimic the real world. Users open some existing documents in a file system, make some changes (e.g. addition, deletion or replacement), and save them as separate documents. These documents become revisions of the original documents. We started from an initial corpus that did not have revisions, and kept adding new documents and revising existing documents. 
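One way to read the "absolute extreme" rule for the cut-off threshold described above is to take the score at which the histogram of pairwise distance/similarity scores drops most sharply. The sketch below encodes that reading; since the exact rule is not given in the text, this heuristic is an assumption.

import numpy as np

def estimate_cutoff(scores, bins=50):
    # Heuristic sketch: return the score where the histogram of pairwise
    # scores shows the steepest drop between adjacent bins.
    counts, edges = np.histogram(scores, bins=bins)
    drops = counts[:-1] - counts[1:]   # decrease from one bin to the next
    k = int(np.argmax(drops))          # index of the steepest drop
    return float(edges[k + 1])         # boundary between bin k and bin k+1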
Similar to a file system, at any moment new documents could be added and/or some of the current documents could be revised. The revision operations we used were deletion, addition and replacement of words, sentences, paragraphs, section names and document titles. The addition of words, ..., section names, and new documents were pulled from the Wikipedia abstracts. This corpus generation process had five time periods INLINEFORM0 . Figure FIGREF42 illustrates this simulation. We set a Poisson distribution with rate INLINEFORM1 (the number of documents in the initial corpus) to control the number of new documents added in each time period, and a Poisson distribution with rate INLINEFORM2 to control the number of documents revised in each time period. We generated six data sets using different random seeds, and each data set contained six corpora (Corpus 0 - 5). Table TABREF48 summarizes the first data set. In each data set, we name the initial corpus Corpus 0, and define INLINEFORM0 as the timestamp when we started this simulation process. We set INLINEFORM1 , INLINEFORM2 . Corpus INLINEFORM3 corresponds to documents generated before timestamp INLINEFORM4 . We extracted document revisions from Corpus INLINEFORM5 and compared the revisions generated in (Corpus INLINEFORM6 - Corpus INLINEFORM7 ) with the ground truths in Table TABREF48 . Hence, we ran four experiments on this data set in total. In every experiment, INLINEFORM8 is calibrated based on Corpus INLINEFORM9 . For instance, the training set of the first experiment was Corpus 1. We trained INLINEFORM10 from Corpus 1. We extracted all revisions in Corpus 2, and compared revisions generated in the test set (Corpus 2 - Corpus 1) with the ground truth: 258 revised documents. The word2vec model shared in the four experiments was trained on Corpus 5. Results We use precision, recall and F-measure to evaluate the detected revisions. A true positive case is a correctly identified revision. A false positive case is an incorrectly identified revision. A false negative case is a missed revision record. We illustrate the performances of wDTW, wTED, WMD, VSM, PV-DTW and PV-TED on the Wikipedia revision dumps in Figure FIGREF43 . wDTW and wTED have the highest F-measure scores compared to the rest of four measures, and wDTW also have the highest precision and recall scores. Figure FIGREF49 shows the average evaluation results on the simulated data sets. From left to right, the corpus size increases and the revision chains become longer, thus it becomes more challenging to detect document revisions. Overall, wDTW consistently performs the best. WMD is slightly better than wTED. In particular, when the corpus size increases, the performances of WMD, VSM, PV-DTW and PV-TED drop faster than wDTW and wTED. Because the revision operations were randomly selected in each corpus, it is possible that there are non-monotone points in the series. wDTW and wTED perform better than WMD especially when the corpus is large, because they use dynamic programming to find the global optimal alignment for documents. In contrast, WMD relies on a greedy algorithm that sums up the minimal cost for every word. wDTW and wTED perform better than PV-DTW and PV-TED, which indicates that our DistPara distance in Algorithm SECREF20 is more accurate than the Euclidian distance between paragraph vectors trained by PV. We show in Table TABREF53 the average running time of the six distance/similarity measures. 
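For reference, the pair-level evaluation used in the Results section above can be sketched by treating detected and ground-truth revisions as sets of (original, revised) document-id pairs; the function and variable names are illustrative.

def evaluate_revisions(predicted_pairs, true_pairs):
    # Precision, recall and F-measure over detected revision pairs.
    pred, truth = set(predicted_pairs), set(true_pairs)
    tp = len(pred & truth)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1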
In all the experiments, VSM is the fastest, wTED is faster than wDTW, and WMD is the slowest. Running WMD is extremely expensive because WMD needs to solve an INLINEFORM0 sequential transshipment problem for every two documents where INLINEFORM1 is the average number of words in a document. In contrast, by splitting this heavy computation into several smaller problems (finding the distance between any two paragraphs), which can be run in parallel, wDTW and wTED scale much better. Combining Figure FIGREF43 , Figure FIGREF49 and Table TABREF53 we conclude that wDTW yields the most accurate results using marginally more time than VSM, PV-TED and PV-DTW, but much less running time than WMD. wTED returns satisfactory results using shorter time than wDTW. Conclusion This paper has explored how DTW and TED can be extended with word2vec to construct semantic document distance measures: wDTW and wTED. By representing paragraphs with concatenations of word vectors, wDTW and wTED are able to capture the semantics of the words and thus give more accurate distance scores. In order to detect revisions, we have used minimum branching on an appropriately developed network with document distance scores serving as arc weights. We have also assessed the efficiency of the method of retrieving an optimal revision subnetwork by finding the minimum branching. Furthermore, we have compared wDTW and wTED with several distance measures for revision detection tasks. Our results demonstrate the effectiveness and robustness of wDTW and wTED in the Wikipedia revision dumps and our simulated data sets. In order to reduce the computation time, we have computed document distances at the paragraph level and implemented a boosting learning system using Apache Spark. Although we have demonstrated the superiority of our semantic measures only in the revision detection experiments, wDTW and wTED can also be used as semantic distance measures in many clustering, classification tasks. Our revision detection system can be enhanced with richer features such as author information and writing styles, and exact changes in revision pairs. Another interesting aspect we would like to explore in the future is reducing the complexities of calculating the distance between two paragraphs. Acknowledgments This work was supported in part by Intel Corporation, Semiconductor Research Corporation (SRC).
There are six simulated data sets, each initialised with a corpus of size 550 and simulated by generating new documents from Wikipedia extracts and revising existing documents.
be08ef81c3cfaaaf35c7414397a1871611f1a7fd
be08ef81c3cfaaaf35c7414397a1871611f1a7fd_0
Q: Which are the state-of-the-art models? Text: Introduction It is a common habit for people to keep several versions of documents, which creates duplicate data. A scholarly article is normally revised several times before being published. An academic paper may be listed on personal websites, digital conference libraries, Google Scholar, etc. In major corporations, a document typically goes through several revisions involving multiple editors and authors. Users would benefit from visualizing the entire history of a document. It is worthwhile to develop a system that is able to intelligently identify, manage and represent revisions. Given a collection of text documents, our study identifies revision relationships in a completely unsupervised way. For each document in a corpus we only use its content and the last modified timestamp. We assume that a document can be revised by many users, but that the documents are not merged together. We consider collaborative editing as revising documents one by one. The two research problems that are most relevant to document revision detection are plagiarism detection and revision provenance. In a plagiarism detection system, every incoming document is compared with all registered non-plagiarized documents BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . The system returns true if an original copy is found in the database; otherwise, the system returns false and adds the document to the database. Thus, it is a 1-to-n problem. Revision provenance is a 1-to-1 problem as it keeps track of detailed updates of one document BIBREF4 , BIBREF5 . Real-world applications include GitHub, version control in Microsoft Word and Wikipedia version trees BIBREF6 . In contrast, our system solves an n-to-n problem on a large scale. Our potential target data sources, such as the entire web or internal corpora in corporations, contain numerous original documents and their revisions. The aim is to find all revision document pairs within a reasonable time. Document revision detection, plagiarism detection and revision provenance all rely on comparing the content of two documents and assessing a distance/similarity score. The classic document similarity measure, especially for plagiarism detection, is fingerprinting BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Fixed-length fingerprints are created using hash functions to represent document features and are then used to measure document similarities. However, the main purpose of fingerprinting is to reduce computation instead of improving accuracy, and it cannot capture word semantics. Another widely used approach is computing the sentence-to-sentence Levenshtein distance and assigning an overall score for every document pair BIBREF13 . Nevertheless, due to the large number of existing documents, as well as the large number of sentences in each document, the Levenshtein distance is not computation-friendly. Although alternatives such as the vector space model (VSM) can largely reduce the computation time, their effectiveness is low. More importantly, none of the above approaches can capture semantic meanings of words, which heavily limits the performances of these approaches. For instance, from a semantic perspective, “I went to the bank" is expected to be similar to “I withdrew some money" rather than “I went hiking." 
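As a point of reference for the VSM approach discussed here (and used later in the paper as a weak filter), the tf-idf/cosine similarity can be sketched as follows; scikit-learn is used purely for illustration and is not the authors' Java Spark implementation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def vsm_similarity(doc_a, doc_b):
    # Cosine similarity between the tf-idf vectors of two documents.
    # In practice the vectorizer would be fit on the whole corpus, not on a single pair.
    tfidf = TfidfVectorizer().fit_transform([doc_a, doc_b])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])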
Our document distance measures are inspired by the weaknesses of current document distance/similarity measures and recently proposed models for word representations such as word2vec BIBREF14 and Paragraph Vector (PV) BIBREF15 . Replacing words with distributed vector embeddings makes it feasible to measure semantic distances using advanced algorithms, e.g., Dynamic Time Warping (DTW) BIBREF16 , BIBREF17 , BIBREF18 and Tree Edit Distance (TED) BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 . Although calculating text distance using DTW BIBREF27 , TED BIBREF28 or Word Mover's Distance (WMV) BIBREF29 has been attempted in the past, these measures are not ideal for large-scale document distance calculation. The first two algorithms were designed for sentence distances instead of document distances. The third measure computes the distance of two documents by solving a transshipment problem between words in the two documents and uses word2vec embeddings to calculate semantic distances of words. The biggest limitation of WMV is its long computation time. We show in Section SECREF54 that our wDTW and wTED measures yield more precise distance scores with much shorter running time than WMV. We recast the problem of detecting document revisions as a network optimization problem (see Section SECREF2 ) and consequently as a set of document distance problems (see Section SECREF4 ). We use trained word vectors to represent words, concatenate the word vectors to represent documents and combine word2vec with DTW or TED. Meanwhile, in order to guarantee reasonable computation time in large data sets, we calculate document distances at the paragraph level with Apache Spark. A distance score is computed by feeding paragraph representations to DTW or TED. Our code and data are publicly available. The primary contributions of this work are as follows. The rest of this paper is organized in five parts. In Section 2, we clarify related terms and explain the methodology for document revision detection. In Section 3, we provide a brief background on existing document similarity measures and present our wDTW and wTED algorithms as well as the overall process flow. In Section 4, we demonstrate our revision detection results on Wikipedia revision dumps and six simulated data sets. Finally, in Section 5, we summarize some concluding remarks and discuss avenues for future work and improvements. Revision Network The two requirements for a document INLINEFORM0 being a revision of another document INLINEFORM1 are that INLINEFORM2 has been created later than INLINEFORM3 and that the content of INLINEFORM4 is similar to (has been modified from) that of INLINEFORM5 . More specifically, given a corpus INLINEFORM6 , for any two documents INLINEFORM7 , we want to find out the yes/no revision relationship of INLINEFORM8 and INLINEFORM9 , and then output all such revision pairs. We assume that each document has a creation date (the last modified timestamp) which is readily available from the meta data of the document. In this section we also assume that we have a INLINEFORM0 method and a cut-off threshold INLINEFORM1 . We represent a corpus as network INLINEFORM2 , for example Figure FIGREF5 , in which a vertex corresponds to a document. There is an arc INLINEFORM3 if and only if INLINEFORM4 and the creation date of INLINEFORM5 is before the creation date of INLINEFORM6 . In other words, INLINEFORM7 is a revision candidate for INLINEFORM8 . By construction, INLINEFORM9 is acyclic. 
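Because the network is acyclic and a document can be the revision of at most one earlier document, extracting revision pairs reduces to keeping, for each document, the single lowest-cost incoming arc whose score passes the cut-off threshold (the minimum branching discussed above). A minimal sketch with illustrative names and a generic distance function:

def revision_pairs(documents, dist, threshold):
    # documents maps doc_id -> (timestamp, content); dist is any document
    # distance function; keep at most one lowest-cost incoming arc per document.
    best = {}                                    # head id -> (tail id, score)
    for v, (tv, cv) in documents.items():
        for u, (tu, cu) in documents.items():
            if u == v or tu >= tv:               # a revision must be newer than its original
                continue
            score = dist(cu, cv)
            if score > threshold:                # not a revision candidate
                continue
            if v not in best or score < best[v][1]:
                best[v] = (u, score)
    return {v: u for v, (u, _) in best.items()}  # revised document -> original document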
WMD, VSM, PV-DTW, PV-TED
dc57ae854d78aa5d5e8c979826d3e2524d4e9165
dc57ae854d78aa5d5e8c979826d3e2524d4e9165_0
Q: Why is being feature-engineering free an advantage? Text: Introduction The proliferation of social media has provided a locus for use, and thereby collection, of figurative and creative language data, including irony BIBREF0. According to the Merriam-Webster online dictionary, irony refers to “the use of word to express something other than and especially the opposite of the literal meaning." A complex, controversial, and intriguing linguistic phenomenon, irony has been studied in disciplines such as linguistics, philosophy, and rhetoric. Irony detection also has implications for several NLP tasks such as sentiment analysis, hate speech detection, fake news detection, etc BIBREF0. Hence, automatic irony detection can potentially improve systems designed for each of these tasks. In this paper, we focus on learning irony. More specifically, we report our work submitted to the FIRE 2019 Arabic irony detection task (IDAT@FIRE2019). We focus our energy on an important angle of the problem–the small size of training data. Deep learning is the most successful under supervised conditions with large amounts of training data (tens-to-hundreds of thousands of examples). For most real-world tasks, we hard to obtain labeled data. Hence, it is highly desirable to eliminate, or at least reduce, dependence on supervision. In NLP, pre-training language models on unlabeled data has emerged as a successful approach for improving model performance. In particular, the pre-trained multilingual Bidirectional Encoder Representations from Transformers (BERT) BIBREF1 was introduced to learn language regularities from unlabeled data. Multi-task learning (MTL) is another approach that helps achieve inductive transfer between various tasks. More specifically, MTL leverages information from one or more source tasks to improve a target task BIBREF2, BIBREF3. In this work, we introduce Transformer representations (BERT) in an MTL setting to address the data bottleneck in IDAT@FIRE2019. To show the utility of BERT, we compare to a simpler model with gated recurrent units (GRU) in a single task setting. To identify the utility, or lack thereof, of MTL BERT, we compare to a single task BERT model. For MTL BERT, we train on a number of tasks simultaneously. Tasks we train on are sentiment analysis, gender detection, age detection, dialect identification, and emotion detection. Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). This introduces a language variety mismatch due to the irony data involving a number of dialects that come from the Twitter domain. To mitigate this issue, we further pre-train BERT on an in-house dialectal Twitter dataset, showing the utility of this measure. To summarize, we make the following contributions: In the context of the Arabic irony task, we show how a small-sized labeled data setting can be mitigated by training models in a multi-task learning setup. We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data). Methods Methods ::: GRU For our baseline, we use gated recurrent units (GRU) BIBREF4, a simplification of long-short term memory (LSTM) BIBREF5, which in turn is a variation of recurrent neural networks (RNNs). 
A GRU learns based on the following: $\textbf {\textit {h}}^{(t)} = (1-\textbf {\textit {z}}^{(t)}) \odot \textbf {\textit {h}}^{(t-1)} + \textbf {\textit {z}}^{(t)} \odot \tilde{\textbf {\textit {h}}}^{(t)}$, where the update state $\textbf {\textit {z}}^{(t)}$ decides how much the unit updates its content: $\textbf {\textit {z}}^{(t)} = \sigma (\textbf {W}_{z}\textbf {\textit {x}}^{(t)} + \textbf {U}_{z}\textbf {\textit {h}}^{(t-1)})$, where W and U are weight matrices. The candidate activation makes use of a reset gate $\textbf {\textit {r}}^{(t)}$: $\tilde{\textbf {\textit {h}}}^{(t)} = \tanh (\textbf {W}\textbf {\textit {x}}^{(t)} + \textbf {U}(\textbf {\textit {r}}^{(t)} \odot \textbf {\textit {h}}^{(t-1)}))$, where $\odot $ is a Hadamard product (element-wise multiplication). When its value is close to zero, the reset gate allows the unit to forget the previously computed state. The reset gate $\textbf {\textit {r}}^{(t)}$ is computed as follows: $\textbf {\textit {r}}^{(t)} = \sigma (\textbf {W}_{r}\textbf {\textit {x}}^{(t)} + \textbf {U}_{r}\textbf {\textit {h}}^{(t-1)})$. Methods ::: BERT BERT BIBREF1 is based on the Transformer BIBREF6, a network architecture that depends solely on encoder-decoder attention. The Transformer attention employs a function operating on queries, keys, and values. This attention function maps a query and a set of key-value pairs to an output, where the output is a weighted sum of the values. The encoder of the Transformer in BIBREF6 has 6 attention layers, each of which is composed of two sub-layers: (1) multi-head attention, where queries, keys, and values are projected h times into linear, learned projections and ultimately concatenated; and (2) a fully-connected feed-forward network (FFN) that is applied to each position separately and identically. The decoder of the Transformer also employs 6 identical layers, yet with an extra sub-layer that performs multi-head attention over the encoder stack. The architecture of BERT BIBREF1 is a multi-layer bidirectional Transformer encoder BIBREF6. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next-sentence prediction task that captures context (i.e., sentence relationships). More information about BERT can be found in BIBREF1. Methods ::: Multi-task Learning In multi-task learning (MTL), a learner uses a number of (usually relevant) tasks to improve performance on a target task BIBREF2, BIBREF3. The MTL setup enables the learner to use cues from various tasks to improve the performance on the target task. MTL also usually helps regularize the model, since the learner needs to find representations that are not specific to a single task, but rather more general. Supervised learning with deep neural networks requires large amounts of labeled data, which is not always available. By employing data from additional tasks, MTL thus practically augments training data to alleviate the need for large labeled data. Many researchers achieve state-of-the-art results by employing MTL in supervised learning settings BIBREF7, BIBREF8. In particular, BERT has been successfully used with MTL. Hence, we employ multi-task BERT (following BIBREF8). For our training, we use the same pre-trained BERT-Base Multilingual Cased model as the initial checkpoint. For this MTL pre-training of BERT, we use the same aforementioned single-task BERT parameters. We now describe our data. Data The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e. targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine. IDAT@FIRE2019 is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}.
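As a concrete illustration of the multi-task setup just described, the following is a minimal sketch of a shared multilingual BERT encoder with one classification head per task, assuming the HuggingFace transformers and PyTorch libraries. The task names and label counts in the comment follow the task descriptions below; the training step is illustrative and not necessarily the authors' exact procedure.

import torch.nn as nn
from transformers import BertModel

class MultiTaskBert(nn.Module):
    # Shared BERT-Base Multilingual Cased encoder with one linear head per task,
    # e.g. tasks = {"irony": 2, "gender": 2, "age": 3, "variety": 15,
    #               "emotion": 8, "sentiment": 2}.
    def __init__(self, tasks):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-multilingual-cased")
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict({name: nn.Linear(hidden, n) for name, n in tasks.items()})

    def forward(self, task, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        return self.heads[task](out.pooler_output)   # logits for the requested task

def training_step(model, optimizer, task, input_ids, attention_mask, labels):
    # Alternate mini-batches drawn from the different tasks; each batch updates
    # the shared encoder and only the head of the task it belongs to.
    model.train()
    logits = model(task, input_ids, attention_mask)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()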
A total of 4,024 tweets were released by organizers as training data. In addition, 1,006 tweets were used by organizers as as test data. Test labels were not release; and teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training data into 90% TRAIN ($n$=3,621 tweets; `ironic'=1,882 and `non-ironic'=1,739) and 10% DEV ($n$=403 tweets; `ironic'=209 and `non-ironic'=194). We train our models on TRAIN, and evaluate on DEV. Our multi-task BERT models involve six different Arabic classification tasks. We briefly introduce the data for these tasks here: Author profiling and deception detection in Arabic (APDA). BIBREF9 . From APDA, we only use the corpus of author profiling (which includes the three profiling tasks of age, gender, and variety). The organizers of APDA provide 225,000 tweets as training data. Each tweet is labelled with three tags (one for each task). To develop our models, we split the training data into 90% training set ($n$=202,500 tweets) and 10% development set ($n$=22,500 tweets). With regard to age, authors consider tweets of three classes: {Under 25, Between 25 and 34, and Above 35}. For the Arabic varieties, they consider the following fifteen classes: {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. Gender is labeled as a binary task with {male,female} tags. LAMA+DINA Emotion detection. Alhuzali et al. BIBREF10 introduce LAMA, a dataset for Arabic emotion detection. They use a first-person seed phrase approach and extend work by Abdul-Mageed et al. BIBREF11 for emotion data collection from 6 to 8 emotion categories (i.e. anger, anticipation, disgust, fear, joy, sadness, surprise and trust). We use the combined LAMA+DINA corpus. It is split by the authors as 189,902 tweets training set, 910 as development, and 941 as test. In our experiment, we use only the training set for out MTL experiments. Sentiment analysis in Arabic tweets. This dataset is a shared task on Kaggle by Motaz Saad . The corpus contains 58,751 Arabic tweets (46,940 training, and 11,811 test). The tweets are annotated with positive and negative labels based on an emoji lexicon. Models ::: GRU We train a baseline GRU network with our iorny TRAIN data. This network has only one layer unidirectional GRU, with 500 unites and a linear, output layer. The input word tokens are embedded by the trainable word vectors which are initialized with a standard normal distribution, with $\mu =0$, and $\sigma =1$, i.e., $W \sim N(0,1)$. We use Adam BIBREF12 with a fixed learning rate of $1e-3$ for optimization. For regularization, we use dropout BIBREF13 with a rate of 0.5 on the hidden layer. We set the maximum sequence sequence in our GRU model to 50 words, and use all 22,000 words of training set as vocabulary. We employ batch training with a batch size of 64 for this model. We run the network for 20 epochs and save the model at the end of each epoch, choosing the model that performs highest on DEV as our best model. We report our best result on DEV in Table TABREF22. Our best result is acquired with 12 epochs. As Table TABREF22 shows, the baseline obtains $accuracy=73.70\%$ and $F_1=73.47$. Models ::: Single-Task BERT We use the BERT-Base Multilingual Cased model released by the authors BIBREF1 . The model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units each, 12 attention heads. The entire model has 110M parameters. 
The model has 119,547 shared WordPieces vocabulary, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 20 epochs. For single-task learning, we fine-tune BERT on the training set (i.e., TRAIN) of the irony task exclusively. We refer to this model as BERT-ST, ST standing for `single task.' As Table TABREF22 shows, BERT-ST unsurprisingly acquires better performance than the baseline GRU model. On accuracy, BERT-ST is 7.94% better than the baseline. BERT-ST obtains 81.62 $F_1$ which is 7.35 better than the baseline. Models ::: Multi-Task BERT We follow the work of Liu et al. BIBREF8 for training an MTL BERT in that we fine-tune the afore-mentioned BERT-Base Multilingual Cased model with different tasks jointly. First, we fine-tune with the three tasks of author profiling and the irony task simultaneously. We refer to this model trained on the 4 tasks simply as BERT-MT4. BERT-MT5 refers to the model fine-tuned on the 3 author profiling tasks, the emotion task, and the irony task. We also refer to the model fine-tuned on all six tasks (adding the sentiment task mentioned earlier) as BERT-MT6. For MTL BERT, we use the same parameters as the single task BERT listed in the previous sub-section (i.e., Single-Task BERT). In Table TABREF22, we present the performance on the DEV set of only the irony detection task. We note that all the results of multitask learning with BERT are better than those with the single task BERT. The model trained on all six tasks obtains the best result, which is 2.23% accuracy and 2.25% $F_1$ higher than the single task BERT model. Models ::: In-Domain Pre-Training Our irony data involves dialects such as Egyptian, Gulf, and Levantine, as we explained earlier. The BERT-Base Multilingual Cased model we used, however, was trained on Arabic Wikipedia, which is mostly MSA. We believe this dialect mismatch is sub-optimal. As Sun et al. BIBREF14 show, further pre-training with domain specific data can improve performance of a learner. Viewing dialects as constituting different domains, we turn to dialectal data to further pre-train BERT. Namely, we use 1M tweets randomly sampled from an in-house Twitter dataset to resume pre-training BERT before we fine-tune on the irony data. We use BERT-Base Multilingual Cased model as an initial checkpoint and pre-train on this 1M dataset with a learning rate of $2e-5$, for 10 epochs. Then, we fine-tune on MT5 (and then on MT6) with the new further-pre-trained BERT model. We refer to the new models as BERT-1M-MT5 and BERT-1M-MT6, respectively. As Table TABREF22 shows, BERT-1M-MT5 performs best: BERT-1M-MT5 obtains 84.37% accuracy (0.5% less than BERT-MT6) and 83.34 $F_1$ (0.47% less than BERT-MT6). Models ::: IDAT@FIRE2019 Submission For the shared task submission, we use the predictions of BERT-1M-MT5 as our first submitted system. Then, we concatenate our DEV and TRAIN data to compose a new training set (thus using all the training data released by organizers) to re-train BERT-1M-MT5 and BERT-MT6 with the same parameters. We use the predictions of these two models as our second and third submissions. Our second submission obtains 82.4 $F_1$ on the official test set, and ranks $4th$ on this shared task. Related Work Multi-Task Learning. MTL has been effectively used to model several NLP problems. 
These include, for example, syntactic parsing BIBREF15, sequence labeling BIBREF16, BIBREF17, and text classification BIBREF18. Irony in different languages. Irony detection has been investigated in various languages. For example, Hee et al. BIBREF19 propose two irony detection tasks in English tweets. Task A is a binary classification task (irony vs. non-irony), and Task B is multi-class identification of a specific type of irony from the set {verbal, situational, other-irony, non-ironic}. They use hashtags to automatically collect tweets that they manually annotate using a fine-grained annotation scheme. Participants in this competition construct models based on logistic regression and support vector machine (SVM) BIBREF20, XGBoost BIBREF21, convolutional neural networks (CNNs) BIBREF21, long short-term memory networks (LSTMs) BIBREF22, etc. For the Italian language, Cignarella et al. propose the IronTA shared task BIBREF23, and the best system BIBREF24 is a combination of bi-directional LSTMs, word $n$-grams, and affective lexicons. For Spanish, Ortega-Bueno1 et al. BIBREF25 introduce the IroSvA shared task, a binary classification task for tweets and news comments. The best-performing model on the task, BIBREF26, employs pre-trained Word2Vec, multi-head Transformer encoder and a global average pooling mechanism. Irony in Arabic. Although Arabic is a widely spoken collection of languages ($\sim $ 300 million native speakers) BIBREF27, BIBREF28, there has not been works on irony that we know of on the language. IDAT@FIRE2019 aims at bridging this gap. The closest works in Arabic are those focusing on other text classification tasks such as sentiment analysis BIBREF29, BIBREF30, BIBREF31, BIBREF32, emotion BIBREF10, and dialect identification BIBREF28, BIBREF33, BIBREF34, BIBREF35. Conclusion In this paper, we described our submissions to the Irony Detection in Arabic shared task (IDAT@FIRE2019). We presented how we acquire effective models using pre-trained BERT in a multi-task learning setting. We also showed the utility of viewing different varieties of Arabic as different domains by reporting better performance with models pre-trained with dialectal data rather than exclusively on MSA. Our multi-task model with domain-specific BERT ranks $4th$ in the official IDAT@FIRE2019 evaluation. The model has the advantage of being exclusively based on deep learning. In the future, we will investigate other multi-task learning architectures, and extend our work with semi-supervised methods. Acknowledgement We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca).
Unanswerable
18412237f7faafc6befe975d5bcd348e2b499b55
18412237f7faafc6befe975d5bcd348e2b499b55_0
Q: Where did this model place in the final evaluation of the shared task? Text: Introduction The proliferation of social media has provided a locus for use, and thereby collection, of figurative and creative language data, including irony BIBREF0. According to the Merriam-Webster online dictionary, irony refers to “the use of word to express something other than and especially the opposite of the literal meaning." A complex, controversial, and intriguing linguistic phenomenon, irony has been studied in disciplines such as linguistics, philosophy, and rhetoric. Irony detection also has implications for several NLP tasks such as sentiment analysis, hate speech detection, fake news detection, etc BIBREF0. Hence, automatic irony detection can potentially improve systems designed for each of these tasks. In this paper, we focus on learning irony. More specifically, we report our work submitted to the FIRE 2019 Arabic irony detection task (IDAT@FIRE2019). We focus our energy on an important angle of the problem–the small size of training data. Deep learning is the most successful under supervised conditions with large amounts of training data (tens-to-hundreds of thousands of examples). For most real-world tasks, we hard to obtain labeled data. Hence, it is highly desirable to eliminate, or at least reduce, dependence on supervision. In NLP, pre-training language models on unlabeled data has emerged as a successful approach for improving model performance. In particular, the pre-trained multilingual Bidirectional Encoder Representations from Transformers (BERT) BIBREF1 was introduced to learn language regularities from unlabeled data. Multi-task learning (MTL) is another approach that helps achieve inductive transfer between various tasks. More specifically, MTL leverages information from one or more source tasks to improve a target task BIBREF2, BIBREF3. In this work, we introduce Transformer representations (BERT) in an MTL setting to address the data bottleneck in IDAT@FIRE2019. To show the utility of BERT, we compare to a simpler model with gated recurrent units (GRU) in a single task setting. To identify the utility, or lack thereof, of MTL BERT, we compare to a single task BERT model. For MTL BERT, we train on a number of tasks simultaneously. Tasks we train on are sentiment analysis, gender detection, age detection, dialect identification, and emotion detection. Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). This introduces a language variety mismatch due to the irony data involving a number of dialects that come from the Twitter domain. To mitigate this issue, we further pre-train BERT on an in-house dialectal Twitter dataset, showing the utility of this measure. To summarize, we make the following contributions: In the context of the Arabic irony task, we show how a small-sized labeled data setting can be mitigated by training models in a multi-task learning setup. We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data). Methods Methods ::: GRU For our baseline, we use gated recurrent units (GRU) BIBREF4, a simplification of long-short term memory (LSTM) BIBREF5, which in turn is a variation of recurrent neural networks (RNNs). 
A GRU learns based on the following: $\textbf {\textit {h}}^{(t)} = (1-\textbf {\textit {z}}^{(t)}) \odot \textbf {\textit {h}}^{(t-1)} + \textbf {\textit {z}}^{(t)} \odot \tilde{\textbf {\textit {h}}}^{(t)}$, where the update state $\textbf {\textit {z}}^{(t)}$ decides how much the unit updates its content: $\textbf {\textit {z}}^{(t)} = \sigma (W_{z}\textbf {\textit {x}}^{(t)} + U_{z}\textbf {\textit {h}}^{(t-1)})$, where W and U are weight matrices. The candidate activation makes use of a reset gate $\textbf {\textit {r}}^{(t)}$: $\tilde{\textbf {\textit {h}}}^{(t)} = \tanh (W\textbf {\textit {x}}^{(t)} + U(\textbf {\textit {r}}^{(t)} \odot \textbf {\textit {h}}^{(t-1)}))$, where $\odot $ is a Hadamard product (element-wise multiplication). When its value is close to zero, the reset gate allows the unit to forget the previously computed state. The reset gate $\textbf {\textit {r}}^{(t)}$ is computed as follows: $\textbf {\textit {r}}^{(t)} = \sigma (W_{r}\textbf {\textit {x}}^{(t)} + U_{r}\textbf {\textit {h}}^{(t-1)})$. Methods ::: BERT BERT BIBREF1 is based on the Transformer BIBREF6, a network architecture that depends solely on attention mechanisms. The Transformer attention employs a function operating on queries, keys, and values. This attention function maps a query and a set of key-value pairs to an output, where the output is a weighted sum of the values. The encoder of the Transformer in BIBREF6 has 6 attention layers, each of which is composed of two sub-layers: (1) multi-head attention, where queries, keys, and values are projected h times into linear, learned projections and ultimately concatenated; and (2) a fully-connected feed-forward network (FFN) that is applied to each position separately and identically. The decoder of the Transformer also employs 6 identical layers, yet with an extra sub-layer that performs multi-head attention over the encoder stack. The architecture of BERT BIBREF1 is a multi-layer bidirectional Transformer encoder BIBREF6. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next sentence prediction task that captures context (i.e., sentence relationships). More information about BERT can be found in BIBREF1. Methods ::: Multi-task Learning In multi-task learning (MTL), a learner uses a number of (usually relevant) tasks to improve performance on a target task BIBREF2, BIBREF3. The MTL setup enables the learner to use cues from various tasks to improve the performance on the target task. MTL also usually helps regularize the model, since the learner needs to find representations that are not specific to a single task, but rather more general. Supervised learning with deep neural networks requires large amounts of labeled data, which is not always available. By employing data from additional tasks, MTL thus practically augments training data to alleviate the need for large labeled data. Many researchers achieve state-of-the-art results by employing MTL in supervised learning settings BIBREF7, BIBREF8. In particular, BERT has been successfully used with MTL. Hence, we employ multi-task BERT (following BIBREF8). For our training, we use the same pre-trained BERT-Base Multilingual Cased model as the initial checkpoint. For this MTL training of BERT, we use the same aforementioned single-task BERT parameters. We now describe our data. Data The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e., targeted political figures or events), and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony"). Duplicates, retweets, and non-intelligible tweets are removed by the organizers. Tweets involve both MSA and dialects at various degrees of granularity, such as Egyptian, Gulf, and Levantine. IDAT@FIRE2019 is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}.
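To make the GRU update described at the start of the Methods section concrete, below is a minimal NumPy sketch of a single GRU step. It illustrates the standard formulation only, not the authors' implementation; the variable names, matrix shapes, and the omission of bias terms (the text only mentions the W and U matrices) are assumptions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_t, h_prev, p):
    """One GRU step following the update/reset-gate equations in the text.

    x_t:    input vector at time t, shape (d_in,)
    h_prev: previous hidden state, shape (d_h,)
    p:      dict of weight matrices; W_* have shape (d_h, d_in),
            U_* have shape (d_h, d_h). Biases are omitted, as in the text.
    """
    # Update gate z: how much of the new candidate to mix into the state.
    z = sigmoid(p["W_z"] @ x_t + p["U_z"] @ h_prev)
    # Reset gate r: how much of the previous state the candidate can see.
    r = sigmoid(p["W_r"] @ x_t + p["U_r"] @ h_prev)
    # Candidate activation, with the reset gate applied element-wise.
    h_tilde = np.tanh(p["W"] @ x_t + p["U"] @ (r * h_prev))
    # Linear interpolation between the old state and the candidate.
    return (1.0 - z) * h_prev + z * h_tilde

# Toy usage with random weights (hypothetical sizes).
rng = np.random.default_rng(0)
d_in, d_h = 8, 4
p = {"W_z": rng.normal(size=(d_h, d_in)), "U_z": rng.normal(size=(d_h, d_h)),
     "W_r": rng.normal(size=(d_h, d_in)), "U_r": rng.normal(size=(d_h, d_h)),
     "W":   rng.normal(size=(d_h, d_in)), "U":   rng.normal(size=(d_h, d_h))}
h = gru_step(rng.normal(size=d_in), np.zeros(d_h), p)
```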
A total of 4,024 tweets were released by the organizers as training data. In addition, 1,006 tweets were used by the organizers as test data. Test labels were not released, and teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training tweets into 90% TRAIN ($n$=3,621 tweets; `ironic'=1,882 and `non-ironic'=1,739) and 10% DEV ($n$=403 tweets; `ironic'=209 and `non-ironic'=194). We train our models on TRAIN, and evaluate on DEV. Our multi-task BERT models involve six different Arabic classification tasks. We briefly introduce the data for these tasks here: Author profiling and deception detection in Arabic (APDA) BIBREF9. From APDA, we only use the corpus of author profiling (which includes the three profiling tasks of age, gender, and variety). The organizers of APDA provide 225,000 tweets as training data. Each tweet is labelled with three tags (one for each task). To develop our models, we split the training data into a 90% training set ($n$=202,500 tweets) and a 10% development set ($n$=22,500 tweets). With regard to age, the authors consider tweets of three classes: {Under 25, Between 25 and 34, and Above 35}. For the Arabic varieties, they consider the following fifteen classes: {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. Gender is labeled as a binary task with {male, female} tags. LAMA+DINA Emotion detection. Alhuzali et al. BIBREF10 introduce LAMA, a dataset for Arabic emotion detection. They use a first-person seed phrase approach and extend work by Abdul-Mageed et al. BIBREF11 for emotion data collection from 6 to 8 emotion categories (i.e., anger, anticipation, disgust, fear, joy, sadness, surprise, and trust). We use the combined LAMA+DINA corpus. It is split by the authors into a 189,902-tweet training set, 910 tweets as development, and 941 as test. We use only the training set for our MTL experiments. Sentiment analysis in Arabic tweets. This dataset is a shared task on Kaggle by Motaz Saad. The corpus contains 58,751 Arabic tweets (46,940 training, and 11,811 test). The tweets are annotated with positive and negative labels based on an emoji lexicon. Models ::: GRU We train a baseline GRU network with our irony TRAIN data. This network has a single unidirectional GRU layer, with 500 units, and a linear output layer. The input word tokens are embedded by trainable word vectors which are initialized with a standard normal distribution, with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use Adam BIBREF12 with a fixed learning rate of $1e-3$ for optimization. For regularization, we use dropout BIBREF13 with a rate of 0.5 on the hidden layer. We set the maximum sequence length in our GRU model to 50 words, and use all 22,000 words of the training set as the vocabulary. We employ batch training with a batch size of 64 for this model. We run the network for 20 epochs and save the model at the end of each epoch, choosing the model that performs best on DEV as our best model. We report our best result on DEV in Table TABREF22. Our best result is acquired with 12 epochs. As Table TABREF22 shows, the baseline obtains $accuracy=73.70\%$ and $F_1=73.47$. Models ::: Single-Task BERT We use the BERT-Base Multilingual Cased model released by the authors BIBREF1. The model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units per layer, and 12 attention heads. The entire model has 110M parameters.
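For illustration, the following PyTorch sketch mirrors the GRU baseline configuration described above (one unidirectional GRU layer with 500 units, trainable embeddings initialized from N(0,1), dropout of 0.5, a linear output layer, and Adam with a learning rate of 1e-3). It is a reconstruction under assumptions, not the authors' code; in particular, the embedding dimension is not given in the text and is set arbitrarily here.

```python
import torch
import torch.nn as nn

class GRUBaseline(nn.Module):
    """One-layer unidirectional GRU classifier, as described in the text."""

    def __init__(self, vocab_size=22000, emb_dim=300, hidden=500, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Trainable embeddings initialized from N(0, 1), per the description.
        nn.init.normal_(self.embedding.weight, mean=0.0, std=1.0)
        self.gru = nn.GRU(emb_dim, hidden, num_layers=1, batch_first=True)
        self.dropout = nn.Dropout(0.5)   # dropout on the hidden layer
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):        # token_ids: (batch, max_len <= 50)
        emb = self.embedding(token_ids)
        _, h_n = self.gru(emb)           # h_n: (1, batch, hidden)
        return self.out(self.dropout(h_n[-1]))

model = GRUBaseline()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```

Training would then loop over mini-batches of 64 tweets for 20 epochs, keeping the checkpoint that scores best on DEV, as described in the text.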
The model has a shared WordPiece vocabulary of 119,547 tokens, and was pre-trained on the entire Wikipedia of each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 20 epochs. For single-task learning, we fine-tune BERT on the training set (i.e., TRAIN) of the irony task exclusively. We refer to this model as BERT-ST, ST standing for `single task.' As Table TABREF22 shows, BERT-ST unsurprisingly acquires better performance than the baseline GRU model. On accuracy, BERT-ST is 7.94% better than the baseline. BERT-ST obtains 81.62 $F_1$, which is 7.35 points better than the baseline. Models ::: Multi-Task BERT We follow the work of Liu et al. BIBREF8 for training an MTL BERT, in that we fine-tune the aforementioned BERT-Base Multilingual Cased model on different tasks jointly. First, we fine-tune with the three tasks of author profiling and the irony task simultaneously. We refer to this model trained on the 4 tasks simply as BERT-MT4. BERT-MT5 refers to the model fine-tuned on the 3 author profiling tasks, the emotion task, and the irony task. We also refer to the model fine-tuned on all six tasks (adding the sentiment task mentioned earlier) as BERT-MT6. For MTL BERT, we use the same parameters as the single-task BERT listed in the previous sub-section (i.e., Single-Task BERT). In Table TABREF22, we present the performance on the DEV set of only the irony detection task. We note that all the results of multi-task learning with BERT are better than those with the single-task BERT. The model trained on all six tasks obtains the best result, which is 2.23% accuracy and 2.25% $F_1$ higher than the single-task BERT model. Models ::: In-Domain Pre-Training Our irony data involves dialects such as Egyptian, Gulf, and Levantine, as we explained earlier. The BERT-Base Multilingual Cased model we used, however, was trained on Arabic Wikipedia, which is mostly MSA. We believe this dialect mismatch is sub-optimal. As Sun et al. BIBREF14 show, further pre-training with domain-specific data can improve the performance of a learner. Viewing dialects as constituting different domains, we turn to dialectal data to further pre-train BERT. Namely, we use 1M tweets randomly sampled from an in-house Twitter dataset to resume pre-training BERT before we fine-tune on the irony data. We use the BERT-Base Multilingual Cased model as an initial checkpoint and pre-train on this 1M dataset with a learning rate of $2e-5$, for 10 epochs. Then, we fine-tune on MT5 (and then on MT6) with the new further-pre-trained BERT model. We refer to the new models as BERT-1M-MT5 and BERT-1M-MT6, respectively. As Table TABREF22 shows, BERT-1M-MT5 performs best: BERT-1M-MT5 obtains 84.37% accuracy (0.5% less than BERT-MT6) and 83.34 $F_1$ (0.47% less than BERT-MT6). Models ::: IDAT@FIRE2019 Submission For the shared task submission, we use the predictions of BERT-1M-MT5 as our first submitted system. Then, we concatenate our DEV and TRAIN data to compose a new training set (thus using all the training data released by the organizers) to re-train BERT-1M-MT5 and BERT-MT6 with the same parameters. We use the predictions of these two models as our second and third submissions. Our second submission obtains 82.4 $F_1$ on the official test set, and ranks $4th$ on this shared task. Related Work Multi-Task Learning. MTL has been effectively used to model several NLP problems.
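The multi-task models described above share one BERT encoder across tasks with a separate classification layer per task, following the spirit of Liu et al. The sketch below shows one way this could be wired up with the Hugging Face transformers library; the library choice, the use of the [CLS] vector, the head design, and the single illustrative training step are assumptions rather than the paper's actual implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

TASKS = {  # task name -> number of classes, taken from the task descriptions
    "irony": 2, "age": 3, "gender": 2, "variety": 15, "emotion": 8, "sentiment": 2,
}

class MultiTaskBert(nn.Module):
    """Shared multilingual BERT encoder with one linear head per task."""

    def __init__(self, model_name="bert-base-multilingual-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in TASKS.items()}
        )

    def forward(self, task, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]      # [CLS] representation
        return self.heads[task](cls)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = MultiTaskBert()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = nn.CrossEntropyLoss()

# One hypothetical training step on an irony mini-batch.
batch = tokenizer(["مثال للتغريدة"], truncation=True, max_length=50,
                  padding="max_length", return_tensors="pt")
logits = model("irony", batch["input_ids"], batch["attention_mask"])
loss = loss_fn(logits, torch.tensor([1]))
loss.backward()
optimizer.step()
optimizer.zero_grad()
```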
These include, for example, syntactic parsing BIBREF15, sequence labeling BIBREF16, BIBREF17, and text classification BIBREF18. Irony in different languages. Irony detection has been investigated in various languages. For example, Hee et al. BIBREF19 propose two irony detection tasks in English tweets. Task A is a binary classification task (irony vs. non-irony), and Task B is multi-class identification of a specific type of irony from the set {verbal, situational, other-irony, non-ironic}. They use hashtags to automatically collect tweets that they manually annotate using a fine-grained annotation scheme. Participants in this competition construct models based on logistic regression and support vector machines (SVM) BIBREF20, XGBoost BIBREF21, convolutional neural networks (CNNs) BIBREF21, long short-term memory networks (LSTMs) BIBREF22, etc. For the Italian language, Cignarella et al. propose the IronITA shared task BIBREF23, and the best system BIBREF24 is a combination of bi-directional LSTMs, word $n$-grams, and affective lexicons. For Spanish, Ortega-Bueno et al. BIBREF25 introduce the IroSvA shared task, a binary classification task for tweets and news comments. The best-performing model on the task, BIBREF26, employs pre-trained Word2Vec embeddings, a multi-head Transformer encoder, and a global average pooling mechanism. Irony in Arabic. Although Arabic is a widely spoken collection of languages ($\sim $ 300 million native speakers) BIBREF27, BIBREF28, there has been no work on irony, that we know of, for the language. IDAT@FIRE2019 aims at bridging this gap. The closest works in Arabic are those focusing on other text classification tasks such as sentiment analysis BIBREF29, BIBREF30, BIBREF31, BIBREF32, emotion BIBREF10, and dialect identification BIBREF28, BIBREF33, BIBREF34, BIBREF35. Conclusion In this paper, we described our submissions to the Irony Detection in Arabic shared task (IDAT@FIRE2019). We presented how we acquire effective models using pre-trained BERT in a multi-task learning setting. We also showed the utility of viewing different varieties of Arabic as different domains by reporting better performance with models pre-trained on dialectal data rather than exclusively on MSA. Our multi-task model with domain-specific BERT ranks $4th$ in the official IDAT@FIRE2019 evaluation. The model has the advantage of being exclusively based on deep learning. In the future, we will investigate other multi-task learning architectures, and extend our work with semi-supervised methods. Acknowledgement We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences and Humanities Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca).
$4th$
02945c85d6cc4cdd1757b2f2bfa5e92ee4ed14a0
02945c85d6cc4cdd1757b2f2bfa5e92ee4ed14a0_0
Q: What in-domain data is used to continue pre-training? Text: Introduction The proliferation of social media has provided a locus for use, and thereby collection, of figurative and creative language data, including irony BIBREF0. According to the Merriam-Webster online dictionary, irony refers to “the use of word to express something other than and especially the opposite of the literal meaning." A complex, controversial, and intriguing linguistic phenomenon, irony has been studied in disciplines such as linguistics, philosophy, and rhetoric. Irony detection also has implications for several NLP tasks such as sentiment analysis, hate speech detection, fake news detection, etc BIBREF0. Hence, automatic irony detection can potentially improve systems designed for each of these tasks. In this paper, we focus on learning irony. More specifically, we report our work submitted to the FIRE 2019 Arabic irony detection task (IDAT@FIRE2019). We focus our energy on an important angle of the problem–the small size of training data. Deep learning is the most successful under supervised conditions with large amounts of training data (tens-to-hundreds of thousands of examples). For most real-world tasks, we hard to obtain labeled data. Hence, it is highly desirable to eliminate, or at least reduce, dependence on supervision. In NLP, pre-training language models on unlabeled data has emerged as a successful approach for improving model performance. In particular, the pre-trained multilingual Bidirectional Encoder Representations from Transformers (BERT) BIBREF1 was introduced to learn language regularities from unlabeled data. Multi-task learning (MTL) is another approach that helps achieve inductive transfer between various tasks. More specifically, MTL leverages information from one or more source tasks to improve a target task BIBREF2, BIBREF3. In this work, we introduce Transformer representations (BERT) in an MTL setting to address the data bottleneck in IDAT@FIRE2019. To show the utility of BERT, we compare to a simpler model with gated recurrent units (GRU) in a single task setting. To identify the utility, or lack thereof, of MTL BERT, we compare to a single task BERT model. For MTL BERT, we train on a number of tasks simultaneously. Tasks we train on are sentiment analysis, gender detection, age detection, dialect identification, and emotion detection. Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). This introduces a language variety mismatch due to the irony data involving a number of dialects that come from the Twitter domain. To mitigate this issue, we further pre-train BERT on an in-house dialectal Twitter dataset, showing the utility of this measure. To summarize, we make the following contributions: In the context of the Arabic irony task, we show how a small-sized labeled data setting can be mitigated by training models in a multi-task learning setup. We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data). Methods Methods ::: GRU For our baseline, we use gated recurrent units (GRU) BIBREF4, a simplification of long-short term memory (LSTM) BIBREF5, which in turn is a variation of recurrent neural networks (RNNs). 
A GRU learns based on the following: where the update state $\textbf {\textit {z}}^{(t)}$ decides how much the unit updates its content: where W and U are weight matrices. The candidate activation makes use of a reset gate $\textbf {\textit {r}}^{(t)}$: where $\odot $ is a Hadamard product (element-wise multiplication). When its value is close to zero, the reset gate allows the unit to forget the previously computed state. The reset gate $\textbf {\textit {r}}^{(t)}$ is computed as follows: Methods ::: BERT BERT BIBREF1 is based on the Transformer BIBREF6, a network architecture that depends solely on encoder-decoder attention. The Transformer attention employs a function operating on queries, keys, and values. This attention function maps a query and a set of key-value pairs to an output, where the output is a weighted sum of the values. Encoder of the Transformer in BIBREF6 has 6 attention layers, each of which is composed of two sub-layers: (1) multi-head attention where queries, keys, and values are projected h times into linear, learned projections and ultimately concatenated; and (2) fully-connected feed-forward network (FFN) that is applied to each position separately and identically. Decoder of the Transformer also employs 6 identical layers, yet with an extra sub-layer that performs multi-head attention over the encoder stack. The architecture of BERT BIBREF1 is a multi-layer bidirectional Transformer encoder BIBREF6. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next sentence prediction task captures context (i.e., sentence relationships). More information about BERT can be found in BIBREF1. Methods ::: Multi-task Learning In multi-task learning (MTL), a learner uses a number of (usually relevant) tasks to improve performance on a target task BIBREF2, BIBREF3. The MTL setup enables the learner to use cues from various tasks to improve the performance on the target task. MTL also usually helps regularize the model since the learner needs to find representations that are not specific to a single task, but rather more general. Supervised learning with deep neural networks requires large amounts of labeled data, which is not always available. By employing data from additional tasks, MTL thus practically augments training data to alleviate need for large labeled data. Many researchers achieve state-of-the-art results by employing MTL in supervised learning settings BIBREF7, BIBREF8. In specific, BERT was successfully used with MTL. Hence, we employ multi-task BERT (following BIBREF8). For our training, we use the same pre-trained BERT-Base Multilingual Cased model as the initial checkpoint. For this MTL pre-training of BERT, we use the same afore-mentioned single-task BERT parameters. We now describe our data. Data The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e. targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine. IDAT@FIRE2019 is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}. 
A total of 4,024 tweets were released by organizers as training data. In addition, 1,006 tweets were used by organizers as as test data. Test labels were not release; and teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training data into 90% TRAIN ($n$=3,621 tweets; `ironic'=1,882 and `non-ironic'=1,739) and 10% DEV ($n$=403 tweets; `ironic'=209 and `non-ironic'=194). We train our models on TRAIN, and evaluate on DEV. Our multi-task BERT models involve six different Arabic classification tasks. We briefly introduce the data for these tasks here: Author profiling and deception detection in Arabic (APDA). BIBREF9 . From APDA, we only use the corpus of author profiling (which includes the three profiling tasks of age, gender, and variety). The organizers of APDA provide 225,000 tweets as training data. Each tweet is labelled with three tags (one for each task). To develop our models, we split the training data into 90% training set ($n$=202,500 tweets) and 10% development set ($n$=22,500 tweets). With regard to age, authors consider tweets of three classes: {Under 25, Between 25 and 34, and Above 35}. For the Arabic varieties, they consider the following fifteen classes: {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. Gender is labeled as a binary task with {male,female} tags. LAMA+DINA Emotion detection. Alhuzali et al. BIBREF10 introduce LAMA, a dataset for Arabic emotion detection. They use a first-person seed phrase approach and extend work by Abdul-Mageed et al. BIBREF11 for emotion data collection from 6 to 8 emotion categories (i.e. anger, anticipation, disgust, fear, joy, sadness, surprise and trust). We use the combined LAMA+DINA corpus. It is split by the authors as 189,902 tweets training set, 910 as development, and 941 as test. In our experiment, we use only the training set for out MTL experiments. Sentiment analysis in Arabic tweets. This dataset is a shared task on Kaggle by Motaz Saad . The corpus contains 58,751 Arabic tweets (46,940 training, and 11,811 test). The tweets are annotated with positive and negative labels based on an emoji lexicon. Models ::: GRU We train a baseline GRU network with our iorny TRAIN data. This network has only one layer unidirectional GRU, with 500 unites and a linear, output layer. The input word tokens are embedded by the trainable word vectors which are initialized with a standard normal distribution, with $\mu =0$, and $\sigma =1$, i.e., $W \sim N(0,1)$. We use Adam BIBREF12 with a fixed learning rate of $1e-3$ for optimization. For regularization, we use dropout BIBREF13 with a rate of 0.5 on the hidden layer. We set the maximum sequence sequence in our GRU model to 50 words, and use all 22,000 words of training set as vocabulary. We employ batch training with a batch size of 64 for this model. We run the network for 20 epochs and save the model at the end of each epoch, choosing the model that performs highest on DEV as our best model. We report our best result on DEV in Table TABREF22. Our best result is acquired with 12 epochs. As Table TABREF22 shows, the baseline obtains $accuracy=73.70\%$ and $F_1=73.47$. Models ::: Single-Task BERT We use the BERT-Base Multilingual Cased model released by the authors BIBREF1 . The model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units each, 12 attention heads. The entire model has 110M parameters. 
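All results in this passage are reported as accuracy and F1 on the DEV split. A small scikit-learn sketch of that evaluation is shown below; since the F1 averaging scheme is not stated here, the macro average is an assumption, and the toy labels are purely illustrative.

```python
from sklearn.metrics import accuracy_score, f1_score

def evaluate(y_true, y_pred):
    """Return accuracy (%) and F1 (x100), in the style of the reported tables."""
    acc = accuracy_score(y_true, y_pred) * 100
    # Macro-averaged F1 over {ironic, non-ironic}; the averaging choice is assumed.
    f1 = f1_score(y_true, y_pred, average="macro") * 100
    return acc, f1

# Toy example on hypothetical DEV predictions (1 = ironic, 0 = non-ironic).
acc, f1 = evaluate([1, 0, 1, 1, 0], [1, 0, 0, 1, 0])
print(f"accuracy={acc:.2f}%  F1={f1:.2f}")
```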
The model has 119,547 shared WordPieces vocabulary, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 20 epochs. For single-task learning, we fine-tune BERT on the training set (i.e., TRAIN) of the irony task exclusively. We refer to this model as BERT-ST, ST standing for `single task.' As Table TABREF22 shows, BERT-ST unsurprisingly acquires better performance than the baseline GRU model. On accuracy, BERT-ST is 7.94% better than the baseline. BERT-ST obtains 81.62 $F_1$ which is 7.35 better than the baseline. Models ::: Multi-Task BERT We follow the work of Liu et al. BIBREF8 for training an MTL BERT in that we fine-tune the afore-mentioned BERT-Base Multilingual Cased model with different tasks jointly. First, we fine-tune with the three tasks of author profiling and the irony task simultaneously. We refer to this model trained on the 4 tasks simply as BERT-MT4. BERT-MT5 refers to the model fine-tuned on the 3 author profiling tasks, the emotion task, and the irony task. We also refer to the model fine-tuned on all six tasks (adding the sentiment task mentioned earlier) as BERT-MT6. For MTL BERT, we use the same parameters as the single task BERT listed in the previous sub-section (i.e., Single-Task BERT). In Table TABREF22, we present the performance on the DEV set of only the irony detection task. We note that all the results of multitask learning with BERT are better than those with the single task BERT. The model trained on all six tasks obtains the best result, which is 2.23% accuracy and 2.25% $F_1$ higher than the single task BERT model. Models ::: In-Domain Pre-Training Our irony data involves dialects such as Egyptian, Gulf, and Levantine, as we explained earlier. The BERT-Base Multilingual Cased model we used, however, was trained on Arabic Wikipedia, which is mostly MSA. We believe this dialect mismatch is sub-optimal. As Sun et al. BIBREF14 show, further pre-training with domain specific data can improve performance of a learner. Viewing dialects as constituting different domains, we turn to dialectal data to further pre-train BERT. Namely, we use 1M tweets randomly sampled from an in-house Twitter dataset to resume pre-training BERT before we fine-tune on the irony data. We use BERT-Base Multilingual Cased model as an initial checkpoint and pre-train on this 1M dataset with a learning rate of $2e-5$, for 10 epochs. Then, we fine-tune on MT5 (and then on MT6) with the new further-pre-trained BERT model. We refer to the new models as BERT-1M-MT5 and BERT-1M-MT6, respectively. As Table TABREF22 shows, BERT-1M-MT5 performs best: BERT-1M-MT5 obtains 84.37% accuracy (0.5% less than BERT-MT6) and 83.34 $F_1$ (0.47% less than BERT-MT6). Models ::: IDAT@FIRE2019 Submission For the shared task submission, we use the predictions of BERT-1M-MT5 as our first submitted system. Then, we concatenate our DEV and TRAIN data to compose a new training set (thus using all the training data released by organizers) to re-train BERT-1M-MT5 and BERT-MT6 with the same parameters. We use the predictions of these two models as our second and third submissions. Our second submission obtains 82.4 $F_1$ on the official test set, and ranks $4th$ on this shared task. Related Work Multi-Task Learning. MTL has been effectively used to model several NLP problems. 
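As a sketch of the in-domain step described above (resuming BERT pre-training on 1M in-house dialectal tweets at a learning rate of 2e-5 for 10 epochs), the snippet below uses the Hugging Face masked-language-modeling utilities. The tooling, the MLM-only objective (no next-sentence prediction), the batch size, the sequence length, and the file name tweets_1m.txt are all assumptions for illustration; the in-house tweet data itself is not public.

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# "tweets_1m.txt" is a hypothetical file holding the 1M in-house dialectal tweets.
raw = load_dataset("text", data_files={"train": "tweets_1m.txt"})
tokenized = raw["train"].map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=50),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)
args = TrainingArguments(output_dir="bert-1m", num_train_epochs=10,
                         learning_rate=2e-5, per_device_train_batch_size=32)
Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
# The resulting checkpoint ("BERT-1M") would then be fine-tuned on the MT5/MT6
# task mixes, as described in the text.
```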
These include, for example, syntactic parsing BIBREF15, sequence labeling BIBREF16, BIBREF17, and text classification BIBREF18. Irony in different languages. Irony detection has been investigated in various languages. For example, Hee et al. BIBREF19 propose two irony detection tasks in English tweets. Task A is a binary classification task (irony vs. non-irony), and Task B is multi-class identification of a specific type of irony from the set {verbal, situational, other-irony, non-ironic}. They use hashtags to automatically collect tweets that they manually annotate using a fine-grained annotation scheme. Participants in this competition construct models based on logistic regression and support vector machine (SVM) BIBREF20, XGBoost BIBREF21, convolutional neural networks (CNNs) BIBREF21, long short-term memory networks (LSTMs) BIBREF22, etc. For the Italian language, Cignarella et al. propose the IronTA shared task BIBREF23, and the best system BIBREF24 is a combination of bi-directional LSTMs, word $n$-grams, and affective lexicons. For Spanish, Ortega-Bueno1 et al. BIBREF25 introduce the IroSvA shared task, a binary classification task for tweets and news comments. The best-performing model on the task, BIBREF26, employs pre-trained Word2Vec, multi-head Transformer encoder and a global average pooling mechanism. Irony in Arabic. Although Arabic is a widely spoken collection of languages ($\sim $ 300 million native speakers) BIBREF27, BIBREF28, there has not been works on irony that we know of on the language. IDAT@FIRE2019 aims at bridging this gap. The closest works in Arabic are those focusing on other text classification tasks such as sentiment analysis BIBREF29, BIBREF30, BIBREF31, BIBREF32, emotion BIBREF10, and dialect identification BIBREF28, BIBREF33, BIBREF34, BIBREF35. Conclusion In this paper, we described our submissions to the Irony Detection in Arabic shared task (IDAT@FIRE2019). We presented how we acquire effective models using pre-trained BERT in a multi-task learning setting. We also showed the utility of viewing different varieties of Arabic as different domains by reporting better performance with models pre-trained with dialectal data rather than exclusively on MSA. Our multi-task model with domain-specific BERT ranks $4th$ in the official IDAT@FIRE2019 evaluation. The model has the advantage of being exclusively based on deep learning. In the future, we will investigate other multi-task learning architectures, and extend our work with semi-supervised methods. Acknowledgement We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca).
dialectal tweet data
6e51af9088c390829703c6fa966e98c3a53114c1
6e51af9088c390829703c6fa966e98c3a53114c1_0
Q: What dialect is used in the Google BERT model and what is used in the task data? Text: Introduction The proliferation of social media has provided a locus for use, and thereby collection, of figurative and creative language data, including irony BIBREF0. According to the Merriam-Webster online dictionary, irony refers to “the use of word to express something other than and especially the opposite of the literal meaning." A complex, controversial, and intriguing linguistic phenomenon, irony has been studied in disciplines such as linguistics, philosophy, and rhetoric. Irony detection also has implications for several NLP tasks such as sentiment analysis, hate speech detection, fake news detection, etc BIBREF0. Hence, automatic irony detection can potentially improve systems designed for each of these tasks. In this paper, we focus on learning irony. More specifically, we report our work submitted to the FIRE 2019 Arabic irony detection task (IDAT@FIRE2019). We focus our energy on an important angle of the problem–the small size of training data. Deep learning is the most successful under supervised conditions with large amounts of training data (tens-to-hundreds of thousands of examples). For most real-world tasks, we hard to obtain labeled data. Hence, it is highly desirable to eliminate, or at least reduce, dependence on supervision. In NLP, pre-training language models on unlabeled data has emerged as a successful approach for improving model performance. In particular, the pre-trained multilingual Bidirectional Encoder Representations from Transformers (BERT) BIBREF1 was introduced to learn language regularities from unlabeled data. Multi-task learning (MTL) is another approach that helps achieve inductive transfer between various tasks. More specifically, MTL leverages information from one or more source tasks to improve a target task BIBREF2, BIBREF3. In this work, we introduce Transformer representations (BERT) in an MTL setting to address the data bottleneck in IDAT@FIRE2019. To show the utility of BERT, we compare to a simpler model with gated recurrent units (GRU) in a single task setting. To identify the utility, or lack thereof, of MTL BERT, we compare to a single task BERT model. For MTL BERT, we train on a number of tasks simultaneously. Tasks we train on are sentiment analysis, gender detection, age detection, dialect identification, and emotion detection. Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). This introduces a language variety mismatch due to the irony data involving a number of dialects that come from the Twitter domain. To mitigate this issue, we further pre-train BERT on an in-house dialectal Twitter dataset, showing the utility of this measure. To summarize, we make the following contributions: In the context of the Arabic irony task, we show how a small-sized labeled data setting can be mitigated by training models in a multi-task learning setup. We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data). Methods Methods ::: GRU For our baseline, we use gated recurrent units (GRU) BIBREF4, a simplification of long-short term memory (LSTM) BIBREF5, which in turn is a variation of recurrent neural networks (RNNs). 
A GRU learns based on the following: where the update state $\textbf {\textit {z}}^{(t)}$ decides how much the unit updates its content: where W and U are weight matrices. The candidate activation makes use of a reset gate $\textbf {\textit {r}}^{(t)}$: where $\odot $ is a Hadamard product (element-wise multiplication). When its value is close to zero, the reset gate allows the unit to forget the previously computed state. The reset gate $\textbf {\textit {r}}^{(t)}$ is computed as follows: Methods ::: BERT BERT BIBREF1 is based on the Transformer BIBREF6, a network architecture that depends solely on encoder-decoder attention. The Transformer attention employs a function operating on queries, keys, and values. This attention function maps a query and a set of key-value pairs to an output, where the output is a weighted sum of the values. Encoder of the Transformer in BIBREF6 has 6 attention layers, each of which is composed of two sub-layers: (1) multi-head attention where queries, keys, and values are projected h times into linear, learned projections and ultimately concatenated; and (2) fully-connected feed-forward network (FFN) that is applied to each position separately and identically. Decoder of the Transformer also employs 6 identical layers, yet with an extra sub-layer that performs multi-head attention over the encoder stack. The architecture of BERT BIBREF1 is a multi-layer bidirectional Transformer encoder BIBREF6. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next sentence prediction task captures context (i.e., sentence relationships). More information about BERT can be found in BIBREF1. Methods ::: Multi-task Learning In multi-task learning (MTL), a learner uses a number of (usually relevant) tasks to improve performance on a target task BIBREF2, BIBREF3. The MTL setup enables the learner to use cues from various tasks to improve the performance on the target task. MTL also usually helps regularize the model since the learner needs to find representations that are not specific to a single task, but rather more general. Supervised learning with deep neural networks requires large amounts of labeled data, which is not always available. By employing data from additional tasks, MTL thus practically augments training data to alleviate need for large labeled data. Many researchers achieve state-of-the-art results by employing MTL in supervised learning settings BIBREF7, BIBREF8. In specific, BERT was successfully used with MTL. Hence, we employ multi-task BERT (following BIBREF8). For our training, we use the same pre-trained BERT-Base Multilingual Cased model as the initial checkpoint. For this MTL pre-training of BERT, we use the same afore-mentioned single-task BERT parameters. We now describe our data. Data The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e. targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine. IDAT@FIRE2019 is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}. 
A total of 4,024 tweets were released by organizers as training data. In addition, 1,006 tweets were used by organizers as as test data. Test labels were not release; and teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training data into 90% TRAIN ($n$=3,621 tweets; `ironic'=1,882 and `non-ironic'=1,739) and 10% DEV ($n$=403 tweets; `ironic'=209 and `non-ironic'=194). We train our models on TRAIN, and evaluate on DEV. Our multi-task BERT models involve six different Arabic classification tasks. We briefly introduce the data for these tasks here: Author profiling and deception detection in Arabic (APDA). BIBREF9 . From APDA, we only use the corpus of author profiling (which includes the three profiling tasks of age, gender, and variety). The organizers of APDA provide 225,000 tweets as training data. Each tweet is labelled with three tags (one for each task). To develop our models, we split the training data into 90% training set ($n$=202,500 tweets) and 10% development set ($n$=22,500 tweets). With regard to age, authors consider tweets of three classes: {Under 25, Between 25 and 34, and Above 35}. For the Arabic varieties, they consider the following fifteen classes: {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. Gender is labeled as a binary task with {male,female} tags. LAMA+DINA Emotion detection. Alhuzali et al. BIBREF10 introduce LAMA, a dataset for Arabic emotion detection. They use a first-person seed phrase approach and extend work by Abdul-Mageed et al. BIBREF11 for emotion data collection from 6 to 8 emotion categories (i.e. anger, anticipation, disgust, fear, joy, sadness, surprise and trust). We use the combined LAMA+DINA corpus. It is split by the authors as 189,902 tweets training set, 910 as development, and 941 as test. In our experiment, we use only the training set for out MTL experiments. Sentiment analysis in Arabic tweets. This dataset is a shared task on Kaggle by Motaz Saad . The corpus contains 58,751 Arabic tweets (46,940 training, and 11,811 test). The tweets are annotated with positive and negative labels based on an emoji lexicon. Models ::: GRU We train a baseline GRU network with our iorny TRAIN data. This network has only one layer unidirectional GRU, with 500 unites and a linear, output layer. The input word tokens are embedded by the trainable word vectors which are initialized with a standard normal distribution, with $\mu =0$, and $\sigma =1$, i.e., $W \sim N(0,1)$. We use Adam BIBREF12 with a fixed learning rate of $1e-3$ for optimization. For regularization, we use dropout BIBREF13 with a rate of 0.5 on the hidden layer. We set the maximum sequence sequence in our GRU model to 50 words, and use all 22,000 words of training set as vocabulary. We employ batch training with a batch size of 64 for this model. We run the network for 20 epochs and save the model at the end of each epoch, choosing the model that performs highest on DEV as our best model. We report our best result on DEV in Table TABREF22. Our best result is acquired with 12 epochs. As Table TABREF22 shows, the baseline obtains $accuracy=73.70\%$ and $F_1=73.47$. Models ::: Single-Task BERT We use the BERT-Base Multilingual Cased model released by the authors BIBREF1 . The model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units each, 12 attention heads. The entire model has 110M parameters. 
The model has 119,547 shared WordPieces vocabulary, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 20 epochs. For single-task learning, we fine-tune BERT on the training set (i.e., TRAIN) of the irony task exclusively. We refer to this model as BERT-ST, ST standing for `single task.' As Table TABREF22 shows, BERT-ST unsurprisingly acquires better performance than the baseline GRU model. On accuracy, BERT-ST is 7.94% better than the baseline. BERT-ST obtains 81.62 $F_1$ which is 7.35 better than the baseline. Models ::: Multi-Task BERT We follow the work of Liu et al. BIBREF8 for training an MTL BERT in that we fine-tune the afore-mentioned BERT-Base Multilingual Cased model with different tasks jointly. First, we fine-tune with the three tasks of author profiling and the irony task simultaneously. We refer to this model trained on the 4 tasks simply as BERT-MT4. BERT-MT5 refers to the model fine-tuned on the 3 author profiling tasks, the emotion task, and the irony task. We also refer to the model fine-tuned on all six tasks (adding the sentiment task mentioned earlier) as BERT-MT6. For MTL BERT, we use the same parameters as the single task BERT listed in the previous sub-section (i.e., Single-Task BERT). In Table TABREF22, we present the performance on the DEV set of only the irony detection task. We note that all the results of multitask learning with BERT are better than those with the single task BERT. The model trained on all six tasks obtains the best result, which is 2.23% accuracy and 2.25% $F_1$ higher than the single task BERT model. Models ::: In-Domain Pre-Training Our irony data involves dialects such as Egyptian, Gulf, and Levantine, as we explained earlier. The BERT-Base Multilingual Cased model we used, however, was trained on Arabic Wikipedia, which is mostly MSA. We believe this dialect mismatch is sub-optimal. As Sun et al. BIBREF14 show, further pre-training with domain specific data can improve performance of a learner. Viewing dialects as constituting different domains, we turn to dialectal data to further pre-train BERT. Namely, we use 1M tweets randomly sampled from an in-house Twitter dataset to resume pre-training BERT before we fine-tune on the irony data. We use BERT-Base Multilingual Cased model as an initial checkpoint and pre-train on this 1M dataset with a learning rate of $2e-5$, for 10 epochs. Then, we fine-tune on MT5 (and then on MT6) with the new further-pre-trained BERT model. We refer to the new models as BERT-1M-MT5 and BERT-1M-MT6, respectively. As Table TABREF22 shows, BERT-1M-MT5 performs best: BERT-1M-MT5 obtains 84.37% accuracy (0.5% less than BERT-MT6) and 83.34 $F_1$ (0.47% less than BERT-MT6). Models ::: IDAT@FIRE2019 Submission For the shared task submission, we use the predictions of BERT-1M-MT5 as our first submitted system. Then, we concatenate our DEV and TRAIN data to compose a new training set (thus using all the training data released by organizers) to re-train BERT-1M-MT5 and BERT-MT6 with the same parameters. We use the predictions of these two models as our second and third submissions. Our second submission obtains 82.4 $F_1$ on the official test set, and ranks $4th$ on this shared task. Related Work Multi-Task Learning. MTL has been effectively used to model several NLP problems. 
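To illustrate the single-task fine-tuning recipe at the start of this passage (maximum sequence length 50, batch size 32, learning rate 2e-5, 20 epochs), here is a hedged sketch using the Hugging Face transformers sequence-classification head. The paper uses the released BERT-Base Multilingual Cased model but does not show training code, so this is an assumed re-implementation with placeholder data standing in for the irony TRAIN split.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

def make_loader(texts, labels, batch_size=32):
    enc = tokenizer(texts, truncation=True, max_length=50,
                    padding="max_length", return_tensors="pt")
    ds = TensorDataset(enc["input_ids"], enc["attention_mask"],
                       torch.tensor(labels))
    return DataLoader(ds, batch_size=batch_size, shuffle=True)

# Placeholder examples; in practice these would be the TRAIN tweets and labels.
train_loader = make_loader(["تغريدة ساخرة", "تغريدة عادية"], [1, 0])
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(20):                       # 20 epochs, per the text
    for input_ids, attention_mask, labels in train_loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask,
                    labels=labels)
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    # After each epoch, evaluate on DEV and keep the best checkpoint (omitted).
```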
These include, for example, syntactic parsing BIBREF15, sequence labeling BIBREF16, BIBREF17, and text classification BIBREF18. Irony in different languages. Irony detection has been investigated in various languages. For example, Hee et al. BIBREF19 propose two irony detection tasks in English tweets. Task A is a binary classification task (irony vs. non-irony), and Task B is multi-class identification of a specific type of irony from the set {verbal, situational, other-irony, non-ironic}. They use hashtags to automatically collect tweets that they manually annotate using a fine-grained annotation scheme. Participants in this competition construct models based on logistic regression and support vector machine (SVM) BIBREF20, XGBoost BIBREF21, convolutional neural networks (CNNs) BIBREF21, long short-term memory networks (LSTMs) BIBREF22, etc. For the Italian language, Cignarella et al. propose the IronTA shared task BIBREF23, and the best system BIBREF24 is a combination of bi-directional LSTMs, word $n$-grams, and affective lexicons. For Spanish, Ortega-Bueno1 et al. BIBREF25 introduce the IroSvA shared task, a binary classification task for tweets and news comments. The best-performing model on the task, BIBREF26, employs pre-trained Word2Vec, multi-head Transformer encoder and a global average pooling mechanism. Irony in Arabic. Although Arabic is a widely spoken collection of languages ($\sim $ 300 million native speakers) BIBREF27, BIBREF28, there has not been works on irony that we know of on the language. IDAT@FIRE2019 aims at bridging this gap. The closest works in Arabic are those focusing on other text classification tasks such as sentiment analysis BIBREF29, BIBREF30, BIBREF31, BIBREF32, emotion BIBREF10, and dialect identification BIBREF28, BIBREF33, BIBREF34, BIBREF35. Conclusion In this paper, we described our submissions to the Irony Detection in Arabic shared task (IDAT@FIRE2019). We presented how we acquire effective models using pre-trained BERT in a multi-task learning setting. We also showed the utility of viewing different varieties of Arabic as different domains by reporting better performance with models pre-trained with dialectal data rather than exclusively on MSA. Our multi-task model with domain-specific BERT ranks $4th$ in the official IDAT@FIRE2019 evaluation. The model has the advantage of being exclusively based on deep learning. In the future, we will investigate other multi-task learning architectures, and extend our work with semi-supervised methods. Acknowledgement We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca).
Modern Standard Arabic (MSA), MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine
07ee4e0277ad1083270131d32a71c3fe062a916d
07ee4e0277ad1083270131d32a71c3fe062a916d_0
Q: What are the tasks used in the mulit-task learning setup? Text: Introduction The proliferation of social media has provided a locus for use, and thereby collection, of figurative and creative language data, including irony BIBREF0. According to the Merriam-Webster online dictionary, irony refers to “the use of word to express something other than and especially the opposite of the literal meaning." A complex, controversial, and intriguing linguistic phenomenon, irony has been studied in disciplines such as linguistics, philosophy, and rhetoric. Irony detection also has implications for several NLP tasks such as sentiment analysis, hate speech detection, fake news detection, etc BIBREF0. Hence, automatic irony detection can potentially improve systems designed for each of these tasks. In this paper, we focus on learning irony. More specifically, we report our work submitted to the FIRE 2019 Arabic irony detection task (IDAT@FIRE2019). We focus our energy on an important angle of the problem–the small size of training data. Deep learning is the most successful under supervised conditions with large amounts of training data (tens-to-hundreds of thousands of examples). For most real-world tasks, we hard to obtain labeled data. Hence, it is highly desirable to eliminate, or at least reduce, dependence on supervision. In NLP, pre-training language models on unlabeled data has emerged as a successful approach for improving model performance. In particular, the pre-trained multilingual Bidirectional Encoder Representations from Transformers (BERT) BIBREF1 was introduced to learn language regularities from unlabeled data. Multi-task learning (MTL) is another approach that helps achieve inductive transfer between various tasks. More specifically, MTL leverages information from one or more source tasks to improve a target task BIBREF2, BIBREF3. In this work, we introduce Transformer representations (BERT) in an MTL setting to address the data bottleneck in IDAT@FIRE2019. To show the utility of BERT, we compare to a simpler model with gated recurrent units (GRU) in a single task setting. To identify the utility, or lack thereof, of MTL BERT, we compare to a single task BERT model. For MTL BERT, we train on a number of tasks simultaneously. Tasks we train on are sentiment analysis, gender detection, age detection, dialect identification, and emotion detection. Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). This introduces a language variety mismatch due to the irony data involving a number of dialects that come from the Twitter domain. To mitigate this issue, we further pre-train BERT on an in-house dialectal Twitter dataset, showing the utility of this measure. To summarize, we make the following contributions: In the context of the Arabic irony task, we show how a small-sized labeled data setting can be mitigated by training models in a multi-task learning setup. We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to task domain (in that it involves dialectal tweet data). Methods Methods ::: GRU For our baseline, we use gated recurrent units (GRU) BIBREF4, a simplification of long-short term memory (LSTM) BIBREF5, which in turn is a variation of recurrent neural networks (RNNs). 
A GRU learns based on the following: where the update state $\textbf {\textit {z}}^{(t)}$ decides how much the unit updates its content: where W and U are weight matrices. The candidate activation makes use of a reset gate $\textbf {\textit {r}}^{(t)}$: where $\odot $ is a Hadamard product (element-wise multiplication). When its value is close to zero, the reset gate allows the unit to forget the previously computed state. The reset gate $\textbf {\textit {r}}^{(t)}$ is computed as follows: Methods ::: BERT BERT BIBREF1 is based on the Transformer BIBREF6, a network architecture that depends solely on encoder-decoder attention. The Transformer attention employs a function operating on queries, keys, and values. This attention function maps a query and a set of key-value pairs to an output, where the output is a weighted sum of the values. Encoder of the Transformer in BIBREF6 has 6 attention layers, each of which is composed of two sub-layers: (1) multi-head attention where queries, keys, and values are projected h times into linear, learned projections and ultimately concatenated; and (2) fully-connected feed-forward network (FFN) that is applied to each position separately and identically. Decoder of the Transformer also employs 6 identical layers, yet with an extra sub-layer that performs multi-head attention over the encoder stack. The architecture of BERT BIBREF1 is a multi-layer bidirectional Transformer encoder BIBREF6. It uses masked language models to enable pre-trained deep bidirectional representations, in addition to a binary next sentence prediction task captures context (i.e., sentence relationships). More information about BERT can be found in BIBREF1. Methods ::: Multi-task Learning In multi-task learning (MTL), a learner uses a number of (usually relevant) tasks to improve performance on a target task BIBREF2, BIBREF3. The MTL setup enables the learner to use cues from various tasks to improve the performance on the target task. MTL also usually helps regularize the model since the learner needs to find representations that are not specific to a single task, but rather more general. Supervised learning with deep neural networks requires large amounts of labeled data, which is not always available. By employing data from additional tasks, MTL thus practically augments training data to alleviate need for large labeled data. Many researchers achieve state-of-the-art results by employing MTL in supervised learning settings BIBREF7, BIBREF8. In specific, BERT was successfully used with MTL. Hence, we employ multi-task BERT (following BIBREF8). For our training, we use the same pre-trained BERT-Base Multilingual Cased model as the initial checkpoint. For this MTL pre-training of BERT, we use the same afore-mentioned single-task BERT parameters. We now describe our data. Data The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e. targeted political figures or events) and the positive class involves ironic hashtags such as #sokhria, #tahakoum, and #maskhara (Arabic variants for “irony"). Duplicates, retweets, and non-intelligible tweets are removed by organizers. Tweets involve both MSA as well as dialects at various degrees of granularity such as Egyptian, Gulf, and Levantine. IDAT@FIRE2019 is set up as a binary classification task where tweets are assigned labels from the set {ironic, non-ironic}. 
A total of 4,024 tweets were released by organizers as training data. In addition, 1,006 tweets were used by organizers as as test data. Test labels were not release; and teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training data into 90% TRAIN ($n$=3,621 tweets; `ironic'=1,882 and `non-ironic'=1,739) and 10% DEV ($n$=403 tweets; `ironic'=209 and `non-ironic'=194). We train our models on TRAIN, and evaluate on DEV. Our multi-task BERT models involve six different Arabic classification tasks. We briefly introduce the data for these tasks here: Author profiling and deception detection in Arabic (APDA). BIBREF9 . From APDA, we only use the corpus of author profiling (which includes the three profiling tasks of age, gender, and variety). The organizers of APDA provide 225,000 tweets as training data. Each tweet is labelled with three tags (one for each task). To develop our models, we split the training data into 90% training set ($n$=202,500 tweets) and 10% development set ($n$=22,500 tweets). With regard to age, authors consider tweets of three classes: {Under 25, Between 25 and 34, and Above 35}. For the Arabic varieties, they consider the following fifteen classes: {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. Gender is labeled as a binary task with {male,female} tags. LAMA+DINA Emotion detection. Alhuzali et al. BIBREF10 introduce LAMA, a dataset for Arabic emotion detection. They use a first-person seed phrase approach and extend work by Abdul-Mageed et al. BIBREF11 for emotion data collection from 6 to 8 emotion categories (i.e. anger, anticipation, disgust, fear, joy, sadness, surprise and trust). We use the combined LAMA+DINA corpus. It is split by the authors as 189,902 tweets training set, 910 as development, and 941 as test. In our experiment, we use only the training set for out MTL experiments. Sentiment analysis in Arabic tweets. This dataset is a shared task on Kaggle by Motaz Saad . The corpus contains 58,751 Arabic tweets (46,940 training, and 11,811 test). The tweets are annotated with positive and negative labels based on an emoji lexicon. Models ::: GRU We train a baseline GRU network with our iorny TRAIN data. This network has only one layer unidirectional GRU, with 500 unites and a linear, output layer. The input word tokens are embedded by the trainable word vectors which are initialized with a standard normal distribution, with $\mu =0$, and $\sigma =1$, i.e., $W \sim N(0,1)$. We use Adam BIBREF12 with a fixed learning rate of $1e-3$ for optimization. For regularization, we use dropout BIBREF13 with a rate of 0.5 on the hidden layer. We set the maximum sequence sequence in our GRU model to 50 words, and use all 22,000 words of training set as vocabulary. We employ batch training with a batch size of 64 for this model. We run the network for 20 epochs and save the model at the end of each epoch, choosing the model that performs highest on DEV as our best model. We report our best result on DEV in Table TABREF22. Our best result is acquired with 12 epochs. As Table TABREF22 shows, the baseline obtains $accuracy=73.70\%$ and $F_1=73.47$. Models ::: Single-Task BERT We use the BERT-Base Multilingual Cased model released by the authors BIBREF1 . The model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units each, 12 attention heads. The entire model has 110M parameters. 
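For reference, the task inventory described above can be summarized as a plain configuration. The dictionaries below simply restate the label sets given in the text (variety spellings kept as written) and, looking ahead to the following passage, the task mixes used by the MT4/MT5/MT6 models; this is a summary aid, not part of any released code.

```python
# Label sets for the six Arabic classification tasks used in the MTL setup.
TASK_LABELS = {
    "irony": ["ironic", "non-ironic"],
    "age": ["Under 25", "Between 25 and 34", "Above 35"],
    "gender": ["male", "female"],
    "variety": ["Algeria", "Egypt", "Iraq", "Kuwait", "Lebanon-Syria", "Lybia",
                "Morocco", "Oman", "Palestine-Jordan", "Qatar", "Saudi Arabia",
                "Sudan", "Tunisia", "UAE", "Yemen"],
    "emotion": ["anger", "anticipation", "disgust", "fear", "joy", "sadness",
                "surprise", "trust"],
    "sentiment": ["positive", "negative"],
}

# Task mixes used by the multi-task models described in the next passage.
MODEL_TASKS = {
    "BERT-MT4": ["irony", "age", "gender", "variety"],
    "BERT-MT5": ["irony", "age", "gender", "variety", "emotion"],
    "BERT-MT6": ["irony", "age", "gender", "variety", "emotion", "sentiment"],
}
```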
The model has 119,547 shared WordPieces vocabulary, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence size of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 20 epochs. For single-task learning, we fine-tune BERT on the training set (i.e., TRAIN) of the irony task exclusively. We refer to this model as BERT-ST, ST standing for `single task.' As Table TABREF22 shows, BERT-ST unsurprisingly acquires better performance than the baseline GRU model. On accuracy, BERT-ST is 7.94% better than the baseline. BERT-ST obtains 81.62 $F_1$ which is 7.35 better than the baseline. Models ::: Multi-Task BERT We follow the work of Liu et al. BIBREF8 for training an MTL BERT in that we fine-tune the afore-mentioned BERT-Base Multilingual Cased model with different tasks jointly. First, we fine-tune with the three tasks of author profiling and the irony task simultaneously. We refer to this model trained on the 4 tasks simply as BERT-MT4. BERT-MT5 refers to the model fine-tuned on the 3 author profiling tasks, the emotion task, and the irony task. We also refer to the model fine-tuned on all six tasks (adding the sentiment task mentioned earlier) as BERT-MT6. For MTL BERT, we use the same parameters as the single task BERT listed in the previous sub-section (i.e., Single-Task BERT). In Table TABREF22, we present the performance on the DEV set of only the irony detection task. We note that all the results of multitask learning with BERT are better than those with the single task BERT. The model trained on all six tasks obtains the best result, which is 2.23% accuracy and 2.25% $F_1$ higher than the single task BERT model. Models ::: In-Domain Pre-Training Our irony data involves dialects such as Egyptian, Gulf, and Levantine, as we explained earlier. The BERT-Base Multilingual Cased model we used, however, was trained on Arabic Wikipedia, which is mostly MSA. We believe this dialect mismatch is sub-optimal. As Sun et al. BIBREF14 show, further pre-training with domain specific data can improve performance of a learner. Viewing dialects as constituting different domains, we turn to dialectal data to further pre-train BERT. Namely, we use 1M tweets randomly sampled from an in-house Twitter dataset to resume pre-training BERT before we fine-tune on the irony data. We use BERT-Base Multilingual Cased model as an initial checkpoint and pre-train on this 1M dataset with a learning rate of $2e-5$, for 10 epochs. Then, we fine-tune on MT5 (and then on MT6) with the new further-pre-trained BERT model. We refer to the new models as BERT-1M-MT5 and BERT-1M-MT6, respectively. As Table TABREF22 shows, BERT-1M-MT5 performs best: BERT-1M-MT5 obtains 84.37% accuracy (0.5% less than BERT-MT6) and 83.34 $F_1$ (0.47% less than BERT-MT6). Models ::: IDAT@FIRE2019 Submission For the shared task submission, we use the predictions of BERT-1M-MT5 as our first submitted system. Then, we concatenate our DEV and TRAIN data to compose a new training set (thus using all the training data released by organizers) to re-train BERT-1M-MT5 and BERT-MT6 with the same parameters. We use the predictions of these two models as our second and third submissions. Our second submission obtains 82.4 $F_1$ on the official test set, and ranks $4th$ on this shared task. Related Work Multi-Task Learning. MTL has been effectively used to model several NLP problems. 
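A shared-encoder multi-task setup of the kind described above can be sketched with the transformers library as follows. This is an illustrative sketch under assumptions, not the systems submitted to the shared task: the library choice, dropout rate and label counts are placeholders, and only the overall shared-encoder/per-task-head structure mirrors the description.

```python
import torch.nn as nn
from transformers import AutoModel

class MultiTaskBert(nn.Module):
    """Shared multilingual BERT encoder with one classification head per task."""
    def __init__(self, num_labels):  # e.g. {"irony": 2, "age": 3, "gender": 2, "variety": 15}
        super().__init__()
        self.encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleDict({t: nn.Linear(hidden, n) for t, n in num_labels.items()})
        self.dropout = nn.Dropout(0.1)  # dropout rate is an assumption

    def forward(self, task, input_ids, attention_mask):
        pooled = self.encoder(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        return self.heads[task](self.dropout(pooled))

# Training alternates mini-batches drawn from the different tasks, back-propagating each
# task's cross-entropy loss through the shared encoder and that task's head.
```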
These include, for example, syntactic parsing BIBREF15, sequence labeling BIBREF16, BIBREF17, and text classification BIBREF18. Irony in different languages. Irony detection has been investigated in various languages. For example, Van Hee et al. BIBREF19 propose two irony detection tasks in English tweets. Task A is a binary classification task (irony vs. non-irony), and Task B is multi-class identification of a specific type of irony from the set {verbal, situational, other-irony, non-ironic}. They use hashtags to automatically collect tweets that they manually annotate using a fine-grained annotation scheme. Participants in this competition construct models based on logistic regression and support vector machines (SVMs) BIBREF20, XGBoost BIBREF21, convolutional neural networks (CNNs) BIBREF21, long short-term memory networks (LSTMs) BIBREF22, etc. For the Italian language, Cignarella et al. propose the IronITA shared task BIBREF23, and the best system BIBREF24 is a combination of bi-directional LSTMs, word $n$-grams, and affective lexicons. For Spanish, Ortega-Bueno et al. BIBREF25 introduce the IroSvA shared task, a binary classification task over tweets and news comments. The best-performing model on the task, BIBREF26, employs pre-trained Word2Vec, a multi-head Transformer encoder, and a global average pooling mechanism. Irony in Arabic. Although Arabic is a widely spoken collection of languages ($\sim $ 300 million native speakers) BIBREF27, BIBREF28, there has been no work on irony, to our knowledge, for the language. IDAT@FIRE2019 aims at bridging this gap. The closest works in Arabic are those focusing on other text classification tasks such as sentiment analysis BIBREF29, BIBREF30, BIBREF31, BIBREF32, emotion BIBREF10, and dialect identification BIBREF28, BIBREF33, BIBREF34, BIBREF35. Conclusion In this paper, we described our submissions to the Irony Detection in Arabic shared task (IDAT@FIRE2019). We presented how we acquire effective models using pre-trained BERT in a multi-task learning setting. We also showed the utility of viewing different varieties of Arabic as different domains by reporting better performance with models pre-trained on dialectal data rather than exclusively on MSA. Our multi-task model with domain-specific BERT ranks $4th$ in the official IDAT@FIRE2019 evaluation. The model has the advantage of being exclusively based on deep learning. In the future, we will investigate other multi-task learning architectures, and extend our work with semi-supervised methods. Acknowledgement We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences and Humanities Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca).
Author profiling and deception detection in Arabic, LAMA+DINA Emotion detection, Sentiment analysis in Arabic tweets
bfce2afe7a4b71f9127d4f9ef479a0bfb16eaf76
bfce2afe7a4b71f9127d4f9ef479a0bfb16eaf76_0
Q: What human evaluation metrics were used in the paper? Text: Introduction Posing questions about a document in natural language is a crucial aspect of the effort to automatically process natural language data, enabling machines to ask clarification questions BIBREF0 , become more robust to queries BIBREF1 , and to act as automatic tutors BIBREF2 . Recent approaches to question generation have used Seq2Seq BIBREF3 models with attention BIBREF4 and a form of copy mechanism BIBREF5 , BIBREF6 . Such models are trained to generate a plausible question, conditioned on an input document and answer span within that document BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . There are currently no dedicated question generation datasets, and authors have used the context-question-answer triples available in SQuAD. Only a single question is available for each context-answer pair, and models are trained using teacher forcing BIBREF11 . This lack of diverse training data combined with the one-step-ahead training procedure exacerbates the problem of exposure bias BIBREF12 . The model does not learn how to distribute probability mass over sequences that are valid but different to the ground truth; during inference, the model must predict the whole sequence, and may not be robust to mistakes during decoding. Recent work has investigated training the models directly on a performance based objective, either by optimising for BLEU score BIBREF13 or other quality metrics BIBREF10 . By decoupling the training procedure from the ground truth data, the model is able to explore the space of possible questions and become more robust to mistakes during decoding. While the metrics used often seem to be intuitively good choices, there is an assumption that they are good proxies for question quality which has not yet been confirmed. Our contributions are as follows. We perform fine tuning using a range of rewards, including an adversarial objective. We show that although fine tuning leads to increases in reward scores, the resulting models perform worse when evaluated by human workers. We also demonstrate that the generated questions exploit weaknesses in the reward models. Background Many of the advances in natural language generation have been led by machine translation BIBREF3 , BIBREF4 , BIBREF6 . Previous work on question generation has made extensive use of these techniques. BIBREF8 use a Seq2Seq based model to generate questions conditioned on context-answer pairs, and build on this work by preprocessing the context to resolve coreferences and adding a pointer network BIBREF9 . Similarly, BIBREF7 use a part-of-speech tagger to augment the embedding vectors. Both authors perform a human evaluation of their models, and show significant improvement over their baseline. BIBREF13 use a similar model, but apply it to the task of generating questions without conditioning on a specific answer span. BIBREF14 use a modified context encoder based on multi-perspective context matching BIBREF15 . BIBREF16 propose a framework for fine tuning using policy gradients, using BLEU and other automatic metrics linked to the ground truth data as the rewards. BIBREF10 describe a Seq2Seq model with attention and a pointer network, with an additional encoding layer for the answer. They also describe a method for further tuning their model on language model and question answering reward objectives using policy gradients. Unfortunately they do not perform any human evaluation to determine whether this tuning led to improved question quality. 
For the related task of summarisation, BIBREF17 propose a framework for fine tuning a summarisation model using reinforcement learning, with the ROUGE similarity metric used as the reward. Experimental setup The task is to generate a natural language question, conditioned on a document and answer. For example, given the input document “this paper investigates rewards for question generation" and answer “question generation", the model should produce a question such as “what is investigated in the paper?" Model description We use the model architecture described by BIBREF10 . Briefly, this is a Seq2Seq model BIBREF3 with attention BIBREF4 and copy mechanism BIBREF5 , BIBREF6 . BIBREF10 also add an additional answer encoder layer, and initialise the decoder with a hidden state constructed from the final state of the encoder. Beam search BIBREF18 is used to sample from the model at inference time. The model was trained using maximum likelihood before fine tuning was applied. Our implementation achieves a competitive BLEU-4 score BIBREF19 of $13.5$ on the test set used by BIBREF8 , before fine tuning. Fine tuning Generated questions should be formed of language that is both fluent and relevant to the context and answer. We therefore performed fine tuning on a trained model, using rewards given either by the negative perplexity under a LSTM language model, or the F1 score attained by a question answering (QA) system, or a weighted combination of both. The language model is a standard recurrent neural network formed of a single LSTM layer. For the QA system, we use QANet BIBREF1 as implemented by BIBREF20 . Adversarial training Additionally, we propose a novel approach by learning the reward directly from the training data, using a discriminator detailed in Appendix "Discriminator architecture" . We pre-trained the discriminator to predict whether an input question and associated context-answer pair were generated by our model, or originated from the training data. We then used as the reward the probability estimated by the discriminator that a generated question was in fact real. In other words, the generator was rewarded for successfully fooling the discriminator. We also experimented with interleaving updates to the discriminator within the fine tuning phase, allowing the discriminator to become adversarial and adapt alongside the generator. These rewards $R(\hat{Y})$ were used to update the model parameters via the REINFORCE policy gradient algorithm BIBREF21 , according to $\nabla \mathcal {L} = \nabla \frac{1}{l} \sum \limits _t (\frac{R(\hat{Y})-\mu _R}{\sigma _R}) \log p(\hat{y}_t | \hat{y}_{< t}, \mathbf {D}, \mathbf {A})$ . We teacher forced the decoder with the generated sequence to reproduce the activations calculated during beam search, to enable backpropagation. All rewards were normalised with a simple form of PopArt BIBREF22 , with the running mean $\mu _R$ and standard deviation $\sigma _R$ updated online during training. We continued to apply a maximum likelihood training objective during this fine tuning. Evaluation We report the negative log-likelihood (NLL) of the test set under the different models, as well as the corpus level BLEU-4 score BIBREF19 of the generated questions compared to the ground truth. We also report the rewards achieved on the test set, as the QA, LM and discriminator scores. For the human evaluation, we follow the standard approach in evaluating machine translation systems BIBREF23 , as used for question generation by BIBREF9 . 
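The policy-gradient fine-tuning described above reduces to scaling the sampled question's token log-probabilities by a normalized reward. The sketch below is an illustration under assumptions rather than the authors' implementation: it uses PyTorch, and Welford running statistics stand in for the simple PopArt-style normalization of $\mu _R$ and $\sigma _R$ mentioned in the text.

```python
import torch

class RewardNormalizer:
    """Online running mean / standard deviation of rewards (PopArt-style stand-in)."""
    def __init__(self, eps=1e-8):
        self.count, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps
    def update(self, r):
        self.count += 1
        delta = r - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (r - self.mean)
    @property
    def std(self):
        return (self.m2 / max(self.count - 1, 1)) ** 0.5 + self.eps

def reinforce_loss(token_log_probs, reward, normalizer):
    """token_log_probs: 1-D tensor of log p(y_t | y_<t, D, A) for the sampled question.
    reward: scalar R(Y_hat) from the QA / LM / discriminator reward source."""
    normalizer.update(reward)
    advantage = (reward - normalizer.mean) / normalizer.std
    # minimizing this loss maximizes the expected (normalized) reward
    return -(advantage * token_log_probs.mean())

# usage: loss = reinforce_loss(log_probs, qa_f1, norm); (loss + ml_loss).backward()
```

As in the text, a maximum likelihood term (`ml_loss` above) is kept alongside the policy-gradient objective during fine tuning.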
We asked three workers to rate 300 generated questions between 1 (poor) and 5 (good) on two separate criteria: the fluency of the language used, and the relevance of the question to the context document and answer. Results Table 2 shows the changes in automatic metrics for models fine tuned on various combinations of rewards, compared to the model without tuning. In all cases, the BLEU score reduced, as the training objective was no longer closely coupled to the training data. In general, models achieved better scores on the metrics on which they were fine tuned. Jointly training on a QA and LM reward resulted in better LM scores than training on only a LM reward. We conclude that fine tuning using policy gradients can be used to attain higher rewards, as expected. Table 3 shows the human evaluation scores for a subset of the fine tuned models. The model fine tuned on a QA and LM objective is rated as significantly worse by human annotators, despite achieving higher scores in the automatic metrics. In other words, the training objective given by these reward sources does not correspond to true question quality, despite them being intuitively good choices. The model fine tuned using an adversarial discriminator has also failed to achieve better human ratings, with the discriminator model unable to learn a useful reward source. Table 1 shows an example where fine tuning has not only failed to improve the quality of generated questions, but has caused the model to exploit the reward source. The model fine tuned on a LM reward has degenerated into producing a loop of words that is evidently deemed probable, while the model trained on a QA reward has learned that it can simply point at the location of the answer. This observation is supported by the metrics; the model fine tuned on a QA reward has suffered a catastrophic worsening in LM score of +226. Figure 1 shows the automatic scores against human ratings for all rated questions. The correlation coefficient between human relevance and automatic QA scores was 0.439, and between fluency and LM score was only 0.355. While the automatic scores are good indicators of whether a question will achieve the lowest human rating or not, they do not differentiate clearly between the higher ratings: training a model on these objectives will not necessarily learn to generate better questions. A good question will likely attain a high QA and LM score, but the inverse is not true; a sequence may exploit the weaknesses of the metrics and achieve a high score despite being unintelligible to a human. We conclude that fine tuning a question generation model on these rewards does not lead to better quality questions. Conclusion In this paper, we investigated the use of external reward sources for fine tuning question generation models to counteract the lack of task-specific training data. We showed that although fine tuning can be used to attain higher rewards, this does not equate to better quality questions when rated by humans. Using QA and LM rewards as a training objective causes the generator to expose the weaknesses in these models, which in turn suggests a possible use of this approach for generating adversarial training examples for QA models. The QA and LM scores are well correlated with human ratings at the lower end of the scale, suggesting they could be used as part of a reranking or filtering system. 
Discriminator architecture We used an architecture based on a modified QANet as shown in Figure 2 , replacing the output layers of the model to produce a single probability. Since the discriminator is also able to consider a full context-question-answer triple as input (as opposed to a context-question pair for the QA task), we fused this information in the output layers. Specifically, we applied max pooling over time to the output of the first two encoders, and we took the mean of the outputs of the third encoder that formed part of the answer span. These three reduced encodings were concatenated, a 64 unit hidden layer with ReLU activation applied, and the output passed through a single unit sigmoid output layer to give the estimated probability that an input context-question-answer triple originated from the ground truth dataset or was generated.
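To make the fusion described above concrete, the discriminator's output layers can be sketched as follows; this is an interpretation of the description rather than the original code, and the encoder dimensionality is an assumption.

```python
import torch
import torch.nn as nn

class DiscriminatorHead(nn.Module):
    """Fuses the three QANet encoder outputs into a single real/generated probability:
    max pooling over time for the first two encoders, mean over the answer span for the
    third, concatenation, a 64-unit ReLU layer, and a sigmoid output unit."""
    def __init__(self, d_model=128):  # d_model is an assumption
        super().__init__()
        self.hidden = nn.Linear(3 * d_model, 64)
        self.out = nn.Linear(64, 1)

    def forward(self, enc1, enc2, enc3, answer_mask):
        # enc*: (batch, time, d_model); answer_mask: (batch, time) with 1s over the answer span
        p1 = enc1.max(dim=1).values                 # max pooling over time
        p2 = enc2.max(dim=1).values
        mask = answer_mask.unsqueeze(-1)
        p3 = (enc3 * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)  # mean over answer span
        h = torch.relu(self.hidden(torch.cat([p1, p2, p3], dim=-1)))
        return torch.sigmoid(self.out(h))           # P(real | context, question, answer)
```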
rating questions on a scale of 1-5 based on fluency of language used and relevance of the question to the context
dfbab3cd991f86d998223726617d61113caa6193
dfbab3cd991f86d998223726617d61113caa6193_0
Q: For the purposes of this paper, how is something determined to be domain specific knowledge? Text: Introduction With the advancement in technology and invention of modern web applications like Facebook and Twitter, users started expressing their opinions and ideologies at a scale unseen before. The growth of e-commerce companies like Amazon, Walmart have created a revolutionary impact in the field of consumer business. People buy products online through these companies and write reviews for their products. These consumer reviews act as a bridge between consumers and companies. Through these reviews, companies polish the quality of their services. Sentiment Classification (SC) is one of the major applications of Natural Language Processing (NLP) which aims to find the polarity of text. In the early stages BIBREF0 of text classification, sentiment classification was performed using traditional feature selection techniques like Bag-of-Words (BoW) BIBREF1 or TF-IDF. These features were further used to train machine learning classifiers like Naive Bayes (NB) BIBREF2 and Support Vector Machines (SVM) BIBREF3 . They are shown to act as strong baselines for text classification BIBREF4 . However, these models ignore word level semantic knowledge and sequential nature of text. Neural networks were proposed to learn distributed representations of words BIBREF5 . Skip-gram and CBOW architectures BIBREF6 were introduced to learn high quality word representations which constituted a major breakthrough in NLP. Several neural network architectures like recursive neural networks BIBREF7 and convolutional neural networks BIBREF8 achieved excellent results in text classification. Recurrent neural networks which were proposed for dealing sequential inputs suffer from vanishing BIBREF9 and exploding gradient problems BIBREF10 . To overcome this problem, Long Short Term Memory (LSTM) was introduced BIBREF11 . All these architectures have been successful in performing sentiment classification for a specific domain utilizing large amounts of labelled data. However, there exists insufficient labelled data for a target domain of interest. Therefore, Domain Adaptation (DA) exploits knowledge from a relevant domain with abundant labeled data to perform sentiment classification on an unseen target domain. However, expressions of sentiment vary in each domain. For example, in $\textit {Books}$ domain, words $\textit {thoughtful}$ and $\textit {comprehensive}$ are used to express sentiment whereas $\textit {cheap}$ and $\textit {costly}$ are used in $\textit {Electronics}$ domain. Hence, models should generalize well for all domains. Several methods have been introduced for performing Domain Adaptation. Blitzer BIBREF12 proposed Structural Correspondence Learning (SCL) which relies on pivot features between source and target domains. Pan BIBREF13 performed Domain Adaptation using Spectral Feature Alignment (SFA) that aligns features across different domains. Glorot BIBREF14 proposed Stacked Denoising Autoencoder (SDA) that learns generalized feature representations across domains. Zheng BIBREF15 proposed end-to-end adversarial network for Domain Adaptation. Qi BIBREF16 proposed a memory network for Domain Adaptation. Zheng BIBREF17 proposed a Hierarchical transfer network relying on attention for Domain Adaptation. However, all the above architectures use a different sub-network altogether to incorporate domain agnostic knowledge and is combined with main network in the final layers. 
This makes these architectures computationally intensive. To address this issue, we propose a Gated Convolutional Neural Network (GCN) model that learns domain agnostic knowledge using a gated mechanism BIBREF18. The convolution layers learn higher-level representations for the source domain and the gated layers select domain agnostic representations. Unlike other models, GCN doesn't rely on a special sub-network for learning domain agnostic representations. As the gated mechanism is applied on convolution layers, GCN is computationally efficient. Related Work Traditionally, methods for tackling Domain Adaptation are lexicon based. Blitzer BIBREF19 used a pivot method to select features that occur frequently in both domains. It assumes that the selected pivot features can reliably represent the source domain. The pivots are selected using mutual information between selected features and the source domain labels. The SFA BIBREF13 method argues that pivot features selected from the source domain cannot guarantee a representation of the target domain. Hence, SFA tries to exploit the relationship between domain-specific and domain independent words by simultaneously co-clustering them in a common latent space. SDA BIBREF14 performs Domain Adaptation by learning intermediate representations through auto-encoders. Yu BIBREF20 used two auxiliary tasks to help induce sentence embeddings that work well across different domains. These embeddings are trained using Convolutional Neural Networks (CNN). Gated convolutional neural networks have achieved state-of-the-art results in language modelling BIBREF18. Since then, they have been used in different areas of natural language processing (NLP) like sentence similarity BIBREF21 and aspect based sentiment analysis BIBREF22. Gated Convolutional Neural Networks In this section, we introduce a model based on Gated Convolutional Neural Networks for Domain Adaptation. We present the problem definition of Domain Adaptation, followed by the architecture of the proposed model. Problem Definition Given a source domain $D_{S}$ represented as $D_{S} = \lbrace (x_{s_{1}},y_{s_{1}}), (x_{s_{2}},y_{s_{2}}), \ldots , (x_{s_{n}},y_{s_{n}}) \rbrace $, where $x_{s_{i}} \in \mathbb {R}$ represents the vector of the $i^{th}$ source text and $y_{s_{i}}$ represents the corresponding source domain label, let $T_{S}$ represent the task in the source domain. Given a target domain $D_{T}$ represented as $D_{T} = \lbrace (x_{t_{1}},y_{t_{1}}), (x_{t_{2}},y_{t_{2}}), \ldots , (x_{t_{m}},y_{t_{m}}) \rbrace $, where $x_{t_{i}}$ represents the vector of the $i^{th}$ target text and $y_{t_{i}}$ represents the corresponding target domain label, let $T_{T}$ represent the task in the target domain. Domain Adaptation (DA) is defined by the target predictive function $f_{T}$, calculated using the knowledge of $D_{S}$ and $T_{S}$, where $D_{S} \ne D_{T}$ but $T_{S} = T_{T}$. It is imperative to note that the domains are different but there is only a single task. In this paper, the task is sentiment classification. Model Architecture The proposed model architecture is shown in Figure 1. Recurrent Neural Networks like LSTM and GRU update their weights at every timestep sequentially and hence lack parallelization over inputs in training. In the case of attention based models, the attention layer has to wait for outputs from all timesteps. Hence, these models fail to take advantage of parallelism either. Since the proposed model is based on convolution layers and a gated mechanism, it can be parallelized efficiently. 
The convolution layers learn higher-level representations for the source domain. The gated mechanism learns domain agnostic representations. Together they control the information that flows to the fully connected output layer after max pooling. Let $I$ denote the input sentence represented as $I = \lbrace w_{1}, w_{2}, w_{3}, \ldots , w_{N} \rbrace $, where $w_{i}$ represents the $i^{th}$ word in $I$ and $N$ is the maximum sentence length considered. Let the vocabulary size be fixed for each dataset and let $E$ denote the word embedding matrix, where each row $E_{j}$ is a $d$-dimensional vector. Input sentences whose length is less than $N$ are padded with 0s to reach the maximum sentence length. Words absent from the pretrained word embeddings are initialized to 0s. Therefore each input sentence $I$ is converted to an $N \times d$ matrix $P$. The convolution operation is applied on $P$ with kernel $W_{a}$. The convolution operation is one-dimensional, applied with a fixed window size $h$ across words. We consider kernel sizes of 3, 4 and 5. The weights of these kernels are initialized using glorot uniform BIBREF23. Each kernel is a feature detector which extracts patterns from n-grams. After convolution we obtain a new feature map $C = [C_{1}, C_{2}, \ldots ]$ for each kernel. $$C_{i} = f(P_{i:i+h} \ast W_{a} + b_{a})$$ (Eq. 5) where $f$ represents the activation function in the convolution layer. The gated mechanism is applied on each convolution layer. Each gated layer learns to filter domain agnostic representations for every time step $i$. $$S_{i} = g(P_{i:i+h} \ast W_{s} + b_{s})$$ (Eq. 6) where $g$ is the activation function used in the gated convolution layer. The outputs from the convolution layer and the gated convolution layer are element-wise multiplied to compute a new feature representation $G_{i}$: $$G_{i} = C_{i} \times S_{i}$$ (Eq. 7) The maxpooling operation is applied across each filter in this new feature representation to get the most important features BIBREF8. As shown in Figure 1, the outputs from the maxpooling layer across all filters are concatenated. The concatenated layer is fully connected to the output layer. Sigmoid is used as the activation function in the output layer. Gating mechanisms Gating mechanisms have been effective in Recurrent Neural Networks like GRU and LSTM. They control the information flow through their recurrent cells. In the case of GCN, these gated units control the domain information that flows to the pooling layers. The model must be robust to changes in domain knowledge and should be able to generalize well across different domains. We use the gated mechanisms Gated Tanh Unit (GTU), Gated Linear Unit (GLU) and Gated Tanh ReLU Unit (GTRU) BIBREF22 in the proposed model. The gated architectures are shown in Figure 2. The output from the Gated Tanh Unit is calculated as $tanh(P \ast W + c) \times \sigma (P \ast V + c)$. In the case of the Gated Linear Unit, it is calculated as $(P \ast W + c) \times \sigma (P \ast V + c)$, where $tanh$ and $\sigma $ denote the Tanh and Sigmoid activation functions respectively. In the case of the Gated Tanh ReLU Unit, the output is calculated as $tanh(P \ast W + c) \times relu(P \ast V + c)$. Datasets Multi Domain Dataset BIBREF19 is a small dataset with reviews from distinct domains, namely Books (B), DVD (D), Electronics (E) and Kitchen (K). Each domain consists of 2000 reviews equally divided among positive and negative sentiment. We consider 1280 reviews for training, 320 reviews for validation and 400 reviews for testing from each domain. 
Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a subset of 20000 reviews from the domains Cell Phones and Accessories(C), Clothing and Shoes(S), Home and Kitchen(H) and Tools and Home Improvement(T). Out of 20000 reviews, 10000 are positive and 10000 are negative. We use 12800 reviews for training, 3200 reviews for validation and 4000 reviews for testing from each domain. Baselines To evaluate the performance of proposed model, we consider various baselines like traditional lexicon approaches, CNN models without gating mechanisms and LSTM models. Bag-of-words (BoW) is one of the strongest baselines in text classification BIBREF4 . We consider all the words as features with a minimum frequency of 5. These features are trained using Logistic Regression (LR). TF-IDF is a feature selection technique built upon Bag-of-Words. We consider all the words with a minimum frequency of 5. The features selected are trained using Logistic Regression (LR). Paragraph2vec or doc2vec BIBREF25 is a strong and popularly used baseline for text classification. Paragraph2Vec represents each sentence or paragraph in the form of a distributed representation. We trained our own doc2vec model using DBOW model. The paragraph vectors obtained are trained using Feed Forward Neural Network (FNN). To show the effectiveness of gated layer, we consider a CNN model which does not contain gated layers. Hence, we consider Static CNN model, a popular CNN architecture proposed in Kim BIBREF8 as a baseline. Wang BIBREF26 proposed a combination of Convolutional and Recurrent Neural Network for sentiment Analysis of short texts. This model takes the advantages of features learned by CNN and long-distance dependencies learned by RNN. It achieved remarkable results on benchmark datasets. We report the results using code published by the authors. We offer a comparison with LSTM model with a single hidden layer. This model is trained with equivalent experimental settings as proposed model. In this baseline, attention mechanism BIBREF27 is applied on the top of LSTM outputs across different timesteps. Implementation details All the models are experimented with approximately matching number of parameters for a solid comparison using a Tesla K80 GPU. Input Each word in the input sentence is converted to a 300 dimensional vector using GloVe pretrained vectors BIBREF28 . A maximum sentence length 100 is considered for all the datasets. Sentences with length less than 100 are padded with 0s. Architecture details: The model is implemented using keras. We considered 100 convolution filters for each of the kernels of sizes 3,4 and 5. To get the same sentence length after convolution operation zero padding is done on the input. Training Each sentence or paragraph is converted to lower case. Stopword removal is not done. A vocabulary size of 20000 is considered for all the datasets. We apply a dropout layer BIBREF29 with a probability of 0.5, on the embedding layer and probability 0.2, on the dense layer that connects the output layer. Adadelta BIBREF30 is used as the optimizer for training with gradient descent updates. Batch-size of 16 is taken for MDD and 50 for ARD. The model is trained for 50 epochs. We employ an early stopping mechanism based on validation loss for a patience of 10 epochs. The models are trained on source domain and tested on unseen target domain in all experiments. 
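Putting the architecture and the implementation details together, a minimal Keras sketch of the GLU variant might look as follows. The filter count, kernel sizes, sequence length, vocabulary size, dropout rates, output activation and optimizer follow the text; everything else (e.g., how the GloVe matrix is loaded) is an assumption rather than the authors' implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers

MAX_LEN, VOCAB_SIZE, EMB_DIM, N_FILTERS = 100, 20_000, 300, 100

def build_gcn_glu(embedding_matrix=None):
    """Gated CNN with GLU gates (Eqs. 5-7): per kernel size, a linear convolution C is
    multiplied element-wise with a sigmoid-gated convolution S, then max-pooled."""
    inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
    emb = layers.Embedding(VOCAB_SIZE, EMB_DIM,
                           weights=None if embedding_matrix is None else [embedding_matrix])(inputs)
    emb = layers.Dropout(0.5)(emb)                                         # dropout on embeddings
    pooled = []
    for k in (3, 4, 5):
        c = layers.Conv1D(N_FILTERS, k, padding="same", activation=None,
                          kernel_initializer="glorot_uniform")(emb)        # C_i (linear for GLU)
        s = layers.Conv1D(N_FILTERS, k, padding="same", activation="sigmoid",
                          kernel_initializer="glorot_uniform")(emb)        # S_i (gate)
        g = layers.Multiply()([c, s])                                      # G_i = C_i * S_i
        pooled.append(layers.GlobalMaxPooling1D()(g))
    x = layers.Concatenate()(pooled)
    x = layers.Dropout(0.2)(x)                                             # dropout before output
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adadelta", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

The GTU and GTRU variants differ only in the activations of the two convolution branches (Tanh/Sigmoid and Tanh/ReLU respectively).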
Results The performance of all models on MDD is shown in Tables 2 and 3 while for ARD, in Tables 4 and 5 . All values are shown in accuracy percentage. Furthermore time complexity of each model is presented in Table 1 . Discussion We find that gated architectures vastly outperform non gated CNN model. The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation. In Figure 3 , we have illustrated the visualization of convolution outputs(kernel size = 3) from the sigmoid gate in GLU across domains. As the kernel size is 3, each row in the output corresponds to a trigram from input sentence. This heat map visualizes values of all 100 filters and their average for every input trigram. These examples demonstrate what the convolution gate learns. Trigrams with domain independent but heavy polarity like “_ _ good” and “_ costly would” have higher weightage. Meanwhile, Trigrams with domain specific terms like “quality functional case” and “sell entire kitchen” get some of the least weights. In Figure 3 (b) example, the trigram “would have to” just consists of function words, hence gets the least weight. While “sell entire kitchen” gets more weight comparatively. This might be because while function words are merely grammatical units which contribute minimal to overall sentiment, domain specific terms like “sell” may contain sentiment level knowledge only relevant within the domain. In such a case it is possible that the filters effectively propagate sentiment level knowledge from domain specific terms as well. We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models. This is depicted in Table 1 . While the gated architectures outperform other baselines, within them as well we make observations. Gated Linear Unit (GLU) performs the best often over other gated architectures. In case of GTU, outputs from Sigmoid and Tanh are multiplied together, this may result in small gradients, and hence resulting in the vanishing gradient problem. However, this will not be the in the case of GLU, as the activation is linear. In case of GTRU, outputs from Tanh and ReLU are multiplied. In ReLU, because of absence of negative activations, corresponding Tanh outputs will be completely ignored, resulting in loss of some domain independent knowledge. 
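The gradient argument above can be checked with a small numerical illustration (not part of the paper): the derivative of the GTU output with respect to its ungated input shrinks as that input grows, while the GLU derivative stays constant at $\sigma (b)$.

```python
import numpy as np

def grads_wrt_a(a, b):
    """Gradient of the gate output with respect to the linear/tanh branch input a."""
    sig_b = 1 / (1 + np.exp(-b))
    return {"GLU  d/da [a * sigmoid(b)]": sig_b,
            "GTU  d/da [tanh(a) * sigmoid(b)]": (1 - np.tanh(a) ** 2) * sig_b}

for a in (0.5, 3.0, 6.0):
    print(a, grads_wrt_a(a, b=1.0))
# As |a| grows, the GTU gradient shrinks towards zero while the GLU gradient stays at sigmoid(b),
# matching the vanishing-gradient argument above.
```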
Conclusion In this paper, we proposed a Gated Convolutional Neural Network (GCN) model for Domain Adaptation in Sentiment Analysis. We show that the gates in GCN filter out domain dependent knowledge, hence performing better on an unseen target domain. Our experiments reveal that gated architectures outperform other popular recurrent and non-gated architectures. Furthermore, because these architectures rely on convolutions, they take advantage of parallelization, vastly reducing time complexity.
reviews under distinct product categories are considered specific domain knowledge
df510c85c277afc67799abcb503caa248c448ad2
df510c85c277afc67799abcb503caa248c448ad2_0
Q: Does the fact that GCNs can perform well on this tell us that the task is simpler than previously thought? Text: Introduction With the advancement in technology and invention of modern web applications like Facebook and Twitter, users started expressing their opinions and ideologies at a scale unseen before. The growth of e-commerce companies like Amazon, Walmart have created a revolutionary impact in the field of consumer business. People buy products online through these companies and write reviews for their products. These consumer reviews act as a bridge between consumers and companies. Through these reviews, companies polish the quality of their services. Sentiment Classification (SC) is one of the major applications of Natural Language Processing (NLP) which aims to find the polarity of text. In the early stages BIBREF0 of text classification, sentiment classification was performed using traditional feature selection techniques like Bag-of-Words (BoW) BIBREF1 or TF-IDF. These features were further used to train machine learning classifiers like Naive Bayes (NB) BIBREF2 and Support Vector Machines (SVM) BIBREF3 . They are shown to act as strong baselines for text classification BIBREF4 . However, these models ignore word level semantic knowledge and sequential nature of text. Neural networks were proposed to learn distributed representations of words BIBREF5 . Skip-gram and CBOW architectures BIBREF6 were introduced to learn high quality word representations which constituted a major breakthrough in NLP. Several neural network architectures like recursive neural networks BIBREF7 and convolutional neural networks BIBREF8 achieved excellent results in text classification. Recurrent neural networks which were proposed for dealing sequential inputs suffer from vanishing BIBREF9 and exploding gradient problems BIBREF10 . To overcome this problem, Long Short Term Memory (LSTM) was introduced BIBREF11 . All these architectures have been successful in performing sentiment classification for a specific domain utilizing large amounts of labelled data. However, there exists insufficient labelled data for a target domain of interest. Therefore, Domain Adaptation (DA) exploits knowledge from a relevant domain with abundant labeled data to perform sentiment classification on an unseen target domain. However, expressions of sentiment vary in each domain. For example, in $\textit {Books}$ domain, words $\textit {thoughtful}$ and $\textit {comprehensive}$ are used to express sentiment whereas $\textit {cheap}$ and $\textit {costly}$ are used in $\textit {Electronics}$ domain. Hence, models should generalize well for all domains. Several methods have been introduced for performing Domain Adaptation. Blitzer BIBREF12 proposed Structural Correspondence Learning (SCL) which relies on pivot features between source and target domains. Pan BIBREF13 performed Domain Adaptation using Spectral Feature Alignment (SFA) that aligns features across different domains. Glorot BIBREF14 proposed Stacked Denoising Autoencoder (SDA) that learns generalized feature representations across domains. Zheng BIBREF15 proposed end-to-end adversarial network for Domain Adaptation. Qi BIBREF16 proposed a memory network for Domain Adaptation. Zheng BIBREF17 proposed a Hierarchical transfer network relying on attention for Domain Adaptation. However, all the above architectures use a different sub-network altogether to incorporate domain agnostic knowledge and is combined with main network in the final layers. 
This makes these architectures computationally intensive. To address this issue, we propose a Gated Convolutional Neural Network (GCN) model that learns domain agnostic knowledge using gated mechanism BIBREF18 . Convolution layers learns the higher level representations for source domain and gated layer selects domain agnostic representations. Unlike other models, GCN doesn't rely on a special sub-network for learning domain agnostic representations. As, gated mechanism is applied on Convolution layers, GCN is computationally efficient. Related Work Traditionally methods for tackling Domain Adaptation are lexicon based. Blitzer BIBREF19 used a pivot method to select features that occur frequently in both domains. It assumes that the selected pivot features can reliably represent the source domain. The pivots are selected using mutual information between selected features and the source domain labels. SFA BIBREF13 method argues that pivot features selected from source domain cannot attest a representation of target domain. Hence, SFA tries to exploit the relationship between domain-specific and domain independent words via simultaneously co-clustering them in a common latent space. SDA BIBREF14 performs Domain Adaptation by learning intermediate representations through auto-encoders. Yu BIBREF20 used two auxiliary tasks to help induce sentence embeddings that work well across different domains. These embeddings are trained using Convolutional Neural Networks (CNN). Gated convolutional neural networks have achieved state-of-art results in language modelling BIBREF18 . Since then, they have been used in different areas of natural language processing (NLP) like sentence similarity BIBREF21 and aspect based sentiment analysis BIBREF22 . Gated Convolutional Neural Networks In this section, we introduce a model based on Gated Convolutional Neural Networks for Domain Adaptation. We present the problem definition of Domain Adaptation, followed by the architecture of the proposed model. Problem Definition Given a source domain $D_{S}$ represented as $D_{S}$ = { $(x_{s_{1}},y_{s_{1}})$ , $(x_{s_{2}},y_{s_{2}})$ .... $(x_{s_{n}},y_{s_{n}})$ } where $x_{s_{i}} \in \mathbb {R}$ represents the vector of $i^{th}$ source text and $y_{s_{i}}$ represents the corresponding source domain label. Let $T_{S}$ represent the task in source domain. Given a target domain $D_{T}$ represented as $D_{S}$0 = { $D_{S}$1 , $D_{S}$2 .... $D_{S}$3 }, where $D_{S}$4 represents the vector of $D_{S}$5 target text and $D_{S}$6 represents corresponding target domain label. Let $D_{S}$7 represent the task in target domain. Domain Adaptation (DA) is defined by the target predictive function $D_{S}$8 calculated using the knowledge of $D_{S}$9 and $(x_{s_{1}},y_{s_{1}})$0 where $(x_{s_{1}},y_{s_{1}})$1 but $(x_{s_{1}},y_{s_{1}})$2 . It is imperative to note that the domains are different but only a single task. In this paper, the task is sentiment classification. Model Architecture The proposed model architecture is shown in the Figure 1 . Recurrent Neural Networks like LSTM, GRU update their weights at every timestep sequentially and hence lack parallelization over inputs in training. In case of attention based models, the attention layer has to wait for outputs from all timesteps. Hence, these models fail to take the advantage of parallelism either. Since, proposed model is based on convolution layers and gated mechanism, it can be parallelized efficiently. 
The convolution layers learn higher level representations for the source domain. The gated mechanism learn the domain agnostic representations. They together control the information that has to flow through further fully connected output layer after max pooling. Let $I$ denote the input sentence represented as $I$ = { $w_{1}$ $w_{2}$ $w_{3}$ ... $w_{N}$ } where $w_{i}$ represents the $i_{th}$ word in $I$ and $N$ is the maximum sentence length considered. Let $I$0 be the vocabulary size for each dataset and $I$1 denote the word embedding matrix where each $I$2 is a $I$3 dimensional vector. Input sentences whose length is less than $I$4 are padded with 0s to reach maximum sentence length. Words absent in the pretrained word embeddings are initialized to 0s. Therefore each input sentence $I$5 is converted to $I$6 dimensional vector. Convolution operation is applied on $I$7 with kernel $I$8 . The convolution operation is one-dimensional, applied with a fixed window size across words. We consider kernel size of 3,4 and 5. The weight initialization of these kernels is done using glorot uniform BIBREF23 . Each kernel is a feature detector which extracts patterns from n-grams. After convolution we obtain a new feature map $I$9 = [ $w_{1}$0 ] for each kernel $w_{1}$1 . $$C_{i} = f(P_{i:i+h} \ast W_{a} + b_{a})$$ (Eq. 5) where $f$ represents the activation function in convolution layer. The gated mechanism is applied on each convolution layer. Each gated layer learns to filter domain agnostic representations for every time step $i$ . $$S_{i} = g(P_{i:i+h} \ast W_{s} + b_{s})$$ (Eq. 6) where $g$ is the activation function used in gated convolution layer. The outputs from convolution layer and gated convolution layer are element wise multiplied to compute a new feature representation $G_{i}$ $$G_{i} = C_{i} \times S_{i}$$ (Eq. 7) Maxpooling operation is applied across each filter in this new feature representation to get the most important features BIBREF8 . As shown in Figure 1 the outputs from maxpooling layer across all filters are concatenated. The concatenated layer is fully connected to output layer. Sigmoid is used as the activation function in the output layer. Gating mechanisms Gating mechanisms have been effective in Recurrent Neural Networks like GRU and LSTM. They control the information flow through their recurrent cells. In case of GCN, these gated units control the domain information that flows to pooling layers. The model must be robust to change in domain knowledge and should be able to generalize well across different domains. We use the gated mechanisms Gated Tanh Unit (GTU) and Gated Linear Unit (GLU) and Gated Tanh ReLU Unit (GTRU) BIBREF22 in proposed model. The gated architectures are shown in figure 2 . The outputs from Gated Tanh Unit is calculated as $tanh(P \ast W + c) \times \sigma (P \ast V + c)$ . In case of Gated Linear Unit, it is calculated as $(P \ast W + c) \times \sigma (P \ast V + c)$ where $tanh$ and $\sigma $ denotes Tanh and Sigmoid activation functions respectively. In case of Gated Tanh ReLU Unit, output is calculated as $tanh(P \ast W + c) \times relu(P \ast V + c)$ Datasets Multi Domain Dataset BIBREF19 is a short dataset with reviews from distinct domains namely Books(B), DVD(D), Electronics(E) and Kitchen(K). Each domain consists of 2000 reviews equally divided among positive and negative sentiment. We consider 1280 reviews for training, 320 reviews for validation and 400 reviews for testing from each domain. 
Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a subset of 20000 reviews from the domains Cell Phones and Accessories(C), Clothing and Shoes(S), Home and Kitchen(H) and Tools and Home Improvement(T). Out of 20000 reviews, 10000 are positive and 10000 are negative. We use 12800 reviews for training, 3200 reviews for validation and 4000 reviews for testing from each domain. Baselines To evaluate the performance of proposed model, we consider various baselines like traditional lexicon approaches, CNN models without gating mechanisms and LSTM models. Bag-of-words (BoW) is one of the strongest baselines in text classification BIBREF4 . We consider all the words as features with a minimum frequency of 5. These features are trained using Logistic Regression (LR). TF-IDF is a feature selection technique built upon Bag-of-Words. We consider all the words with a minimum frequency of 5. The features selected are trained using Logistic Regression (LR). Paragraph2vec or doc2vec BIBREF25 is a strong and popularly used baseline for text classification. Paragraph2Vec represents each sentence or paragraph in the form of a distributed representation. We trained our own doc2vec model using DBOW model. The paragraph vectors obtained are trained using Feed Forward Neural Network (FNN). To show the effectiveness of gated layer, we consider a CNN model which does not contain gated layers. Hence, we consider Static CNN model, a popular CNN architecture proposed in Kim BIBREF8 as a baseline. Wang BIBREF26 proposed a combination of Convolutional and Recurrent Neural Network for sentiment Analysis of short texts. This model takes the advantages of features learned by CNN and long-distance dependencies learned by RNN. It achieved remarkable results on benchmark datasets. We report the results using code published by the authors. We offer a comparison with LSTM model with a single hidden layer. This model is trained with equivalent experimental settings as proposed model. In this baseline, attention mechanism BIBREF27 is applied on the top of LSTM outputs across different timesteps. Implementation details All the models are experimented with approximately matching number of parameters for a solid comparison using a Tesla K80 GPU. Input Each word in the input sentence is converted to a 300 dimensional vector using GloVe pretrained vectors BIBREF28 . A maximum sentence length 100 is considered for all the datasets. Sentences with length less than 100 are padded with 0s. Architecture details: The model is implemented using keras. We considered 100 convolution filters for each of the kernels of sizes 3,4 and 5. To get the same sentence length after convolution operation zero padding is done on the input. Training Each sentence or paragraph is converted to lower case. Stopword removal is not done. A vocabulary size of 20000 is considered for all the datasets. We apply a dropout layer BIBREF29 with a probability of 0.5, on the embedding layer and probability 0.2, on the dense layer that connects the output layer. Adadelta BIBREF30 is used as the optimizer for training with gradient descent updates. Batch-size of 16 is taken for MDD and 50 for ARD. The model is trained for 50 epochs. We employ an early stopping mechanism based on validation loss for a patience of 10 epochs. The models are trained on source domain and tested on unseen target domain in all experiments. 
Results The performance of all models on MDD is shown in Tables 2 and 3 while for ARD, in Tables 4 and 5 . All values are shown in accuracy percentage. Furthermore time complexity of each model is presented in Table 1 . Discussion We find that gated architectures vastly outperform non gated CNN model. The effectiveness of gated architectures rely on the idea of training a gate with sole purpose of identifying a weightage. In the task of sentiment analysis this weightage corresponds to what weights will lead to a decrement in final loss or in other words, most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute to the sentiment the most, these words or n-grams often co-relate with domain independent words. On the other hand the gate gives less weightage to n-grams which are largely either specific to domain or function word chunks which contribute negligible to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation. In Figure 3 , we have illustrated the visualization of convolution outputs(kernel size = 3) from the sigmoid gate in GLU across domains. As the kernel size is 3, each row in the output corresponds to a trigram from input sentence. This heat map visualizes values of all 100 filters and their average for every input trigram. These examples demonstrate what the convolution gate learns. Trigrams with domain independent but heavy polarity like “_ _ good” and “_ costly would” have higher weightage. Meanwhile, Trigrams with domain specific terms like “quality functional case” and “sell entire kitchen” get some of the least weights. In Figure 3 (b) example, the trigram “would have to” just consists of function words, hence gets the least weight. While “sell entire kitchen” gets more weight comparatively. This might be because while function words are merely grammatical units which contribute minimal to overall sentiment, domain specific terms like “sell” may contain sentiment level knowledge only relevant within the domain. In such a case it is possible that the filters effectively propagate sentiment level knowledge from domain specific terms as well. We see that gated architectures almost always outperform recurrent, attention and linear models BoW, TFIDF, PV. This is largely because while training and testing on same domains, these models especially recurrent and attention based may perform better. However, for Domain Adaptation, as they lack gated structure which is trained in parallel to learn importance, their performance on target domain is poor as compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give significant boost in time complexity as compared to other models. This is depicted in Table 1 . While the gated architectures outperform other baselines, within them as well we make observations. Gated Linear Unit (GLU) performs the best often over other gated architectures. In case of GTU, outputs from Sigmoid and Tanh are multiplied together, this may result in small gradients, and hence resulting in the vanishing gradient problem. However, this will not be the in the case of GLU, as the activation is linear. In case of GTRU, outputs from Tanh and ReLU are multiplied. In ReLU, because of absence of negative activations, corresponding Tanh outputs will be completely ignored, resulting in loss of some domain independent knowledge. 
Conclusion In this paper, we proposed a Gated Convolutional Neural Network (GCN) model for Domain Adaptation in Sentiment Analysis. We show that the gates in GCN filter out domain dependent knowledge, hence performing better on an unseen target domain. Our experiments reveal that gated architectures outperform other popular recurrent and non-gated architectures. Furthermore, because these architectures rely on convolutions, they take advantage of parallelization, vastly reducing time complexity.
No
d95180d72d329a27ddf2fd5cc6919f469632a895
d95180d72d329a27ddf2fd5cc6919f469632a895_0
Q: Are there conceptual benefits to using GCNs over more complex architectures like attention? Text: Introduction With the advancement in technology and invention of modern web applications like Facebook and Twitter, users started expressing their opinions and ideologies at a scale unseen before. The growth of e-commerce companies like Amazon, Walmart have created a revolutionary impact in the field of consumer business. People buy products online through these companies and write reviews for their products. These consumer reviews act as a bridge between consumers and companies. Through these reviews, companies polish the quality of their services. Sentiment Classification (SC) is one of the major applications of Natural Language Processing (NLP) which aims to find the polarity of text. In the early stages BIBREF0 of text classification, sentiment classification was performed using traditional feature selection techniques like Bag-of-Words (BoW) BIBREF1 or TF-IDF. These features were further used to train machine learning classifiers like Naive Bayes (NB) BIBREF2 and Support Vector Machines (SVM) BIBREF3 . They are shown to act as strong baselines for text classification BIBREF4 . However, these models ignore word level semantic knowledge and sequential nature of text. Neural networks were proposed to learn distributed representations of words BIBREF5 . Skip-gram and CBOW architectures BIBREF6 were introduced to learn high quality word representations which constituted a major breakthrough in NLP. Several neural network architectures like recursive neural networks BIBREF7 and convolutional neural networks BIBREF8 achieved excellent results in text classification. Recurrent neural networks which were proposed for dealing sequential inputs suffer from vanishing BIBREF9 and exploding gradient problems BIBREF10 . To overcome this problem, Long Short Term Memory (LSTM) was introduced BIBREF11 . All these architectures have been successful in performing sentiment classification for a specific domain utilizing large amounts of labelled data. However, there exists insufficient labelled data for a target domain of interest. Therefore, Domain Adaptation (DA) exploits knowledge from a relevant domain with abundant labeled data to perform sentiment classification on an unseen target domain. However, expressions of sentiment vary in each domain. For example, in $\textit {Books}$ domain, words $\textit {thoughtful}$ and $\textit {comprehensive}$ are used to express sentiment whereas $\textit {cheap}$ and $\textit {costly}$ are used in $\textit {Electronics}$ domain. Hence, models should generalize well for all domains. Several methods have been introduced for performing Domain Adaptation. Blitzer BIBREF12 proposed Structural Correspondence Learning (SCL) which relies on pivot features between source and target domains. Pan BIBREF13 performed Domain Adaptation using Spectral Feature Alignment (SFA) that aligns features across different domains. Glorot BIBREF14 proposed Stacked Denoising Autoencoder (SDA) that learns generalized feature representations across domains. Zheng BIBREF15 proposed end-to-end adversarial network for Domain Adaptation. Qi BIBREF16 proposed a memory network for Domain Adaptation. Zheng BIBREF17 proposed a Hierarchical transfer network relying on attention for Domain Adaptation. However, all the above architectures use a different sub-network altogether to incorporate domain agnostic knowledge and is combined with main network in the final layers. 
This makes these architectures computationally intensive. To address this issue, we propose a Gated Convolutional Neural Network (GCN) model that learns domain agnostic knowledge using a gated mechanism BIBREF18 . The convolution layers learn higher level representations for the source domain and the gated layer selects domain agnostic representations. Unlike other models, GCN does not rely on a special sub-network for learning domain agnostic representations. As the gated mechanism is applied directly on the convolution layers, GCN is computationally efficient. Related Work Traditionally, methods for tackling Domain Adaptation have been lexicon based. Blitzer BIBREF19 used a pivot method to select features that occur frequently in both domains. It assumes that the selected pivot features can reliably represent the source domain. The pivots are selected using mutual information between the selected features and the source domain labels. The SFA BIBREF13 method argues that pivot features selected from the source domain cannot reliably represent the target domain. Hence, SFA tries to exploit the relationship between domain-specific and domain independent words by simultaneously co-clustering them in a common latent space. SDA BIBREF14 performs Domain Adaptation by learning intermediate representations through auto-encoders. Yu BIBREF20 used two auxiliary tasks to help induce sentence embeddings that work well across different domains. These embeddings are trained using Convolutional Neural Networks (CNN). Gated convolutional neural networks have achieved state-of-the-art results in language modelling BIBREF18 . Since then, they have been used in different areas of natural language processing (NLP) like sentence similarity BIBREF21 and aspect based sentiment analysis BIBREF22 . Gated Convolutional Neural Networks In this section, we introduce a model based on Gated Convolutional Neural Networks for Domain Adaptation. We present the problem definition of Domain Adaptation, followed by the architecture of the proposed model. Problem Definition Given a source domain $D_{S}$ represented as $D_{S}$ = { $(x_{s_{1}},y_{s_{1}})$ , $(x_{s_{2}},y_{s_{2}})$ , ..., $(x_{s_{n}},y_{s_{n}})$ }, where $x_{s_{i}} \in \mathbb {R}$ represents the vector of the $i^{th}$ source text and $y_{s_{i}}$ represents the corresponding source domain label, let $T_{S}$ represent the task in the source domain. Given a target domain $D_{T}$ represented as $D_{T}$ = { $(x_{t_{1}},y_{t_{1}})$ , $(x_{t_{2}},y_{t_{2}})$ , ..., $(x_{t_{m}},y_{t_{m}})$ }, where $x_{t_{i}}$ represents the vector of the $i^{th}$ target text and $y_{t_{i}}$ represents the corresponding target domain label, let $T_{T}$ represent the task in the target domain. Domain Adaptation (DA) is defined by the target predictive function $f_{T}(\cdot )$ calculated using the knowledge of $D_{S}$ and $T_{S}$ , where $D_{S} \ne D_{T}$ but $T_{S} = T_{T}$ . It is important to note that the domains are different but there is only a single task. In this paper, the task is sentiment classification. Model Architecture The proposed model architecture is shown in Figure 1 . Recurrent Neural Networks like LSTM and GRU process their inputs sequentially, one timestep at a time, and hence lack parallelization over inputs in training. In the case of attention based models, the attention layer has to wait for the outputs from all timesteps. Hence, these models fail to take advantage of parallelism either. Since the proposed model is based on convolution layers and a gated mechanism, it can be parallelized efficiently.
The convolution layers learn higher level representations for the source domain. The gated mechanism learns the domain agnostic representations. Together, they control the information that flows through to the fully connected output layer after max pooling. Let $I$ denote the input sentence, represented as $I$ = { $w_{1}$ $w_{2}$ $w_{3}$ ... $w_{N}$ }, where $w_{i}$ represents the $i^{th}$ word in $I$ and $N$ is the maximum sentence length considered. The vocabulary size is fixed for each dataset and the word embedding matrix maps each word in the vocabulary to a $d$ dimensional vector. Input sentences whose length is less than $N$ are padded with 0s to reach the maximum sentence length. Words absent in the pretrained word embeddings are initialized to 0s. Therefore, each input sentence $I$ is converted to an $N \times d$ dimensional matrix $P$ . The convolution operation is applied on $P$ with a kernel $W_{a}$ . The convolution operation is one-dimensional, applied with a fixed window size across words. We consider kernel sizes of 3, 4 and 5. The weight initialization of these kernels is done using glorot uniform BIBREF23 . Each kernel is a feature detector which extracts patterns from n-grams. After convolution we obtain a new feature map $C = [C_{1}, C_{2}, \dots , C_{N}]$ for each kernel. $$C_{i} = f(P_{i:i+h} \ast W_{a} + b_{a})$$ (Eq. 5) where $f$ represents the activation function in the convolution layer. The gated mechanism is applied on each convolution layer. Each gated layer learns to filter domain agnostic representations for every time step $i$ . $$S_{i} = g(P_{i:i+h} \ast W_{s} + b_{s})$$ (Eq. 6) where $g$ is the activation function used in the gated convolution layer. The outputs from the convolution layer and the gated convolution layer are multiplied element-wise to compute a new feature representation $G_{i}$ $$G_{i} = C_{i} \times S_{i}$$ (Eq. 7) A maxpooling operation is applied across each filter in this new feature representation to keep the most important features BIBREF8 . As shown in Figure 1 , the outputs from the maxpooling layer across all filters are concatenated. The concatenated layer is fully connected to the output layer. Sigmoid is used as the activation function in the output layer. Gating mechanisms Gating mechanisms have been effective in Recurrent Neural Networks like GRU and LSTM. They control the information flow through their recurrent cells. In the case of GCN, these gated units control the domain information that flows to the pooling layers. The model must be robust to changes in domain knowledge and should be able to generalize well across different domains. We use the gated mechanisms Gated Tanh Unit (GTU), Gated Linear Unit (GLU) and Gated Tanh ReLU Unit (GTRU) BIBREF22 in the proposed model. The gated architectures are shown in Figure 2 . The output of the Gated Tanh Unit is calculated as $tanh(P \ast W + c) \times \sigma (P \ast V + c)$ . In the case of the Gated Linear Unit, it is calculated as $(P \ast W + c) \times \sigma (P \ast V + c)$ , where $tanh$ and $\sigma $ denote the Tanh and Sigmoid activation functions respectively. In the case of the Gated Tanh ReLU Unit, the output is calculated as $tanh(P \ast W + c) \times relu(P \ast V + c)$ . Datasets Multi Domain Dataset BIBREF19 is a small dataset with reviews from four distinct domains, namely Books (B), DVD (D), Electronics (E) and Kitchen (K). Each domain consists of 2000 reviews equally divided among positive and negative sentiment. We consider 1280 reviews for training, 320 reviews for validation and 400 reviews for testing from each domain.
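As an editorial illustration of the gated block in Eq. (5)-(7) and of the three gate variants described above, the following is a minimal sketch in Keras (the framework named in the implementation details below); the padding choice and the layer names are assumptions, not the authors' released code.

```python
# Minimal sketch of one gated convolution block (Eq. 5-7) with the three gate
# variants GTU, GLU and GTRU. Illustrative only; not the authors' implementation.
from tensorflow.keras import layers

def gated_conv_block(x, filters=100, kernel_size=3, gate="GLU"):
    """x: embedded sentence of shape (batch, N, d); returns (batch, filters)."""
    # Content path, C_i = f(P_{i:i+h} * W_a + b_a): tanh for GTU/GTRU, linear for GLU.
    content_activation = "linear" if gate == "GLU" else "tanh"
    C = layers.Conv1D(filters, kernel_size, padding="same",
                      activation=content_activation,
                      kernel_initializer="glorot_uniform")(x)
    # Gate path, S_i = g(P_{i:i+h} * W_s + b_s): sigmoid for GTU/GLU, relu for GTRU.
    gate_activation = "relu" if gate == "GTRU" else "sigmoid"
    S = layers.Conv1D(filters, kernel_size, padding="same",
                      activation=gate_activation,
                      kernel_initializer="glorot_uniform")(x)
    # G_i = C_i x S_i, followed by max-pooling over time to keep the strongest features.
    G = layers.Multiply()([C, S])
    return layers.GlobalMaxPooling1D()(G)
```

Switching the gate argument selects among the three variants that are compared in the experiments below.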
Amazon Reviews Dataset BIBREF24 is a large dataset with millions of reviews from different product categories. For our experiments, we consider a subset of 20000 reviews from the domains Cell Phones and Accessories (C), Clothing and Shoes (S), Home and Kitchen (H) and Tools and Home Improvement (T). Out of the 20000 reviews, 10000 are positive and 10000 are negative. We use 12800 reviews for training, 3200 reviews for validation and 4000 reviews for testing from each domain. Baselines To evaluate the performance of the proposed model, we consider various baselines like traditional lexicon approaches, CNN models without gating mechanisms and LSTM models. Bag-of-Words (BoW) is one of the strongest baselines in text classification BIBREF4 . We consider all the words as features with a minimum frequency of 5. These features are used to train a Logistic Regression (LR) classifier. TF-IDF is a feature selection technique built upon Bag-of-Words. We consider all the words with a minimum frequency of 5. The selected features are used to train a Logistic Regression (LR) classifier. Paragraph2vec or doc2vec BIBREF25 is a strong and popularly used baseline for text classification. Paragraph2vec represents each sentence or paragraph in the form of a distributed representation. We trained our own doc2vec model using the DBOW model. The paragraph vectors obtained are used to train a Feed Forward Neural Network (FNN). To show the effectiveness of the gated layer, we consider a CNN model which does not contain gated layers. Hence, we consider the static CNN model, a popular CNN architecture proposed by Kim BIBREF8 , as a baseline. Wang BIBREF26 proposed a combination of Convolutional and Recurrent Neural Networks for sentiment analysis of short texts. This model combines the advantages of the features learned by the CNN with the long-distance dependencies learned by the RNN. It achieved remarkable results on benchmark datasets. We report the results using the code published by the authors. We also offer a comparison with an LSTM model with a single hidden layer. This model is trained under experimental settings equivalent to those of the proposed model. In this baseline, an attention mechanism BIBREF27 is applied on top of the LSTM outputs across the different timesteps. Implementation details All the models are run with an approximately matching number of parameters, for a fair comparison, using a Tesla K80 GPU. Input Each word in the input sentence is converted to a 300 dimensional vector using GloVe pretrained vectors BIBREF28 . A maximum sentence length of 100 is considered for all the datasets. Sentences with length less than 100 are padded with 0s. Architecture details: The model is implemented using Keras. We considered 100 convolution filters for each of the kernels of sizes 3, 4 and 5. Zero padding is applied to the input so that the sentence length is preserved after the convolution operation. Training Each sentence or paragraph is converted to lower case. Stopword removal is not done. A vocabulary size of 20000 is considered for all the datasets. We apply a dropout layer BIBREF29 with a probability of 0.5 on the embedding layer and a probability of 0.2 on the dense layer that connects to the output layer. Adadelta BIBREF30 is used as the optimizer for the gradient descent updates. A batch size of 16 is used for MDD and 50 for ARD. The model is trained for 50 epochs. We employ an early stopping mechanism based on the validation loss with a patience of 10 epochs. The models are trained on the source domain and tested on the unseen target domain in all experiments.
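Putting the pieces above together, the sketch below wires the full model (kernel sizes 3, 4 and 5 with 100 filters each, max-pooling, concatenation and a sigmoid output) into the train-on-source / test-on-target protocol with the stated hyperparameters, reusing the gated_conv_block sketch given earlier. The load_domain loader is a hypothetical placeholder and the copying of GloVe vectors into the embedding layer is omitted; this is an illustrative sketch, not the authors' code.

```python
# Sketch of the full GCN model and the source -> target evaluation protocol (MDD).
import tensorflow as tf
from tensorflow.keras import layers, Model, callbacks

def build_gcn(vocab_size=20000, max_len=100, emb_dim=300, gate="GLU"):
    inp = layers.Input(shape=(max_len,))
    # GloVe 300-d vectors would be copied into this layer, e.g. via set_weights().
    emb = layers.Embedding(vocab_size, emb_dim)(inp)
    emb = layers.Dropout(0.5)(emb)                       # dropout 0.5 on the embedding layer
    pooled = [gated_conv_block(emb, filters=100, kernel_size=k, gate=gate)
              for k in (3, 4, 5)]
    h = layers.Dropout(0.2)(layers.Concatenate()(pooled))  # dropout 0.2 before the output
    out = layers.Dense(1, activation="sigmoid")(h)
    model = Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adadelta(),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

domains = ["B", "D", "E", "K"]                           # Books, DVD, Electronics, Kitchen
for src in domains:
    (x_tr, y_tr), (x_va, y_va), _ = load_domain(src)     # hypothetical data loader
    model = build_gcn()
    model.fit(x_tr, y_tr, validation_data=(x_va, y_va),
              epochs=50, batch_size=16,                   # batch size 16 for MDD
              callbacks=[callbacks.EarlyStopping(monitor="val_loss", patience=10)])
    for tgt in domains:
        if tgt != src:                                    # evaluate on unseen target domains
            _, _, (x_te, y_te) = load_domain(tgt)
            print(src, "->", tgt, model.evaluate(x_te, y_te, verbose=0)[1])
```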
Results The performance of all models on MDD is shown in Tables 2 and 3, while the performance on ARD is shown in Tables 4 and 5. All values are reported as accuracy percentages. Furthermore, the time complexity of each model is presented in Table 1. Discussion We find that the gated architectures vastly outperform the non-gated CNN model. The effectiveness of gated architectures relies on the idea of training a gate whose sole purpose is to identify a weightage. In the task of sentiment analysis, this weightage corresponds to the weights that will lead to a decrease in the final loss, or in other words, the most accurate prediction of sentiment. In doing so, the gate architecture learns which words or n-grams contribute the most to the sentiment; these words or n-grams often correlate with domain independent words. On the other hand, the gate gives less weightage to n-grams which are largely either domain specific or function word chunks that contribute negligibly to the overall sentiment. This is what makes gated architectures effective at Domain Adaptation. In Figure 3, we visualize the convolution outputs (kernel size = 3) from the sigmoid gate in GLU across domains. As the kernel size is 3, each row in the output corresponds to a trigram from the input sentence. This heat map visualizes the values of all 100 filters and their average for every input trigram. These examples demonstrate what the convolution gate learns. Trigrams that are domain independent but carry heavy polarity, like “_ _ good” and “_ costly would”, have higher weightage. Meanwhile, trigrams with domain specific terms like “quality functional case” and “sell entire kitchen” get some of the least weights. In the example in Figure 3 (b), the trigram “would have to” consists only of function words and hence gets the least weight, while “sell entire kitchen” gets comparatively more weight. This might be because, while function words are merely grammatical units which contribute minimally to the overall sentiment, domain specific terms like “sell” may contain sentiment level knowledge that is only relevant within the domain. In such a case, it is possible that the filters effectively propagate sentiment level knowledge from domain specific terms as well. We see that gated architectures almost always outperform the recurrent, attention and linear models (BoW, TF-IDF, PV). This is largely because, when training and testing on the same domain, these models, especially the recurrent and attention based ones, may perform better. However, for Domain Adaptation, as they lack a gated structure that is trained in parallel to learn importance, their performance on the target domain is poor compared to gated architectures. As gated architectures are based on convolutions, they exploit parallelization to give a significant boost in time complexity compared to other models. This is depicted in Table 1. While the gated architectures outperform the other baselines, we also make observations among them. The Gated Linear Unit (GLU) most often performs the best among the gated architectures. In the case of GTU, the outputs from Sigmoid and Tanh are multiplied together; this may result in small gradients and hence in the vanishing gradient problem. However, this is not the case for GLU, as its activation is linear. In the case of GTRU, the outputs from Tanh and ReLU are multiplied. Because ReLU has no negative activations, the corresponding Tanh outputs are completely ignored, resulting in the loss of some domain independent knowledge.
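The contrast between GTU and GLU can be made explicit by differentiating the two units with respect to the content path. Writing $a = P \ast W + c$ and $b = P \ast V + c$, the following is an editorial illustration of the argument above, not an equation from the paper: $$\frac{\partial }{\partial a}\left[tanh(a) \times \sigma (b)\right] = (1 - tanh^{2}(a))\, \sigma (b), \qquad \frac{\partial }{\partial a}\left[a \times \sigma (b)\right] = \sigma (b)$$ For GTU, the factor $(1 - tanh^{2}(a))$ vanishes as $|a|$ grows, so the gradient is the product of two saturating terms, whereas for GLU the gradient through the linear path is scaled only by the gate $\sigma (b)$ and does not saturate.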
Conclusion In this paper, we proposed the Gated Convolutional Neural Network (GCN) model for Domain Adaptation in Sentiment Analysis. We show that the gates in GCN filter out domain dependent knowledge, and hence the model performs better on an unseen target domain. Our experiments reveal that gated architectures outperform other popular recurrent and non-gated architectures. Furthermore, because these architectures rely on convolutions, they take advantage of parallelization, vastly reducing time complexity.
Yes
e196e2ce72eb8b2d50732c26e9bf346df6643f69
e196e2ce72eb8b2d50732c26e9bf346df6643f69_0
Q: Do they evaluate only on English? Text: Introduction Sarcastic and ironic expressions are prevalent in social media and, due to the tendency to invert polarity, play an important role in the context of opinion mining, emotion recognition and sentiment analysis BIBREF0 . Sarcasm and irony are two closely related linguistic phenomena, with the concept of meaning the opposite of what is literally expressed at its core. There is no consensus in academic research on the formal definition, both terms are non-static, depending on different factors such as context, domain and even region in some cases BIBREF1 . In light of the general complexity of natural language, this presents a range of challenges, from the initial dataset design and annotation to computational methods and evaluation BIBREF2 . The difficulties lie in capturing linguistic nuances, context-dependencies and latent meaning, due to richness of dynamic variants and figurative use of language BIBREF3 . The automatic detection of sarcastic expressions often relies on the contrast between positive and negative sentiment BIBREF4 . This incongruence can be found on a lexical level with sentiment-bearing words, as in "I love being ignored". In more complex linguistic settings an action or a situation can be perceived as negative, without revealing any affect-related lexical elements. The intention of the speaker as well as common knowledge or shared experience can be key aspects, as in "I love waking up at 5 am", which can be sarcastic, but not necessarily. Similarly, verbal irony is referred to as saying the opposite of what is meant and based on sentiment contrast BIBREF5 , whereas situational irony is seen as describing circumstances with unexpected consequences BIBREF6 , BIBREF7 . Empirical studies have shown that there are specific linguistic cues and combinations of such that can serve as indicators for sarcastic and ironic expressions. Lexical and morpho-syntactic cues include exclamations and interjections, typographic markers such as all caps, quotation marks and emoticons, intensifiers and hyperboles BIBREF8 , BIBREF9 . In the case of Twitter, the usage of emojis and hashtags has also proven to help automatic irony detection. We propose a purely character-based architecture which tackles these challenges by allowing us to use a learned representation that models features derived from morpho-syntactic cues. To do so, we use deep contextualized word representations, which have recently been used to achieve the state of the art on six NLP tasks, including sentiment analysis BIBREF10 . We test our proposed architecture on 7 different irony/sarcasm datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them and otherwise offering competitive results, showing the effectiveness of our proposal. We make our code available at https://github.com/epochx/elmo4irony. Related work Apart from the relevance for industry applications related to sentiment analysis, sarcasm and irony detection has received great traction within the NLP research community, resulting in a variety of methods, shared tasks and benchmark datasets. Computational approaches for the classification task range from rule-based systems BIBREF4 , BIBREF11 and statistical methods and machine learning algorithms such as Support Vector Machines BIBREF3 , BIBREF12 , Naive Bayes and Decision Trees BIBREF13 leveraging extensive feature sets, to deep learning-based approaches. In this context, BIBREF14 . 
delivered state-of-the-art results by using an intra-attentional component in addition to a recurrent neural network. Previous work such as that by BIBREF15 had proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that also achieved excellent results. A comprehensive survey on automatic sarcasm detection was done by BIBREF16 , while computational irony detection was reviewed by BIBREF17 . Further improvements, both in terms of classic and deep models, came as a result of the SemEval 2018 Shared Task on Irony in English Tweets BIBREF18 . The system that achieved the best results was hybrid, namely, a densely-connected BiLSTM with a multi-task learning strategy, which also makes use of features such as POS tags and lexicons BIBREF19 . Proposed Approach The wide spectrum of linguistic cues that can serve as indicators for sarcastic and ironic expressions has usually been exploited for automatic sarcasm or irony detection by modeling them in the form of binary features in traditional machine learning. On the other hand, deep models for irony and sarcasm detection, which currently offer state-of-the-art performance, have exploited sequential neural networks such as LSTMs and GRUs BIBREF15 , BIBREF23 on top of distributed word representations. Recently, in addition to using a sequential model, BIBREF14 proposed to use intra-attention to compare elements in a sequence against themselves. This allowed the model to better capture word-to-word level interactions that could also be useful for detecting sarcasm, such as the incongruity phenomenon BIBREF3 . Despite this, all models in the literature rely on word-level representations, which keeps the models from being able to easily capture some of the lexical and morpho-syntactic cues known to denote irony, such as all caps, quotation marks and emoticons, and in Twitter, also emojis and hashtags. The usage of a purely character-based input would allow us to directly recover and model these features. Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10 . The ELMo layer allows us to recover a rich 1,024-dimensional dense vector for each word. Using CNNs, each vector is built upon the characters that compose the underlying words. As ELMo also contains a deep bi-directional LSTM on top of these character-derived vectors, each word-level embedding contains contextual information from its surroundings. Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark, which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 . Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification. Experimental Setup We test our proposed approach for binary classification on either sarcasm or irony on seven benchmark datasets retrieved from different media sources. Below we describe each dataset; please see Table TABREF1 for a summary. Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm.
Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag. Reddit: BIBREF21 collected SARC, a corpus comprising 600,000 sarcastic comments on Reddit. We use the main subset, SARC 2.0, and the political subset, SARC 2.0 pol. Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to the other datasets in our selection, these differ mainly in text length and structural complexity BIBREF22 . In Table TABREF1 , we see a notable difference in terms of size among the Twitter datasets. Given this circumstance, and in light of the findings by BIBREF18 , we are interested in studying how the addition of external soft-annotated data impacts the performance. Thus, in addition to the datasets introduced before, we use two corpora for augmentation purposes. The first dataset was collected using the Twitter API, targeting tweets with the hashtags #sarcasm or #irony, resulting in a total of 180,000 and 45,000 tweets respectively. On the other hand, to obtain non-sarcastic and non-ironic tweets, we relied on the SemEval 2018 Task 1 dataset BIBREF25 . To augment each dataset with our external data, we first filter out tweets that are not in English using language guessing systems. We later extract all the hashtags in each target dataset and proceed to augment only using those external tweets that contain any of these hashtags. This allows us to, for each class, add a total of 36,835 tweets for the Ptáček corpus, 8,095 for the Riloff corpus and 26,168 for the SemEval-2018 corpus. In terms of pre-processing, as in our case the preservation of morphological structures is crucial, the amount of normalization is minimal. Concretely, we forgo stemming or lemmatizing, punctuation removal and lowercasing. We limit ourselves to replacing user mentions and URLs with one generic token respectively. In the case of the SemEval-2018 dataset, an additional step was to remove the hashtags #sarcasm, #irony and #not, as they are the artifacts used for creating the dataset. For tokenizing, we use a variation of the Twokenizer BIBREF26 to better deal with emojis. Our models are trained using Adam with a learning rate of 0.001 and a decay rate of 0.5 when there is no improvement in accuracy on the validation set, which we use to select the best models. We also experimented with a slanted triangular learning rate scheme, which was shown by BIBREF27 to deliver excellent results on several tasks, but in practice we did not obtain significant differences. We experimented with batch sizes of 16, 32 and 64, and dropouts ranging from 0.1 to 0.5. The size of the LSTM hidden layer was fixed to 1,024, based on our preliminary experiments. We do not train the ELMo embeddings, but allow their dropouts to be active during training. Results Table TABREF2 summarizes our results. For each dataset, the top row denotes our baseline and the second row shows our best comparable model. Rows with FULL models denote our best single model trained with all the available development data, with no preprocessing other than that mentioned in the previous section. In the case of the Twitter datasets, rows indicated as AUG refer to our models trained using the augmented version of the corresponding datasets. For the SemEval-2018 dataset, we use the best performing model from the Shared Task as a baseline, taken from the task description paper BIBREF18 .
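Before turning to the comparison, the classification architecture described in the Proposed Approach section can be summarized with a minimal sketch. The following assumes the AllenNLP Elmo interface; the file paths, the ReLU activations in the feed-forward layers, and the choice of 1,024 units per LSTM direction (so that the bidirectional output matches the 2,048 hidden units mentioned above) are editorial assumptions, not the authors' released implementation.

```python
# Minimal sketch: frozen ELMo embeddings -> BiLSTM -> max-pooling over time ->
# 2-layer feed-forward network -> binary prediction. Paths and activation choices
# are illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn
from allennlp.modules.elmo import Elmo, batch_to_ids

OPTIONS_FILE = "elmo_options.json"   # placeholder paths for the pre-trained ELMo
WEIGHT_FILE = "elmo_weights.hdf5"    # model from the 1 Billion Word Benchmark

class ElmoIronyClassifier(nn.Module):
    def __init__(self, lstm_hidden=1024, ffn_hidden=512):
        super().__init__()
        # ELMo weights stay frozen (requires_grad=False), but its internal dropout
        # remains active during training, as described in the experimental setup.
        self.elmo = Elmo(OPTIONS_FILE, WEIGHT_FILE,
                         num_output_representations=1,
                         dropout=0.5, requires_grad=False)
        self.lstm = nn.LSTM(input_size=1024, hidden_size=lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.ffn = nn.Sequential(
            nn.Linear(2 * lstm_hidden, ffn_hidden), nn.ReLU(),
            nn.Linear(ffn_hidden, ffn_hidden), nn.ReLU(),
            nn.Linear(ffn_hidden, 1),
        )

    def forward(self, sentences):
        # sentences: a list of token lists, e.g. [["i", "love", "being", "ignored"]]
        char_ids = batch_to_ids(sentences)
        embeddings = self.elmo(char_ids)["elmo_representations"][0]  # (B, T, 1024)
        states, _ = self.lstm(embeddings)                            # (B, T, 2 * hidden)
        pooled, _ = states.max(dim=1)                                # max-pooling over time
        return torch.sigmoid(self.ffn(pooled)).squeeze(-1)

model = ElmoIronyClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)            # Adam, lr = 0.001
criterion = nn.BCELoss()                                             # binary classification
```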
As the winning system is a voting-based ensemble of 10 models, for comparison, we report results using an equivalent setting. For the Riloff, Ptáček, SC-V1 and SC-V2 datasets, our baseline models are taken directly from BIBREF14 . As their pre-processing includes truncating sentence lengths at 40 and 80 tokens for the Twitter and Dialog datasets respectively, while always removing examples with less than 5 tokens, we replicate those steps and report our results under these settings. Finally, for the Reddit datasets, our baselines are taken from BIBREF21 . Although their models are trained for binary classification, instead of reporting the performance in terms of standard classification evaluation metrics, their proposed evaluation task is predicting which of two given statements that share the same context is sarcastic, with performance measured solely by accuracy. We follow this and report our results. In summary, we see our introduced models are able to outperform all previously proposed methods for all metrics, except for the SemEval-2018 best system. Although our approach yields higher Precision, it is not able to reach the given Recall and F1-Score. We note that in terms of single-model architectures, our setting offers increased performance compared to BIBREF19 and their obtained F1-score of 0.674. Moreover, our system does so without requiring external features or multi-task learning. For the other tasks we are able to outperform BIBREF14 without requiring any kind of intra-attention. This shows the effectiveness of using pre-trained character-based word representations, that allow us to recover many of the morpho-syntactic cues that tend to denote irony and sarcasm. Finally, our experiments showed that enlarging existing Twitter datasets by adding external soft-labeled data from the same media source does not yield improvements in the overall performance. This complies with the observations made by BIBREF18 . Since we have designed our augmentation tactics to maximize the overlap in terms of topic, we believe the soft-annotated nature of the additional data we have used is the reason that keeps the model from improving further. Conclusions We have presented a deep learning model based on character-level word representations obtained from ELMo. It is able to obtain the state of the art in sarcasm and irony detection in 6 out of 7 datasets derived from 3 different data sources. Our results also showed that the model does not benefit from using additional soft-labeled data in any of the three tested Twitter datasets, showing that manually-annotated data may be needed in order to improve the performance in this way.
Yes
46570c8faaeefecc8232cfc2faab0005faaba35f
46570c8faaeefecc8232cfc2faab0005faaba35f_0
Q: What are the 7 different datasets? Text: Introduction Sarcastic and ironic expressions are prevalent in social media and, due to the tendency to invert polarity, play an important role in the context of opinion mining, emotion recognition and sentiment analysis BIBREF0 . Sarcasm and irony are two closely related linguistic phenomena, with the concept of meaning the opposite of what is literally expressed at its core. There is no consensus in academic research on the formal definition, both terms are non-static, depending on different factors such as context, domain and even region in some cases BIBREF1 . In light of the general complexity of natural language, this presents a range of challenges, from the initial dataset design and annotation to computational methods and evaluation BIBREF2 . The difficulties lie in capturing linguistic nuances, context-dependencies and latent meaning, due to richness of dynamic variants and figurative use of language BIBREF3 . The automatic detection of sarcastic expressions often relies on the contrast between positive and negative sentiment BIBREF4 . This incongruence can be found on a lexical level with sentiment-bearing words, as in "I love being ignored". In more complex linguistic settings an action or a situation can be perceived as negative, without revealing any affect-related lexical elements. The intention of the speaker as well as common knowledge or shared experience can be key aspects, as in "I love waking up at 5 am", which can be sarcastic, but not necessarily. Similarly, verbal irony is referred to as saying the opposite of what is meant and based on sentiment contrast BIBREF5 , whereas situational irony is seen as describing circumstances with unexpected consequences BIBREF6 , BIBREF7 . Empirical studies have shown that there are specific linguistic cues and combinations of such that can serve as indicators for sarcastic and ironic expressions. Lexical and morpho-syntactic cues include exclamations and interjections, typographic markers such as all caps, quotation marks and emoticons, intensifiers and hyperboles BIBREF8 , BIBREF9 . In the case of Twitter, the usage of emojis and hashtags has also proven to help automatic irony detection. We propose a purely character-based architecture which tackles these challenges by allowing us to use a learned representation that models features derived from morpho-syntactic cues. To do so, we use deep contextualized word representations, which have recently been used to achieve the state of the art on six NLP tasks, including sentiment analysis BIBREF10 . We test our proposed architecture on 7 different irony/sarcasm datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them and otherwise offering competitive results, showing the effectiveness of our proposal. We make our code available at https://github.com/epochx/elmo4irony. Related work Apart from the relevance for industry applications related to sentiment analysis, sarcasm and irony detection has received great traction within the NLP research community, resulting in a variety of methods, shared tasks and benchmark datasets. Computational approaches for the classification task range from rule-based systems BIBREF4 , BIBREF11 and statistical methods and machine learning algorithms such as Support Vector Machines BIBREF3 , BIBREF12 , Naive Bayes and Decision Trees BIBREF13 leveraging extensive feature sets, to deep learning-based approaches. In this context, BIBREF14 . 
delivered state-of-the-art results by using an intra-attentional component in addition to a recurrent neural network. Previous work such as the one by BIBREF15 had proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that also achieved excellent results. A comprehensive survey on automatic sarcasm detection was done by BIBREF16 , while computational irony detection was reviewed by BIBREF17 . Further improvements both in terms of classic and deep models came as a result of the SemEval 2018 Shared Task on Irony in English Tweets BIBREF18 . The system that achieved the best results was hybrid, namely, a densely-connected BiLSTM with a multi-task learning strategy, which also makes use of features such as POS tags and lexicons BIBREF19 . Proposed Approach The wide spectrum of linguistic cues that can serve as indicators for sarcastic and ironic expressions has been usually exploited for automatic sarcasm or irony detection by modeling them in the form of binary features in traditional machine learning. On the other hand, deep models for irony and sarcasm detection, which are currently offer state-of-the-art performance, have exploited sequential neural networks such as LSTMs and GRUs BIBREF15 , BIBREF23 on top of distributed word representations. Recently, in addition to using a sequential model, BIBREF14 proposed to use intra-attention to compare elements in a sequence against themselves. This allowed the model to better capture word-to-word level interactions that could also be useful for detecting sarcasm, such as the incongruity phenomenon BIBREF3 . Despite this, all models in the literature rely on word-level representations, which keeps the models from being able to easily capture some of the lexical and morpho-syntactic cues known to denote irony, such as all caps, quotation marks and emoticons, and in Twitter, also emojis and hashtags. The usage of a purely character-based input would allow us to directly recover and model these features. Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10 . The ELMo layer allows to recover a rich 1,024-dimensional dense vector for each word. Using CNNs, each vector is built upon the characters that compose the underlying words. As ELMo also contains a deep bi-directional LSTM on top of this character-derived vectors, each word-level embedding contains contextual information from their surroundings. Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 . Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification. Experimental Setup We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary. Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. 
Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag. Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol. Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 . In Table TABREF1 , we see a notable difference in terms of size among the Twitter datasets. Given this circumstance, and in light of the findings by BIBREF18 , we are interested in studying how the addition of external soft-annotated data impacts on the performance. Thus, in addition to the datasets introduced before, we use two corpora for augmentation purposes. The first dataset was collected using the Twitter API, targeting tweets with the hashtags #sarcasm or #irony, resulting on a total of 180,000 and 45,000 tweets respectively. On the other hand, to obtain non-sarcastic and non-ironic tweets, we relied on the SemEval 2018 Task 1 dataset BIBREF25 . To augment each dataset with our external data, we first filter out tweets that are not in English using language guessing systems. We later extract all the hashtags in each target dataset and proceed to augment only using those external tweets that contain any of these hashtags. This allows us to, for each class, add a total of 36,835 tweets for the Ptáček corpus, 8,095 for the Riloff corpus and 26,168 for the SemEval-2018 corpus. In terms of pre-processing, as in our case the preservation of morphological structures is crucial, the amount of normalization is minimal. Concretely, we forgo stemming or lemmatizing, punctuation removal and lowercasing. We limit ourselves to replacing user mentions and URLs with one generic token respectively. In the case of the SemEval-2018 dataset, an additional step was to remove the hashtags #sarcasm, #irony and #not, as they are the artifacts used for creating the dataset. For tokenizing, we use a variation of the Twokenizer BIBREF26 to better deal with emojis. Our models are trained using Adam with a learning rate of 0.001 and a decay rate of 0.5 when there is no improvement on the accuracy on the validation set, which we use to select the best models. We also experimented using a slanted triangular learning rate scheme, which was shown by BIBREF27 to deliver excellent results on several tasks, but in practice we did not obtain significant differences. We experimented with batch sizes of 16, 32 and 64, and dropouts ranging from 0.1 to 0.5. The size of the LSTM hidden layer was fixed to 1,024, based on our preliminary experiments. We do not train the ELMo embeddings, but allow their dropouts to be active during training. Results Table TABREF2 summarizes our results. For each dataset, the top row denotes our baseline and the second row shows our best comparable model. Rows with FULL models denote our best single model trained with all the development available data, without any other preprocessing other than mentioned in the previous section. In the case of the Twitter datasets, rows indicated as AUG refer to our the models trained using the augmented version of the corresponding datasets. For the case of the SemEval-2018 dataset we use the best performing model from the Shared Task as a baseline, taken from the task description paper BIBREF18 . 
As the winning system is a voting-based ensemble of 10 models, for comparison, we report results using an equivalent setting. For the Riloff, Ptáček, SC-V1 and SC-V2 datasets, our baseline models are taken directly from BIBREF14 . As their pre-processing includes truncating sentence lengths at 40 and 80 tokens for the Twitter and Dialog datasets respectively, while always removing examples with less than 5 tokens, we replicate those steps and report our results under these settings. Finally, for the Reddit datasets, our baselines are taken from BIBREF21 . Although their models are trained for binary classification, instead of reporting the performance in terms of standard classification evaluation metrics, their proposed evaluation task is predicting which of two given statements that share the same context is sarcastic, with performance measured solely by accuracy. We follow this and report our results. In summary, we see our introduced models are able to outperform all previously proposed methods for all metrics, except for the SemEval-2018 best system. Although our approach yields higher Precision, it is not able to reach the given Recall and F1-Score. We note that in terms of single-model architectures, our setting offers increased performance compared to BIBREF19 and their obtained F1-score of 0.674. Moreover, our system does so without requiring external features or multi-task learning. For the other tasks we are able to outperform BIBREF14 without requiring any kind of intra-attention. This shows the effectiveness of using pre-trained character-based word representations, that allow us to recover many of the morpho-syntactic cues that tend to denote irony and sarcasm. Finally, our experiments showed that enlarging existing Twitter datasets by adding external soft-labeled data from the same media source does not yield improvements in the overall performance. This complies with the observations made by BIBREF18 . Since we have designed our augmentation tactics to maximize the overlap in terms of topic, we believe the soft-annotated nature of the additional data we have used is the reason that keeps the model from improving further. Conclusions We have presented a deep learning model based on character-level word representations obtained from ELMo. It is able to obtain the state of the art in sarcasm and irony detection in 6 out of 7 datasets derived from 3 different data sources. Our results also showed that the model does not benefit from using additional soft-labeled data in any of the three tested Twitter datasets, showing that manually-annotated data may be needed in order to improve the performance in this way.
SemEval 2018 Task 3, BIBREF20, BIBREF4, SARC 2.0, SARC 2.0 pol, Sarcasm Corpus V1 (SC-V1), Sarcasm Corpus V2 (SC-V2)
982d375378238d0adbc9a4c987d633ed16b7f98f
982d375378238d0adbc9a4c987d633ed16b7f98f_0
Q: What are the three different sources of data? Text: Introduction Sarcastic and ironic expressions are prevalent in social media and, due to the tendency to invert polarity, play an important role in the context of opinion mining, emotion recognition and sentiment analysis BIBREF0 . Sarcasm and irony are two closely related linguistic phenomena, with the concept of meaning the opposite of what is literally expressed at its core. There is no consensus in academic research on the formal definition, both terms are non-static, depending on different factors such as context, domain and even region in some cases BIBREF1 . In light of the general complexity of natural language, this presents a range of challenges, from the initial dataset design and annotation to computational methods and evaluation BIBREF2 . The difficulties lie in capturing linguistic nuances, context-dependencies and latent meaning, due to richness of dynamic variants and figurative use of language BIBREF3 . The automatic detection of sarcastic expressions often relies on the contrast between positive and negative sentiment BIBREF4 . This incongruence can be found on a lexical level with sentiment-bearing words, as in "I love being ignored". In more complex linguistic settings an action or a situation can be perceived as negative, without revealing any affect-related lexical elements. The intention of the speaker as well as common knowledge or shared experience can be key aspects, as in "I love waking up at 5 am", which can be sarcastic, but not necessarily. Similarly, verbal irony is referred to as saying the opposite of what is meant and based on sentiment contrast BIBREF5 , whereas situational irony is seen as describing circumstances with unexpected consequences BIBREF6 , BIBREF7 . Empirical studies have shown that there are specific linguistic cues and combinations of such that can serve as indicators for sarcastic and ironic expressions. Lexical and morpho-syntactic cues include exclamations and interjections, typographic markers such as all caps, quotation marks and emoticons, intensifiers and hyperboles BIBREF8 , BIBREF9 . In the case of Twitter, the usage of emojis and hashtags has also proven to help automatic irony detection. We propose a purely character-based architecture which tackles these challenges by allowing us to use a learned representation that models features derived from morpho-syntactic cues. To do so, we use deep contextualized word representations, which have recently been used to achieve the state of the art on six NLP tasks, including sentiment analysis BIBREF10 . We test our proposed architecture on 7 different irony/sarcasm datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them and otherwise offering competitive results, showing the effectiveness of our proposal. We make our code available at https://github.com/epochx/elmo4irony. Related work Apart from the relevance for industry applications related to sentiment analysis, sarcasm and irony detection has received great traction within the NLP research community, resulting in a variety of methods, shared tasks and benchmark datasets. Computational approaches for the classification task range from rule-based systems BIBREF4 , BIBREF11 and statistical methods and machine learning algorithms such as Support Vector Machines BIBREF3 , BIBREF12 , Naive Bayes and Decision Trees BIBREF13 leveraging extensive feature sets, to deep learning-based approaches. In this context, BIBREF14 . 
delivered state-of-the-art results by using an intra-attentional component in addition to a recurrent neural network. Previous work such as the one by BIBREF15 had proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that also achieved excellent results. A comprehensive survey on automatic sarcasm detection was done by BIBREF16 , while computational irony detection was reviewed by BIBREF17 . Further improvements both in terms of classic and deep models came as a result of the SemEval 2018 Shared Task on Irony in English Tweets BIBREF18 . The system that achieved the best results was hybrid, namely, a densely-connected BiLSTM with a multi-task learning strategy, which also makes use of features such as POS tags and lexicons BIBREF19 . Proposed Approach The wide spectrum of linguistic cues that can serve as indicators for sarcastic and ironic expressions has been usually exploited for automatic sarcasm or irony detection by modeling them in the form of binary features in traditional machine learning. On the other hand, deep models for irony and sarcasm detection, which are currently offer state-of-the-art performance, have exploited sequential neural networks such as LSTMs and GRUs BIBREF15 , BIBREF23 on top of distributed word representations. Recently, in addition to using a sequential model, BIBREF14 proposed to use intra-attention to compare elements in a sequence against themselves. This allowed the model to better capture word-to-word level interactions that could also be useful for detecting sarcasm, such as the incongruity phenomenon BIBREF3 . Despite this, all models in the literature rely on word-level representations, which keeps the models from being able to easily capture some of the lexical and morpho-syntactic cues known to denote irony, such as all caps, quotation marks and emoticons, and in Twitter, also emojis and hashtags. The usage of a purely character-based input would allow us to directly recover and model these features. Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10 . The ELMo layer allows to recover a rich 1,024-dimensional dense vector for each word. Using CNNs, each vector is built upon the characters that compose the underlying words. As ELMo also contains a deep bi-directional LSTM on top of this character-derived vectors, each word-level embedding contains contextual information from their surroundings. Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 . Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification. Experimental Setup We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary. Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. 
Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag. Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol. Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 . In Table TABREF1 , we see a notable difference in terms of size among the Twitter datasets. Given this circumstance, and in light of the findings by BIBREF18 , we are interested in studying how the addition of external soft-annotated data impacts on the performance. Thus, in addition to the datasets introduced before, we use two corpora for augmentation purposes. The first dataset was collected using the Twitter API, targeting tweets with the hashtags #sarcasm or #irony, resulting on a total of 180,000 and 45,000 tweets respectively. On the other hand, to obtain non-sarcastic and non-ironic tweets, we relied on the SemEval 2018 Task 1 dataset BIBREF25 . To augment each dataset with our external data, we first filter out tweets that are not in English using language guessing systems. We later extract all the hashtags in each target dataset and proceed to augment only using those external tweets that contain any of these hashtags. This allows us to, for each class, add a total of 36,835 tweets for the Ptáček corpus, 8,095 for the Riloff corpus and 26,168 for the SemEval-2018 corpus. In terms of pre-processing, as in our case the preservation of morphological structures is crucial, the amount of normalization is minimal. Concretely, we forgo stemming or lemmatizing, punctuation removal and lowercasing. We limit ourselves to replacing user mentions and URLs with one generic token respectively. In the case of the SemEval-2018 dataset, an additional step was to remove the hashtags #sarcasm, #irony and #not, as they are the artifacts used for creating the dataset. For tokenizing, we use a variation of the Twokenizer BIBREF26 to better deal with emojis. Our models are trained using Adam with a learning rate of 0.001 and a decay rate of 0.5 when there is no improvement on the accuracy on the validation set, which we use to select the best models. We also experimented using a slanted triangular learning rate scheme, which was shown by BIBREF27 to deliver excellent results on several tasks, but in practice we did not obtain significant differences. We experimented with batch sizes of 16, 32 and 64, and dropouts ranging from 0.1 to 0.5. The size of the LSTM hidden layer was fixed to 1,024, based on our preliminary experiments. We do not train the ELMo embeddings, but allow their dropouts to be active during training. Results Table TABREF2 summarizes our results. For each dataset, the top row denotes our baseline and the second row shows our best comparable model. Rows with FULL models denote our best single model trained with all the development available data, without any other preprocessing other than mentioned in the previous section. In the case of the Twitter datasets, rows indicated as AUG refer to our the models trained using the augmented version of the corresponding datasets. For the case of the SemEval-2018 dataset we use the best performing model from the Shared Task as a baseline, taken from the task description paper BIBREF18 . 
As the winning system is a voting-based ensemble of 10 models, for comparison, we report results using an equivalent setting. For the Riloff, Ptáček, SC-V1 and SC-V2 datasets, our baseline models are taken directly from BIBREF14 . As their pre-processing includes truncating sentence lengths at 40 and 80 tokens for the Twitter and Dialog datasets respectively, while always removing examples with less than 5 tokens, we replicate those steps and report our results under these settings. Finally, for the Reddit datasets, our baselines are taken from BIBREF21 . Although their models are trained for binary classification, instead of reporting the performance in terms of standard classification evaluation metrics, their proposed evaluation task is predicting which of two given statements that share the same context is sarcastic, with performance measured solely by accuracy. We follow this and report our results. In summary, we see our introduced models are able to outperform all previously proposed methods for all metrics, except for the SemEval-2018 best system. Although our approach yields higher Precision, it is not able to reach the given Recall and F1-Score. We note that in terms of single-model architectures, our setting offers increased performance compared to BIBREF19 and their obtained F1-score of 0.674. Moreover, our system does so without requiring external features or multi-task learning. For the other tasks we are able to outperform BIBREF14 without requiring any kind of intra-attention. This shows the effectiveness of using pre-trained character-based word representations, that allow us to recover many of the morpho-syntactic cues that tend to denote irony and sarcasm. Finally, our experiments showed that enlarging existing Twitter datasets by adding external soft-labeled data from the same media source does not yield improvements in the overall performance. This complies with the observations made by BIBREF18 . Since we have designed our augmentation tactics to maximize the overlap in terms of topic, we believe the soft-annotated nature of the additional data we have used is the reason that keeps the model from improving further. Conclusions We have presented a deep learning model based on character-level word representations obtained from ELMo. It is able to obtain the state of the art in sarcasm and irony detection in 6 out of 7 datasets derived from 3 different data sources. Our results also showed that the model does not benefit from using additional soft-labeled data in any of the three tested Twitter datasets, showing that manually-annotated data may be needed in order to improve the performance in this way.
Twitter, Reddit, Online Dialogues
bbdb2942dc6de3d384e3a1b705af996a5341031b
bbdb2942dc6de3d384e3a1b705af996a5341031b_0
Q: What type of model are the ELMo representations used in? Text: Introduction Sarcastic and ironic expressions are prevalent in social media and, due to the tendency to invert polarity, play an important role in the context of opinion mining, emotion recognition and sentiment analysis BIBREF0 . Sarcasm and irony are two closely related linguistic phenomena, with the concept of meaning the opposite of what is literally expressed at its core. There is no consensus in academic research on the formal definition, both terms are non-static, depending on different factors such as context, domain and even region in some cases BIBREF1 . In light of the general complexity of natural language, this presents a range of challenges, from the initial dataset design and annotation to computational methods and evaluation BIBREF2 . The difficulties lie in capturing linguistic nuances, context-dependencies and latent meaning, due to richness of dynamic variants and figurative use of language BIBREF3 . The automatic detection of sarcastic expressions often relies on the contrast between positive and negative sentiment BIBREF4 . This incongruence can be found on a lexical level with sentiment-bearing words, as in "I love being ignored". In more complex linguistic settings an action or a situation can be perceived as negative, without revealing any affect-related lexical elements. The intention of the speaker as well as common knowledge or shared experience can be key aspects, as in "I love waking up at 5 am", which can be sarcastic, but not necessarily. Similarly, verbal irony is referred to as saying the opposite of what is meant and based on sentiment contrast BIBREF5 , whereas situational irony is seen as describing circumstances with unexpected consequences BIBREF6 , BIBREF7 . Empirical studies have shown that there are specific linguistic cues and combinations of such that can serve as indicators for sarcastic and ironic expressions. Lexical and morpho-syntactic cues include exclamations and interjections, typographic markers such as all caps, quotation marks and emoticons, intensifiers and hyperboles BIBREF8 , BIBREF9 . In the case of Twitter, the usage of emojis and hashtags has also proven to help automatic irony detection. We propose a purely character-based architecture which tackles these challenges by allowing us to use a learned representation that models features derived from morpho-syntactic cues. To do so, we use deep contextualized word representations, which have recently been used to achieve the state of the art on six NLP tasks, including sentiment analysis BIBREF10 . We test our proposed architecture on 7 different irony/sarcasm datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them and otherwise offering competitive results, showing the effectiveness of our proposal. We make our code available at https://github.com/epochx/elmo4irony. Related work Apart from the relevance for industry applications related to sentiment analysis, sarcasm and irony detection has received great traction within the NLP research community, resulting in a variety of methods, shared tasks and benchmark datasets. Computational approaches for the classification task range from rule-based systems BIBREF4 , BIBREF11 and statistical methods and machine learning algorithms such as Support Vector Machines BIBREF3 , BIBREF12 , Naive Bayes and Decision Trees BIBREF13 leveraging extensive feature sets, to deep learning-based approaches. In this context, BIBREF14 . 
delivered state-of-the-art results by using an intra-attentional component in addition to a recurrent neural network. Previous work such as the one by BIBREF15 had proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that also achieved excellent results. A comprehensive survey on automatic sarcasm detection was done by BIBREF16 , while computational irony detection was reviewed by BIBREF17 . Further improvements both in terms of classic and deep models came as a result of the SemEval 2018 Shared Task on Irony in English Tweets BIBREF18 . The system that achieved the best results was hybrid, namely, a densely-connected BiLSTM with a multi-task learning strategy, which also makes use of features such as POS tags and lexicons BIBREF19 . Proposed Approach The wide spectrum of linguistic cues that can serve as indicators for sarcastic and ironic expressions has been usually exploited for automatic sarcasm or irony detection by modeling them in the form of binary features in traditional machine learning. On the other hand, deep models for irony and sarcasm detection, which are currently offer state-of-the-art performance, have exploited sequential neural networks such as LSTMs and GRUs BIBREF15 , BIBREF23 on top of distributed word representations. Recently, in addition to using a sequential model, BIBREF14 proposed to use intra-attention to compare elements in a sequence against themselves. This allowed the model to better capture word-to-word level interactions that could also be useful for detecting sarcasm, such as the incongruity phenomenon BIBREF3 . Despite this, all models in the literature rely on word-level representations, which keeps the models from being able to easily capture some of the lexical and morpho-syntactic cues known to denote irony, such as all caps, quotation marks and emoticons, and in Twitter, also emojis and hashtags. The usage of a purely character-based input would allow us to directly recover and model these features. Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10 . The ELMo layer allows to recover a rich 1,024-dimensional dense vector for each word. Using CNNs, each vector is built upon the characters that compose the underlying words. As ELMo also contains a deep bi-directional LSTM on top of this character-derived vectors, each word-level embedding contains contextual information from their surroundings. Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 . Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification. Experimental Setup We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary. Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. 
Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag. Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol. Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 . In Table TABREF1 , we see a notable difference in terms of size among the Twitter datasets. Given this circumstance, and in light of the findings by BIBREF18 , we are interested in studying how the addition of external soft-annotated data impacts on the performance. Thus, in addition to the datasets introduced before, we use two corpora for augmentation purposes. The first dataset was collected using the Twitter API, targeting tweets with the hashtags #sarcasm or #irony, resulting on a total of 180,000 and 45,000 tweets respectively. On the other hand, to obtain non-sarcastic and non-ironic tweets, we relied on the SemEval 2018 Task 1 dataset BIBREF25 . To augment each dataset with our external data, we first filter out tweets that are not in English using language guessing systems. We later extract all the hashtags in each target dataset and proceed to augment only using those external tweets that contain any of these hashtags. This allows us to, for each class, add a total of 36,835 tweets for the Ptáček corpus, 8,095 for the Riloff corpus and 26,168 for the SemEval-2018 corpus. In terms of pre-processing, as in our case the preservation of morphological structures is crucial, the amount of normalization is minimal. Concretely, we forgo stemming or lemmatizing, punctuation removal and lowercasing. We limit ourselves to replacing user mentions and URLs with one generic token respectively. In the case of the SemEval-2018 dataset, an additional step was to remove the hashtags #sarcasm, #irony and #not, as they are the artifacts used for creating the dataset. For tokenizing, we use a variation of the Twokenizer BIBREF26 to better deal with emojis. Our models are trained using Adam with a learning rate of 0.001 and a decay rate of 0.5 when there is no improvement on the accuracy on the validation set, which we use to select the best models. We also experimented using a slanted triangular learning rate scheme, which was shown by BIBREF27 to deliver excellent results on several tasks, but in practice we did not obtain significant differences. We experimented with batch sizes of 16, 32 and 64, and dropouts ranging from 0.1 to 0.5. The size of the LSTM hidden layer was fixed to 1,024, based on our preliminary experiments. We do not train the ELMo embeddings, but allow their dropouts to be active during training. Results Table TABREF2 summarizes our results. For each dataset, the top row denotes our baseline and the second row shows our best comparable model. Rows with FULL models denote our best single model trained with all the development available data, without any other preprocessing other than mentioned in the previous section. In the case of the Twitter datasets, rows indicated as AUG refer to our the models trained using the augmented version of the corresponding datasets. For the case of the SemEval-2018 dataset we use the best performing model from the Shared Task as a baseline, taken from the task description paper BIBREF18 . 
As the winning system is a voting-based ensemble of 10 models, for comparison, we report results using an equivalent setting. For the Riloff, Ptáček, SC-V1 and SC-V2 datasets, our baseline models are taken directly from BIBREF14 . As their pre-processing includes truncating sentence lengths at 40 and 80 tokens for the Twitter and Dialog datasets respectively, while always removing examples with less than 5 tokens, we replicate those steps and report our results under these settings. Finally, for the Reddit datasets, our baselines are taken from BIBREF21 . Although their models are trained for binary classification, instead of reporting the performance in terms of standard classification evaluation metrics, their proposed evaluation task is predicting which of two given statements that share the same context is sarcastic, with performance measured solely by accuracy. We follow this and report our results. In summary, we see our introduced models are able to outperform all previously proposed methods for all metrics, except for the SemEval-2018 best system. Although our approach yields higher Precision, it is not able to reach the given Recall and F1-Score. We note that in terms of single-model architectures, our setting offers increased performance compared to BIBREF19 and their obtained F1-score of 0.674. Moreover, our system does so without requiring external features or multi-task learning. For the other tasks we are able to outperform BIBREF14 without requiring any kind of intra-attention. This shows the effectiveness of using pre-trained character-based word representations, that allow us to recover many of the morpho-syntactic cues that tend to denote irony and sarcasm. Finally, our experiments showed that enlarging existing Twitter datasets by adding external soft-labeled data from the same media source does not yield improvements in the overall performance. This complies with the observations made by BIBREF18 . Since we have designed our augmentation tactics to maximize the overlap in terms of topic, we believe the soft-annotated nature of the additional data we have used is the reason that keeps the model from improving further. Conclusions We have presented a deep learning model based on character-level word representations obtained from ELMo. It is able to obtain the state of the art in sarcasm and irony detection in 6 out of 7 datasets derived from 3 different data sources. Our results also showed that the model does not benefit from using additional soft-labeled data in any of the three tested Twitter datasets, showing that manually-annotated data may be needed in order to improve the performance in this way.
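As a rough illustration of the classifier described above (frozen ELMo embeddings fed to a BiLSTM, max-pooled over time, then a two-layer feed-forward network and a binary output), a minimal PyTorch sketch is given below. It assumes the 1,024-dimensional contextual embeddings have already been computed; the class and argument names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class IronyClassifier(nn.Module):
    """BiLSTM over pre-computed ELMo embeddings, max-pooled, followed by a 2-layer MLP."""
    def __init__(self, embed_dim=1024, hidden_dim=2048, ff_dim=512, dropout=0.3):
        super().__init__()
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, ff_dim), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(ff_dim, ff_dim), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(ff_dim, 2),  # binary decision: ironic/sarcastic vs. not
        )

    def forward(self, elmo_embeddings):           # (batch, seq_len, embed_dim)
        states, _ = self.bilstm(elmo_embeddings)  # (batch, seq_len, 2 * hidden_dim)
        pooled, _ = states.max(dim=1)             # max-pooling over the time dimension
        return self.classifier(pooled)            # (batch, 2) logits
```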
A bi-LSTM with max-pooling on top of it
4ec538e114356f72ef82f001549accefaf85e99c
4ec538e114356f72ef82f001549accefaf85e99c_0
Q: Which morphosyntactic features are thought to indicate irony or sarcasm? Text: Introduction Sarcastic and ironic expressions are prevalent in social media and, due to the tendency to invert polarity, play an important role in the context of opinion mining, emotion recognition and sentiment analysis BIBREF0 . Sarcasm and irony are two closely related linguistic phenomena, with the concept of meaning the opposite of what is literally expressed at its core. There is no consensus in academic research on the formal definition, both terms are non-static, depending on different factors such as context, domain and even region in some cases BIBREF1 . In light of the general complexity of natural language, this presents a range of challenges, from the initial dataset design and annotation to computational methods and evaluation BIBREF2 . The difficulties lie in capturing linguistic nuances, context-dependencies and latent meaning, due to richness of dynamic variants and figurative use of language BIBREF3 . The automatic detection of sarcastic expressions often relies on the contrast between positive and negative sentiment BIBREF4 . This incongruence can be found on a lexical level with sentiment-bearing words, as in "I love being ignored". In more complex linguistic settings an action or a situation can be perceived as negative, without revealing any affect-related lexical elements. The intention of the speaker as well as common knowledge or shared experience can be key aspects, as in "I love waking up at 5 am", which can be sarcastic, but not necessarily. Similarly, verbal irony is referred to as saying the opposite of what is meant and based on sentiment contrast BIBREF5 , whereas situational irony is seen as describing circumstances with unexpected consequences BIBREF6 , BIBREF7 . Empirical studies have shown that there are specific linguistic cues and combinations of such that can serve as indicators for sarcastic and ironic expressions. Lexical and morpho-syntactic cues include exclamations and interjections, typographic markers such as all caps, quotation marks and emoticons, intensifiers and hyperboles BIBREF8 , BIBREF9 . In the case of Twitter, the usage of emojis and hashtags has also proven to help automatic irony detection. We propose a purely character-based architecture which tackles these challenges by allowing us to use a learned representation that models features derived from morpho-syntactic cues. To do so, we use deep contextualized word representations, which have recently been used to achieve the state of the art on six NLP tasks, including sentiment analysis BIBREF10 . We test our proposed architecture on 7 different irony/sarcasm datasets derived from 3 different data sources, providing state-of-the-art performance in 6 of them and otherwise offering competitive results, showing the effectiveness of our proposal. We make our code available at https://github.com/epochx/elmo4irony. Related work Apart from the relevance for industry applications related to sentiment analysis, sarcasm and irony detection has received great traction within the NLP research community, resulting in a variety of methods, shared tasks and benchmark datasets. Computational approaches for the classification task range from rule-based systems BIBREF4 , BIBREF11 and statistical methods and machine learning algorithms such as Support Vector Machines BIBREF3 , BIBREF12 , Naive Bayes and Decision Trees BIBREF13 leveraging extensive feature sets, to deep learning-based approaches. 
In this context, BIBREF14 . delivered state-of-the-art results by using an intra-attentional component in addition to a recurrent neural network. Previous work such as the one by BIBREF15 had proposed a convolutional long-short-term memory network (CNN-LSTM-DNN) that also achieved excellent results. A comprehensive survey on automatic sarcasm detection was done by BIBREF16 , while computational irony detection was reviewed by BIBREF17 . Further improvements both in terms of classic and deep models came as a result of the SemEval 2018 Shared Task on Irony in English Tweets BIBREF18 . The system that achieved the best results was hybrid, namely, a densely-connected BiLSTM with a multi-task learning strategy, which also makes use of features such as POS tags and lexicons BIBREF19 . Proposed Approach The wide spectrum of linguistic cues that can serve as indicators for sarcastic and ironic expressions has been usually exploited for automatic sarcasm or irony detection by modeling them in the form of binary features in traditional machine learning. On the other hand, deep models for irony and sarcasm detection, which are currently offer state-of-the-art performance, have exploited sequential neural networks such as LSTMs and GRUs BIBREF15 , BIBREF23 on top of distributed word representations. Recently, in addition to using a sequential model, BIBREF14 proposed to use intra-attention to compare elements in a sequence against themselves. This allowed the model to better capture word-to-word level interactions that could also be useful for detecting sarcasm, such as the incongruity phenomenon BIBREF3 . Despite this, all models in the literature rely on word-level representations, which keeps the models from being able to easily capture some of the lexical and morpho-syntactic cues known to denote irony, such as all caps, quotation marks and emoticons, and in Twitter, also emojis and hashtags. The usage of a purely character-based input would allow us to directly recover and model these features. Consequently, our architecture is based on Embeddings from Language Model or ELMo BIBREF10 . The ELMo layer allows to recover a rich 1,024-dimensional dense vector for each word. Using CNNs, each vector is built upon the characters that compose the underlying words. As ELMo also contains a deep bi-directional LSTM on top of this character-derived vectors, each word-level embedding contains contextual information from their surroundings. Concretely, we use a pre-trained ELMo model, obtained using the 1 Billion Word Benchmark which contains about 800M tokens of news crawl data from WMT 2011 BIBREF24 . Subsequently, the contextualized embeddings are passed on to a BiLSTM with 2,048 hidden units. We aggregate the LSTM hidden states using max-pooling, which in our preliminary experiments offered us better results, and feed the resulting vector to a 2-layer feed-forward network, where each layer has 512 units. The output of this is then fed to the final layer of the model, which performs the binary classification. Experimental Setup We test our proposed approach for binary classification on either sarcasm or irony, on seven benchmark datasets retrieved from different media sources. Below we describe each dataset, please see Table TABREF1 below for a summary. Twitter: We use the Twitter dataset provided for the SemEval 2018 Task 3, Irony Detection in English Tweets BIBREF18 . The dataset was manually annotated using binary labels. We also use the dataset by BIBREF4 , which is manually annotated for sarcasm. 
Finally, we use the dataset by BIBREF20 , who collected a user self-annotated corpus of tweets with the #sarcasm hashtag. Reddit: BIBREF21 collected SARC, a corpus comprising of 600.000 sarcastic comments on Reddit. We use main subset, SARC 2.0, and the political subset, SARC 2.0 pol. Online Dialogues: We utilize the Sarcasm Corpus V1 (SC-V1) and the Sarcasm Corpus V2 (SC-V2), which are subsets of the Internet Argument Corpus (IAC). Compared to other datasets in our selection, these differ mainly in text length and structure complexity BIBREF22 . In Table TABREF1 , we see a notable difference in terms of size among the Twitter datasets. Given this circumstance, and in light of the findings by BIBREF18 , we are interested in studying how the addition of external soft-annotated data impacts on the performance. Thus, in addition to the datasets introduced before, we use two corpora for augmentation purposes. The first dataset was collected using the Twitter API, targeting tweets with the hashtags #sarcasm or #irony, resulting on a total of 180,000 and 45,000 tweets respectively. On the other hand, to obtain non-sarcastic and non-ironic tweets, we relied on the SemEval 2018 Task 1 dataset BIBREF25 . To augment each dataset with our external data, we first filter out tweets that are not in English using language guessing systems. We later extract all the hashtags in each target dataset and proceed to augment only using those external tweets that contain any of these hashtags. This allows us to, for each class, add a total of 36,835 tweets for the Ptáček corpus, 8,095 for the Riloff corpus and 26,168 for the SemEval-2018 corpus. In terms of pre-processing, as in our case the preservation of morphological structures is crucial, the amount of normalization is minimal. Concretely, we forgo stemming or lemmatizing, punctuation removal and lowercasing. We limit ourselves to replacing user mentions and URLs with one generic token respectively. In the case of the SemEval-2018 dataset, an additional step was to remove the hashtags #sarcasm, #irony and #not, as they are the artifacts used for creating the dataset. For tokenizing, we use a variation of the Twokenizer BIBREF26 to better deal with emojis. Our models are trained using Adam with a learning rate of 0.001 and a decay rate of 0.5 when there is no improvement on the accuracy on the validation set, which we use to select the best models. We also experimented using a slanted triangular learning rate scheme, which was shown by BIBREF27 to deliver excellent results on several tasks, but in practice we did not obtain significant differences. We experimented with batch sizes of 16, 32 and 64, and dropouts ranging from 0.1 to 0.5. The size of the LSTM hidden layer was fixed to 1,024, based on our preliminary experiments. We do not train the ELMo embeddings, but allow their dropouts to be active during training. Results Table TABREF2 summarizes our results. For each dataset, the top row denotes our baseline and the second row shows our best comparable model. Rows with FULL models denote our best single model trained with all the development available data, without any other preprocessing other than mentioned in the previous section. In the case of the Twitter datasets, rows indicated as AUG refer to our the models trained using the augmented version of the corresponding datasets. For the case of the SemEval-2018 dataset we use the best performing model from the Shared Task as a baseline, taken from the task description paper BIBREF18 . 
As the winning system is a voting-based ensemble of 10 models, for comparison, we report results using an equivalent setting. For the Riloff, Ptáček, SC-V1 and SC-V2 datasets, our baseline models are taken directly from BIBREF14 . As their pre-processing includes truncating sentence lengths at 40 and 80 tokens for the Twitter and Dialog datasets respectively, while always removing examples with less than 5 tokens, we replicate those steps and report our results under these settings. Finally, for the Reddit datasets, our baselines are taken from BIBREF21 . Although their models are trained for binary classification, instead of reporting the performance in terms of standard classification evaluation metrics, their proposed evaluation task is predicting which of two given statements that share the same context is sarcastic, with performance measured solely by accuracy. We follow this and report our results. In summary, we see our introduced models are able to outperform all previously proposed methods for all metrics, except for the SemEval-2018 best system. Although our approach yields higher Precision, it is not able to reach the given Recall and F1-Score. We note that in terms of single-model architectures, our setting offers increased performance compared to BIBREF19 and their obtained F1-score of 0.674. Moreover, our system does so without requiring external features or multi-task learning. For the other tasks we are able to outperform BIBREF14 without requiring any kind of intra-attention. This shows the effectiveness of using pre-trained character-based word representations, that allow us to recover many of the morpho-syntactic cues that tend to denote irony and sarcasm. Finally, our experiments showed that enlarging existing Twitter datasets by adding external soft-labeled data from the same media source does not yield improvements in the overall performance. This complies with the observations made by BIBREF18 . Since we have designed our augmentation tactics to maximize the overlap in terms of topic, we believe the soft-annotated nature of the additional data we have used is the reason that keeps the model from improving further. Conclusions We have presented a deep learning model based on character-level word representations obtained from ELMo. It is able to obtain the state of the art in sarcasm and irony detection in 6 out of 7 datasets derived from 3 different data sources. Our results also showed that the model does not benefit from using additional soft-labeled data in any of the three tested Twitter datasets, showing that manually-annotated data may be needed in order to improve the performance in this way.
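The augmentation procedure outlined above (discard non-English tweets, then keep only external tweets whose hashtags overlap with those found in the target dataset) can be sketched as follows. The `detect_language` argument stands in for whichever language-guessing system is used; it is an assumption, not part of the paper.

```python
import re

HASHTAG_RE = re.compile(r"#\w+")

def hashtags(text):
    """Lower-cased hashtags occurring in a tweet."""
    return {tag.lower() for tag in HASHTAG_RE.findall(text)}

def select_augmentation_tweets(target_tweets, external_tweets, detect_language):
    """Keep external tweets that are English and share at least one hashtag with the target data."""
    target_tags = set()
    for tweet in target_tweets:
        target_tags |= hashtags(tweet)
    kept = []
    for tweet in external_tweets:
        if detect_language(tweet) != "en":  # assumed language-guessing helper
            continue
        if hashtags(tweet) & target_tags:
            kept.append(tweet)
    return kept
```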
all caps, quotation marks, emoticons, emojis, hashtags
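Because these cues have to survive preprocessing, the normalization described in the paper is deliberately minimal; a rough equivalent of the mention/URL replacement step is sketched below (the placeholder tokens are illustrative, not the exact strings used by the authors).

```python
import re

MENTION_RE = re.compile(r"@\w+")
URL_RE = re.compile(r"https?://\S+")

def normalize(tweet):
    """Replace user mentions and URLs with generic tokens; keep casing, punctuation, emojis and hashtags."""
    tweet = MENTION_RE.sub("<user>", tweet)
    return URL_RE.sub("<url>", tweet)
```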
40a45d59a2ef7a67c8ab0f2b2d5b43fc85b85498
40a45d59a2ef7a67c8ab0f2b2d5b43fc85b85498_0
Q: Is the model evaluated on other datasets? Text: Introduction Traditional machine reading comprehension (MRC) tasks share the single-turn setting of answering a single question related to a passage. There is usually no connection between different questions and answers to the same passage. However, the most natural way humans seek answers is via conversation, which carries over context through the dialogue flow. To incorporate conversation into reading comprehension, recently there are several public datasets that evaluate QA model's efficacy in conversational setting, such as CoQA BIBREF0 , QuAC BIBREF1 and QBLink BIBREF2 . In these datasets, to generate correct responses, models are required to fully understand the given passage as well as the context of previous questions and answers. Thus, traditional neural MRC models are not suitable to be directly applied to this scenario. Existing approaches to conversational QA tasks include BiDAF++ BIBREF3 , FlowQA BIBREF4 , DrQA+PGNet BIBREF0 , which all try to find the optimal answer span given the passage and dialogue history. In this paper, we propose SDNet, a contextual attention-based deep neural network for the task of conversational question answering. Our network stems from machine reading comprehension models, but has several unique characteristics to tackle contextual understanding during conversation. Firstly, we apply both inter-attention and self-attention on passage and question to obtain a more effective understanding of the passage and dialogue history. Secondly, SDNet leverages the latest breakthrough in NLP: BERT contextual embedding BIBREF5 . Different from the canonical way of appending a thin layer after BERT structure according to BIBREF5 , we innovatively employed a weighted sum of BERT layer outputs, with locked BERT parameters. Thirdly, we prepend previous rounds of questions and answers to the current question to incorporate contextual information. Empirical results show that each of these components has substantial gains in prediction accuracy. We evaluated SDNet on CoQA dataset, which improves the previous state-of-the-art model's result by 1.6% (from 75.0% to 76.6%) overall $F_1$ score. The ensemble model further increase the $F_1$ score to $79.3\%$ . Moreover, SDNet is the first model ever to pass $80\%$ on CoQA's in-domain dataset. Approach In this section, we propose the neural model, SDNet, for the conversational question answering task, which is formulated as follows. Given a passage $\mathcal {C}$ , and history question and answer utterances $Q_1, A_1, Q_2, A_2, ..., Q_{k-1}, A_{k-1}$ , the task is to generate response $A_k$ given the latest question $Q_k$ . The response is dependent on both the passage and history utterances. To incorporate conversation history into response generation, we employ the idea from DrQA+PGNet BIBREF0 to prepend the latest $N$ rounds of utterances to the current question $Q_k$ . The problem is then converted into a machine reading comprehension task. In other words, the reformulate question is $\mathcal {Q}_k=\lbrace Q_{k-N}; A_{k-N}; ..., Q_{k-1}; A_{k-1}; Q_k\rbrace $ . To differentiate between question and answering, we add symbol $\langle Q \rangle $ before each question and $\langle A \rangle $ before each answer in the experiment. Model Overview Encoding layer encodes each token in passage and question into a fixed-length vector, which includes both word embeddings and contextualized embeddings. For contextualized embedding, we utilize the latest result from BERT BIBREF5 . 
Different from previous work, we fix the parameters in BERT model and use the linear combination of embeddings from different layers in BERT. Integration layer uses multi-layer recurrent neural networks (RNN) to capture contextual information within passage and question. To characterize the relationship between passage and question, we conduct word-level attention from question to passage both before and after the RNNs. We employ the idea of history-of-word from FusionNet BIBREF6 to reduce the dimension of output hidden vectors. Furthermore, we conduct self-attention to extract relationship between words at different positions of context and question. Output layer computes the final answer span. It uses attention to condense the question into a fixed-length vector, which is then used in a bilinear projection to obtain the probability that the answer should start and end at each position. An illustration of our model SDNet is in fig:model. Encoding layer We use 300-dim GloVe BIBREF7 embedding and contextualized embedding for each word in context and question. We employ BERT BIBREF5 as contextualized embedding. Instead of adding a scoring layer to BERT structure as proposed in BIBREF5 , we use the transformer output from BERT as contextualized embedding in our encoding layer. BERT generates $L$ layers of hidden states for all BPE tokens BIBREF8 in a sentence/passage and we employ a weighted sum of these hidden states to obtain contextualized embedding. Furthermore, we lock BERT's internal weights, setting their gradients to zero. In ablation studies, we will show that this weighted sum and weight-locking mechanism can significantly boost the model's performance. In detail, suppose a word $w$ is tokenized to $s$ BPE tokens $w=\lbrace b_1, b_2, ..., b_s\rbrace $ , and BERT generates $L$ hidden states for each BPE token, $\mathbf {h^l_t}, 1\le l \le L, 1\le t \le s$ . The contextual embedding $\operatorname{\mbox{BERT}}_w$ for word $w$ is then a per-layer weighted sum of average BERT embedding, with weights $\alpha _1, ..., \alpha _L$ . $\operatorname{\mbox{BERT}}_w = \sum _{l=1}^L \alpha _l \frac{\sum _{t=1}^s \mathbf {h}^l_t}{s} $ Integration layer Word-level Inter-Attention. We conduct attention from question to context (passage) based on GloVe word embeddings. Suppose the context word embeddings are $\lbrace {h}^C_1, ..., {h}^C_m\rbrace \subset \mathbb {R}^d$ , and the question word embeddings are $\lbrace {h}^Q_1, ..., {h}^Q_n\rbrace \subset \mathbb {R}^d$ . Then the attended vectors from question to context are $\lbrace \hat{{h}}^C_1, ..., \hat{{h}}^C_m\rbrace $ , defined as, $S_{ij} = \operatornamewithlimits{ReLU}(Uh^C_i)D\operatornamewithlimits{ReLU}(Uh^Q_j),$ $\alpha _{ij} \propto {exp(S_{ij})},$ where $D\in \mathbb {R}^{k\times k}$ is a diagonal matrix and $U\in \mathbb {R}^{d\times k}$ , $k$ is the attention hidden size. To simplify notation, we define the attention function above as $\mbox{Attn}({A}, {B}, {C})$ , meaning we compute the attention score $\alpha _{ij}$ based on two sets of vectors ${A}$ and ${B}$ , and use that to linearly combine vector set ${C}$ . So the word-level attention above can be simplified as $\mbox{Attn}(\lbrace {h}^C_i\rbrace _{i=1}^m, \lbrace {h}^Q_i\rbrace _{i=1}^n\rbrace , \lbrace {h}^Q_i\rbrace _{i=1}^n\rbrace )$ . 
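A minimal sketch of the attention function Attn(A, B, C) defined above: scores $S_{ij} = \operatorname{ReLU}(Ua_i)\,D\,\operatorname{ReLU}(Ub_j)$, a softmax over the second argument's positions, then a linear combination of C. The shared projection U and the diagonal D (stored as a vector here) follow the formulas in the text, while shapes and module naming are illustrative.

```python
import torch
import torch.nn as nn

class WordLevelAttention(nn.Module):
    """Attn(A, B, C): score A against B with a shared projection U and diagonal D, then combine C."""
    def __init__(self, dim, attn_hidden):
        super().__init__()
        self.U = nn.Linear(dim, attn_hidden, bias=False)
        self.d = nn.Parameter(torch.ones(attn_hidden))   # diagonal entries of D

    def forward(self, A, B, C):
        # A: (m, dim), B: (n, dim), C: (n, dim_c); returns (m, dim_c).
        left = torch.relu(self.U(A))                     # (m, k)
        right = torch.relu(self.U(B))                    # (n, k)
        scores = (left * self.d) @ right.t()             # S_ij = ReLU(U a_i)^T D ReLU(U b_j)
        alphas = torch.softmax(scores, dim=1)            # alpha_ij ∝ exp(S_ij), normalized over j
        return alphas @ C                                # attended vectors
```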
For each context word in $\mathcal {C}$ , we also include a feature vector $f_w$ including 12-dim POS embedding, 8-dim NER embedding, a 3-dim exact matching vector $em_i$ indicating whether each context word appears in the question, and a normalized term frequency, following the approach in DrQA BIBREF9 . Therefore, the input vector for each context word is $\tilde{{w}}_i^C=[\operatorname{GloVe}(w_i^C); \operatorname{\mbox{BERT}}_{w_i^C}; \hat{{h}}^C_i; f_{w_i^C}]$ ; the input vector for each question word is $\tilde{{w}}_i^Q=[\operatorname{GloVe}(w_i^Q); \operatorname{\mbox{BERT}}_{w_i^Q}]$ . RNN. In this component, we use two separate bidirectional RNNs (BiLSTMs BIBREF10 ) to form the contextualized understanding for $\mathcal {C}$ and $\mathcal {Q}$ . $ {h}_1^{C,k}, ..., {h}_m^{C,k} = \operatorname{\mbox{BiLSTM}}{({h}_1^{C,k-1}, ..., {h}_m^{C,k-1})}, $ $ {h}_1^{Q,k}, ..., {h}_n^{Q,k} = \operatorname{\mbox{BiLSTM}}{({h}_1^{Q,k-1}, ..., {h}_n^{Q,k-1})}, $ where $1\le k \le K$ and $K$ is the number of RNN layers. We use variational dropout BIBREF11 for input vector to each layer of RNN, i.e. the dropout mask is shared over different timesteps. Question Understanding. For each question word in $\mathcal {Q}$ , we employ one more layer of RNN to generate a higher level of understanding of the question. $ {h}_1^{Q,K+1}, ..., {h}_n^{Q,K+1} = \operatorname{\mbox{BiLSTM}}{({h}_1^{Q}, ..., {h}_n^{Q})}, $ $ {h}_i^{Q} = [{h}_i^{Q,1};...;{h}_i^{Q,K}] $ Self-Attention on Question. As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution. We thus employ self-attention on question. The formula is the same as word-level attention, except that we are attending a question to itself: $\lbrace {u}_i^Q\rbrace _{i=1}^n=\mbox{Attn}(\lbrace {h}_i^{Q,K+1}\rbrace _{i=1}^n, \lbrace {h}_i^{Q,K+1}\rbrace _{i=1}^n, \lbrace {h}_i^{Q,K+1}\rbrace _{i=1}^n)$ . The final question representation is thus $\lbrace {u}_i^Q\rbrace _{i=1}^n$ . Multilevel Inter-Attention. After multiple layers of RNN extract different levels of understanding of each word, we conduct multilevel attention from question to context based on all layers of generated representations. However, the aggregated dimensions can be very large, which is computationally inefficient. We thus leverage the history-of-word idea from FusionNet BIBREF6 : we use all previous levels to compute attentions scores, but only linearly combine RNN outputs. In detail, we conduct $K+1$ times of multilevel attention from each RNN layer output of question to context. $ \lbrace {m}_i^{(k),C}\rbrace _{i=1}^m=\mbox{Attn}(\lbrace \mbox{HoW}_i^C\rbrace _{i=1}^m, \lbrace \mbox{HoW}_i^Q\rbrace _{i=1}^n,\lbrace {h}_i^{Q,k}\rbrace _{i=1}^n), 1\le k \le K+1 $ where history-of-word vectors are defined as $\mbox{HoW}_i^C = [\operatorname{GloVe}(w_i^C); \operatorname{\mbox{BERT}}_{w_i^C}; {h}_i^{C,1}; ..., {h}_i^{C,k}],$ $\mbox{HoW}_i^Q = [\operatorname{GloVe}(w_i^Q); \operatorname{\mbox{BERT}}_{w_i^Q}; {h}_i^{Q,1}; ..., {h}_i^{Q,k}].$ An additional RNN layer is applied to obtain the contextualized representation ${v}_i^C$ for each word in $\mathcal {C}$ . $ {y}_i^C = [{h}_i^{C,1}; ...; {h}_i^{C,k}; {m}_i^{(1),C}; ...; {m}_i^{(K+1),C}], $ $ {v}_1^{C}, ..., {v}_m^{C} = \operatorname{\mbox{BiLSTM}}{({y}_1^{C}, ..., {y}_n^{C})}, $ Self Attention on Context. 
Similar to questions, we conduct self attention on context to establish direct correlations between all pairs of words in $\mathcal {C}$ . Again, we use the history of word concept to reduce the output dimension by linearly combining ${v}_i^C$ . $ {s}_i^C = &[\operatorname{GloVe}(w_i^C); \operatorname{\mbox{BERT}}_{w_i^C}; {h}_i^{C,1}; ...; {h}_i^{C,k}; {m}_i^{(1),Q}; ...; {m}_i^{(K+1),Q}; {v}_i^C] $ $\lbrace \tilde{{v}}_i^C\rbrace _{i=1}^m=\mbox{Attn}(\lbrace {s}_i^C\rbrace _{i=1}^m, \lbrace {s}_i^C\rbrace _{i=1}^m, \lbrace {v}_i^C\rbrace _{i=1}^m)$ The self-attention is followed by an additional layer of RNN to generate the final representation of context: $\lbrace {u}_i^C\rbrace _{i=1}^m = \operatorname{\mbox{BiLSTM}}{([{v}_1^C; \tilde{{v}}_1^C], ..., [{v}_m^C; \tilde{{v}}_m^C])}$ Output layer Generating Answer Span. This component is to generate two scores for each context word corresponding to the probability that the answer starts and ends at this word, respectively. Firstly, we condense the question representation into one vector: ${u}^Q=\sum _i{\beta _i}{u}_i^Q$ , where $\beta _i\propto {\exp {({w}^T{u}_i^Q)}}$ and ${w}$ is a parametrized vector. Secondly, we compute the probability that the answer span should start at the $i$ -th word: $P_i^S\propto {\exp {(({u}^Q)^TW_S{u}_i^C)}},$ where $W_S$ is a parametrized matrix. We further fuse the start-position probability into the computation of end-position probability via a GRU, ${t}^Q = \operatorname{GRU}{({u}^Q, \sum _i P_i^S{u}_i^C)}$ . Thus, the probability that the answer span should end at the $i$ -th word is: $P_i^E\propto {\exp {(({t}^Q)^TW_E{u}_i^C)}},$ where $W_E$ is another parametrized matrix. For CoQA dataset, the answer could be affirmation “yes”, negation “no” or no answer “unknown”. We separately generate three probabilities corresponding to these three scenarios, $P_Y, P_N, P_U$ , respectively. For instance, to generate the probability that the answer is “yes”, $P_Y$ , we use: $P_i^{Y}\propto {\exp {(({u}^Q)^T W_{Y}{u}_i^C})},$ $P_{Y} = (\sum _i P_i^{Y}{u}_i^C)^T{w}_{Y},$ where $W_Y$ and ${w}_Y$ are parametrized matrix and vector, respectively. Training. For training, we use all questions/answers for one passage as a batch. The goal is to maximize the probability of the ground-truth answer, including span start/end position, affirmation, negation and no-answer situations. Equivalently, we minimize the negative log-likelihood function $\mathcal {L}$ : $ \mathcal {L} =& \sum _k I^S_k(\mbox{log}(P^S_{i_k^s}) + \mbox{log}(P^E_{i_k^e})) + I^Y_k\mbox{log}P^Y_k+I^N_k\mbox{log}P^N_k + I^U_k\mbox{log}P^U_k, $ where $i_k^s$ and $i_k^e$ are the ground-truth span start and end position for the $k$ -th question. $I^S_k, I^Y_k, I^N_k, I^U_k$ indicate whether the $k$ -th ground-truth answer is a passage span, “yes”, “no” and “unknown”, respectively. More implementation details are in Appendix. Prediction. During inference, we pick the largest span/yes/no/unknown probability. The span is constrained to have a maximum length of 15. Experiments We evaluated our model on CoQA BIBREF0 , a large-scale conversational question answering dataset. In CoQA, many questions require understanding of both the passage and previous questions and answers, which poses challenge to conventional machine reading models. table:coqa summarizes the domain distribution in CoQA. As shown, CoQA contains passages from multiple domains, and the average number of question answering turns is more than 15 per passage. 
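A minimal sketch of the answer-span part of the output layer described above, for a single (question, passage) pair. How the GRU pairs its input and hidden state is not fully specified in the text, so the arrangement here is one plausible reading, and the yes/no/unknown heads are omitted.

```python
import torch
import torch.nn as nn

class SpanOutputLayer(nn.Module):
    """Condense the question into one vector, then score start/end positions bilinearly."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, 1, bias=False)                        # question summary weights
        self.start_bilinear = nn.Bilinear(dim, dim, 1, bias=False)    # plays the role of W_S
        self.end_bilinear = nn.Bilinear(dim, dim, 1, bias=False)      # plays the role of W_E
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, u_q, u_c):
        # u_q: (n, dim) question word vectors; u_c: (m, dim) context word vectors.
        beta = torch.softmax(self.w(u_q).squeeze(-1), dim=0)          # (n,)
        q_vec = beta @ u_q                                            # condensed question u^Q
        p_start = torch.softmax(
            self.start_bilinear(q_vec.expand_as(u_c), u_c).squeeze(-1), dim=0)
        # Fuse the start distribution into the question vector before scoring end positions.
        t_vec = self.gru((p_start @ u_c).unsqueeze(0), q_vec.unsqueeze(0)).squeeze(0)
        p_end = torch.softmax(
            self.end_bilinear(t_vec.expand_as(u_c), u_c).squeeze(-1), dim=0)
        return p_start, p_end
```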
Many questions require contextual understanding to generate the correct answer. For each in-domain dataset, 100 passages are in the development set, and 100 passages are in the test set. The rest of the in-domain passages are in the training set. The test set also includes all of the out-of-domain passages. Baseline models and metrics. We compare SDNet with the following baseline models: PGNet (Seq2Seq with copy mechanism) BIBREF12 , DrQA BIBREF9 , DrQA+PGNet BIBREF0 , BiDAF++ BIBREF3 and FlowQA BIBREF4 . Aligned with the official leaderboard, we use $F_1$ as the evaluation metric, which is the harmonic mean of precision and recall at word level between the predicted answer and ground truth. Results. table:mainresult reports the performance of SDNet and baseline models. As shown, SDNet achieves significantly better results than baseline models. In detail, the single SDNet model improves overall $F_1$ by 1.6%, compared with the previous state-of-the-art model on CoQA, FlowQA. The ensemble SDNet model further improves overall $F_1$ score by 2.7%, and it's the first model to achieve over 80% $F_1$ score on in-domain datasets (80.7%). fig:epoch shows the $F_1$ score on the development set over epochs. As seen, SDNet surpasses all but one of the baseline models after the second epoch, and achieves state-of-the-art results after only 8 epochs. Ablation Studies. We conduct ablation studies on the SDNet model and display the results in table:ablation. The results show that removing BERT can reduce the $F_1$ score on the development set by $7.15\%$ . Our proposed weighted sum of per-layer outputs from BERT is crucial, which can boost the performance by $1.75\%$ , compared with using only the last layer's output. This shows that the output from each layer in BERT is useful in downstream tasks. This technique can also be applied to other NLP tasks. Using the BERT-base pretrained model instead of BERT-large hurts the $F_1$ score by $2.61\%$ , which demonstrates the superiority of the BERT-large model. Variational dropout and self-attention can each improve the performance by 0.24% and 0.75%, respectively. Contextual history. In SDNet, we utilize conversation history via prepending the current question with previous $N$ rounds of questions and ground-truth answers. We experimented with the effect of $N$ and show the result in table:context. Excluding dialogue history ( $N=0$ ) can reduce the $F_1$ score by as much as $8.56\%$ , showing the importance of contextual information in the conversational QA task. The performance of our model peaks when $N=2$ , which was used in the final SDNet model. Conclusions In this paper, we propose a novel contextual attention-based deep neural network, SDNet, to tackle the conversational question answering task. By leveraging inter-attention and self-attention on passage and conversation history, the model is able to comprehend dialogue flow and fuse it with the digestion of passage content. Furthermore, we incorporate the latest breakthrough in NLP, BERT, and leverage it in an innovative way. SDNet achieves superior results over previous approaches. On the public dataset CoQA, SDNet outperforms the previous state-of-the-art model by 1.6% in the overall $F_1$ metric. Our future work is to apply this model to the open-domain multi-turn QA problem with a large corpus or knowledge base, where the target passage may not be directly available. This will be an even more realistic setting, closer to how humans answer questions. Implementation Details We use spaCy for tokenization. 
As BERT uses BPE as its tokenizer, we apply BPE tokenization to each token generated by spaCy. In case a token in spaCy corresponds to multiple BPE sub-tokens, we average the BERT embeddings of these BPE sub-tokens as the embedding for the token. We fix the BERT weights and use the BERT-Large-Uncased model. During training, we use a dropout rate of 0.4 for BERT layer outputs and 0.3 for other layers. We use variational dropout BIBREF11 , which shares the dropout mask over timesteps in the RNN. We batch the data according to passages, so all questions and answers from the same passage make one batch. We use Adamax BIBREF13 as the optimizer, with a learning rate of $\alpha =0.002, \beta =(0.9, 0.999)$ and $\epsilon =10^{-8}$ . We train the model for 30 epochs, with each epoch going over the data once. We clip the gradient norm at 10. The word-level attention has a hidden size of 300. The flow module has a hidden size of 300. The question self-attention has a hidden size of 300. The RNN for both question and context has $K=2$ layers and each layer has a hidden size of 125. The multilevel attention from question to context has a hidden size of 250. The context self-attention has a hidden size of 250. The final layer of the RNN for context has a hidden size of 125.
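The construction of the reformulated question from the dialogue history, described in the Approach section above (prepend the latest N rounds of questions and ground-truth answers with ⟨Q⟩ and ⟨A⟩ markers), can be sketched as follows; the function name and token spellings are illustrative.

```python
def build_reformulated_question(history, current_question, n_rounds=2):
    """Prepend the latest n_rounds of (question, answer) pairs to the current question.

    history: list of (question, answer) string pairs, oldest first.
    """
    recent = history[-n_rounds:] if n_rounds > 0 else []
    parts = []
    for question, answer in recent:
        parts.extend(["<Q>", question, "<A>", answer])
    parts.extend(["<Q>", current_question])
    return " ".join(parts)
```

With `n_rounds=2` this matches the setting that performed best in the contextual-history ablation reported above.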
No
b29b5c39575454da9566b3dd27707fced8c6f4a1
b29b5c39575454da9566b3dd27707fced8c6f4a1_0
Q: Does the model incorporate coreference and entailment? Text: Introduction Traditional machine reading comprehension (MRC) tasks share the single-turn setting of answering a single question related to a passage. There is usually no connection between different questions and answers to the same passage. However, the most natural way humans seek answers is via conversation, which carries over context through the dialogue flow. To incorporate conversation into reading comprehension, recently there are several public datasets that evaluate QA model's efficacy in conversational setting, such as CoQA BIBREF0 , QuAC BIBREF1 and QBLink BIBREF2 . In these datasets, to generate correct responses, models are required to fully understand the given passage as well as the context of previous questions and answers. Thus, traditional neural MRC models are not suitable to be directly applied to this scenario. Existing approaches to conversational QA tasks include BiDAF++ BIBREF3 , FlowQA BIBREF4 , DrQA+PGNet BIBREF0 , which all try to find the optimal answer span given the passage and dialogue history. In this paper, we propose SDNet, a contextual attention-based deep neural network for the task of conversational question answering. Our network stems from machine reading comprehension models, but has several unique characteristics to tackle contextual understanding during conversation. Firstly, we apply both inter-attention and self-attention on passage and question to obtain a more effective understanding of the passage and dialogue history. Secondly, SDNet leverages the latest breakthrough in NLP: BERT contextual embedding BIBREF5 . Different from the canonical way of appending a thin layer after BERT structure according to BIBREF5 , we innovatively employed a weighted sum of BERT layer outputs, with locked BERT parameters. Thirdly, we prepend previous rounds of questions and answers to the current question to incorporate contextual information. Empirical results show that each of these components has substantial gains in prediction accuracy. We evaluated SDNet on CoQA dataset, which improves the previous state-of-the-art model's result by 1.6% (from 75.0% to 76.6%) overall $F_1$ score. The ensemble model further increase the $F_1$ score to $79.3\%$ . Moreover, SDNet is the first model ever to pass $80\%$ on CoQA's in-domain dataset. Approach In this section, we propose the neural model, SDNet, for the conversational question answering task, which is formulated as follows. Given a passage $\mathcal {C}$ , and history question and answer utterances $Q_1, A_1, Q_2, A_2, ..., Q_{k-1}, A_{k-1}$ , the task is to generate response $A_k$ given the latest question $Q_k$ . The response is dependent on both the passage and history utterances. To incorporate conversation history into response generation, we employ the idea from DrQA+PGNet BIBREF0 to prepend the latest $N$ rounds of utterances to the current question $Q_k$ . The problem is then converted into a machine reading comprehension task. In other words, the reformulate question is $\mathcal {Q}_k=\lbrace Q_{k-N}; A_{k-N}; ..., Q_{k-1}; A_{k-1}; Q_k\rbrace $ . To differentiate between question and answering, we add symbol $\langle Q \rangle $ before each question and $\langle A \rangle $ before each answer in the experiment. Model Overview Encoding layer encodes each token in passage and question into a fixed-length vector, which includes both word embeddings and contextualized embeddings. 
For contextualized embedding, we utilize the latest result from BERT BIBREF5 . Different from previous work, we fix the parameters in BERT model and use the linear combination of embeddings from different layers in BERT. Integration layer uses multi-layer recurrent neural networks (RNN) to capture contextual information within passage and question. To characterize the relationship between passage and question, we conduct word-level attention from question to passage both before and after the RNNs. We employ the idea of history-of-word from FusionNet BIBREF6 to reduce the dimension of output hidden vectors. Furthermore, we conduct self-attention to extract relationship between words at different positions of context and question. Output layer computes the final answer span. It uses attention to condense the question into a fixed-length vector, which is then used in a bilinear projection to obtain the probability that the answer should start and end at each position. An illustration of our model SDNet is in fig:model. Encoding layer We use 300-dim GloVe BIBREF7 embedding and contextualized embedding for each word in context and question. We employ BERT BIBREF5 as contextualized embedding. Instead of adding a scoring layer to BERT structure as proposed in BIBREF5 , we use the transformer output from BERT as contextualized embedding in our encoding layer. BERT generates $L$ layers of hidden states for all BPE tokens BIBREF8 in a sentence/passage and we employ a weighted sum of these hidden states to obtain contextualized embedding. Furthermore, we lock BERT's internal weights, setting their gradients to zero. In ablation studies, we will show that this weighted sum and weight-locking mechanism can significantly boost the model's performance. In detail, suppose a word $w$ is tokenized to $s$ BPE tokens $w=\lbrace b_1, b_2, ..., b_s\rbrace $ , and BERT generates $L$ hidden states for each BPE token, $\mathbf {h^l_t}, 1\le l \le L, 1\le t \le s$ . The contextual embedding $\operatorname{\mbox{BERT}}_w$ for word $w$ is then a per-layer weighted sum of average BERT embedding, with weights $\alpha _1, ..., \alpha _L$ . $\operatorname{\mbox{BERT}}_w = \sum _{l=1}^L \alpha _l \frac{\sum _{t=1}^s \mathbf {h}^l_t}{s} $ Integration layer Word-level Inter-Attention. We conduct attention from question to context (passage) based on GloVe word embeddings. Suppose the context word embeddings are $\lbrace {h}^C_1, ..., {h}^C_m\rbrace \subset \mathbb {R}^d$ , and the question word embeddings are $\lbrace {h}^Q_1, ..., {h}^Q_n\rbrace \subset \mathbb {R}^d$ . Then the attended vectors from question to context are $\lbrace \hat{{h}}^C_1, ..., \hat{{h}}^C_m\rbrace $ , defined as, $S_{ij} = \operatornamewithlimits{ReLU}(Uh^C_i)D\operatornamewithlimits{ReLU}(Uh^Q_j),$ $\alpha _{ij} \propto {exp(S_{ij})},$ where $D\in \mathbb {R}^{k\times k}$ is a diagonal matrix and $U\in \mathbb {R}^{d\times k}$ , $k$ is the attention hidden size. To simplify notation, we define the attention function above as $\mbox{Attn}({A}, {B}, {C})$ , meaning we compute the attention score $\alpha _{ij}$ based on two sets of vectors ${A}$ and ${B}$ , and use that to linearly combine vector set ${C}$ . So the word-level attention above can be simplified as $\mbox{Attn}(\lbrace {h}^C_i\rbrace _{i=1}^m, \lbrace {h}^Q_i\rbrace _{i=1}^n\rbrace , \lbrace {h}^Q_i\rbrace _{i=1}^n\rbrace )$ . 
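Returning to the encoding layer described above, the per-layer weighted sum over a frozen BERT encoder can be sketched roughly as follows. The tensor layout (layers × BPE sub-tokens of one word × hidden size) and the choice to leave the learned weights unnormalized are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class WeightedBertEmbedding(nn.Module):
    """BERT_w = sum_l alpha_l * mean_t h^l_t, with the alphas learned and BERT itself frozen."""
    def __init__(self, num_layers):
        super().__init__()
        self.alphas = nn.Parameter(torch.full((num_layers,), 1.0 / num_layers))

    def forward(self, layer_states):
        # layer_states: (num_layers, num_bpe_subtokens, hidden) for one word,
        # taken from a frozen BERT (e.g. under torch.no_grad()).
        word_states = layer_states.mean(dim=1)                        # average the BPE pieces per layer
        return (self.alphas.unsqueeze(1) * word_states).sum(dim=0)    # (hidden,) contextual embedding
```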
For each context word in $\mathcal {C}$ , we also include a feature vector $f_w$ including 12-dim POS embedding, 8-dim NER embedding, a 3-dim exact matching vector $em_i$ indicating whether each context word appears in the question, and a normalized term frequency, following the approach in DrQA BIBREF9 . Therefore, the input vector for each context word is $\tilde{{w}}_i^C=[\operatorname{GloVe}(w_i^C); \operatorname{\mbox{BERT}}_{w_i^C}; \hat{{h}}^C_i; f_{w_i^C}]$ ; the input vector for each question word is $\tilde{{w}}_i^Q=[\operatorname{GloVe}(w_i^Q); \operatorname{\mbox{BERT}}_{w_i^Q}]$ . RNN. In this component, we use two separate bidirectional RNNs (BiLSTMs BIBREF10 ) to form the contextualized understanding for $\mathcal {C}$ and $\mathcal {Q}$ . $ {h}_1^{C,k}, ..., {h}_m^{C,k} = \operatorname{\mbox{BiLSTM}}{({h}_1^{C,k-1}, ..., {h}_m^{C,k-1})}, $ $ {h}_1^{Q,k}, ..., {h}_n^{Q,k} = \operatorname{\mbox{BiLSTM}}{({h}_1^{Q,k-1}, ..., {h}_n^{Q,k-1})}, $ where $1\le k \le K$ and $K$ is the number of RNN layers. We use variational dropout BIBREF11 for input vector to each layer of RNN, i.e. the dropout mask is shared over different timesteps. Question Understanding. For each question word in $\mathcal {Q}$ , we employ one more layer of RNN to generate a higher level of understanding of the question. $ {h}_1^{Q,K+1}, ..., {h}_n^{Q,K+1} = \operatorname{\mbox{BiLSTM}}{({h}_1^{Q}, ..., {h}_n^{Q})}, $ $ {h}_i^{Q} = [{h}_i^{Q,1};...;{h}_i^{Q,K}] $ Self-Attention on Question. As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution. We thus employ self-attention on question. The formula is the same as word-level attention, except that we are attending a question to itself: $\lbrace {u}_i^Q\rbrace _{i=1}^n=\mbox{Attn}(\lbrace {h}_i^{Q,K+1}\rbrace _{i=1}^n, \lbrace {h}_i^{Q,K+1}\rbrace _{i=1}^n, \lbrace {h}_i^{Q,K+1}\rbrace _{i=1}^n)$ . The final question representation is thus $\lbrace {u}_i^Q\rbrace _{i=1}^n$ . Multilevel Inter-Attention. After multiple layers of RNN extract different levels of understanding of each word, we conduct multilevel attention from question to context based on all layers of generated representations. However, the aggregated dimensions can be very large, which is computationally inefficient. We thus leverage the history-of-word idea from FusionNet BIBREF6 : we use all previous levels to compute attentions scores, but only linearly combine RNN outputs. In detail, we conduct $K+1$ times of multilevel attention from each RNN layer output of question to context. $ \lbrace {m}_i^{(k),C}\rbrace _{i=1}^m=\mbox{Attn}(\lbrace \mbox{HoW}_i^C\rbrace _{i=1}^m, \lbrace \mbox{HoW}_i^Q\rbrace _{i=1}^n,\lbrace {h}_i^{Q,k}\rbrace _{i=1}^n), 1\le k \le K+1 $ where history-of-word vectors are defined as $\mbox{HoW}_i^C = [\operatorname{GloVe}(w_i^C); \operatorname{\mbox{BERT}}_{w_i^C}; {h}_i^{C,1}; ..., {h}_i^{C,k}],$ $\mbox{HoW}_i^Q = [\operatorname{GloVe}(w_i^Q); \operatorname{\mbox{BERT}}_{w_i^Q}; {h}_i^{Q,1}; ..., {h}_i^{Q,k}].$ An additional RNN layer is applied to obtain the contextualized representation ${v}_i^C$ for each word in $\mathcal {C}$ . $ {y}_i^C = [{h}_i^{C,1}; ...; {h}_i^{C,k}; {m}_i^{(1),C}; ...; {m}_i^{(K+1),C}], $ $ {v}_1^{C}, ..., {v}_m^{C} = \operatorname{\mbox{BiLSTM}}{({y}_1^{C}, ..., {y}_n^{C})}, $ Self Attention on Context. 
Similar to questions, we conduct self attention on context to establish direct correlations between all pairs of words in $\mathcal {C}$ . Again, we use the history of word concept to reduce the output dimension by linearly combining ${v}_i^C$ . $ {s}_i^C = &[\operatorname{GloVe}(w_i^C); \operatorname{\mbox{BERT}}_{w_i^C}; {h}_i^{C,1}; ...; {h}_i^{C,k}; {m}_i^{(1),Q}; ...; {m}_i^{(K+1),Q}; {v}_i^C] $ $\lbrace \tilde{{v}}_i^C\rbrace _{i=1}^m=\mbox{Attn}(\lbrace {s}_i^C\rbrace _{i=1}^m, \lbrace {s}_i^C\rbrace _{i=1}^m, \lbrace {v}_i^C\rbrace _{i=1}^m)$ The self-attention is followed by an additional layer of RNN to generate the final representation of context: $\lbrace {u}_i^C\rbrace _{i=1}^m = \operatorname{\mbox{BiLSTM}}{([{v}_1^C; \tilde{{v}}_1^C], ..., [{v}_m^C; \tilde{{v}}_m^C])}$ Output layer Generating Answer Span. This component is to generate two scores for each context word corresponding to the probability that the answer starts and ends at this word, respectively. Firstly, we condense the question representation into one vector: ${u}^Q=\sum _i{\beta _i}{u}_i^Q$ , where $\beta _i\propto {\exp {({w}^T{u}_i^Q)}}$ and ${w}$ is a parametrized vector. Secondly, we compute the probability that the answer span should start at the $i$ -th word: $P_i^S\propto {\exp {(({u}^Q)^TW_S{u}_i^C)}},$ where $W_S$ is a parametrized matrix. We further fuse the start-position probability into the computation of end-position probability via a GRU, ${t}^Q = \operatorname{GRU}{({u}^Q, \sum _i P_i^S{u}_i^C)}$ . Thus, the probability that the answer span should end at the $i$ -th word is: $P_i^E\propto {\exp {(({t}^Q)^TW_E{u}_i^C)}},$ where $W_E$ is another parametrized matrix. For CoQA dataset, the answer could be affirmation “yes”, negation “no” or no answer “unknown”. We separately generate three probabilities corresponding to these three scenarios, $P_Y, P_N, P_U$ , respectively. For instance, to generate the probability that the answer is “yes”, $P_Y$ , we use: $P_i^{Y}\propto {\exp {(({u}^Q)^T W_{Y}{u}_i^C})},$ $P_{Y} = (\sum _i P_i^{Y}{u}_i^C)^T{w}_{Y},$ where $W_Y$ and ${w}_Y$ are parametrized matrix and vector, respectively. Training. For training, we use all questions/answers for one passage as a batch. The goal is to maximize the probability of the ground-truth answer, including span start/end position, affirmation, negation and no-answer situations. Equivalently, we minimize the negative log-likelihood function $\mathcal {L}$ : $ \mathcal {L} =& \sum _k I^S_k(\mbox{log}(P^S_{i_k^s}) + \mbox{log}(P^E_{i_k^e})) + I^Y_k\mbox{log}P^Y_k+I^N_k\mbox{log}P^N_k + I^U_k\mbox{log}P^U_k, $ where $i_k^s$ and $i_k^e$ are the ground-truth span start and end position for the $k$ -th question. $I^S_k, I^Y_k, I^N_k, I^U_k$ indicate whether the $k$ -th ground-truth answer is a passage span, “yes”, “no” and “unknown”, respectively. More implementation details are in Appendix. Prediction. During inference, we pick the largest span/yes/no/unknown probability. The span is constrained to have a maximum length of 15. Experiments We evaluated our model on CoQA BIBREF0 , a large-scale conversational question answering dataset. In CoQA, many questions require understanding of both the passage and previous questions and answers, which poses challenge to conventional machine reading models. table:coqa summarizes the domain distribution in CoQA. As shown, CoQA contains passages from multiple domains, and the average number of question answering turns is more than 15 per passage. 
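The prediction rule described above (pick the highest-probability option among answer spans of length at most 15 and the yes/no/unknown cases) can be sketched as below; combining start and end probabilities by taking their product is an assumption, since the text does not spell out the span score.

```python
def decode_answer(p_start, p_end, p_yes, p_no, p_unknown, max_span_len=15):
    """Return ('span', i, j) or one of 'yes' / 'no' / 'unknown', whichever scores highest."""
    best_span, best_span_score = None, float("-inf")
    for i in range(len(p_start)):
        # j is the inclusive end index, so span length j - i + 1 stays <= max_span_len.
        for j in range(i, min(i + max_span_len, len(p_end))):
            score = p_start[i] * p_end[j]
            if score > best_span_score:
                best_span, best_span_score = (i, j), score
    label, _ = max(
        [("span", best_span_score), ("yes", p_yes), ("no", p_no), ("unknown", p_unknown)],
        key=lambda item: item[1])
    return ("span", *best_span) if label == "span" else label
```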
Many questions require contextual understanding to generate the correct answer. For each in-domain dataset, 100 passages are in the development set, and 100 passages are in the test set. The rest of the in-domain passages are in the training set. The test set also includes all of the out-of-domain passages. Baseline models and metrics. We compare SDNet with the following baseline models: PGNet (Seq2Seq with copy mechanism) BIBREF12 , DrQA BIBREF9 , DrQA+PGNet BIBREF0 , BiDAF++ BIBREF3 and FlowQA BIBREF4 . Aligned with the official leaderboard, we use $F_1$ as the evaluation metric, which is the harmonic mean of precision and recall at word level between the predicted answer and ground truth. Results. table:mainresult reports the performance of SDNet and baseline models. As shown, SDNet achieves significantly better results than baseline models. In detail, the single SDNet model improves overall $F_1$ by 1.6%, compared with the previous state-of-the-art model on CoQA, FlowQA. The ensemble SDNet model further improves overall $F_1$ score by 2.7%, and it's the first model to achieve over 80% $F_1$ score on in-domain datasets (80.7%). fig:epoch shows the $F_1$ score on the development set over epochs. As seen, SDNet surpasses all but one of the baseline models after the second epoch, and achieves state-of-the-art results after only 8 epochs. Ablation Studies. We conduct ablation studies on the SDNet model and display the results in table:ablation. The results show that removing BERT can reduce the $F_1$ score on the development set by $7.15\%$ . Our proposed weighted sum of per-layer outputs from BERT is crucial, which can boost the performance by $1.75\%$ , compared with using only the last layer's output. This shows that the output from each layer in BERT is useful in downstream tasks. This technique can also be applied to other NLP tasks. Using the BERT-base pretrained model instead of BERT-large hurts the $F_1$ score by $2.61\%$ , which demonstrates the superiority of the BERT-large model. Variational dropout and self-attention can each improve the performance by 0.24% and 0.75%, respectively. Contextual history. In SDNet, we utilize conversation history via prepending the current question with previous $N$ rounds of questions and ground-truth answers. We experimented with the effect of $N$ and show the result in table:context. Excluding dialogue history ( $N=0$ ) can reduce the $F_1$ score by as much as $8.56\%$ , showing the importance of contextual information in the conversational QA task. The performance of our model peaks when $N=2$ , which was used in the final SDNet model. Conclusions In this paper, we propose a novel contextual attention-based deep neural network, SDNet, to tackle the conversational question answering task. By leveraging inter-attention and self-attention on passage and conversation history, the model is able to comprehend dialogue flow and fuse it with the digestion of passage content. Furthermore, we incorporate the latest breakthrough in NLP, BERT, and leverage it in an innovative way. SDNet achieves superior results over previous approaches. On the public dataset CoQA, SDNet outperforms the previous state-of-the-art model by 1.6% in the overall $F_1$ metric. Our future work is to apply this model to the open-domain multi-turn QA problem with a large corpus or knowledge base, where the target passage may not be directly available. This will be an even more realistic setting, closer to how humans answer questions. Implementation Details We use spaCy for tokenization. 
As BERT uses BPE as its tokenizer, we apply BPE tokenization to each token generated by spaCy. When a spaCy token corresponds to multiple BPE sub-tokens, we average the BERT embeddings of these BPE sub-tokens as the embedding for the token. We fix the BERT weights and use the BERT-Large-Uncased model. During training, we use a dropout rate of 0.4 for BERT layer outputs and 0.3 for other layers. We use variational dropout BIBREF11, which shares the dropout mask over timesteps in RNN. We batch the data according to passages, so all questions and answers from the same passage make one batch. We use Adamax BIBREF13 as the optimizer, with a learning rate of $\alpha = 0.002$, $\beta = (0.9, 0.999)$ and $\epsilon = 10^{-8}$. We train the model for 30 epochs, with each epoch going over the data once. We clip the gradient at a length of 10. The word-level attention has a hidden size of 300. The flow module has a hidden size of 300. The question self attention has a hidden size of 300. The RNN for both question and context has $K=2$ layers and each layer has a hidden size of 125. The multilevel attention from question to context has a hidden size of 250. The context self attention has a hidden size of 250. The final layer of RNN for context has a hidden size of 125.
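As a small illustration of the sub-token averaging just described, here is a sketch; the helper name and the span bookkeeping (how spaCy tokens are aligned to BPE sub-token index ranges) are assumptions, since that code is not given here.

```python
import torch

def average_subtoken_embeddings(bert_states, subtoken_spans):
    """
    bert_states: tensor (num_bpe_subtokens, hidden) for one utterance and one BERT layer.
    subtoken_spans: list of (start, end) sub-token index ranges, one per spaCy token.
    Returns one averaged BERT vector per spaCy token.
    """
    return torch.stack([bert_states[s:e].mean(dim=0) for s, e in subtoken_spans])

# e.g. a spaCy token split into BPE pieces 3..4 gets the mean of rows 3 and 4:
# average_subtoken_embeddings(states, [(0, 3), (3, 5)])
```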
As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution.
4040f5c9f365f9bc80b56dce944ada85bb8b4ab4
4040f5c9f365f9bc80b56dce944ada85bb8b4ab4_0
Q: Is the incorporation of context separately evaluated? Text: Introduction Traditional machine reading comprehension (MRC) tasks share the single-turn setting of answering a single question related to a passage. There is usually no connection between different questions and answers to the same passage. However, the most natural way humans seek answers is via conversation, which carries over context through the dialogue flow. To incorporate conversation into reading comprehension, recently there are several public datasets that evaluate QA model's efficacy in conversational setting, such as CoQA BIBREF0 , QuAC BIBREF1 and QBLink BIBREF2 . In these datasets, to generate correct responses, models are required to fully understand the given passage as well as the context of previous questions and answers. Thus, traditional neural MRC models are not suitable to be directly applied to this scenario. Existing approaches to conversational QA tasks include BiDAF++ BIBREF3 , FlowQA BIBREF4 , DrQA+PGNet BIBREF0 , which all try to find the optimal answer span given the passage and dialogue history. In this paper, we propose SDNet, a contextual attention-based deep neural network for the task of conversational question answering. Our network stems from machine reading comprehension models, but has several unique characteristics to tackle contextual understanding during conversation. Firstly, we apply both inter-attention and self-attention on passage and question to obtain a more effective understanding of the passage and dialogue history. Secondly, SDNet leverages the latest breakthrough in NLP: BERT contextual embedding BIBREF5 . Different from the canonical way of appending a thin layer after BERT structure according to BIBREF5 , we innovatively employed a weighted sum of BERT layer outputs, with locked BERT parameters. Thirdly, we prepend previous rounds of questions and answers to the current question to incorporate contextual information. Empirical results show that each of these components has substantial gains in prediction accuracy. We evaluated SDNet on CoQA dataset, which improves the previous state-of-the-art model's result by 1.6% (from 75.0% to 76.6%) overall $F_1$ score. The ensemble model further increase the $F_1$ score to $79.3\%$ . Moreover, SDNet is the first model ever to pass $80\%$ on CoQA's in-domain dataset. Approach In this section, we propose the neural model, SDNet, for the conversational question answering task, which is formulated as follows. Given a passage $\mathcal {C}$ , and history question and answer utterances $Q_1, A_1, Q_2, A_2, ..., Q_{k-1}, A_{k-1}$ , the task is to generate response $A_k$ given the latest question $Q_k$ . The response is dependent on both the passage and history utterances. To incorporate conversation history into response generation, we employ the idea from DrQA+PGNet BIBREF0 to prepend the latest $N$ rounds of utterances to the current question $Q_k$ . The problem is then converted into a machine reading comprehension task. In other words, the reformulate question is $\mathcal {Q}_k=\lbrace Q_{k-N}; A_{k-N}; ..., Q_{k-1}; A_{k-1}; Q_k\rbrace $ . To differentiate between question and answering, we add symbol $\langle Q \rangle $ before each question and $\langle A \rangle $ before each answer in the experiment. Model Overview Encoding layer encodes each token in passage and question into a fixed-length vector, which includes both word embeddings and contextualized embeddings. 
For contextualized embedding, we utilize the latest result from BERT BIBREF5 . Different from previous work, we fix the parameters in BERT model and use the linear combination of embeddings from different layers in BERT. Integration layer uses multi-layer recurrent neural networks (RNN) to capture contextual information within passage and question. To characterize the relationship between passage and question, we conduct word-level attention from question to passage both before and after the RNNs. We employ the idea of history-of-word from FusionNet BIBREF6 to reduce the dimension of output hidden vectors. Furthermore, we conduct self-attention to extract relationship between words at different positions of context and question. Output layer computes the final answer span. It uses attention to condense the question into a fixed-length vector, which is then used in a bilinear projection to obtain the probability that the answer should start and end at each position. An illustration of our model SDNet is in fig:model. Encoding layer We use 300-dim GloVe BIBREF7 embedding and contextualized embedding for each word in context and question. We employ BERT BIBREF5 as contextualized embedding. Instead of adding a scoring layer to BERT structure as proposed in BIBREF5 , we use the transformer output from BERT as contextualized embedding in our encoding layer. BERT generates $L$ layers of hidden states for all BPE tokens BIBREF8 in a sentence/passage and we employ a weighted sum of these hidden states to obtain contextualized embedding. Furthermore, we lock BERT's internal weights, setting their gradients to zero. In ablation studies, we will show that this weighted sum and weight-locking mechanism can significantly boost the model's performance. In detail, suppose a word $w$ is tokenized to $s$ BPE tokens $w=\lbrace b_1, b_2, ..., b_s\rbrace $ , and BERT generates $L$ hidden states for each BPE token, $\mathbf {h^l_t}, 1\le l \le L, 1\le t \le s$ . The contextual embedding $\operatorname{\mbox{BERT}}_w$ for word $w$ is then a per-layer weighted sum of average BERT embedding, with weights $\alpha _1, ..., \alpha _L$ . $\operatorname{\mbox{BERT}}_w = \sum _{l=1}^L \alpha _l \frac{\sum _{t=1}^s \mathbf {h}^l_t}{s} $ Integration layer Word-level Inter-Attention. We conduct attention from question to context (passage) based on GloVe word embeddings. Suppose the context word embeddings are $\lbrace {h}^C_1, ..., {h}^C_m\rbrace \subset \mathbb {R}^d$ , and the question word embeddings are $\lbrace {h}^Q_1, ..., {h}^Q_n\rbrace \subset \mathbb {R}^d$ . Then the attended vectors from question to context are $\lbrace \hat{{h}}^C_1, ..., \hat{{h}}^C_m\rbrace $ , defined as, $S_{ij} = \operatornamewithlimits{ReLU}(Uh^C_i)D\operatornamewithlimits{ReLU}(Uh^Q_j),$ $\alpha _{ij} \propto {exp(S_{ij})},$ where $D\in \mathbb {R}^{k\times k}$ is a diagonal matrix and $U\in \mathbb {R}^{d\times k}$ , $k$ is the attention hidden size. To simplify notation, we define the attention function above as $\mbox{Attn}({A}, {B}, {C})$ , meaning we compute the attention score $\alpha _{ij}$ based on two sets of vectors ${A}$ and ${B}$ , and use that to linearly combine vector set ${C}$ . So the word-level attention above can be simplified as $\mbox{Attn}(\lbrace {h}^C_i\rbrace _{i=1}^m, \lbrace {h}^Q_i\rbrace _{i=1}^n\rbrace , \lbrace {h}^Q_i\rbrace _{i=1}^n\rbrace )$ . 
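For reference, a compact PyTorch-style sketch of the $\mbox{Attn}(A, B, C)$ operation just defined; the class name and the unbatched shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WordAttention(nn.Module):
    """Sketch of Attn(A, B, C): score set A against set B, use the scores to combine C."""
    def __init__(self, d, k):
        super().__init__()
        self.U = nn.Linear(d, k, bias=False)
        self.diag = nn.Parameter(torch.ones(k))   # the diagonal matrix D, stored as a vector

    def forward(self, A, B, C):
        # A: (m, d), B: (n, d), C: (n, d_c)
        a = F.relu(self.U(A))                     # (m, k)
        b = F.relu(self.U(B))                     # (n, k)
        scores = (a * self.diag) @ b.t()          # S_ij = ReLU(U a_i)^T D ReLU(U b_j)
        alpha = torch.softmax(scores, dim=1)      # row-normalized attention weights
        return alpha @ C                          # (m, d_c) attended vectors
```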
For each context word in $\mathcal {C}$ , we also include a feature vector $f_w$ including 12-dim POS embedding, 8-dim NER embedding, a 3-dim exact matching vector $em_i$ indicating whether each context word appears in the question, and a normalized term frequency, following the approach in DrQA BIBREF9 . Therefore, the input vector for each context word is $\tilde{{w}}_i^C=[\operatorname{GloVe}(w_i^C); \operatorname{\mbox{BERT}}_{w_i^C}; \hat{{h}}^C_i; f_{w_i^C}]$ ; the input vector for each question word is $\tilde{{w}}_i^Q=[\operatorname{GloVe}(w_i^Q); \operatorname{\mbox{BERT}}_{w_i^Q}]$ . RNN. In this component, we use two separate bidirectional RNNs (BiLSTMs BIBREF10 ) to form the contextualized understanding for $\mathcal {C}$ and $\mathcal {Q}$ . $ {h}_1^{C,k}, ..., {h}_m^{C,k} = \operatorname{\mbox{BiLSTM}}{({h}_1^{C,k-1}, ..., {h}_m^{C,k-1})}, $ $ {h}_1^{Q,k}, ..., {h}_n^{Q,k} = \operatorname{\mbox{BiLSTM}}{({h}_1^{Q,k-1}, ..., {h}_n^{Q,k-1})}, $ where $1\le k \le K$ and $K$ is the number of RNN layers. We use variational dropout BIBREF11 for input vector to each layer of RNN, i.e. the dropout mask is shared over different timesteps. Question Understanding. For each question word in $\mathcal {Q}$ , we employ one more layer of RNN to generate a higher level of understanding of the question. $ {h}_1^{Q,K+1}, ..., {h}_n^{Q,K+1} = \operatorname{\mbox{BiLSTM}}{({h}_1^{Q}, ..., {h}_n^{Q})}, $ $ {h}_i^{Q} = [{h}_i^{Q,1};...;{h}_i^{Q,K}] $ Self-Attention on Question. As the question has integrated previous utterances, the model needs to directly relate previously mentioned concept with the current question. This is helpful for concept carry-over and coreference resolution. We thus employ self-attention on question. The formula is the same as word-level attention, except that we are attending a question to itself: $\lbrace {u}_i^Q\rbrace _{i=1}^n=\mbox{Attn}(\lbrace {h}_i^{Q,K+1}\rbrace _{i=1}^n, \lbrace {h}_i^{Q,K+1}\rbrace _{i=1}^n, \lbrace {h}_i^{Q,K+1}\rbrace _{i=1}^n)$ . The final question representation is thus $\lbrace {u}_i^Q\rbrace _{i=1}^n$ . Multilevel Inter-Attention. After multiple layers of RNN extract different levels of understanding of each word, we conduct multilevel attention from question to context based on all layers of generated representations. However, the aggregated dimensions can be very large, which is computationally inefficient. We thus leverage the history-of-word idea from FusionNet BIBREF6 : we use all previous levels to compute attentions scores, but only linearly combine RNN outputs. In detail, we conduct $K+1$ times of multilevel attention from each RNN layer output of question to context. $ \lbrace {m}_i^{(k),C}\rbrace _{i=1}^m=\mbox{Attn}(\lbrace \mbox{HoW}_i^C\rbrace _{i=1}^m, \lbrace \mbox{HoW}_i^Q\rbrace _{i=1}^n,\lbrace {h}_i^{Q,k}\rbrace _{i=1}^n), 1\le k \le K+1 $ where history-of-word vectors are defined as $\mbox{HoW}_i^C = [\operatorname{GloVe}(w_i^C); \operatorname{\mbox{BERT}}_{w_i^C}; {h}_i^{C,1}; ..., {h}_i^{C,k}],$ $\mbox{HoW}_i^Q = [\operatorname{GloVe}(w_i^Q); \operatorname{\mbox{BERT}}_{w_i^Q}; {h}_i^{Q,1}; ..., {h}_i^{Q,k}].$ An additional RNN layer is applied to obtain the contextualized representation ${v}_i^C$ for each word in $\mathcal {C}$ . $ {y}_i^C = [{h}_i^{C,1}; ...; {h}_i^{C,k}; {m}_i^{(1),C}; ...; {m}_i^{(K+1),C}], $ $ {v}_1^{C}, ..., {v}_m^{C} = \operatorname{\mbox{BiLSTM}}{({y}_1^{C}, ..., {y}_n^{C})}, $ Self Attention on Context. 
Similar to questions, we conduct self attention on context to establish direct correlations between all pairs of words in $\mathcal {C}$ . Again, we use the history of word concept to reduce the output dimension by linearly combining ${v}_i^C$ . $ {s}_i^C = &[\operatorname{GloVe}(w_i^C); \operatorname{\mbox{BERT}}_{w_i^C}; {h}_i^{C,1}; ...; {h}_i^{C,k}; {m}_i^{(1),Q}; ...; {m}_i^{(K+1),Q}; {v}_i^C] $ $\lbrace \tilde{{v}}_i^C\rbrace _{i=1}^m=\mbox{Attn}(\lbrace {s}_i^C\rbrace _{i=1}^m, \lbrace {s}_i^C\rbrace _{i=1}^m, \lbrace {v}_i^C\rbrace _{i=1}^m)$ The self-attention is followed by an additional layer of RNN to generate the final representation of context: $\lbrace {u}_i^C\rbrace _{i=1}^m = \operatorname{\mbox{BiLSTM}}{([{v}_1^C; \tilde{{v}}_1^C], ..., [{v}_m^C; \tilde{{v}}_m^C])}$ Output layer Generating Answer Span. This component is to generate two scores for each context word corresponding to the probability that the answer starts and ends at this word, respectively. Firstly, we condense the question representation into one vector: ${u}^Q=\sum _i{\beta _i}{u}_i^Q$ , where $\beta _i\propto {\exp {({w}^T{u}_i^Q)}}$ and ${w}$ is a parametrized vector. Secondly, we compute the probability that the answer span should start at the $i$ -th word: $P_i^S\propto {\exp {(({u}^Q)^TW_S{u}_i^C)}},$ where $W_S$ is a parametrized matrix. We further fuse the start-position probability into the computation of end-position probability via a GRU, ${t}^Q = \operatorname{GRU}{({u}^Q, \sum _i P_i^S{u}_i^C)}$ . Thus, the probability that the answer span should end at the $i$ -th word is: $P_i^E\propto {\exp {(({t}^Q)^TW_E{u}_i^C)}},$ where $W_E$ is another parametrized matrix. For CoQA dataset, the answer could be affirmation “yes”, negation “no” or no answer “unknown”. We separately generate three probabilities corresponding to these three scenarios, $P_Y, P_N, P_U$ , respectively. For instance, to generate the probability that the answer is “yes”, $P_Y$ , we use: $P_i^{Y}\propto {\exp {(({u}^Q)^T W_{Y}{u}_i^C})},$ $P_{Y} = (\sum _i P_i^{Y}{u}_i^C)^T{w}_{Y},$ where $W_Y$ and ${w}_Y$ are parametrized matrix and vector, respectively. Training. For training, we use all questions/answers for one passage as a batch. The goal is to maximize the probability of the ground-truth answer, including span start/end position, affirmation, negation and no-answer situations. Equivalently, we minimize the negative log-likelihood function $\mathcal {L}$ : $ \mathcal {L} =& \sum _k I^S_k(\mbox{log}(P^S_{i_k^s}) + \mbox{log}(P^E_{i_k^e})) + I^Y_k\mbox{log}P^Y_k+I^N_k\mbox{log}P^N_k + I^U_k\mbox{log}P^U_k, $ where $i_k^s$ and $i_k^e$ are the ground-truth span start and end position for the $k$ -th question. $I^S_k, I^Y_k, I^N_k, I^U_k$ indicate whether the $k$ -th ground-truth answer is a passage span, “yes”, “no” and “unknown”, respectively. More implementation details are in Appendix. Prediction. During inference, we pick the largest span/yes/no/unknown probability. The span is constrained to have a maximum length of 15. Experiments We evaluated our model on CoQA BIBREF0 , a large-scale conversational question answering dataset. In CoQA, many questions require understanding of both the passage and previous questions and answers, which poses challenge to conventional machine reading models. table:coqa summarizes the domain distribution in CoQA. As shown, CoQA contains passages from multiple domains, and the average number of question answering turns is more than 15 per passage. 
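As a worked illustration of the training objective defined earlier in this section, a small pure-Python sketch follows; the per-question dictionary layout is an assumption made only for this example.

```python
import math

def negative_log_likelihood(batch):
    """
    Sum the log-probability of the ground-truth outcome for every question in a
    passage batch (span start/end, yes, no, or unknown), then negate.
    Each item is a dict such as:
      {"p_start": [...], "p_end": [...], "p_yes": ..., "p_no": ..., "p_unk": ...,
       "answer_type": "span" | "yes" | "no" | "unknown", "start": i, "end": j}
    """
    ll = 0.0
    for q in batch:
        if q["answer_type"] == "span":
            ll += math.log(q["p_start"][q["start"]]) + math.log(q["p_end"][q["end"]])
        elif q["answer_type"] == "yes":
            ll += math.log(q["p_yes"])
        elif q["answer_type"] == "no":
            ll += math.log(q["p_no"])
        else:
            ll += math.log(q["p_unk"])
    return -ll
```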
Many questions require contextual understanding to generate the correct answer. For each in-domain dataset, 100 passages are in the development set, and 100 passages are in the test set. The rest in-domain dataset are in the training set. The test set also includes all of the out-of-domain passages. Baseline models and metrics. We compare SDNet with the following baseline models: PGNet (Seq2Seq with copy mechanism) BIBREF12 , DrQA BIBREF9 , DrQA+PGNet BIBREF0 , BiDAF++ BIBREF3 and FlowQA BIBREF4 . Aligned with the official leaderboard, we use $F_1$ as the evaluation metric, which is the harmonic mean of precision and recall at word level between the predicted answer and ground truth. Results. table:mainresult report the performance of SDNet and baseline models. As shown, SDNet achieves significantly better results than baseline models. In detail, the single SDNet model improves overall $F_1$ by 1.6%, compared with previous state-of-art model on CoQA, FlowQA. Ensemble SDNet model further improves overall $F_1$ score by 2.7%, and it's the first model to achieve over 80% $F_1$ score on in-domain datasets (80.7%). fig:epoch shows the $F_1$ score on development set over epochs. As seen, SDNet overpasses all but one baseline models after the second epoch, and achieves state-of-the-art results only after 8 epochs. Ablation Studies. We conduct ablation studies on SDNet model and display the results in table:ablation. The results show that removing BERT can reduce the $F_1$ score on development set by $7.15\%$ . Our proposed weight sum of per-layer output from BERT is crucial, which can boost the performance by $1.75\%$ , compared with using only last layer's output. This shows that the output from each layer in BERT is useful in downstream tasks. This technique can also be applied to other NLP tasks. Using BERT-base instead of BERT-large pretrained model hurts the $F_1$ score by $2.61\%$ , which manifests the superiority of BERT-large model. Variational dropout and self attention can each improve the performance by 0.24% and 0.75%, respectively. Contextual history. In SDNet, we utilize conversation history via prepending the current question with previous $N$ rounds of questions and ground-truth answers. We experimented the effect of $N$ and show the result in table:context. Excluding dialogue history ( $N=0$ ) can reduce the $F_1$ score by as much as $8.56\%$ , showing the importance of contextual information in conversational QA task. The performance of our model peaks when $N=2$ , which was used in the final SDNet model. Conclusions In this paper, we propose a novel contextual attention-based deep neural network, SDNet, to tackle conversational question answering task. By leveraging inter-attention and self-attention on passage and conversation history, the model is able to comprehend dialogue flow and fuse it with the digestion of passage content. Furthermore, we incorporate the latest breakthrough in NLP, BERT, and leverage it in an innovative way. SDNet achieves superior results over previous approaches. On the public dataset CoQA, SDNet outperforms previous state-of-the-art model by 1.6% in overall $F_1$ metric. Our future work is to apply this model to open-domain multiturn QA problem with large corpus or knowledge base, where the target passage may not be directly available. This will be an even more realistic setting to human question answering. Implementation Details We use spaCy for tokenization. 
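Referring back to the contextual-history mechanism evaluated above (prepending the previous $N$ rounds of questions and ground-truth answers, with $N=2$ in the final model), a small sketch of that preprocessing step; the literal marker strings for $\langle Q \rangle$ / $\langle A \rangle$ and the 0-based indexing are assumptions.

```python
def prepend_history(questions, answers, k, n_rounds=2):
    """Build the reformulated question Q_k from the previous n_rounds of utterances."""
    parts = []
    for j in range(max(0, k - n_rounds), k):
        parts += ["<Q>", questions[j], "<A>", answers[j]]
    parts += ["<Q>", questions[k]]
    return " ".join(parts)

# prepend_history(["Who?", "Where?"], ["Alice", "Paris"], k=1)
# -> "<Q> Who? <A> Alice <Q> Where?"
```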
As BERT uses BPE as its tokenizer, we apply BPE tokenization to each token generated by spaCy. When a spaCy token corresponds to multiple BPE sub-tokens, we average the BERT embeddings of these BPE sub-tokens as the embedding for the token. We fix the BERT weights and use the BERT-Large-Uncased model. During training, we use a dropout rate of 0.4 for BERT layer outputs and 0.3 for other layers. We use variational dropout BIBREF11, which shares the dropout mask over timesteps in RNN. We batch the data according to passages, so all questions and answers from the same passage make one batch. We use Adamax BIBREF13 as the optimizer, with a learning rate of $\alpha = 0.002$, $\beta = (0.9, 0.999)$ and $\epsilon = 10^{-8}$. We train the model for 30 epochs, with each epoch going over the data once. We clip the gradient at a length of 10. The word-level attention has a hidden size of 300. The flow module has a hidden size of 300. The question self attention has a hidden size of 300. The RNN for both question and context has $K=2$ layers and each layer has a hidden size of 125. The multilevel attention from question to context has a hidden size of 250. The context self attention has a hidden size of 250. The final layer of RNN for context has a hidden size of 125.
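A minimal sketch of the variational dropout mentioned above, in which a single Bernoulli mask per feature is shared across all timesteps of a sequence; the (seq_len, dim) layout is an assumption of this example.

```python
import torch

def variational_dropout(x, p=0.3, training=True):
    """x: tensor of shape (seq_len, dim). One inverted-dropout mask per feature, reused at every timestep."""
    if not training or p == 0.0:
        return x
    mask = torch.bernoulli(torch.full((1, x.size(1)), 1.0 - p, device=x.device)) / (1.0 - p)
    return x * mask   # the same mask broadcasts over all timesteps
```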
No
7dce1b64c0040500951c864fce93d1ad7a1809bc
7dce1b64c0040500951c864fce93d1ad7a1809bc_0
Q: Which frozen acoustic model do they use? Text: Introduction Typical speech enhancement techniques focus on local criteria for improving speech intelligibility and quality. Time-frequency prediction techniques use local spectral quality estimates as an objective function; time domain methods directly predict clean output with a potential spectral quality metric BIBREF0. Such techniques have been extremely successful in predicting a speech denoising function, but also require parallel clean and noisy speech for training. The trained systems implicitly learn the phonetic patterns of the speech signal in the coordinated output of time-domain or time-frequency units. However, our hypothesis is that directly providing phonetic feedback can be a powerful additional signal for speech enhancement. For example, many local metrics will be more attuned to high-energy regions of speech, but not all phones of a language carry equal energy in production (compare /v/ to /ae/). Our proxy for phonetic intelligibility is a frozen automatic speech recognition (ASR) acoustic model trained on clean speech; the loss functions we incorporate into training encourage the speech enhancement system to produce output that is interpretable to a fixed acoustic model as clean speech, by making the output of the acoustic model mimic its behavior under clean speech. This mimic loss BIBREF1 provides key linguistic insights to the enhancement model about what a recognizable phoneme looks like. When no parallel data is available, but transcripts are available, a loss is easily computed against hard senone labels and backpropagated to the enhancement model trained from scratch. Since the clean acoustic model is frozen, the only way for the enhancement model to improve the loss is to make a signal that is more recognizable to the acoustic model. The improvement by this model demonstrates the power of phonetic feedback; very few neural enhancement techniques until now have been able to achieve improvements without parallel data. When parallel data is available, mimic loss works by comparing the outputs of the acoustic model on clean speech with the outputs of the acoustic model on denoised speech. This is a more informative loss than the loss against hard senone labels, and is complimentary to local losses. We show that mimic loss can be applied to an off-the-shelf enhancement system and gives an improvement in intelligibility scores. Our technique is agnostic to the enhancement system as long as it is differentiably trainable. Mimic loss has previously improved performance on robust ASR tasks BIBREF1, but has not yet demonstrated success at enhancement metrics, and has not been used in a non-parallel setting. We seek to demonstrate these advantages here: We show that using hard targets in the mimic loss framework leads to improvements in objective intelligibility metrics when no parallel data is available. We show that when parallel data is available, training the state-of-the-art method with mimic loss improves objective intelligibility metrics. Related Work Speech enhancement is a rich field of work with a huge variety of techniques. Spectral feature based enhancement systems have focused on masking approaches BIBREF2, and have gained popularity with deep learning techniques BIBREF3 for ideal ratio mask and ideal binary mask estimation BIBREF4. 
Related Work ::: Perceptual Loss Perceptual losses are a form of knowledge transfer BIBREF5, which is defined as the technique of adding auxiliary information at train time, to better inform the trained model. The first perceptual loss was introduced for the task of style transfer BIBREF6. These losses depends on a pre-trained network that can disentangle relevant factors. Two examples are fed through the network to generate a loss at a high level of the network. In style transfer, the perceptual loss ensures that the high-level contents of an image remain the same, while allowing the texture of the image to change. For speech-related tasks a perceptual loss has been used to denoise time-domain speech data BIBREF7, where the loss was called a "deep feature loss". The perceiving network was trained for acoustic environment detection and domestic audio tagging. The clean and denoised signals are both fed to this network, and a loss is computed at a higher level. Perceptual loss has also been used for spectral-domain data, in the mimic loss framework. This has been used for spectral mapping for robust ASR in BIBREF1 and BIBREF8. The perceiving network in this case is an acoustic model trained with senone targets. Clean and denoised spectral features are fed through the acoustic model, and a loss is computed from the outputs of the network. These works did not evaluate mimic loss for speech enhancement, nor did they develop the framework for use without parallel data. Related Work ::: Enhancement Without Parallel Data One approach for enhancement without parallel data introduces an adversarial loss to generate realistic masks BIBREF9. However, this work is only evaluated for ASR performance, and not speech enhancement performance. For the related task of voice conversion, a sparse representation was used by BIBREF10 to do conversion without parallel data. This wasn't evaluated on enhancement metrics or ASR metrics, but would prove an interesting approach. Several recent works have investigated jointly training the acoustic model with a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13, but these works did not evaluate their system on speech enhancement metrics. Indeed, our internal experiments show that without access to the clean data, joint training severely harms performance on these metrics. Mimic Loss for Enhancement As noted before, we build on the work by Pandey and Wang that denoises the speech signal in the time domain, but computes a mapping loss on the spectral magnitudes of the clean and denoised speech samples. This is possible because the STFT operation for computing the spectral features is fully differentiable. This framework for enhancement lends itself to other spectral processing techniques, such as mimic loss. In order to train this off-the-shelf denoiser using the mimic loss objective, we first train an acoustic model on clean spectral magnitudes. The training objective for this model is cross-entropy loss against hard senone targets. Crucially, the weights of the acoustic model are frozen during the training of the enhancement model. This prevents passing information from enhancement model to acoustic model in a manner other than by producing a signal that behaves like clean speech. This is in contrast to joint training, where the weights of the acoustic model are updated at the same time as the denoising model weights, which usually leads to a degradation in enhancement metrics. 
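To make the frozen-acoustic-model idea concrete, a minimal PyTorch-style sketch follows; the `spectral_magnitude` front end and its FFT sizes are placeholders, and the use of an L1 loss on the acoustic model's pre-softmax outputs anticipates the choice reported below rather than being prescribed here.

```python
import torch
import torch.nn.functional as F

def spectral_magnitude(wave, n_fft=512, hop=160):
    # assumed STFT front end; sizes are placeholders, not values from the paper
    window = torch.hann_window(n_fft, device=wave.device)
    return torch.stft(wave, n_fft, hop_length=hop, window=window, return_complex=True).abs()

def mimic_loss(enhancer, acoustic_model, noisy_wave, clean_wave):
    """The frozen AM sees clean and denoised speech; only the enhancer receives gradients."""
    for p in acoustic_model.parameters():        # freeze: no updates, no co-adaptation
        p.requires_grad_(False)
    denoised = enhancer(noisy_wave)              # time-domain enhancement output
    with torch.no_grad():                        # clean-speech behaviour is the fixed target
        target = acoustic_model(spectral_magnitude(clean_wave))
    pred = acoustic_model(spectral_magnitude(denoised))   # gradients flow through the frozen AM
    return F.l1_loss(pred, target)
```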
Without parallel speech examples, we apply the mimic loss framework by using hard senone targets instead of soft targets. The loss against these hard targets is cross-entropy loss ($L_{CE}$). The senone labels can be gathered from a hard alignment of the transcripts with the noisy or denoised features; the process does not require clean speech samples. Since this method only has access to phone alignments and not clean spectra, we do not expect it to improve the speech quality, but expect it to improve intelligibility. We also ran experiments on different formats for the mimic loss when parallel data is available. Setting the mapping losses to be $L_1$ was determined to be most effective by Pandey and Wang. For the mimic loss, we tried both teacher-student learning with $L_1$ and $L_2$ losses, and knowledge-distillation with various temperature parameters on the softmax outputs. We found that using $L_1$ loss on the pre-softmax outputs performed the best, likely due to the fact that the other losses are also $L_1$. When the loss types are different, one loss type usually comes to dominate, but each loss serves an important purpose here. We provide an example of the effects of mimic loss, both with and without parallel data, by showing the log-mel filterbank features, seen in Figure FIGREF6. A set of relatively high-frequency and low-magnitude features is seen in the highlighted portion of the features. Since local metrics tend to emphasize regions of high energy differences, they miss this important phonetic information. However, in the mimic-loss-trained systems, this information is retained. Experiments For all experiments, we use the CHiME-4 corpus, a popular corpus for robust ASR experiments, though it has not often been used for enhancement experiments. During training, we randomly select a channel for each example each epoch, and we evaluate our enhancement results on channel 5 of et05. Before training the enhancement system, we train the acoustic model used for mimic loss on the clean spectral magnitudes available in CHiME-4. Our architecture is a Wide-ResNet-inspired model, that takes a whole utterance and produces a posterior over each frame. The model has 4 blocks of 3 layers, where the blocks have 128, 256, 512, 1024 filters respectively. The first layer of each block has a stride of 2, down-sampling the input. After the convolutional layers, the filters are divided into 16 parts, and each part is fed to a fully-connected layer, so the number of output posterior vectors is the same as the input frames. This is an utterance-level version of the model in BIBREF8. In the case of parallel data, the best results were obtained by training the network for only a few epochs (we used 5). However, when using hard targets, we achieved better results from using the fully-converged network. We suspect that the outputs of the converged network more closely reflect the one-hot nature of the senone labels, which makes training easier for the enhancement model when hard targets are used. On the other hand, only lightly training the acoustic model generates softer targets when parallel data is available. For our enhancement model, we began with the state-of-the-art framework introduced by Pandey and Wang in BIBREF0, called AECNN. We reproduce the architecture of their system, replacing the PReLU activations with leaky ReLU activations, since the performance is similar, but the leaky ReLU network has fewer parameters. 
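Before the experimental results, here is a sketch of the no-parallel-data variant described at the start of this section, reusing the `spectral_magnitude` helper from the previous sketch; the (batch, frames, senones) output convention is an assumption.

```python
import torch.nn.functional as F

def hard_target_loss(enhancer, frozen_acoustic_model, noisy_wave, senone_labels):
    """
    No clean waveform is needed: per-frame senone labels come from a hard alignment
    of the transcript, and the loss is plain cross-entropy through the frozen AM.
    senone_labels: LongTensor of shape (batch, frames).
    """
    denoised = enhancer(noisy_wave)
    logits = frozen_acoustic_model(spectral_magnitude(denoised))   # (batch, frames, senones)
    return F.cross_entropy(logits.transpose(1, 2), senone_labels)  # CE expects (N, C, ...)
```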
Experiments ::: Without parallel data We first train this network without the use of parallel data, using only the senone targets, and starting from random weights in the AECNN. In Table TABREF8 we see results for enhancement without parallel data: the cross-entropy loss with senone targets given a frozen clean-speech network is enough to improve eSTOI by 4.3 points. This is a surprising improvement in intelligibility given the lack of parallel data, and demonstrates that phonetic information alone is powerful enough to provide improvements to speech intelligibility metrics. The degradation in SI-SDR performance, a measure of speech quality, is expected, given that the denoising model does not have access to clean data, and may corrupt the phase. We compare also against joint training of the enhancement model with the acoustic model. This is a common technique for robust ASR, but has not been evaluated for enhancement. With the hard targets, joint training performs poorly on enhancement, due to co-adaptation of the enhancement and acoustic model networks. Freezing the acoustic model network is critical since it requires the enhancement model to produce speech the acoustic model sees as “clean.” Experiments ::: With parallel data In addition to the setting without any parallel data, we show results given parallel data. In Table TABREF10 we demonstrate that training the AECNN framework with mimic loss improves intelligibility over both the model trained with only time-domain loss (AECNN-T), as well as the model trained with both time-domain and spectral-domain losses (AECNN-T-SM). We only see a small improvement in the SI-SDR, likely due to the fact that the mimic loss technique is designed to improve the recognizablity of the results. In fact, seeing any improvement in SI-SDR at all is a surprising result. We also compare against joint training with an identical setup to the mimic setup (i.e. a combination of three losses: teacher-student loss against the clean outputs, spectral magnitude loss, and time-domain loss). The jointly trained acoustic model is initialized with the weights of the system trained on clean speech. We find that joint training performs much better on the enhancement metrics in this setup, though still not quite as well as the mimic setup. Compared to the previous experiment without parallel data, the presence of the spectral magnitude and time-domain losses likely keep the enhancement output more stable when joint training, at the cost of requiring parallel training data. Conclusion We have shown that phonetic feedback is valuable for speech enhancement systems. In addition, we show that our approach to this feedback, the mimic loss framework, is useful in many scenarios: with and without the presence of parallel data, in both the enhancement and robust ASR scenarios. Using this framework, we show improvement on a state-of-the-art model for speech enhancement. The methodology is agnostic to the enhancement technique, so may be applicable to other differentiably trained enhancement modules. In the future, we hope to address the reduction in speech quality scores when training without parallel data. One approach may be to add a GAN loss to the denoised time-domain signal, which may help with introduced distortions. In addition, we could soften the cross-entropy loss to an $L_1$ loss by generating "prototypical" posterior distributions for each senone, averaged across the training dataset. Mimic loss as a framework allows for a rich space of future possibilities. 
To that end, we have made our code available at http://github.com/OSU-slatelab/mimic-enhance.
a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13
e1b36927114969f3b759cba056cfb3756de474e4
e1b36927114969f3b759cba056cfb3756de474e4_0
Q: By how much does using phonetic feedback improve state-of-the-art systems? Text: Introduction Typical speech enhancement techniques focus on local criteria for improving speech intelligibility and quality. Time-frequency prediction techniques use local spectral quality estimates as an objective function; time domain methods directly predict clean output with a potential spectral quality metric BIBREF0. Such techniques have been extremely successful in predicting a speech denoising function, but also require parallel clean and noisy speech for training. The trained systems implicitly learn the phonetic patterns of the speech signal in the coordinated output of time-domain or time-frequency units. However, our hypothesis is that directly providing phonetic feedback can be a powerful additional signal for speech enhancement. For example, many local metrics will be more attuned to high-energy regions of speech, but not all phones of a language carry equal energy in production (compare /v/ to /ae/). Our proxy for phonetic intelligibility is a frozen automatic speech recognition (ASR) acoustic model trained on clean speech; the loss functions we incorporate into training encourage the speech enhancement system to produce output that is interpretable to a fixed acoustic model as clean speech, by making the output of the acoustic model mimic its behavior under clean speech. This mimic loss BIBREF1 provides key linguistic insights to the enhancement model about what a recognizable phoneme looks like. When no parallel data is available, but transcripts are available, a loss is easily computed against hard senone labels and backpropagated to the enhancement model trained from scratch. Since the clean acoustic model is frozen, the only way for the enhancement model to improve the loss is to make a signal that is more recognizable to the acoustic model. The improvement by this model demonstrates the power of phonetic feedback; very few neural enhancement techniques until now have been able to achieve improvements without parallel data. When parallel data is available, mimic loss works by comparing the outputs of the acoustic model on clean speech with the outputs of the acoustic model on denoised speech. This is a more informative loss than the loss against hard senone labels, and is complimentary to local losses. We show that mimic loss can be applied to an off-the-shelf enhancement system and gives an improvement in intelligibility scores. Our technique is agnostic to the enhancement system as long as it is differentiably trainable. Mimic loss has previously improved performance on robust ASR tasks BIBREF1, but has not yet demonstrated success at enhancement metrics, and has not been used in a non-parallel setting. We seek to demonstrate these advantages here: We show that using hard targets in the mimic loss framework leads to improvements in objective intelligibility metrics when no parallel data is available. We show that when parallel data is available, training the state-of-the-art method with mimic loss improves objective intelligibility metrics. Related Work Speech enhancement is a rich field of work with a huge variety of techniques. Spectral feature based enhancement systems have focused on masking approaches BIBREF2, and have gained popularity with deep learning techniques BIBREF3 for ideal ratio mask and ideal binary mask estimation BIBREF4. 
Related Work ::: Perceptual Loss Perceptual losses are a form of knowledge transfer BIBREF5, which is defined as the technique of adding auxiliary information at train time, to better inform the trained model. The first perceptual loss was introduced for the task of style transfer BIBREF6. These losses depends on a pre-trained network that can disentangle relevant factors. Two examples are fed through the network to generate a loss at a high level of the network. In style transfer, the perceptual loss ensures that the high-level contents of an image remain the same, while allowing the texture of the image to change. For speech-related tasks a perceptual loss has been used to denoise time-domain speech data BIBREF7, where the loss was called a "deep feature loss". The perceiving network was trained for acoustic environment detection and domestic audio tagging. The clean and denoised signals are both fed to this network, and a loss is computed at a higher level. Perceptual loss has also been used for spectral-domain data, in the mimic loss framework. This has been used for spectral mapping for robust ASR in BIBREF1 and BIBREF8. The perceiving network in this case is an acoustic model trained with senone targets. Clean and denoised spectral features are fed through the acoustic model, and a loss is computed from the outputs of the network. These works did not evaluate mimic loss for speech enhancement, nor did they develop the framework for use without parallel data. Related Work ::: Enhancement Without Parallel Data One approach for enhancement without parallel data introduces an adversarial loss to generate realistic masks BIBREF9. However, this work is only evaluated for ASR performance, and not speech enhancement performance. For the related task of voice conversion, a sparse representation was used by BIBREF10 to do conversion without parallel data. This wasn't evaluated on enhancement metrics or ASR metrics, but would prove an interesting approach. Several recent works have investigated jointly training the acoustic model with a masking speech enhancement model BIBREF11, BIBREF12, BIBREF13, but these works did not evaluate their system on speech enhancement metrics. Indeed, our internal experiments show that without access to the clean data, joint training severely harms performance on these metrics. Mimic Loss for Enhancement As noted before, we build on the work by Pandey and Wang that denoises the speech signal in the time domain, but computes a mapping loss on the spectral magnitudes of the clean and denoised speech samples. This is possible because the STFT operation for computing the spectral features is fully differentiable. This framework for enhancement lends itself to other spectral processing techniques, such as mimic loss. In order to train this off-the-shelf denoiser using the mimic loss objective, we first train an acoustic model on clean spectral magnitudes. The training objective for this model is cross-entropy loss against hard senone targets. Crucially, the weights of the acoustic model are frozen during the training of the enhancement model. This prevents passing information from enhancement model to acoustic model in a manner other than by producing a signal that behaves like clean speech. This is in contrast to joint training, where the weights of the acoustic model are updated at the same time as the denoising model weights, which usually leads to a degradation in enhancement metrics. 
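As a generic illustration of the perceptual / deep-feature loss pattern summarized above (not the exact recipe of any cited system), a short sketch; the choice of layers and the L1 distance are assumptions.

```python
import torch
import torch.nn.functional as F

def deep_feature_loss(perceiving_layers, denoised, clean):
    """Accumulate distances between the two signals' activations in a frozen pretrained network."""
    loss = 0.0
    a, b = denoised, clean
    for layer in perceiving_layers:       # e.g. a list of the perceiving network's blocks
        a = layer(a)
        with torch.no_grad():
            b = layer(b)                  # clean-side activations serve only as targets
        loss = loss + F.l1_loss(a, b)
    return loss
```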
Without parallel speech examples, we apply the mimic loss framework by using hard senone targets instead of soft targets. The loss against these hard targets is cross-entropy loss ($L_{CE}$). The senone labels can be gathered from a hard alignment of the transcripts with the noisy or denoised features; the process does not require clean speech samples. Since this method only has access to phone alignments and not clean spectra, we do not expect it to improve the speech quality, but expect it to improve intelligibility. We also ran experiments on different formats for the mimic loss when parallel data is available. Setting the mapping losses to be $L_1$ was determined to be most effective by Pandey and Wang. For the mimic loss, we tried both teacher-student learning with $L_1$ and $L_2$ losses, and knowledge-distillation with various temperature parameters on the softmax outputs. We found that using $L_1$ loss on the pre-softmax outputs performed the best, likely due to the fact that the other losses are also $L_1$. When the loss types are different, one loss type usually comes to dominate, but each loss serves an important purpose here. We provide an example of the effects of mimic loss, both with and without parallel data, by showing the log-mel filterbank features, seen in Figure FIGREF6. A set of relatively high-frequency and low-magnitude features is seen in the highlighted portion of the features. Since local metrics tend to emphasize regions of high energy differences, they miss this important phonetic information. However, in the mimic-loss-trained systems, this information is retained. Experiments For all experiments, we use the CHiME-4 corpus, a popular corpus for robust ASR experiments, though it has not often been used for enhancement experiments. During training, we randomly select a channel for each example each epoch, and we evaluate our enhancement results on channel 5 of et05. Before training the enhancement system, we train the acoustic model used for mimic loss on the clean spectral magnitudes available in CHiME-4. Our architecture is a Wide-ResNet-inspired model, that takes a whole utterance and produces a posterior over each frame. The model has 4 blocks of 3 layers, where the blocks have 128, 256, 512, 1024 filters respectively. The first layer of each block has a stride of 2, down-sampling the input. After the convolutional layers, the filters are divided into 16 parts, and each part is fed to a fully-connected layer, so the number of output posterior vectors is the same as the input frames. This is an utterance-level version of the model in BIBREF8. In the case of parallel data, the best results were obtained by training the network for only a few epochs (we used 5). However, when using hard targets, we achieved better results from using the fully-converged network. We suspect that the outputs of the converged network more closely reflect the one-hot nature of the senone labels, which makes training easier for the enhancement model when hard targets are used. On the other hand, only lightly training the acoustic model generates softer targets when parallel data is available. For our enhancement model, we began with the state-of-the-art framework introduced by Pandey and Wang in BIBREF0, called AECNN. We reproduce the architecture of their system, replacing the PReLU activations with leaky ReLU activations, since the performance is similar, but the leaky ReLU network has fewer parameters. 
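A heavily simplified stand-in for the utterance-level acoustic model described earlier in this section is sketched below; it mirrors only the block count, filter widths, and stride-2 first layers, while the residual connections, padding details, the 16-way filter split into per-frame fully-connected heads, and the senone inventory size are approximated or assumed.

```python
import torch
import torch.nn as nn

class CleanAcousticModel(nn.Module):
    """4 blocks x 3 conv layers (128/256/512/1024 filters); first layer of each block strides by 2."""
    def __init__(self, n_senones=2000, widths=(128, 256, 512, 1024)):   # n_senones is a placeholder
        super().__init__()
        layers, in_ch = [], 1
        for w in widths:
            for i in range(3):
                stride = 2 if i == 0 else 1
                # stride along frequency only, so one posterior per input frame is preserved
                layers += [nn.Conv2d(in_ch, w, kernel_size=3, stride=(1, stride), padding=1),
                           nn.ReLU()]
                in_ch = w
        self.convs = nn.Sequential(*layers)
        self.head = nn.Linear(widths[-1], n_senones)

    def forward(self, spec):                   # spec: (batch, frames, freq_bins)
        h = self.convs(spec.unsqueeze(1))      # -> (batch, 1024, frames, freq')
        h = h.mean(dim=3).transpose(1, 2)      # pool frequency -> (batch, frames, 1024)
        return self.head(h)                    # per-frame senone scores
```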
Experiments ::: Without parallel data We first train this network without the use of parallel data, using only the senone targets, and starting from random weights in the AECNN. In Table TABREF8 we see results for enhancement without parallel data: the cross-entropy loss with senone targets given a frozen clean-speech network is enough to improve eSTOI by 4.3 points. This is a surprising improvement in intelligibility given the lack of parallel data, and demonstrates that phonetic information alone is powerful enough to provide improvements to speech intelligibility metrics. The degradation in SI-SDR performance, a measure of speech quality, is expected, given that the denoising model does not have access to clean data, and may corrupt the phase. We compare also against joint training of the enhancement model with the acoustic model. This is a common technique for robust ASR, but has not been evaluated for enhancement. With the hard targets, joint training performs poorly on enhancement, due to co-adaptation of the enhancement and acoustic model networks. Freezing the acoustic model network is critical since it requires the enhancement model to produce speech the acoustic model sees as “clean.” Experiments ::: With parallel data In addition to the setting without any parallel data, we show results given parallel data. In Table TABREF10 we demonstrate that training the AECNN framework with mimic loss improves intelligibility over both the model trained with only time-domain loss (AECNN-T), as well as the model trained with both time-domain and spectral-domain losses (AECNN-T-SM). We only see a small improvement in the SI-SDR, likely due to the fact that the mimic loss technique is designed to improve the recognizablity of the results. In fact, seeing any improvement in SI-SDR at all is a surprising result. We also compare against joint training with an identical setup to the mimic setup (i.e. a combination of three losses: teacher-student loss against the clean outputs, spectral magnitude loss, and time-domain loss). The jointly trained acoustic model is initialized with the weights of the system trained on clean speech. We find that joint training performs much better on the enhancement metrics in this setup, though still not quite as well as the mimic setup. Compared to the previous experiment without parallel data, the presence of the spectral magnitude and time-domain losses likely keep the enhancement output more stable when joint training, at the cost of requiring parallel training data. Conclusion We have shown that phonetic feedback is valuable for speech enhancement systems. In addition, we show that our approach to this feedback, the mimic loss framework, is useful in many scenarios: with and without the presence of parallel data, in both the enhancement and robust ASR scenarios. Using this framework, we show improvement on a state-of-the-art model for speech enhancement. The methodology is agnostic to the enhancement technique, so may be applicable to other differentiably trained enhancement modules. In the future, we hope to address the reduction in speech quality scores when training without parallel data. One approach may be to add a GAN loss to the denoised time-domain signal, which may help with introduced distortions. In addition, we could soften the cross-entropy loss to an $L_1$ loss by generating "prototypical" posterior distributions for each senone, averaged across the training dataset. Mimic loss as a framework allows for a rich space of future possibilities. 
To that end, we have made our code available at http://github.com/OSU-slatelab/mimic-enhance.
Improved AECNN-T by 2.1 and AECNN-T-SM by 0.9
186ccc18c6361904bee0d58196e341a719fb31c2
186ccc18c6361904bee0d58196e341a719fb31c2_0
Q: What features are used? Text: Introduction and Related Work Psychotic disorders affect approximately 2.5-4% of the population BIBREF0 BIBREF1. They are one of the leading causes of disability worldwide BIBREF2 and are a frequent cause of inpatient readmission after discharge BIBREF3. Readmissions are disruptive for patients and families, and are a key driver of rising healthcare costs BIBREF4 BIBREF5. Assessing readmission risk is therefore critically needed, as it can help inform the selection of treatment interventions and implement preventive measures. Predicting hospital readmission risk is, however, a complex endeavour across all medical fields. Prior work in readmission risk prediction has used structured data (such as medical comorbidity, prior hospitalizations, sociodemographic factors, functional status, physiological variables, etc) extracted from patients' charts BIBREF6. NLP-based prediction models that extract unstructured data from EHR have also been developed with some success in other medical fields BIBREF7. In Psychiatry, due to the unique characteristics of medical record content (highly varied and context-sensitive vocabulary, abundance of multiword expressions, etc), NLP-based approaches have seldom been applied BIBREF8, BIBREF9, BIBREF10 and strategies to study readmission risk factors primarily rely on clinical observation and manual review BIBREF11 BIBREF12, which is effort-intensive, and does not scale well. In this paper we aim to assess the suitability of using NLP-based features like clinical sentiment analysis and topic extraction to predict 30-day readmission risk in psychiatry patients. We begin by describing the EHR corpus that was created using in-house data to train and evaluate our models. We then present the NLP pipeline for feature extraction that was used to parse the EHRs in our corpus. Finally, we compare the performances of our model when using only structured clinical variables and when incorporating features derived from free-text narratives. Data The corpus consists of a collection of 2,346 clinical notes (admission notes, progress notes, and discharge summaries), which amounts to 2,372,323 tokens in total (an average of 1,011 tokens per note). All the notes were written in English and extracted from the EHRs of 183 psychosis patients from McLean Psychiatric Hospital in Belmont, MA, all of whom had in their history at least one instance of 30-day readmission. The age of the patients ranged from 20 to 67 (mean = 26.65, standard deviation = 8.73). 51% of the patients were male. The number of admissions per patient ranged from 2 to 21 (mean = 4, standard deviation = 2.85). Each admission contained on average 4.25 notes and 4,298 tokens. In total, the corpus contains 552 admissions, and 280 of those (50%) resulted in early readmissions. Feature Extraction The readmission risk prediction task was performed at the admission level. An admission consists of a collection of all the clinical notes for a given patient written by medical personnel between inpatient admission and discharge. Every admission was labeled as either `readmitted' (i.e. the patient was readmitted within the next 30 days of discharge) or `not readmitted'. Therefore, the classification task consists of creating a single feature representation of all the clinical notes belonging to one admission, plus the past medical history and demographic information of the patient, and establishing whether that admission will be followed by a 30-day readmission or not. 
45 clinically interpretable features per admission were extracted as inputs to the readmission risk classifier. These features can be grouped into three categories (See Table TABREF5 for complete list of features): Sociodemographics: gender, age, marital status, etc. Past medical history: number of previous admissions, history of suicidality, average length of stay (up until that admission), etc. Information from the current admission: length of stay (LOS), suicidal risk, number and length of notes, time of discharge, evaluation scores, etc. The Current Admission feature group has the most number of features, with 29 features included in this group alone. These features can be further stratified into two groups: `structured' clinical features and `unstructured' clinical features. Feature Extraction ::: Structured Features Structure features are features that were identified on the EHR using regular expression matching and include rating scores that have been reported in the psychiatric literature as correlated with increased readmission risk, such as Global Assessment of Functioning, Insight and Compliance: Global Assessment of Functioning (GAF): The psychosocial functioning of the patient ranging from 100 (extremely high functioning) to 1 (severely impaired) BIBREF13. Insight: The degree to which the patient recognizes and accepts his/her illness (either Good, Fair or Poor). Compliance: The ability of the patient to comply with medication and to follow medical advice (either Yes, Partial, or None). These features are widely-used in clinical practice and evaluate the general state and prognosis of the patient during the patient's evaluation. Feature Extraction ::: Unstructured Features Unstructured features aim to capture the state of the patient in relation to seven risk factor domains (Appearance, Thought Process, Thought Content, Interpersonal, Substance Use, Occupation, and Mood) from the free-text narratives on the EHR. These seven domains have been identified as associated with readmission risk in prior work BIBREF14. These unstructured features include: 1) the relative number of sentences in the admission notes that involve each risk factor domain (out of total number of sentences within the admission) and 2) clinical sentiment scores for each of these risk factor domains, i.e. sentiment scores that evaluate the patient’s psychosocial functioning level (positive, negative, or neutral) with respect to each of these risk factor domain. These sentiment scores were automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15 and pretrained on in-house psychiatric EHR text. In our paper we also showed that this automatic pipeline achieves reasonably strong F-scores, with an overall performance of 0.828 F1 for the topic extraction component and 0.5 F1 on the clinical sentiment component. The clinical sentiment scores are computed for every note in the admission. Figure FIGREF4 details the data analysis pipeline that is employed for the feature extraction. First, a multilayer perceptron (MLP) classifier is trained on EHR sentences (8,000,000 sentences consisting of 340,000,000 tokens) that are extracted from the Research Patient Data Registry (RPDR), a centralized regional data repository of clinical data from all institutions in the Partners HealthCare network. 
These sentences are automatically identified and labeled for their respective risk factor domain(s) by using a lexicon of clinician identified domain-related keywords and multiword expressions, and thus require no manual annotation. The sentences are vectorized using the Universal Sentence Encoder (USE), a transformer attention network pretrained on a large volume of general-domain web data and optimized for greater-than-word length sequences. Sentences that are marked for one or more of the seven risk factor domains are then passed to a suite of seven clinical sentiment MLP classifiers (one for each risk factor domain) that are trained on a corpus of 3,500 EHR sentences (63,127 tokens) labeled by a team of three clinicians involved in this project. To prevent overfitting to this small amount of training data, the models are designed to be more generalizable through the use of two hidden layers and a dropout rate BIBREF16 of 0.75. The outputs of each clinical sentiment model are then averaged across notes to create a single value for each risk factor domain that corresponds to the patient's level of functioning on a -1 to 1 scale (see Figure 2). Experiments and Results We tested six different classification models: Stochastic Gradient Descent, Logistic Regression, C-Support Vector, Decision Tree, Random Forest, and MLP. All of them were implemented and fine-tuned using the scikit-learn machine learning toolkit BIBREF17. Because an accurate readmission risk prediction model is designed to be used to inform treatment decisions, it is important in adopting a model architecture that is clinically interpretable and allows for an analysis of the specific contribution of each feature in the input. As such, we include a Random Forest classifier, which we also found to have the best performance out of the six models. To systematically evaluate the importance of the clinical sentiment values extracted from the free text in EHRs, we first build a baseline model using the structured features, which are similar to prior studies on readmission risk prediction BIBREF6. We then compare two models incorporating the unstructured features. In the "Baseline+Domain Sentences" model, we consider whether adding the counts of sentences per EHR that involve each of the seven risk factor domains as identified by our topic extraction model improved the model performance. In the "Baseline+Clinical Sentiment" model, we evaluate whether adding clinical sentiment scores for each risk factor domain improved the model performance. We also experimented with combining both sets of features and found no additional improvement. Each model configuration was trained and evaluated 100 times and the features with the highest importance for each iteration were recorded. To further fine-tune our models, we also perform three-fold cross-validated recursive feature elimination 30 times on each of the three configurations and report the performances of the models with the best performing feature sets. These can be found in Table TABREF9. Our baseline results show that the model trained using only the structured features produce equivalent performances as reported by prior models for readmission risk prediction across all healthcare fields BIBREF18. The two models that were trained using unstructured features produced better results and both outperform the baseline results. The "Baseline+Clinical Sentiment" model produced the best results, resulting in an F1 of 0.72, an improvement of 14.3% over the baseline. 
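Referring back to the pipeline described above, a minimal sketch of one of the clinical sentiment classifiers: a two-hidden-layer MLP with 0.75 dropout over Universal Sentence Encoder vectors; the hidden sizes, the 512-dimensional USE input, and the three-way output (negative/neutral/positive) are assumptions for this example.

```python
import torch.nn as nn

class ClinicalSentimentMLP(nn.Module):
    """Two hidden layers with dropout 0.75; one such classifier per risk-factor domain."""
    def __init__(self, use_dim=512, hidden=128, n_classes=3):   # sizes are placeholders
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(use_dim, hidden), nn.ReLU(), nn.Dropout(0.75),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.75),
            nn.Linear(hidden, n_classes),                       # negative / neutral / positive
        )

    def forward(self, use_embeddings):          # (batch, use_dim) USE sentence vectors
        return self.net(use_embeddings)
```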
To establish which features were not relevant to the classification task, we performed recursive feature elimination. We identified 13 feature values as not predictive of readmission (they were eliminated from at least two of the three feature sets without producing a drop in performance), including: all values for marital status (Single, Married, Other, and Unknown), missing values for GAF at admission, GAF score difference between admission & discharge, GAF at discharge, Veteran status, Race, and Insight & Mode of Past Insight values reflecting a clinically positive change (Good and Improving). Poor Insight values, however, are predictive of readmission. Conclusions We have introduced and assessed the efficacy of adding NLP-based features derived from topic extraction and clinical sentiment analysis to traditional structured-feature-based classification models for early readmission prediction in psychiatry patients. The approach is a hybrid machine learning method that combines deep learning techniques with linear methods to ensure clinical interpretability of the prediction model. Results show not only that both the number of sentences per risk domain and the clinical sentiment analysis scores outperform the structured-feature baseline and contribute significantly to better classification results, but also that the clinical sentiment features produce the highest scores on all evaluation metrics (F1 = 0.72). These results suggest that clinical sentiment features for each of the seven risk domains extracted from free-text narratives further enhance early readmission prediction. In addition, combining state-of-the-art MLP methods has potential utility in generating clinically meaningful features that can be used in downstream linear models with interpretable and transparent results. In future work, we intend to increase the size of the EHR corpus, increase the demographic spread of patients, and extract new features based on clinical expertise to improve our model performance. Additionally, we intend to continue our clinical sentiment annotation project from BIBREF15 to increase the accuracy of that portion of our NLP pipeline. Acknowledgments This work was supported by a grant from the National Institute of Mental Health (grant no. 5R01MH109687 to Mei-Hua Hall). We would also like to thank the LOUHI 2019 Workshop reviewers for their constructive and helpful comments.
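The model comparison and three-fold cross-validated recursive feature elimination described above could be sketched with scikit-learn roughly as follows. The feature matrix here is random placeholder data standing in for the 45 per-admission features, and the hyperparameters are illustrative assumptions rather than the settings behind the reported results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_admissions, n_features = 552, 45                 # corpus size and feature count from the paper
X = rng.normal(size=(n_admissions, n_features))    # placeholder for the real feature matrix
y = rng.integers(0, 2, size=n_admissions)          # placeholder 30-day readmission labels

# Baseline vs. sentiment-augmented configurations would differ only in which columns of X are used.
clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Three-fold cross-validated recursive feature elimination, as described above.
selector = RFECV(estimator=clf, step=1, cv=3, scoring="f1")
selector.fit(X, y)
print("Selected features:", int(selector.n_features_))

# Score the reduced feature set; repeating this over many seeds approximates the
# repeated train/evaluate procedure used in the paper.
scores = cross_val_score(clf, X[:, selector.support_], y, cv=3, scoring="f1")
print("Mean F1:", scores.mean())
```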
Sociodemographics: gender, age, marital status, etc., Past medical history: number of previous admissions, history of suicidality, average length of stay (up until that admission), etc., Information from the current admission: length of stay (LOS), suicidal risk, number and length of notes, time of discharge, evaluation scores, etc.
fd5412e2784acefb50afc3bfae1e087580b90ab9
fd5412e2784acefb50afc3bfae1e087580b90ab9_0
Q: Do they compare to previous models? Text: Introduction and Related Work Psychotic disorders affect approximately 2.5-4% of the population BIBREF0 BIBREF1. They are one of the leading causes of disability worldwide BIBREF2 and are a frequent cause of inpatient readmission after discharge BIBREF3. Readmissions are disruptive for patients and families, and are a key driver of rising healthcare costs BIBREF4 BIBREF5. Assessing readmission risk is therefore critically needed, as it can help inform the selection of treatment interventions and implement preventive measures. Predicting hospital readmission risk is, however, a complex endeavour across all medical fields. Prior work in readmission risk prediction has used structured data (such as medical comorbidity, prior hospitalizations, sociodemographic factors, functional status, physiological variables, etc) extracted from patients' charts BIBREF6. NLP-based prediction models that extract unstructured data from EHR have also been developed with some success in other medical fields BIBREF7. In Psychiatry, due to the unique characteristics of medical record content (highly varied and context-sensitive vocabulary, abundance of multiword expressions, etc), NLP-based approaches have seldom been applied BIBREF8, BIBREF9, BIBREF10 and strategies to study readmission risk factors primarily rely on clinical observation and manual review BIBREF11 BIBREF12, which is effort-intensive, and does not scale well. In this paper we aim to assess the suitability of using NLP-based features like clinical sentiment analysis and topic extraction to predict 30-day readmission risk in psychiatry patients. We begin by describing the EHR corpus that was created using in-house data to train and evaluate our models. We then present the NLP pipeline for feature extraction that was used to parse the EHRs in our corpus. Finally, we compare the performances of our model when using only structured clinical variables and when incorporating features derived from free-text narratives. Data The corpus consists of a collection of 2,346 clinical notes (admission notes, progress notes, and discharge summaries), which amounts to 2,372,323 tokens in total (an average of 1,011 tokens per note). All the notes were written in English and extracted from the EHRs of 183 psychosis patients from McLean Psychiatric Hospital in Belmont, MA, all of whom had in their history at least one instance of 30-day readmission. The age of the patients ranged from 20 to 67 (mean = 26.65, standard deviation = 8.73). 51% of the patients were male. The number of admissions per patient ranged from 2 to 21 (mean = 4, standard deviation = 2.85). Each admission contained on average 4.25 notes and 4,298 tokens. In total, the corpus contains 552 admissions, and 280 of those (50%) resulted in early readmissions. Feature Extraction The readmission risk prediction task was performed at the admission level. An admission consists of a collection of all the clinical notes for a given patient written by medical personnel between inpatient admission and discharge. Every admission was labeled as either `readmitted' (i.e. the patient was readmitted within the next 30 days of discharge) or `not readmitted'. Therefore, the classification task consists of creating a single feature representation of all the clinical notes belonging to one admission, plus the past medical history and demographic information of the patient, and establishing whether that admission will be followed by a 30-day readmission or not. 
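As a sketch of the labeling step just described, the snippet below marks an admission as readmitted when the same patient's next admission starts within 30 days of the current discharge. The admissions table and its column names are hypothetical; the real labels come from the hospital EHR data.

```python
import pandas as pd

# Hypothetical admissions table; in practice this comes from the EHR system.
admissions = pd.DataFrame({
    "patient_id": [1, 1, 2, 2],
    "admit_date": pd.to_datetime(["2012-01-05", "2012-02-20", "2013-03-01", "2013-06-15"]),
    "discharge_date": pd.to_datetime(["2012-01-25", "2012-03-10", "2013-03-20", "2013-07-01"]),
})

admissions = admissions.sort_values(["patient_id", "admit_date"]).reset_index(drop=True)

# Date of the same patient's next admission, if any (NaT for the last admission).
admissions["next_admit"] = admissions.groupby("patient_id")["admit_date"].shift(-1)

# Label: readmitted within 30 days of discharge (comparisons with NaT evaluate to False).
gap = admissions["next_admit"] - admissions["discharge_date"]
admissions["readmitted_30d"] = gap <= pd.Timedelta(days=30)

print(admissions[["patient_id", "discharge_date", "next_admit", "readmitted_30d"]])
```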
Yes
c7f087c78768d5c6f3ff26921858186d627fd4fd
c7f087c78768d5c6f3ff26921858186d627fd4fd_0
Q: How do they incorporate sentiment analysis?
features per admission were extracted as inputs to the readmission risk classifier
82596190560dc2e2ced2131779730f40a3f3eb8c
82596190560dc2e2ced2131779730f40a3f3eb8c_0
Q: What is the dataset used?
EHRs of 183 psychosis patients from McLean Psychiatric Hospital in Belmont, MA
345f65eaff1610deecb02ff785198aa531648e75
345f65eaff1610deecb02ff785198aa531648e75_0
Q: How do they extract topics?
automatically obtained through the topic extraction and sentiment analysis pipeline introduced in our prior work BIBREF15
51d03f0741b72ae242c380266acd2321baf43444
51d03f0741b72ae242c380266acd2321baf43444_0
Q: How does this compare to simple interpolation between a word-level and a character-level language model? Text: Introduction This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/ Many NLP tasks, including named entity recognition (NER), part-of-speech (POS) tagging and shallow parsing can be framed as types of sequence labeling. The development of accurate and efficient sequence labeling models is thereby useful for a wide range of downstream applications. Work in this area has traditionally involved task-specific feature engineering – for example, integrating gazetteers for named entity recognition, or using features from a morphological analyser in POS-tagging. Recent developments in neural architectures and representation learning have opened the door to models that can discover useful features automatically from the data. Such sequence labeling systems are applicable to many tasks, using only the surface text as input, yet are able to achieve competitive results BIBREF0 , BIBREF1 . Current neural models generally make use of word embeddings, which allow them to learn similar representations for semantically or functionally similar words. While this is an important improvement over count-based models, they still have weaknesses that should be addressed. The most obvious problem arises when dealing with out-of-vocabulary (OOV) words – if a token has never been seen before, then it does not have an embedding and the model needs to back-off to a generic OOV representation. Words that have been seen very infrequently have embeddings, but they will likely have low quality due to lack of training data. The approach can also be sub-optimal in terms of parameter usage – for example, certain suffixes indicate more likely POS tags for these words, but this information gets encoded into each individual embedding as opposed to being shared between the whole vocabulary. In this paper, we construct a task-independent neural network architecture for sequence labeling, and then extend it with two different approaches for integrating character-level information. By operating on individual characters, the model is able to infer representations for previously unseen words and share information about morpheme-level regularities. We propose a novel architecture for combining character-level representations with word embeddings using a gating mechanism, also referred to as attention, which allows the model to dynamically decide which source of information to use for each word. In addition, we describe a new objective for model training where the character-level representations are optimised to mimic the current state of word embeddings. We evaluate the neural models on 8 datasets from the fields of NER, POS-tagging, chunking and error detection in learner texts. Our experiments show that including a character-based component in the sequence labeling model provides substantial performance improvements on all the benchmarks. In addition, the attention-based architecture achieves the best results on all evaluations, while requiring a smaller number of parameters. Bidirectional LSTM for sequence labeling We first describe a basic word-level neural network for sequence labeling, following the models described by Lample2016 and Rei2016, and then propose two alternative methods for incorporating character-level information. Figure 1 shows the general architecture of the sequence labeling network. 
The model receives a sequence of tokens $(w_1, ..., w_T)$ as input, and predicts a label corresponding to each of the input tokens. The tokens are first mapped to a distributed vector space, resulting in a sequence of word embeddings $(x_1, ..., x_T)$ . Next, the embeddings are given as input to two LSTM BIBREF2 components moving in opposite directions through the text, creating context-specific representations. The respective forward- and backward-conditioned representations are concatenated for each word position, resulting in representations that are conditioned on the whole sequence: $$\overrightarrow{h_t} = LSTM(x_t, \overrightarrow{h_{t-1}})\hspace{30.0pt} \overleftarrow{h_t} = LSTM(x_t, \overleftarrow{h_{t+1}})\hspace{30.0pt} h_t = [\overrightarrow{h_t};\overleftarrow{h_t}]$$ (Eq. 1) We include an extra narrow hidden layer on top of the LSTM, which proved to be a useful modification based on development experiments. An additional hidden layer allows the model to detect higher-level feature combinations, while constraining it to be small forces it to focus on more generalisable patterns: $$d_t = tanh(W_d h_t)$$ (Eq. 2) where $W_d$ is a weight matrix between the layers, and the size of $d_t$ is intentionally kept small. Finally, to produce label predictions, we use either a softmax layer or a conditional random field (CRF, Lafferty2001). The softmax calculates a normalised probability distribution over all the possible labels for each word: $$P(y_t = k | d_t) = \frac{e^{W_{o,k} d_t}}{\sum _{\tilde{k} \in K} e^{W_{o,\tilde{k}} d_t}}$$ (Eq. 3) where $P (y_t = k|d_t )$ is the probability of the label of the $t$ -th word ( $y_t$ ) being $k$ , $K$ is the set of all possible labels, and $W_{o,k}$ is the $k$ -th row of output weight matrix $W_o$ . To optimise this model, we minimise categorical crossentropy, which is equivalent to minimising the negative log-probability of the correct labels: $$E = - \sum _{t=1}^{T} log(P(y_t| d_t))$$ (Eq. 4) Following Huang2015, we can also use a CRF as the output layer, which conditions each prediction on the previously predicted label. In this architecture, the last hidden layer is used to predict confidence scores for the word having each of the possible labels. A separate weight matrix is used to learn transition probabilities between different labels, and the Viterbi algorithm is used to find an optimal sequence of weights. Given that $y$ is a sequence of labels $[y_1, ..., y_T]$ , then the CRF score for this sequence can be calculated as: $$s(y) = \sum _{t=1}^T A_{t,y_t} + \sum _{t=0}^T B_{y_t,y_{t+1}}$$ (Eq. 5) $$A_{t,y_t} = W_{o,y_t} d_t$$ (Eq. 6) where $A_{t,y_t}$ shows how confident the network is that the label on the $t$ -th word is $y_t$ . $B_{y_t,y_{t+1}}$ shows the likelihood of transitioning from label $y_t$ to label $y_{t+1}$ , and these values are optimised during training. The output from the model is the sequence of labels with the largest score $s(y)$ , which can be found efficiently using the Viterbi algorithm. In order to optimise the CRF model, the loss function maximises the score for the correct label sequence, while minimising the scores for all other sequences: $$E = - s(y) + log \sum _{\tilde{y} \in \widetilde{Y}} e^{s(\tilde{y})}$$ (Eq. 7) where $\widetilde{Y}$ is the set of all possible label sequences. Character-level sequence labeling Distributed embeddings map words into a space where semantically similar words have similar vector representations, allowing the models to generalise better. 
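The following is a minimal PyTorch-style sketch of the word-level tagger described above (Eqs. 1-4). The original system was implemented in Theano, so the class and variable names here are illustrative, the layer sizes are placeholders rather than the paper's settings, and the CRF output layer (Eqs. 5-7) is omitted for brevity.

```python
import torch
import torch.nn as nn

class WordBiLSTMTagger(nn.Module):
    """Word-level tagger sketch: embeddings -> BiLSTM -> narrow tanh layer -> label scores."""
    def __init__(self, vocab_size, n_labels, emb_dim=300, lstm_dim=200, hidden_dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Forward and backward LSTMs over the token sequence, concatenated per position (Eq. 1).
        self.bilstm = nn.LSTM(emb_dim, lstm_dim, bidirectional=True, batch_first=True)
        # Extra narrow hidden layer d_t = tanh(W_d h_t) (Eq. 2).
        self.narrow = nn.Linear(2 * lstm_dim, hidden_dim, bias=False)
        # Output weights W_o producing per-label scores for the softmax (Eq. 3).
        self.out = nn.Linear(hidden_dim, n_labels, bias=False)

    def forward(self, token_ids):                  # token_ids: (batch, T)
        x = self.embed(token_ids)                  # (batch, T, emb_dim)
        h, _ = self.bilstm(x)                      # h_t = [fwd; bwd], (batch, T, 2*lstm_dim)
        d = torch.tanh(self.narrow(h))             # (batch, T, hidden_dim)
        return self.out(d)                         # unnormalised label scores

# Training with the softmax output minimises categorical cross-entropy (Eq. 4), e.g.
# torch.nn.functional.cross_entropy(scores.view(-1, n_labels), gold_labels.view(-1))
```

A CRF output layer would replace the per-token softmax with the sequence-level scores of Eqs. 5-7 and Viterbi decoding at prediction time.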
However, they still treat words as atomic units and ignore any surface- or morphological similarities between different words. By constructing models that operate over individual characters in each word, we can take advantage of these regularities. This can be particularly useful for handling unseen words – for example, if we have never seen the word cabinets before, a character-level model could still infer a representation for this word if it has previously seen the word cabinet and other words with the suffix -s. In contrast, a word-level model can only represent this word with a generic out-of-vocabulary representation, which is shared between all other unseen words. Research into character-level models is still in fairly early stages, and models that operate exclusively on characters are not yet competitive to word-level models on most tasks. However, instead of fully replacing word embeddings, we are interested in combining the two approaches, thereby allowing the model to take advantage of information at both granularity levels. The general outline of our approach is shown in Figure 2 . Each word is broken down into individual characters, these are then mapped to a sequence of character embeddings $(c_1, ..., c_R)$ , which are passed through a bidirectional LSTM: $$\overrightarrow{h^*_i} = LSTM(c_i, \overrightarrow{h^*_{i-1}}) \hspace{30.0pt} \overleftarrow{h^*_i} = LSTM(c_i, \overleftarrow{h^*_{i+1}})$$ (Eq. 9) We then use the last hidden vectors from each of the LSTM components, concatenate them together, and pass the result through a separate non-linear layer. $$h^* = [\overrightarrow{h^*_R};\overleftarrow{h^*_1}] \hspace{30.0pt} m = tanh(W_m h^*)$$ (Eq. 10) where $W_m$ is a weight matrix mapping the concatenated hidden vectors from both LSTMs into a joint word representation $m$ , built from individual characters. We now have two alternative feature representations for each word – $x_t$ from Section "Bidirectional LSTM for sequence labeling" is an embedding learned on the word level, and $m^{(t)}$ is a representation dynamically built from individual characters in the $t$ -th word of the input text. Following Lample2016, one possible approach is to concatenate the two vectors and use this as the new word-level representation for the sequence labeling model: $$\widetilde{x} = [x; m]$$ (Eq. 11) This approach, also illustrated in Figure 2 , assumes that the word-level and character-level components learn somewhat disjoint information, and it is beneficial to give them separately as input to the sequence labeler. Attention over character features Alternatively, we can have the word embedding and the character-level component learn the same semantic features for each word. Instead of concatenating them as alternative feature sets, we specifically construct the network so that they would learn the same representations, and then allow the model to decide how to combine the information for each specific word. We first construct the word representation from characters using the same architecture – a bidirectional LSTM operates over characters, and the last hidden states are used to create vector $m$ for the input word. Instead of concatenating this with the word embedding, the two vectors are added together using a weighted sum, where the weights are predicted by a two-layer network: $$z = \sigma (W^{(3)}_z tanh(W^{(1)}_{z} x + W^{(2)}_{z} m)) \hspace{30.0pt} \widetilde{x} = z\cdot x + (1-z) \cdot m$$ (Eq. 
13) where $W^{(1)}_{z}$, $W^{(2)}_{z}$ and $W^{(3)}_{z}$ are weight matrices for calculating $z$, and $\sigma ()$ is the logistic function with values in the range $[0,1]$. The vector $z$ has the same dimensions as $x$ or $m$, acting as the weight between the two vectors. It allows the model to dynamically decide how much information to use from the character-level component or from the word embedding. This decision is made for each feature separately, which adds extra flexibility – for example, words with regular suffixes can share some character-level features, whereas irregular words can store exceptions into word embeddings. Furthermore, previously unknown words are able to use character-level regularities whenever possible, and are still able to revert to using the generic OOV token when necessary. The main benefits of character-level modeling are expected to come from improved handling of rare and unseen words, whereas frequent words are likely able to learn high-quality word-level embeddings directly. We would like to take advantage of this, and train the character component to predict these word embeddings. Our attention-based architecture requires the learned features in both word representations to align, and we can add an extra constraint to encourage this. During training, we add a term to the loss function that optimises the vector $m$ to be similar to the word embedding $x$ : $$\widetilde{E} = E + \sum _{t=1}^{T} g_t (1 - cos(m^{(t)}, x_t)) \hspace{30.0pt} g_t = {\left\lbrace \begin{array}{ll} 0, & \text{if}\ w_t = OOV \\ 1, & \text{otherwise} \end{array}\right.}$$ (Eq. 14) Equation 14 maximises the cosine similarity between $m^{(t)}$ and $x_t$. Importantly, this is done only for words that are not out-of-vocabulary – we want the character-level component to learn from the word embeddings, but this should exclude the OOV embedding, as it is shared between many words. We use $g_t$ to set this cost component to 0 for any OOV tokens. While the character component learns general regularities that are shared between all the words, individual word embeddings provide a way for the model to store word-specific information and any exceptions. Therefore, while we want the character-based model to shift towards predicting high-quality word embeddings, it is not desirable to optimise the word embeddings towards the character-level representations. This can be achieved by making sure that the optimisation is performed only in one direction; in Theano BIBREF3, the disconnected_grad function gives the desired effect. Datasets We evaluate the sequence labeling models and character architectures on 8 different datasets. Table 1 contains information about the number of labels and dataset sizes for each of them. Experiment settings For data preprocessing, all digits were replaced with the character '0'. Any words that occurred only once in the training data were replaced by the generic OOV token for word embeddings, but were still used in the character-level components. The word embeddings were initialised with publicly available pretrained vectors, created using word2vec BIBREF12, and then fine-tuned during model training. For the general-domain datasets we used 300-dimensional vectors trained on Google News; for the biomedical datasets we used 200-dimensional vectors trained on PubMed and PMC. The embeddings for characters were set to length 50 and initialised randomly. The LSTM layer size was set to 200 in each direction for both word- and character-level components.
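Before moving to the experimental setup, here is a minimal sketch of the gating combination (Eq. 13) and the auxiliary cosine objective (Eq. 14) described above. It assumes PyTorch rather than the Theano implementation used in the paper; all names are illustrative, and `detach` plays the role of `disconnected_grad`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedCharWordCombination(nn.Module):
    """Combine word embedding x and character-based vector m with a gate z (Eq. 13)."""
    def __init__(self, dim=300):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)   # W^(1)_z, applied to x
        self.w2 = nn.Linear(dim, dim, bias=False)   # W^(2)_z, applied to m
        self.w3 = nn.Linear(dim, dim, bias=False)   # W^(3)_z

    def forward(self, x, m):
        z = torch.sigmoid(self.w3(torch.tanh(self.w1(x) + self.w2(m))))
        return z * x + (1.0 - z) * m                # per-feature weighted sum \tilde{x}

def char_to_word_loss(m, x, oov_mask):
    """Auxiliary term sum_t g_t * (1 - cos(m_t, x_t)) from Eq. 14; g_t = 0 for OOV tokens.
    Detaching x stops gradients flowing into the word embeddings, so only the character
    component is optimised towards them (the effect of disconnected_grad in Theano)."""
    cos = F.cosine_similarity(m, x.detach(), dim=-1)
    g = (~oov_mask).float()                         # oov_mask is True at OOV positions
    return (g * (1.0 - cos)).sum()
```

The returned term is simply added to the main tagging loss to obtain $\widetilde{E}$.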
The hidden layer $d$ has size 50, and the combined representation $m$ has the same length as the word embeddings. CRF was used as the output layer for all the experiments – we found that this gave the most benefit on tasks with larger numbers of possible labels. Parameters were optimised using AdaDelta BIBREF13 with default learning rate $1.0$ and sentences were grouped into batches of size 64. Performance on the development set was measured at every epoch and training was stopped if performance had not improved for 7 epochs; the best-performing model on the development set was then used for evaluation on the test set. In order to avoid any outlier results due to randomness in the model initialisation, we trained each configuration with 10 different random seeds and present here the averaged results. When evaluating on each dataset, we report the measures established in previous work. Token-level accuracy is used for PTB-POS and GENIA-POS; $F_{0.5}$ score over the erroneous words for FCEPUBLIC; the official evaluation script for BC2GM, which allows for alternative correct entity spans; and micro-averaged mention-level $F_{1}$ score for the remaining datasets. Results While optimising the hyperparameters for each dataset separately would likely improve individual performance, we conduct more controlled experiments on a task-independent model. Therefore, we use the same hyperparameters from Section "Experiment settings" on all datasets, and the development set is only used for the stopping condition. With these experiments, we wish to determine 1) on which sequence labeling tasks character-based models offer an advantage, and 2) which character-based architecture performs better. Results for the different model architectures on all 8 datasets are shown in Table 2 . As can be seen, including a character-based component in the sequence labeling architecture improves performance on every benchmark. The NER datasets have the largest absolute improvement – the model is able to learn character-level patterns for names, and also improve the handling of any previously unseen tokens. The attention-based character model also outperforms the concatenation of word- and character-level representations on all evaluations. The mechanism for dynamically deciding how much character-level information to use allows the model to better handle individual word representations, giving it an advantage in the experiments. Visualisation of the attention values in Figure 3 shows that the model is actively using character-based features, and the attention areas vary between different words. The results of this general tagging architecture are competitive, even when compared to previous work using hand-crafted features. The network achieves 97.27% on PTB-POS compared to 97.55% by Huang2015, and 72.70% on JNLPBA compared to 72.55% by Zhou2004. In some cases, we are also able to beat the previous best results – 87.99% on BC2GM compared to 87.48% by Campos2015, and 41.88% on FCEPUBLIC compared to 41.1% by Rei2016. Lample2016 report a considerably higher result of 90.94% on CoNLL03, indicating that the chosen hyperparameters for the baseline system are suboptimal for this specific task. Compared to the experiments presented here, their model used the IOBES tagging scheme instead of the original IOB, and embeddings pretrained with a more specialised method that accounts for word order.
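The training and evaluation protocol behind these numbers is early stopping with patience 7 on the development set, repeated over 10 random seeds, with the mean test score reported. A rough sketch is given below; `build_model`, `train_one_epoch` and `evaluate` are assumed helper functions introduced for illustration and are not part of the paper's code.

```python
import random, statistics, torch

def train_with_early_stopping(build_model, patience=7, max_epochs=200):
    model = build_model()
    best_dev, best_state, epochs_without_gain = float("-inf"), None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model)                     # AdaDelta, batch size 64 in the paper
        dev_score = evaluate(model, split="dev")   # dev set only used for stopping
        if dev_score > best_dev:
            best_dev, best_state, epochs_without_gain = dev_score, model.state_dict(), 0
        else:
            epochs_without_gain += 1
            if epochs_without_gain >= patience:
                break
    model.load_state_dict(best_state)              # best dev model is used for testing
    return evaluate(model, split="test")

test_scores = []
for seed in range(10):                             # 10 random seeds, averaged results
    torch.manual_seed(seed); random.seed(seed)
    test_scores.append(train_with_early_stopping(build_model))
print("averaged test score:", statistics.mean(test_scores))
```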
It is important to also compare the parameter counts of alternative neural architectures, as this shows their learning capacity and indicates their time requirements in practice. Table 3 contains the parameter counts on three representative datasets. While keeping the model hyperparameters constant, the character-level models require additional parameters for the character composition and character embeddings. However, the attention-based model uses fewer parameters compared to the concatenation approach. When the two representations are concatenated, the overall word representation size is increased, which in turn increases the number of parameters required for the word-level bidirectional LSTM. Therefore, the attention-based character architecture achieves improved results even with a smaller parameter footprint. Related work There is a wide range of previous work on constructing and optimising neural architectures applicable to sequence labeling. Collobert2011 described one of the first task-independent neural tagging models using convolutional neural networks. They were able to achieve good results on POS tagging, chunking, NER and semantic role labeling, without relying on hand-engineered features. Irsoy2014a experimented with multi-layer bidirectional Elman-style recurrent networks, and found that the deep models outperformed conditional random fields on the task of opinion mining. Huang2015 described a bidirectional LSTM model with a CRF layer, which included hand-crafted features specialised for the task of named entity recognition. Rei2016 evaluated a range of neural architectures, including convolutional and recurrent networks, on the task of error detection in learner writing. The word-level sequence labeling model described in this paper follows the previous work, combining useful design choices from each of them. In addition, we extended the model with two alternative character-level architectures, and evaluated its performance on 8 different datasets. Character-level models have the potential of capturing morpheme patterns, thereby improving generalisation on both frequent and unseen words. In recent years, there has been an increase in research into these models, resulting in several interesting applications. Ling2015b described a character-level neural model for machine translation, performing both encoding and decoding on individual characters. Kim2016 implemented a language model where encoding is performed by a convolutional network and LSTM over characters, whereas predictions are given on the word-level. Cao2016 proposed a method for learning both word embeddings and morphological segmentation with a bidirectional recurrent network over characters. There is also research on performing parsing BIBREF14 and text classification BIBREF15 with character-level neural models. Ling2015a proposed a neural architecture that replaces word embeddings with dynamically-constructed character-based representations. We applied a similar method for operating over characters, but combined them with word embeddings instead of replacing them, as this allows the model to benefit from both approaches. Lample2016 described a model where the character-level representation is combined with word embeddings through concatenation. In this work, we proposed an alternative architecture, where the representations are combined using an attention mechanism, and evaluated both approaches on a range of tasks and datasets. 
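To make the parameter comparison concrete, here is a back-of-the-envelope calculation under assumed sizes (300-dimensional word embeddings and character representations $m$, LSTM size 200 per direction). It is not the exact accounting behind Table 3, but it shows where the difference comes from: concatenation doubles the input size of the word-level BiLSTM, while the gate matrices of Eq. 13 add comparatively few parameters.

```python
def lstm_params(input_size, hidden_size):
    # 4 gates, each with input weights, recurrent weights and a bias, per direction.
    return 4 * (input_size * hidden_size + hidden_size * hidden_size + hidden_size)

emb, hidden = 300, 200
concat_bilstm = 2 * lstm_params(2 * emb, hidden)       # input is [x; m], 600-dimensional
attention_bilstm = 2 * lstm_params(emb, hidden)        # input is the gated 300-d vector
gate_extra = 3 * emb * emb                             # W^(1..3)_z from Eq. 13
print(concat_bilstm - (attention_bilstm + gate_extra)) # positive: concat uses more parameters
```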
Recently, Miyamoto2016 have also described a related method for the task of language modelling, combining characters and word embeddings using gating. Conclusion Developments in neural network research allow for model architectures that work well on a wide range of sequence labeling datasets without requiring hand-crafted features. While word-level representation learning is a powerful tool for automatically discovering useful features, these models still come with certain weaknesses – rare words have low-quality representations, previously unseen words cannot be modeled at all, and morpheme-level information is not shared with the whole vocabulary. In this paper, we investigated character-level model components for a sequence labeling architecture, which allow the system to learn useful patterns from sub-word units. In addition to a bidirectional LSTM operating over words, a separate bidirectional LSTM is used to construct word representations from individual characters. We proposed a novel architecture for combining the character-based representation with the word embedding by using an attention mechanism, allowing the model to dynamically choose how much information to use from each source. In addition, the character-level composition function is augmented with a novel training objective, optimising it to predict representations that are similar to the word embeddings in the model. The evaluation was performed on 8 different sequence labeling datasets, covering a range of tasks and domains. We found that incorporating character-level information into the model improved performance on every benchmark, indicating that capturing features regarding characters and morphemes is indeed useful in a general-purpose tagging system. In addition, the attention-based model for combining character representations outperformed the concatenation method used in previous work in all evaluations. Even though the proposed method requires fewer parameters, the added ability to control how much character-level information is used for each word has led to improved performance on a range of different tasks.
Unanswerable
96c20af8bbef435d0d534d10c42ae15ff2f926f8
96c20af8bbef435d0d534d10c42ae15ff2f926f8_0
Q: What translationese effects are seen in the analysis? Text: Introduction In the present paper, we analyse coreference in the output of three neural machine translation systems (NMT) that were trained under different settings. We use a transformer architecture BIBREF0 and train it on corpora of different sizes with and without the specific coreference information. Transformers are the current state-of-the-art in NMT BIBREF1 and are solely based on attention, therefore, the kind of errors they produce might be different from other architectures such as CNN or RNN-based ones. Here we focus on one architecture to study the different errors produced only under different data configurations. Coreference is an important component of discourse coherence which is achieved in how discourse entities (and events) are introduced and discussed. Coreference chains contain mentions of one and the same discourse element throughout a text. These mentions are realised by a variety of linguistic devices such as pronouns, nominal phrases (NPs) and other linguistic means. As languages differ in the range of such linguistic means BIBREF2, BIBREF3, BIBREF4, BIBREF5 and in their contextual restrictions BIBREF6, these differences give rise to problems that may result in incoherent (automatic) translations. We focus on coreference chains in English-German translations belonging to two different genres. In German, pronouns, articles and adjectives (and some nouns) are subject to grammatical gender agreement, whereas in English, only person pronouns carry gender marking. An incorrect translation of a pronoun or a nominal phrase may lead to an incorrect relation in a discourse and will destroy a coreference chain. Recent studies in automatic coreference translation have shown that dedicated systems can lead to improvements in pronoun translation BIBREF7, BIBREF8. However, standard NMT systems work at sentence level, so improvements in NMT translate into improvements on pronouns with intra-sentential antecedents, but the phenomenon of coreference is not limited to anaphoric pronouns, and even less to a subset of them. Document-level machine translation (MT) systems are needed to deal with coreference as a whole. Although some attempts to include extra-sentential information exist BIBREF9, BIBREF10, BIBREF11, BIBREF12, the problem is far from being solved. Besides that, some further problems of NMT that do not seem to be related to coreference at first glance (such as translation of unknown words and proper names or the hallucination of additional words) cause coreference-related errors. In our work, we focus on the analysis of complete coreference chains, manually annotating them in the three translation variants. We also evaluate them from the point of view of coreference chain translation. The goal of this paper is two-fold. On the one hand, we are interested in various properties of coreference chains in these translations. They include total number of chains, average chain length, the size of the longest chain and the total number of annotated mentions. These features are compared to those of the underlying source texts and also the corresponding human translation reference. On the other hand, we are also interested in the quality of coreference translations. Therefore, we define a typology of errors, and and chain members in MT output are annotated as to whether or not they are correct. 
The main focus is on such errors as gender, number and case of the mentions, but we also consider wrong word selection or missing words in a chain. Unlike previous work, we do not restrict ourselves to pronouns. Our analyses show that there are further errors that are not directly related to coreference but consequently have an influence on the correctness of coreference chains. The remainder of the paper is organised as follows. Section SECREF2 introduces the main concepts and presents an overview of related MT studies. Section SECREF3 provides details on the data, systems used and annotation procedures. Section SECREF4 analyses the performance of our transformer systems on coreferent mentions. Finally we summarise and draw conclusions in Section SECREF5. Background and Related Work ::: Coreference Coreference is related to cohesion and coherence. The latter is the logical flow of inter-related ideas in a text, whereas cohesion refers to the text-internal relationship of linguistic elements that are overtly connected via lexico-grammatical devices across sentences BIBREF13. As stated by BIBREF14, this connectedness of texts implies dependencies between sentences. And if these dependencies are neglected in translation, the output text no longer has the property of connectedness which makes a sequence of sentences a text. Coreference expresses identity to a referent mentioned in another textual part (not necessarily in neighbouring sentences) contributing to text connectedness. An addressee is following the mentioned referents and identifies them when they are repeated. Identification of certain referents depends not only on a lexical form, but also on other linguistic means, e.g. articles or modifying pronouns BIBREF15. The use of these is influenced by various factors which can be language-dependent (range of linguistic means available in grammar) and also context-independent (pragmatic situation, genre). Thus, the means of expressing reference differ across languages and genres. This has been shown by some studies in the area of contrastive linguistics BIBREF6, BIBREF3, BIBREF5. Analyses in cross-lingual coreference resolution BIBREF16, BIBREF17, BIBREF18, BIBREF19 show that there are still unsolved problems that should be addressed. Background and Related Work ::: Translation studies Differences between languages and genres in the linguistic means expressing reference are important for translation, as the choice of an appropriate referring expression in the target language poses challenges for both human and machine translation. In translation studies, there is a number of corpus-based works analysing these differences in translation. However, most of them are restricted to individual phenomena within coreference. For instance, BIBREF20 analyse abstract anaphors in English-German translations. To our knowledge, they do not consider chains. BIBREF21 in their contrastive analysis of potential coreference chain members in English-German translations, describe transformation patterns that contain different types of referring expressions. However, the authors rely on automatic tagging and parsing procedures and do not include chains into their analysis. The data used by BIBREF4 and BIBREF22 contain manual chain annotations. The authors focus on different categories of anaphoric pronouns in English-Czech translations, though not paying attention to chain features (e.g. their number or size). Chain features are considered in a contrastive analysis by BIBREF6. 
Their study concerns different phenomena in a variety of genres in English and German comparable texts. Using contrastive interpretations, they suggest preferred translation strategies from English into German, i.e. translators should use demonstrative pronouns instead of personal pronouns (e.g. dies/das instead of es/it) when translating from English into German and vice versa. However, corpus-based studies show that translators do not necessarily apply such strategies. Instead, they often preserve the source language anaphor's categories BIBREF20 which results in the shining through effects BIBREF23. Moreover, due to the tendency of translators to explicitly realise meanings in translations that were implicit in the source texts BIBREF24, translations are believed to contain more (explicit) referring expressions, and subsequently, more (and longer) coreference chains. Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation. These features include number of mentions, number of chains, average chain length and the longest chain size. Machine-translated texts are compared to their sources and the corresponding human translations in terms of these features. We expect to find shining through and explicitation effects in automatic translations. Background and Related Work ::: Coreference in MT As explained in the introduction, several recent works tackle the automatic translation of pronouns and also coreference BIBREF25, BIBREF26 and this has, in part, motivated the creation of devoted shared tasks and test sets to evaluate the quality of pronoun translation BIBREF7, BIBREF27, BIBREF28, BIBREF29. But coreference is a wider phenomenon that affects more linguistic elements. Noun phrases also appear in coreference chains but they are usually studied under coherence and consistency in MT. BIBREF30 use topic modelling to extract coherence chains in the source, predict them in the target and then promote them as translations. BIBREF31 use word embeddings to enforce consistency within documents. Before these works, several methods to post-process the translations and even including a second decoding pass were used BIBREF32, BIBREF33, BIBREF34, BIBREF35. Recent NMT systems that include context deal with both phenomena, coreference and coherence, but usually context is limited to the previous sentence, so chains as a whole are never considered. BIBREF10 encode both a source and a context sentence and then combine them to obtain a context-aware input. The same idea was implemented before by BIBREF36 where they concatenate a source sentence with the previous one to include context. Caches BIBREF37, memory networks BIBREF38 and hierarchical attention methods BIBREF39 allow to use a wider context. Finally, our work is also related to BIBREF40 and BIBREF41 where their oracle translations are similar to the data-based approach we introduce in Section SECREF4. Systems, Methods and Resources ::: State-of-the-art NMT Our NMT systems are based on a transformer architecture BIBREF0 as implemented in the Marian toolkit BIBREF42 using the transformer big configuration. We train three systems (S1, S2 and S3) with the corpora summarised in Table TABREF5. The first two systems are transformer models trained on different amounts of data (6M vs. 18M parallel sentences as seen in the Table). 
The third system includes a modification to consider the information of full coreference chains throughout a document augmenting the sentence to be translated with this information and it is trained with the same amount of sentence pairs as S1. A variant of the S3 system participated in the news machine translation of the shared task held at WMT 2019 BIBREF43. Systems, Methods and Resources ::: State-of-the-art NMT ::: S1 is trained with the concatenation of Common Crawl, Europarl, a cleaned version of Rapid and the News Commentary corpus. We oversample the latter in order to have a significant representation of data close to the news genre in the final corpus. Systems, Methods and Resources ::: State-of-the-art NMT ::: S2 uses the same data as S1 with the addition of a filtered portion of Paracrawl. This corpus is known to be noisy, so we use it to create a larger training corpus but it is diluted by a factor 4 to give more importance to high quality translations. Systems, Methods and Resources ::: State-of-the-art NMT ::: S3 S3 uses the same data as S1, but this time enriched with the cross- and intra-sentential coreference chain markup as described below. The information is included as follows. Source documents are annotated with coreference chains using the neural annotator of Stanford CoreNLP BIBREF44. The tool detects pronouns, nominal phrases and proper names as mentions in a chain. For every mention, CoreNLP extracts its gender (male, female, neutral, unknown), number (singular, plural, unknown), and animacy (animate, inanimate, unknown). This information is not added directly but used to enrich the single sentence-based MT training data by applying a set of heuristics implemented in DocTrans: We enrich pronominal mentions with the exception of "I" with the head (main noun phrase) of the chain. The head is cleaned by removing articles and Saxon genitives and we only consider heads with less than 4 tokens in order to avoid enriching a word with a full sentence We enrich nominal mentions including proper names with the gender of the head The head itself is enriched with she/he/it/they depending on its gender and animacy The enrichment is done with the addition of tags as shown in the examples: I never cook with $<$b_crf$>$ salt $<$e_crf$>$ it. $<$b_crf$>$ she $<$e_crf$>$ Biles arrived late. In the first case heuristic 1 is used, salt is the head of the chain and it is prepended to the pronoun. The second example shows a sentence where heuristic 2 has been used and the proper name Biles has now information about the gender of the person it is referring to. Afterwards, the NMT system is trained at sentence level in the usual way. The data used for the three systems is cleaned, tokenised, truecased with Moses scripts and BPEd with subword-nmt using separated vocabularies with 50 k subword units each. The validation set ($news2014$) and the test sets described in the following section are pre-processed in the same way. Systems, Methods and Resources ::: Test data under analysis As one of our aims is to compare coreference chain properties in automatic translation with those of the source texts and human reference, we derive data from ParCorFull, an English-German corpus annotated with full coreference chains BIBREF46. The corpus contains ca. 160.7 thousand tokens manually annotated with about 14.9 thousand mentions and 4.7 thousand coreference chains. 
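A rough sketch of the enrichment heuristics described above is given below. The actual implementation is in the authors' DocTrans code, so the data layout (`mentions` as per-sentence CoreNLP output with the chain head already resolved and cleaned) and the exact handling of heuristics 2 and 3 are assumptions made purely for illustration.

```python
PRONOUN_TAG_OPEN, PRONOUN_TAG_CLOSE = "<b_crf>", "<e_crf>"
GENDER_PRONOUN = {("male", "animate"): "he", ("female", "animate"): "she",
                  ("neutral", "inanimate"): "it"}          # fallback below is "they"

def enrich_sentence(tokens, mentions):
    out = list(tokens)
    # Process mentions right-to-left so earlier insertion offsets stay valid.
    for m in sorted(mentions, key=lambda m: m["start"], reverse=True):
        if m["type"] == "pronominal" and m["text"].lower() != "i":
            head = m["head"]                               # cleaned head, < 4 tokens (heuristic 1)
            tag = [PRONOUN_TAG_OPEN, *head.split(), PRONOUN_TAG_CLOSE]
        elif m["type"] in ("nominal", "proper"):
            pron = GENDER_PRONOUN.get((m["gender"], m["animacy"]), "they")
            tag = [PRONOUN_TAG_OPEN, pron, PRONOUN_TAG_CLOSE]  # approximates heuristics 2-3
        else:
            continue
        out[m["start"]:m["start"]] = tag                   # prepend tag before the mention
    return " ".join(out)

# e.g. enrich_sentence("I never cook with it .".split(),
#      [{"type": "pronominal", "text": "it", "start": 4, "head": "salt",
#        "gender": "neutral", "animacy": "inanimate"}])
# -> "I never cook with <b_crf> salt <e_crf> it ."
```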
For our analysis, we select a portion of English news texts and TED talks from ParCorFull and translate them with the three NMT systems described in SECREF4 above. As texts considerably differ in their length, we select 17 news texts (494 sentences) and four TED talks (518 sentences). The size (in tokens) of the total data set under analysis – source (src) and human translations (ref) from ParCorFull and the automatic translations produced within this study (S1, S2 and S3) are presented in Table TABREF20. Notably, automatic translations of TED talks contain more words than the corresponding reference translation, which means that machine-translated texts of this type have also more potential tokens to enter in a coreference relation, and potentially indicating a shining through effect. The same does not happen with the news test set. Systems, Methods and Resources ::: Manual annotation process The English sources and their corresponding human translations into German were already manually annotated for coreference chains. We follow the same scheme as BIBREF47 to annotate the MT outputs with coreference chains. This scheme allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause). The mentions can be defined further in terms of their cohesive function (antecedent, anaphoric, cataphoric, comparative, substitution, ellipsis, apposition). Antecedents can either be marked as simple or split or as entity or event. The annotation scheme also includes pronoun type (personal, possessive, demonstrative, reflexive, relative) and modifier types of NPs (possessive, demonstrative, definite article, or none for proper names), see BIBREF46 for details. The mentions referring to the same discourse item are linked between each other. We use the annotation tool MMAX2 BIBREF48 which was also used for the annotation of ParCorFull. In the next step, chain members are annotated for their correctness. For the incorrect translations of mentions, we include the following error categories: gender, number, case, ambiguous and other. The latter category is open, which means that the annotators can add their own error types during the annotation process. With this, the final typology of errors also considered wrong named entity, wrong word, missing word, wrong syntactic structure, spelling error and addressee reference. The annotation of machine-translated texts was integrated into a university course on discourse phenomena. Our annotators, well-trained students of linguistics, worked in small groups on the assigned annotation tasks (4-5 texts, i.e. 12-15 translations per group). At the beginning of the annotation process, the categories under analysis were discussed within the small groups and also in the class. The final versions of the annotation were then corrected by the instructor. Results and Analyses ::: Chain features First, we compare the distribution of several chain features in the three MT outputs, their source texts and the corresponding human translations. Table TABREF20 shows that, overall, all machine translations contain a greater number of annotated mentions in both news texts and TED talks than in the annotated source (src and src$_{\rm CoreNLP}$) and reference (ref) texts. Notice that src$_{\rm CoreNLP}$ —where coreferences are not manually but automatically annotated with CoreNLP— counts also the tokens that the mentions add to the sentences, but not the tags. 
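The chain features compared below (number of mentions, number of chains, average chain length and longest chain) can be computed directly from the annotations. A minimal sketch, assuming each chain is stored as the list of its mentions for one document or system output:

```python
def chain_features(chains):
    """chains: mapping from chain id to the list of its annotated mentions."""
    lengths = [len(mentions) for mentions in chains.values()]
    return {
        "mentions": sum(lengths),
        "chains": len(lengths),
        "avg_chain_length": sum(lengths) / len(lengths) if lengths else 0.0,
        "longest_chain": max(lengths, default=0),
    }

# e.g. chain_features({"c1": ["the cost", "it"], "c2": ["Biles", "she", "her"]})
# -> {'mentions': 5, 'chains': 2, 'avg_chain_length': 2.5, 'longest_chain': 3}
```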
The larger number of mentions may indicate a strong explicitation effect observed in machine-translated texts. Interestingly, CoreNLP detects a similar number of mentions in both genres, while human annotators clearly marked more chains for TED than for news. Both genres are in fact quite different in nature; whereas only $37\%$ of the mentions are pronominal in news texts (343 out of 915), the number grows to $58\%$ for TED (577 out of 989), and this could be an indicator of the difficulty of the genres for NMT systems. There is also a variation in terms of chain number between translations of TED talks and news. While automatic translations of news texts contain more chains than the corresponding human annotated sources and references, machine-translated TED talks contain less chains than the sources and human translations. However, there is not much variation between the chain features of the three MT outputs. The chains are also longer in machine-translated output than in reference translations as can be seen by the number of mentions per chain and the length of the longest chain. Results and Analyses ::: MT quality at system level We evaluate the quality of the three transformer engines with two automatic metrics, BLEU BIBREF49 and METEOR BIBREF50. Table TABREF25 shows the scores in two cases: all, when the complete texts are evaluated and coref, when only the subset of sentences that have been augmented in S3 are considered – 265 out of 494 for news and 239 out of 518 for TED. For news, the best system is that trained on more data, S2; but for TED talks S3 with less data has the best performance. The difference between the behaviour of the systems can be related to the different genres. We have seen that news are dominated by nominal mentions while TED is dominated by pronominal ones. Pronouns mostly need coreference information to be properly translated, while noun phrases can be improved simply because more instances of the nouns appear in the training data. With this, S3 improves the baseline S1 in +1.1 BLEU points for TED$_{coref}$ but -0.2 BLEU points for news$_{coref}$. However, even if the systems differ in the overall performance, the change is not related to the number of errors in coreference chains. Table TABREF25 also reports the number of mistakes in the translation of coreferent mentions. Whereas the number of errors correlates with translation quality (as measured by BLEU) for news$_{coref}$ this is not the case of TED$_{coref}$. Results and Analyses ::: Error analysis The total distribution for the 10 categories of errors defined in Section SECREF23 can be seen in Figure FIGREF29. Globally, the proportion of errors due to our closed categories (gender, number, case and ambiguous) is larger for TED talks than for news (see analysis in Section SECREF28). Gender is an issue with all systems and genres which does not get solved by the addition of more data. Additionally, news struggle with wrong words and named entities; for this genre the additional error types (see analysis in Section SECREF30) represent around 60% of the errors of S1/S3 to be compared to the 40% of TED talks. Results and Analyses ::: Error analysis ::: Predefined error categories 0.4em 0.4Within our predefined closed categories (gender, number, case and ambiguous), the gender errors belong to the most frequent errors. 
They include wrong gender translation of both pronouns, as sie (“her”) instead of ihn (“him”) in example SECREF28 referring to the masculine noun Mindestlohn, and nominal phrases, as der Stasi instead of die Stasi, where a masculine form of the definite article is used instead of a feminine one, in example SECREF28. .src: [The current minimum wage] of 7.25 US dollars is a pittance... She wants to raise [it] to 15 dollars an hour. S3: [Der aktuelle Mindestlohn] von 7,25 US-Dollar sei Almosen... Sie möchte [sie] auf 15 Dollar pro Stunde erhöhen. . src: ...let's have a short look at the history of [the Stasi], because it is really important for understanding [its] self-conception. S2: Lassen sie uns... einen kurzen Blick auf die Geschichte [des Stasi] werfen denn es wirklich wichtig, [seine] Selbstauffassung zu verstehen. The gender-related errors are common to all the automatic translations. Interestingly, systems S1 and S3 have more problems with gender in translations of TED talks, whereas they do better in translating news, which leads us to assume that this is a data-dependent issue: while the antecedent for news is in the same sentence it is not for TED talks. A closer look at the texts with a high number of gender problems confirms this assumption —they contain references to females who were translated with male forms of nouns and pronouns (e.g. Mannschaftskapitän instead of Mannschaftskapitänin). We also observe errors related to gender for the cases of explicitation in translation. Some impersonal English constructions not having direct equivalents in German are translated with personal constructions, which requires an addition of a pronoun. Such cases of explicitation were automatically detected in parallel data in BIBREF21, BIBREF2. They belong to the category of obligatory explicitation, i.e. explicitation dictated by differences in the syntactic and semantic structure of languages, as defined by BIBREF51. An MT system tends to insert a male form instead of a female one even if it's marked as feminine (S3 adds the feminine form she as markup), as illustrated in example SECREF28 where the automatic translation contains the masculine pronoun er (“he”) instead of sie (“she”). . src: [Biles] earned the first one on Tuesday while serving as the exclamation point to retiring national team coordinator Martha Karolyi's going away party. ref: [Biles] holte die erste Medaille am Dienstag, während [sie] auf der Abschiedsfeier der sich in Ruhestand begehenden Mannschaftskoordinatorin Martha Karolyi als Ausrufezeichen diente. S2: [Biles] verdiente den ersten am Dienstag, während [er] als Ausrufezeichen für den pensionierten Koordinator der Nationalmannschaft, Martha Karolyi, diente. Another interesting case of a problem related to gender is the dependence of the referring expressions on grammatical restrictions in German. In example SECREF28, the source chain contains the pronoun him referring to both a 6-year-old boy and The child. In German, these two nominal phrases have different gender (masculine vs. neutral). The pronoun has grammatical agreement with the second noun of the chain (des Kindes) and not its head (ein 6 Jahre alter Junge). . src: Police say [a 6-year-old boy] has been shot in Philadelphia... [The child]'s grandparents identified [him] to CBS Philadelphia as [Mahaj Brown]. S1: Die Polizei behauptet, [ein 6 Jahre alter Junge] sei in Philadelphia erschossen worden... Die Großeltern [des Kindes] identifizierten [ihn] mit CBS Philadelphia als [Mahaj Brown]. 
Case- and number-related errors are less frequent in our data. However, translations of TED talks with S2 contain much more number-related errors than other outputs. Example SECREF28 illustrates this error type which occurs within a sentence. The English source contains the nominal chain in singular the cost – it, whereas the German correspondence Kosten has a plural form and requires a plural pronoun (sie). However, the automatic translation contains the singular pronoun es. . src: ...to the point where [the cost] is now below 1,000 dollars, and it's confidently predicted that by the year 2015 [it] will be below 100 dollars... S2: bis zu dem Punkt, wo [die Kosten] jetzt unter 1.000 Dollar liegen, und es ist zuversichtlich, dass [es] bis zum Jahr 2015 unter 100 Dollar liegen wird... Ambiguous cases often contain a combination of errors or they are difficult to categorise due to the ambiguity of the source pronouns, as the pronoun it in example SECREF28 which may refer either to the noun trouble or even the clause Democracy is in trouble is translated with the pronoun sie (feminine). In case of the first meaning, the pronoun would be correct, but the form of the following verb should be in plural. In case of a singular form, we would need to use a demonstrative pronoun dies (or possibly the personal pronoun es). . src: Democracy is in trouble... and [it] comes in part from a deep dilemma... S2: Die Demokratie steckt in Schwierigkeiten ... und [sie] rührt teilweise aus einem tiefen Dilemma her... Results and Analyses ::: Error analysis ::: Additional error types At first glance, the error types discussed in this section do not seem to be related to coreference —a wrong translation of a noun can be traced back to the training data available and the way NMT deals with unknown words. However, a wrong translation of a noun may result in its invalidity to be a referring expression for a certain discourse item. As a consequence, a coreference chain is damaged. We illustrate a chain with a wrong named entity translation in example SECREF30. The source chain contains five nominal mentions referring to an American gymnast Aly Raisman: silver medalist – “Final Five” teammate – Aly Raisman – Aly Raisman – Raisman. All the three systems used different names. Example SECREF30 illustrates the translation with S2, where Aly Donovan and Aly Encence were used instead of Aly Raisman, and the mention Raisman disappears completely from the chain. . src: Her total of 62.198 was well clear of [silver medalist] and [“Final Five” teammate] [Aly Raisman]...United States' Simone Biles, left, and [Aly Raisman] embrace after winning gold and silver respectively... [Raisman]'s performance was a bit of revenge from four years ago, when [she] tied... S2: Ihre Gesamtmenge von 62.198 war deutlich von [Silbermedaillengewinner] und [“Final Five” Teamkollegen] [Aly Donovan]... Die Vereinigten Staaten Simone Biles, links und [Aly Encence] Umarmung nach dem Gewinn von Gold und Silber... Vor vier Jahren, als [sie]... Example SECREF30 illustrates translation of the chain The scaling in the opposite direction – that scale. The noun phrases Die Verlagerung in die entgegengesetzte Richtung (“the shift in the opposite direction”) and dieses Ausmaß (“extent/scale”) used in the S1 output do not corefer (cf. Wachstum in die entgegengesetzte Richtung and Wachstum in the reference translation). Notice that these cases with long noun phrases are not tackled by S3 either. . 
src: [The scaling in the opposite direction]...drive the structure of business towards the creation of new kinds of institutions that can achieve [that scale]. ref: [Wachstum in die entgegengesetzte Richtung]... steuert die Struktur der Geschäfte in Richtung Erschaffung von neuen Institutionen, die [dieses Wachstum] erreichen können. S1: [Die Verlagerung in die entgegengesetzte Richtung]... treibt die Struktur der Unternehmen in Richtung der Schaffung neuer Arten von Institutionen, die [dieses Ausmaß] erreichen können. Results and Analyses ::: Error analysis ::: Types of erroneous mentions Finally, we also analyse the types of the mentions marked as errors. They include either nominal phrases or pronouns. Table TABREF32 shows that there is a variation between the news texts and TED talks in terms of these features. News contain more erroneous nominal phrases, whereas TED talks contain more pronoun-related errors. Whereas both the news and the TED talks have more errors in translating anaphors, there is a higher proportion of erroneous antecedents in the news than in the TED talks. It is also interesting to see that S3 reduces the percentage of errors in anaphors for TED, but has a similar performance to S2 on news. Summary and Conclusions We analysed coreferences in the translation outputs of three transformer systems that differ in the training data and in whether they have access to explicit intra- and cross-sentential anaphoric information (S3) or not (S1, S2). We see that the translation errors are more dependent on the genre than on the nature of the specific NMT system: whereas news (with mainly NP mentions) contain a majority of errors related to wrong word selection, TED talks (with mainly pronominal mentions) are prone to accumulate errors on gender and number. System S3 was specifically designed to solve this issue, but we cannot trace the improvement from S1 to S3 by just counting the errors and error types, as some errors disappear and others emerge: coreference quality and automatic translation quality do not correlate in our analysis on TED talks. As a further improvement to address the issue, we could add more parallel data to our training corpus with a higher density of coreference chains such as movie subtitles or parallel TED talks. We also characterised the originals and translations according to coreference features such as total number of chains and mentions, average chain length and size of the longest chain. We see how NMT translations increase the number of mentions about $30\%$ with respect to human references showing even a more marked explicitation effect than human translations do. As future work, we consider a more detailed comparison of the human and machine translations, and analyse the purpose of the additional mentions added by the NMT systems. It would be also interesting to evaluate of the quality of the automatically computed coreferences chains used for S3. Acknowledgments The annotation work was performed at Saarland University. We thank Anna Felsing, Francesco Fernicola, Viktoria Henn, Johanna Irsch, Kira Janine Jebing, Alicia Lauer, Friederike Lessau and Christina Pollkläsener for performing the manual annotation of the NMT outputs. The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee) and by the German Research Foundation (DFG) as part of SFB 1102 Information Density and Linguistic Encoding. 
Responsibility for the content of this publication is with the authors.
potentially indicating a shining through effect, explicitation effect
9544cc0244db480217ce9174aa13f1bf09ba0d94
9544cc0244db480217ce9174aa13f1bf09ba0d94_0
Q: What languages are seen in the news and TED datasets? Text: Introduction In the present paper, we analyse coreference in the output of three neural machine translation systems (NMT) that were trained under different settings. We use a transformer architecture BIBREF0 and train it on corpora of different sizes with and without the specific coreference information. Transformers are the current state-of-the-art in NMT BIBREF1 and are solely based on attention, therefore, the kind of errors they produce might be different from other architectures such as CNN or RNN-based ones. Here we focus on one architecture to study the different errors produced only under different data configurations. Coreference is an important component of discourse coherence which is achieved in how discourse entities (and events) are introduced and discussed. Coreference chains contain mentions of one and the same discourse element throughout a text. These mentions are realised by a variety of linguistic devices such as pronouns, nominal phrases (NPs) and other linguistic means. As languages differ in the range of such linguistic means BIBREF2, BIBREF3, BIBREF4, BIBREF5 and in their contextual restrictions BIBREF6, these differences give rise to problems that may result in incoherent (automatic) translations. We focus on coreference chains in English-German translations belonging to two different genres. In German, pronouns, articles and adjectives (and some nouns) are subject to grammatical gender agreement, whereas in English, only person pronouns carry gender marking. An incorrect translation of a pronoun or a nominal phrase may lead to an incorrect relation in a discourse and will destroy a coreference chain. Recent studies in automatic coreference translation have shown that dedicated systems can lead to improvements in pronoun translation BIBREF7, BIBREF8. However, standard NMT systems work at sentence level, so improvements in NMT translate into improvements on pronouns with intra-sentential antecedents, but the phenomenon of coreference is not limited to anaphoric pronouns, and even less to a subset of them. Document-level machine translation (MT) systems are needed to deal with coreference as a whole. Although some attempts to include extra-sentential information exist BIBREF9, BIBREF10, BIBREF11, BIBREF12, the problem is far from being solved. Besides that, some further problems of NMT that do not seem to be related to coreference at first glance (such as translation of unknown words and proper names or the hallucination of additional words) cause coreference-related errors. In our work, we focus on the analysis of complete coreference chains, manually annotating them in the three translation variants. We also evaluate them from the point of view of coreference chain translation. The goal of this paper is two-fold. On the one hand, we are interested in various properties of coreference chains in these translations. They include total number of chains, average chain length, the size of the longest chain and the total number of annotated mentions. These features are compared to those of the underlying source texts and also the corresponding human translation reference. On the other hand, we are also interested in the quality of coreference translations. Therefore, we define a typology of errors, and and chain members in MT output are annotated as to whether or not they are correct. 
The main focus is on such errors as gender, number and case of the mentions, but we also consider wrong word selection or missing words in a chain. Unlike previous work, we do not restrict ourselves to pronouns. Our analyses show that there are further errors that are not directly related to coreference but consequently have an influence on the correctness of coreference chains. The remainder of the paper is organised as follows. Section SECREF2 introduces the main concepts and presents an overview of related MT studies. Section SECREF3 provides details on the data, systems used and annotation procedures. Section SECREF4 analyses the performance of our transformer systems on coreferent mentions. Finally we summarise and draw conclusions in Section SECREF5. Background and Related Work ::: Coreference Coreference is related to cohesion and coherence. The latter is the logical flow of inter-related ideas in a text, whereas cohesion refers to the text-internal relationship of linguistic elements that are overtly connected via lexico-grammatical devices across sentences BIBREF13. As stated by BIBREF14, this connectedness of texts implies dependencies between sentences. And if these dependencies are neglected in translation, the output text no longer has the property of connectedness which makes a sequence of sentences a text. Coreference expresses identity to a referent mentioned in another textual part (not necessarily in neighbouring sentences) contributing to text connectedness. An addressee is following the mentioned referents and identifies them when they are repeated. Identification of certain referents depends not only on a lexical form, but also on other linguistic means, e.g. articles or modifying pronouns BIBREF15. The use of these is influenced by various factors which can be language-dependent (range of linguistic means available in grammar) and also context-independent (pragmatic situation, genre). Thus, the means of expressing reference differ across languages and genres. This has been shown by some studies in the area of contrastive linguistics BIBREF6, BIBREF3, BIBREF5. Analyses in cross-lingual coreference resolution BIBREF16, BIBREF17, BIBREF18, BIBREF19 show that there are still unsolved problems that should be addressed. Background and Related Work ::: Translation studies Differences between languages and genres in the linguistic means expressing reference are important for translation, as the choice of an appropriate referring expression in the target language poses challenges for both human and machine translation. In translation studies, there is a number of corpus-based works analysing these differences in translation. However, most of them are restricted to individual phenomena within coreference. For instance, BIBREF20 analyse abstract anaphors in English-German translations. To our knowledge, they do not consider chains. BIBREF21 in their contrastive analysis of potential coreference chain members in English-German translations, describe transformation patterns that contain different types of referring expressions. However, the authors rely on automatic tagging and parsing procedures and do not include chains into their analysis. The data used by BIBREF4 and BIBREF22 contain manual chain annotations. The authors focus on different categories of anaphoric pronouns in English-Czech translations, though not paying attention to chain features (e.g. their number or size). Chain features are considered in a contrastive analysis by BIBREF6. 
Their study concerns different phenomena in a variety of genres in English and German comparable texts. Using contrastive interpretations, they suggest preferred translation strategies from English into German, i.e. translators should use demonstrative pronouns instead of personal pronouns (e.g. dies/das instead of es/it) when translating from English into German and vice versa. However, corpus-based studies show that translators do not necessarily apply such strategies. Instead, they often preserve the source language anaphor's categories BIBREF20 which results in the shining through effects BIBREF23. Moreover, due to the tendency of translators to explicitly realise meanings in translations that were implicit in the source texts BIBREF24, translations are believed to contain more (explicit) referring expressions, and subsequently, more (and longer) coreference chains. Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation. These features include number of mentions, number of chains, average chain length and the longest chain size. Machine-translated texts are compared to their sources and the corresponding human translations in terms of these features. We expect to find shining through and explicitation effects in automatic translations. Background and Related Work ::: Coreference in MT As explained in the introduction, several recent works tackle the automatic translation of pronouns and also coreference BIBREF25, BIBREF26 and this has, in part, motivated the creation of devoted shared tasks and test sets to evaluate the quality of pronoun translation BIBREF7, BIBREF27, BIBREF28, BIBREF29. But coreference is a wider phenomenon that affects more linguistic elements. Noun phrases also appear in coreference chains but they are usually studied under coherence and consistency in MT. BIBREF30 use topic modelling to extract coherence chains in the source, predict them in the target and then promote them as translations. BIBREF31 use word embeddings to enforce consistency within documents. Before these works, several methods to post-process the translations and even including a second decoding pass were used BIBREF32, BIBREF33, BIBREF34, BIBREF35. Recent NMT systems that include context deal with both phenomena, coreference and coherence, but usually context is limited to the previous sentence, so chains as a whole are never considered. BIBREF10 encode both a source and a context sentence and then combine them to obtain a context-aware input. The same idea was implemented before by BIBREF36 where they concatenate a source sentence with the previous one to include context. Caches BIBREF37, memory networks BIBREF38 and hierarchical attention methods BIBREF39 allow to use a wider context. Finally, our work is also related to BIBREF40 and BIBREF41 where their oracle translations are similar to the data-based approach we introduce in Section SECREF4. Systems, Methods and Resources ::: State-of-the-art NMT Our NMT systems are based on a transformer architecture BIBREF0 as implemented in the Marian toolkit BIBREF42 using the transformer big configuration. We train three systems (S1, S2 and S3) with the corpora summarised in Table TABREF5. The first two systems are transformer models trained on different amounts of data (6M vs. 18M parallel sentences as seen in the Table). 
The third system includes a modification that takes the information of full coreference chains throughout a document into account by augmenting the sentence to be translated with this information; it is trained with the same number of sentence pairs as S1. A variant of the S3 system participated in the news machine translation shared task held at WMT 2019 BIBREF43. Systems, Methods and Resources ::: State-of-the-art NMT ::: S1 is trained with the concatenation of Common Crawl, Europarl, a cleaned version of Rapid and the News Commentary corpus. We oversample the latter in order to have a significant representation of data close to the news genre in the final corpus. Systems, Methods and Resources ::: State-of-the-art NMT ::: S2 uses the same data as S1 with the addition of a filtered portion of Paracrawl. This corpus is known to be noisy, so we use it to create a larger training corpus, but it is diluted by a factor of 4 to give more importance to high-quality translations. Systems, Methods and Resources ::: State-of-the-art NMT ::: S3 S3 uses the same data as S1, but this time enriched with the cross- and intra-sentential coreference chain markup as described below. The information is included as follows. Source documents are annotated with coreference chains using the neural annotator of Stanford CoreNLP BIBREF44. The tool detects pronouns, nominal phrases and proper names as mentions in a chain. For every mention, CoreNLP extracts its gender (male, female, neutral, unknown), number (singular, plural, unknown), and animacy (animate, inanimate, unknown). This information is not added directly but used to enrich the single sentence-based MT training data by applying a set of heuristics implemented in DocTrans: (1) we enrich pronominal mentions, with the exception of "I", with the head (main noun phrase) of the chain; the head is cleaned by removing articles and Saxon genitives, and we only consider heads with fewer than 4 tokens in order to avoid enriching a word with a full sentence. (2) We enrich nominal mentions, including proper names, with the gender of the head. (3) The head itself is enriched with she/he/it/they depending on its gender and animacy. The enrichment is done with the addition of tags as shown in the examples: I never cook with <b_crf> salt <e_crf> it. <b_crf> she <e_crf> Biles arrived late. In the first case, heuristic 1 is used: salt is the head of the chain and it is prepended to the pronoun. The second example shows a sentence where heuristic 2 has been used, and the proper name Biles now carries information about the gender of the person it is referring to (a schematic sketch of these heuristics is given below). Afterwards, the NMT system is trained at sentence level in the usual way. The data used for the three systems is cleaned, tokenised, truecased with Moses scripts and BPEd with subword-nmt, using separate vocabularies with 50k subword units each. The validation set ($news2014$) and the test sets described in the following section are pre-processed in the same way. Systems, Methods and Resources ::: Test data under analysis As one of our aims is to compare coreference chain properties in automatic translation with those of the source texts and human reference, we derive data from ParCorFull, an English-German corpus annotated with full coreference chains BIBREF46. The corpus contains ca. 160.7 thousand tokens manually annotated with about 14.9 thousand mentions and 4.7 thousand coreference chains. 
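Returning to the S3 data enrichment described above, the following is a minimal Python sketch of heuristics (1)-(3). The mention dictionaries and their fields (chain_head, head_pronoun, start, kind) are illustrative assumptions, not the actual DocTrans interface, which operates on CoreNLP output and may differ in its details.

```python
from typing import List, Optional

ARTICLES = {"the", "a", "an"}

def clean_head(head_tokens: List[str]) -> Optional[List[str]]:
    """Drop articles and Saxon genitives; discard heads with 4 or more tokens."""
    cleaned = [t for t in head_tokens if t.lower() not in ARTICLES and t != "'s"]
    return cleaned if 0 < len(cleaned) < 4 else None

def tag(tokens: List[str]) -> List[str]:
    """Wrap the enrichment tokens in <b_crf> ... <e_crf> tags."""
    return ["<b_crf>"] + tokens + ["<e_crf>"]

def enrich_sentence(tokens, mentions):
    """
    mentions: list of dicts with keys 'start' (token index), 'kind'
    ('pronoun' or 'nominal'), 'surface', 'chain_head' (token list) and
    'head_pronoun' ('she'/'he'/'it'/'they'). Returns the augmented tokens.
    """
    out, insertions = list(tokens), []
    for m in mentions:
        if m["kind"] == "pronoun" and m["surface"].lower() != "i":  # heuristic (1)
            head = clean_head(m["chain_head"])
            if head:
                insertions.append((m["start"], tag(head)))
        elif m["kind"] == "nominal":                                # heuristics (2)/(3)
            insertions.append((m["start"], tag([m["head_pronoun"]])))
    # insert from right to left so earlier token indices stay valid
    for pos, extra in sorted(insertions, reverse=True):
        out[pos:pos] = extra
    return out

# reproduces the first example from the text
print(" ".join(enrich_sentence(
    ["I", "never", "cook", "with", "it", "."],
    [{"start": 4, "kind": "pronoun", "surface": "it",
      "chain_head": ["the", "salt"], "head_pronoun": "it"}])))
# -> I never cook with <b_crf> salt <e_crf> it .
```

The enriched sentences are then fed to the usual sentence-level training pipeline, so the only change with respect to S1 is the augmented source side.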
For our analysis, we select a portion of English news texts and TED talks from ParCorFull and translate them with the three NMT systems described in SECREF4 above. As the texts considerably differ in their length, we select 17 news texts (494 sentences) and four TED talks (518 sentences). The size (in tokens) of the total data set under analysis – the source (src) and human translations (ref) from ParCorFull and the automatic translations produced within this study (S1, S2 and S3) – is presented in Table TABREF20. Notably, automatic translations of TED talks contain more words than the corresponding reference translation, which means that machine-translated texts of this type also have more potential tokens to enter into a coreference relation, potentially indicating a shining through effect. The same does not happen with the news test set. Systems, Methods and Resources ::: Manual annotation process The English sources and their corresponding human translations into German were already manually annotated for coreference chains. We follow the same scheme as BIBREF47 to annotate the MT outputs with coreference chains. This scheme allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause). The mentions can be defined further in terms of their cohesive function (antecedent, anaphoric, cataphoric, comparative, substitution, ellipsis, apposition). Antecedents can be marked either as simple or split, or as entity or event. The annotation scheme also includes pronoun type (personal, possessive, demonstrative, reflexive, relative) and modifier types of NPs (possessive, demonstrative, definite article, or none for proper names); see BIBREF46 for details. The mentions referring to the same discourse item are linked to each other. We use the annotation tool MMAX2 BIBREF48, which was also used for the annotation of ParCorFull. In the next step, chain members are annotated for their correctness. For the incorrect translations of mentions, we include the following error categories: gender, number, case, ambiguous and other. The latter category is open, which means that the annotators can add their own error types during the annotation process. With this, the final typology of errors also considered wrong named entity, wrong word, missing word, wrong syntactic structure, spelling error and addressee reference. The annotation of machine-translated texts was integrated into a university course on discourse phenomena. Our annotators, well-trained students of linguistics, worked in small groups on the assigned annotation tasks (4-5 texts, i.e. 12-15 translations per group). At the beginning of the annotation process, the categories under analysis were discussed within the small groups and also in the class. The final versions of the annotation were then corrected by the instructor. Results and Analyses ::: Chain features First, we compare the distribution of several chain features in the three MT outputs, their source texts and the corresponding human translations. Table TABREF20 shows that, overall, all machine translations contain a greater number of annotated mentions in both news texts and TED talks than in the annotated source (src and src$_{\rm CoreNLP}$) and reference (ref) texts. Notice that src$_{\rm CoreNLP}$ —where coreferences are not manually but automatically annotated with CoreNLP— also counts the tokens that the mentions add to the sentences, but not the tags. 
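The chain features compared in Table TABREF20 (number of chains, number of mentions, average chain length, longest chain) can be computed directly from the exported annotations. A minimal sketch, assuming each chain is available as a list of mention spans (the representation of a mention is irrelevant for these counts):

```python
from statistics import mean

def chain_features(chains):
    """
    chains: list of coreference chains, each chain being a list of mentions
    (e.g. (sentence_id, start, end) spans). Returns the four chain features.
    """
    lengths = [len(chain) for chain in chains]
    return {
        "num_chains": len(chains),
        "num_mentions": sum(lengths),
        "avg_chain_length": round(mean(lengths), 2) if lengths else 0.0,
        "longest_chain": max(lengths, default=0),
    }

# toy example: three chains with 5, 2 and 3 mentions
print(chain_features([list(range(5)), list(range(2)), list(range(3))]))
# {'num_chains': 3, 'num_mentions': 10, 'avg_chain_length': 3.33, 'longest_chain': 5}
```

Running the same function over the chains of src, ref and the three MT outputs gives directly comparable figures per text and per genre.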
The larger number of mentions may indicate a strong explicitation effect in machine-translated texts. Interestingly, CoreNLP detects a similar number of mentions in both genres, while human annotators clearly marked more chains for TED than for news. Both genres are in fact quite different in nature; whereas only $37\%$ of the mentions are pronominal in news texts (343 out of 915), the number grows to $58\%$ for TED (577 out of 989), and this could be an indicator of the difficulty of the genres for NMT systems. There is also a variation in terms of chain number between translations of TED talks and news. While automatic translations of news texts contain more chains than the corresponding human-annotated sources and references, machine-translated TED talks contain fewer chains than the sources and human translations. However, there is not much variation between the chain features of the three MT outputs. The chains are also longer in machine-translated output than in the reference translations, as can be seen from the number of mentions per chain and the length of the longest chain. Results and Analyses ::: MT quality at system level We evaluate the quality of the three transformer engines with two automatic metrics, BLEU BIBREF49 and METEOR BIBREF50. Table TABREF25 shows the scores in two cases: all, when the complete texts are evaluated, and coref, when only the subset of sentences that have been augmented in S3 is considered – 265 out of 494 for news and 239 out of 518 for TED. For news, the best system is the one trained on more data, S2; but for TED talks, S3, with less data, has the best performance. The difference in the behaviour of the systems can be related to the different genres. We have seen that news is dominated by nominal mentions while TED is dominated by pronominal ones. Pronouns mostly need coreference information to be properly translated, while noun phrases can be improved simply because more instances of the nouns appear in the training data. With this, S3 improves the baseline S1 by +1.1 BLEU points for TED$_{coref}$, but loses 0.2 BLEU points for news$_{coref}$. However, even if the systems differ in their overall performance, the change is not related to the number of errors in coreference chains. Table TABREF25 also reports the number of mistakes in the translation of coreferent mentions. Whereas the number of errors correlates with translation quality (as measured by BLEU) for news$_{coref}$, this is not the case for TED$_{coref}$. Results and Analyses ::: Error analysis The total distribution over the 10 categories of errors defined in Section SECREF23 can be seen in Figure FIGREF29. Globally, the proportion of errors due to our closed categories (gender, number, case and ambiguous) is larger for TED talks than for news (see analysis in Section SECREF28). Gender is an issue with all systems and genres, and it is not solved by the addition of more data. Additionally, news texts struggle with wrong words and named entities; for this genre the additional error types (see analysis in Section SECREF30) represent around 60% of the errors of S1/S3, compared to around 40% for TED talks. Results and Analyses ::: Error analysis ::: Predefined error categories Within our predefined closed categories (gender, number, case and ambiguous), the gender errors are among the most frequent ones. 
They include wrong gender translation of both pronouns, as sie (“her”) instead of ihn (“him”) in example SECREF28 referring to the masculine noun Mindestlohn, and nominal phrases, as der Stasi instead of die Stasi, where a masculine form of the definite article is used instead of a feminine one, in example SECREF28. .src: [The current minimum wage] of 7.25 US dollars is a pittance... She wants to raise [it] to 15 dollars an hour. S3: [Der aktuelle Mindestlohn] von 7,25 US-Dollar sei Almosen... Sie möchte [sie] auf 15 Dollar pro Stunde erhöhen. . src: ...let's have a short look at the history of [the Stasi], because it is really important for understanding [its] self-conception. S2: Lassen sie uns... einen kurzen Blick auf die Geschichte [des Stasi] werfen denn es wirklich wichtig, [seine] Selbstauffassung zu verstehen. The gender-related errors are common to all the automatic translations. Interestingly, systems S1 and S3 have more problems with gender in translations of TED talks, whereas they do better in translating news, which leads us to assume that this is a data-dependent issue: while the antecedent for news is in the same sentence it is not for TED talks. A closer look at the texts with a high number of gender problems confirms this assumption —they contain references to females who were translated with male forms of nouns and pronouns (e.g. Mannschaftskapitän instead of Mannschaftskapitänin). We also observe errors related to gender for the cases of explicitation in translation. Some impersonal English constructions not having direct equivalents in German are translated with personal constructions, which requires an addition of a pronoun. Such cases of explicitation were automatically detected in parallel data in BIBREF21, BIBREF2. They belong to the category of obligatory explicitation, i.e. explicitation dictated by differences in the syntactic and semantic structure of languages, as defined by BIBREF51. An MT system tends to insert a male form instead of a female one even if it's marked as feminine (S3 adds the feminine form she as markup), as illustrated in example SECREF28 where the automatic translation contains the masculine pronoun er (“he”) instead of sie (“she”). . src: [Biles] earned the first one on Tuesday while serving as the exclamation point to retiring national team coordinator Martha Karolyi's going away party. ref: [Biles] holte die erste Medaille am Dienstag, während [sie] auf der Abschiedsfeier der sich in Ruhestand begehenden Mannschaftskoordinatorin Martha Karolyi als Ausrufezeichen diente. S2: [Biles] verdiente den ersten am Dienstag, während [er] als Ausrufezeichen für den pensionierten Koordinator der Nationalmannschaft, Martha Karolyi, diente. Another interesting case of a problem related to gender is the dependence of the referring expressions on grammatical restrictions in German. In example SECREF28, the source chain contains the pronoun him referring to both a 6-year-old boy and The child. In German, these two nominal phrases have different gender (masculine vs. neutral). The pronoun has grammatical agreement with the second noun of the chain (des Kindes) and not its head (ein 6 Jahre alter Junge). . src: Police say [a 6-year-old boy] has been shot in Philadelphia... [The child]'s grandparents identified [him] to CBS Philadelphia as [Mahaj Brown]. S1: Die Polizei behauptet, [ein 6 Jahre alter Junge] sei in Philadelphia erschossen worden... Die Großeltern [des Kindes] identifizierten [ihn] mit CBS Philadelphia als [Mahaj Brown]. 
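Stepping back from the individual examples, the per-category error distributions reported in Figure FIGREF29 and Table TABREF25 reduce to counting the manually assigned labels by system and genre. A minimal sketch, assuming the MMAX2 annotations have been exported to flat records (the record layout here is a hypothetical simplification of the actual export):

```python
from collections import Counter
from typing import NamedTuple

class ErrorRecord(NamedTuple):
    system: str    # "S1", "S2" or "S3"
    genre: str     # "news" or "TED"
    category: str  # one of the 10 error categories, e.g. "gender", "number"

def error_distribution(records, system=None, genre=None):
    """Count error categories, optionally restricted to one system and/or genre."""
    counts = Counter(
        r.category for r in records
        if (system is None or r.system == system)
        and (genre is None or r.genre == genre)
    )
    total = sum(counts.values())
    if total == 0:
        return {}
    return {cat: (n, round(100 * n / total, 1)) for cat, n in counts.most_common()}

# toy usage
recs = [ErrorRecord("S2", "TED", "gender"), ErrorRecord("S2", "TED", "number"),
        ErrorRecord("S1", "news", "wrong word")]
print(error_distribution(recs, genre="TED"))
# {'gender': (1, 50.0), 'number': (1, 50.0)}
```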
Case- and number-related errors are less frequent in our data. However, translations of TED talks with S2 contain much more number-related errors than other outputs. Example SECREF28 illustrates this error type which occurs within a sentence. The English source contains the nominal chain in singular the cost – it, whereas the German correspondence Kosten has a plural form and requires a plural pronoun (sie). However, the automatic translation contains the singular pronoun es. . src: ...to the point where [the cost] is now below 1,000 dollars, and it's confidently predicted that by the year 2015 [it] will be below 100 dollars... S2: bis zu dem Punkt, wo [die Kosten] jetzt unter 1.000 Dollar liegen, und es ist zuversichtlich, dass [es] bis zum Jahr 2015 unter 100 Dollar liegen wird... Ambiguous cases often contain a combination of errors or they are difficult to categorise due to the ambiguity of the source pronouns, as the pronoun it in example SECREF28 which may refer either to the noun trouble or even the clause Democracy is in trouble is translated with the pronoun sie (feminine). In case of the first meaning, the pronoun would be correct, but the form of the following verb should be in plural. In case of a singular form, we would need to use a demonstrative pronoun dies (or possibly the personal pronoun es). . src: Democracy is in trouble... and [it] comes in part from a deep dilemma... S2: Die Demokratie steckt in Schwierigkeiten ... und [sie] rührt teilweise aus einem tiefen Dilemma her... Results and Analyses ::: Error analysis ::: Additional error types At first glance, the error types discussed in this section do not seem to be related to coreference —a wrong translation of a noun can be traced back to the training data available and the way NMT deals with unknown words. However, a wrong translation of a noun may result in its invalidity to be a referring expression for a certain discourse item. As a consequence, a coreference chain is damaged. We illustrate a chain with a wrong named entity translation in example SECREF30. The source chain contains five nominal mentions referring to an American gymnast Aly Raisman: silver medalist – “Final Five” teammate – Aly Raisman – Aly Raisman – Raisman. All the three systems used different names. Example SECREF30 illustrates the translation with S2, where Aly Donovan and Aly Encence were used instead of Aly Raisman, and the mention Raisman disappears completely from the chain. . src: Her total of 62.198 was well clear of [silver medalist] and [“Final Five” teammate] [Aly Raisman]...United States' Simone Biles, left, and [Aly Raisman] embrace after winning gold and silver respectively... [Raisman]'s performance was a bit of revenge from four years ago, when [she] tied... S2: Ihre Gesamtmenge von 62.198 war deutlich von [Silbermedaillengewinner] und [“Final Five” Teamkollegen] [Aly Donovan]... Die Vereinigten Staaten Simone Biles, links und [Aly Encence] Umarmung nach dem Gewinn von Gold und Silber... Vor vier Jahren, als [sie]... Example SECREF30 illustrates translation of the chain The scaling in the opposite direction – that scale. The noun phrases Die Verlagerung in die entgegengesetzte Richtung (“the shift in the opposite direction”) and dieses Ausmaß (“extent/scale”) used in the S1 output do not corefer (cf. Wachstum in die entgegengesetzte Richtung and Wachstum in the reference translation). Notice that these cases with long noun phrases are not tackled by S3 either. . 
src: [The scaling in the opposite direction]...drive the structure of business towards the creation of new kinds of institutions that can achieve [that scale]. ref: [Wachstum in die entgegengesetzte Richtung]... steuert die Struktur der Geschäfte in Richtung Erschaffung von neuen Institutionen, die [dieses Wachstum] erreichen können. S1: [Die Verlagerung in die entgegengesetzte Richtung]... treibt die Struktur der Unternehmen in Richtung der Schaffung neuer Arten von Institutionen, die [dieses Ausmaß] erreichen können. Results and Analyses ::: Error analysis ::: Types of erroneous mentions Finally, we also analyse the types of the mentions marked as errors. They include either nominal phrases or pronouns. Table TABREF32 shows that there is a variation between the news texts and TED talks in terms of these features. News texts contain more erroneous nominal phrases, whereas TED talks contain more pronoun-related errors. Whereas both the news and the TED talks have more errors in translating anaphors, there is a higher proportion of erroneous antecedents in the news than in the TED talks. It is also interesting to see that S3 reduces the percentage of errors in anaphors for TED, but has a similar performance to S2 on news. Summary and Conclusions We analysed coreference in the translation outputs of three transformer systems that differ in the training data and in whether they have access to explicit intra- and cross-sentential anaphoric information (S3) or not (S1, S2). We see that the translation errors are more dependent on the genre than on the nature of the specific NMT system: whereas news (with mainly NP mentions) contain a majority of errors related to wrong word selection, TED talks (with mainly pronominal mentions) are prone to accumulate errors on gender and number. System S3 was specifically designed to solve this issue, but we cannot trace the improvement from S1 to S3 by just counting the errors and error types, as some errors disappear and others emerge: coreference quality and automatic translation quality do not correlate in our analysis on TED talks. As a further improvement to address the issue, we could add more parallel data with a higher density of coreference chains, such as movie subtitles or parallel TED talks, to our training corpus. We also characterised the originals and translations according to coreference features such as the total number of chains and mentions, the average chain length and the size of the longest chain. We see how NMT translations increase the number of mentions by about $30\%$ with respect to human references, showing an even more marked explicitation effect than human translations do. As future work, we consider a more detailed comparison of the human and machine translations, and an analysis of the purpose of the additional mentions added by the NMT systems. It would also be interesting to evaluate the quality of the automatically computed coreference chains used for S3. Acknowledgments The annotation work was performed at Saarland University. We thank Anna Felsing, Francesco Fernicola, Viktoria Henn, Johanna Irsch, Kira Janine Jebing, Alicia Lauer, Friederike Lessau and Christina Pollkläsener for performing the manual annotation of the NMT outputs. The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee) and by the German Research Foundation (DFG) as part of SFB 1102 Information Density and Linguistic Encoding. 
Responsibility for the content of this publication is with the authors.
English, German
c97a4a1c0e3d00137a9ae8d6fbb809ba6492991d
c97a4a1c0e3d00137a9ae8d6fbb809ba6492991d_0
Q: How are the coreference chain translations evaluated?
Unanswerable
3758669426e8fb55a4102564cf05f2864275041b
3758669426e8fb55a4102564cf05f2864275041b_0
Q: How are the (possibly incorrect) coreference chains in the MT outputs annotated?
The main focus is on such errors as gender, number and case of the mentions, but we also consider wrong word selection or missing words in a chain. Unlike previous work, we do not restrict ourselves to pronouns. Our analyses show that there are further errors that are not directly related to coreference but consequently have an influence on the correctness of coreference chains. The remainder of the paper is organised as follows. Section SECREF2 introduces the main concepts and presents an overview of related MT studies. Section SECREF3 provides details on the data, systems used and annotation procedures. Section SECREF4 analyses the performance of our transformer systems on coreferent mentions. Finally we summarise and draw conclusions in Section SECREF5. Background and Related Work ::: Coreference Coreference is related to cohesion and coherence. The latter is the logical flow of inter-related ideas in a text, whereas cohesion refers to the text-internal relationship of linguistic elements that are overtly connected via lexico-grammatical devices across sentences BIBREF13. As stated by BIBREF14, this connectedness of texts implies dependencies between sentences. And if these dependencies are neglected in translation, the output text no longer has the property of connectedness which makes a sequence of sentences a text. Coreference expresses identity to a referent mentioned in another textual part (not necessarily in neighbouring sentences) contributing to text connectedness. An addressee is following the mentioned referents and identifies them when they are repeated. Identification of certain referents depends not only on a lexical form, but also on other linguistic means, e.g. articles or modifying pronouns BIBREF15. The use of these is influenced by various factors which can be language-dependent (range of linguistic means available in grammar) and also context-independent (pragmatic situation, genre). Thus, the means of expressing reference differ across languages and genres. This has been shown by some studies in the area of contrastive linguistics BIBREF6, BIBREF3, BIBREF5. Analyses in cross-lingual coreference resolution BIBREF16, BIBREF17, BIBREF18, BIBREF19 show that there are still unsolved problems that should be addressed. Background and Related Work ::: Translation studies Differences between languages and genres in the linguistic means expressing reference are important for translation, as the choice of an appropriate referring expression in the target language poses challenges for both human and machine translation. In translation studies, there is a number of corpus-based works analysing these differences in translation. However, most of them are restricted to individual phenomena within coreference. For instance, BIBREF20 analyse abstract anaphors in English-German translations. To our knowledge, they do not consider chains. BIBREF21 in their contrastive analysis of potential coreference chain members in English-German translations, describe transformation patterns that contain different types of referring expressions. However, the authors rely on automatic tagging and parsing procedures and do not include chains into their analysis. The data used by BIBREF4 and BIBREF22 contain manual chain annotations. The authors focus on different categories of anaphoric pronouns in English-Czech translations, though not paying attention to chain features (e.g. their number or size). Chain features are considered in a contrastive analysis by BIBREF6. 
Their study concerns different phenomena in a variety of genres in English and German comparable texts. Using contrastive interpretations, they suggest preferred translation strategies from English into German, i.e. translators should use demonstrative pronouns instead of personal pronouns (e.g. dies/das instead of es/it) when translating from English into German and vice versa. However, corpus-based studies show that translators do not necessarily apply such strategies. Instead, they often preserve the source language anaphor's categories BIBREF20 which results in the shining through effects BIBREF23. Moreover, due to the tendency of translators to explicitly realise meanings in translations that were implicit in the source texts BIBREF24, translations are believed to contain more (explicit) referring expressions, and subsequently, more (and longer) coreference chains. Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation. These features include number of mentions, number of chains, average chain length and the longest chain size. Machine-translated texts are compared to their sources and the corresponding human translations in terms of these features. We expect to find shining through and explicitation effects in automatic translations. Background and Related Work ::: Coreference in MT As explained in the introduction, several recent works tackle the automatic translation of pronouns and also coreference BIBREF25, BIBREF26 and this has, in part, motivated the creation of devoted shared tasks and test sets to evaluate the quality of pronoun translation BIBREF7, BIBREF27, BIBREF28, BIBREF29. But coreference is a wider phenomenon that affects more linguistic elements. Noun phrases also appear in coreference chains but they are usually studied under coherence and consistency in MT. BIBREF30 use topic modelling to extract coherence chains in the source, predict them in the target and then promote them as translations. BIBREF31 use word embeddings to enforce consistency within documents. Before these works, several methods to post-process the translations and even including a second decoding pass were used BIBREF32, BIBREF33, BIBREF34, BIBREF35. Recent NMT systems that include context deal with both phenomena, coreference and coherence, but usually context is limited to the previous sentence, so chains as a whole are never considered. BIBREF10 encode both a source and a context sentence and then combine them to obtain a context-aware input. The same idea was implemented before by BIBREF36 where they concatenate a source sentence with the previous one to include context. Caches BIBREF37, memory networks BIBREF38 and hierarchical attention methods BIBREF39 allow to use a wider context. Finally, our work is also related to BIBREF40 and BIBREF41 where their oracle translations are similar to the data-based approach we introduce in Section SECREF4. Systems, Methods and Resources ::: State-of-the-art NMT Our NMT systems are based on a transformer architecture BIBREF0 as implemented in the Marian toolkit BIBREF42 using the transformer big configuration. We train three systems (S1, S2 and S3) with the corpora summarised in Table TABREF5. The first two systems are transformer models trained on different amounts of data (6M vs. 18M parallel sentences as seen in the Table). 
The third system includes a modification to consider the information of full coreference chains throughout a document augmenting the sentence to be translated with this information and it is trained with the same amount of sentence pairs as S1. A variant of the S3 system participated in the news machine translation of the shared task held at WMT 2019 BIBREF43. Systems, Methods and Resources ::: State-of-the-art NMT ::: S1 is trained with the concatenation of Common Crawl, Europarl, a cleaned version of Rapid and the News Commentary corpus. We oversample the latter in order to have a significant representation of data close to the news genre in the final corpus. Systems, Methods and Resources ::: State-of-the-art NMT ::: S2 uses the same data as S1 with the addition of a filtered portion of Paracrawl. This corpus is known to be noisy, so we use it to create a larger training corpus but it is diluted by a factor 4 to give more importance to high quality translations. Systems, Methods and Resources ::: State-of-the-art NMT ::: S3 S3 uses the same data as S1, but this time enriched with the cross- and intra-sentential coreference chain markup as described below. The information is included as follows. Source documents are annotated with coreference chains using the neural annotator of Stanford CoreNLP BIBREF44. The tool detects pronouns, nominal phrases and proper names as mentions in a chain. For every mention, CoreNLP extracts its gender (male, female, neutral, unknown), number (singular, plural, unknown), and animacy (animate, inanimate, unknown). This information is not added directly but used to enrich the single sentence-based MT training data by applying a set of heuristics implemented in DocTrans: We enrich pronominal mentions with the exception of "I" with the head (main noun phrase) of the chain. The head is cleaned by removing articles and Saxon genitives and we only consider heads with less than 4 tokens in order to avoid enriching a word with a full sentence We enrich nominal mentions including proper names with the gender of the head The head itself is enriched with she/he/it/they depending on its gender and animacy The enrichment is done with the addition of tags as shown in the examples: I never cook with $<$b_crf$>$ salt $<$e_crf$>$ it. $<$b_crf$>$ she $<$e_crf$>$ Biles arrived late. In the first case heuristic 1 is used, salt is the head of the chain and it is prepended to the pronoun. The second example shows a sentence where heuristic 2 has been used and the proper name Biles has now information about the gender of the person it is referring to. Afterwards, the NMT system is trained at sentence level in the usual way. The data used for the three systems is cleaned, tokenised, truecased with Moses scripts and BPEd with subword-nmt using separated vocabularies with 50 k subword units each. The validation set ($news2014$) and the test sets described in the following section are pre-processed in the same way. Systems, Methods and Resources ::: Test data under analysis As one of our aims is to compare coreference chain properties in automatic translation with those of the source texts and human reference, we derive data from ParCorFull, an English-German corpus annotated with full coreference chains BIBREF46. The corpus contains ca. 160.7 thousand tokens manually annotated with about 14.9 thousand mentions and 4.7 thousand coreference chains. 
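Returning for a moment to the S3 enrichment described above, the following is a minimal sketch of how heuristics 1-3 could be applied to a tokenised English sentence. It is not the actual DocTrans implementation: the Mention structure, its attribute values and the gender-to-pronoun mapping are assumptions about what a CoreNLP-style coreference pass would provide, and number handling (the "they" case) is omitted.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Mention:
    start: int        # token index of the mention in the sentence
    is_pronoun: bool
    text: str
    chain_head: str   # head noun phrase of the coreference chain
    gender: str       # 'male' | 'female' | 'neutral' | 'unknown'
    animacy: str      # 'animate' | 'inanimate' | 'unknown'

PRONOUN_FOR = {('male', 'animate'): 'he', ('female', 'animate'): 'she'}

def clean_head(head: str) -> str:
    # heuristic 1: drop articles and Saxon genitives, ignore heads of four or more tokens
    tokens = [t for t in head.split() if t.lower() not in {'the', 'a', 'an'} and t != "'s"]
    return ' '.join(tokens) if 0 < len(tokens) < 4 else ''

def enrich(tokens: List[str], mentions: List[Mention]) -> str:
    tag_before: Dict[int, str] = {}
    for m in mentions:
        if m.is_pronoun and m.text.lower() != 'i':
            head = clean_head(m.chain_head)
            if head:                                  # heuristic 1: prepend the chain head
                tag_before[m.start] = f"<b_crf> {head} <e_crf>"
        else:                                         # heuristics 2/3: mark the gender of the head
            pronoun = PRONOUN_FOR.get((m.gender, m.animacy), 'it')
            tag_before[m.start] = f"<b_crf> {pronoun} <e_crf>"
    out = []
    for i, token in enumerate(tokens):
        if i in tag_before:
            out.append(tag_before[i])
        out.append(token)
    return ' '.join(out)

# reproduces the first example from the paper
sentence = "I never cook with it .".split()
salt_it = Mention(start=4, is_pronoun=True, text='it',
                  chain_head='salt', gender='neutral', animacy='inanimate')
print(enrich(sentence, [salt_it]))   # I never cook with <b_crf> salt <e_crf> it .

After this augmentation the tags are simply additional source tokens, so training can proceed at sentence level exactly as described.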
For our analysis, we select a portion of English news texts and TED talks from ParCorFull and translate them with the three NMT systems described in SECREF4 above. As texts considerably differ in their length, we select 17 news texts (494 sentences) and four TED talks (518 sentences). The size (in tokens) of the total data set under analysis – source (src) and human translations (ref) from ParCorFull and the automatic translations produced within this study (S1, S2 and S3) are presented in Table TABREF20. Notably, automatic translations of TED talks contain more words than the corresponding reference translation, which means that machine-translated texts of this type have also more potential tokens to enter in a coreference relation, and potentially indicating a shining through effect. The same does not happen with the news test set. Systems, Methods and Resources ::: Manual annotation process The English sources and their corresponding human translations into German were already manually annotated for coreference chains. We follow the same scheme as BIBREF47 to annotate the MT outputs with coreference chains. This scheme allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause). The mentions can be defined further in terms of their cohesive function (antecedent, anaphoric, cataphoric, comparative, substitution, ellipsis, apposition). Antecedents can either be marked as simple or split or as entity or event. The annotation scheme also includes pronoun type (personal, possessive, demonstrative, reflexive, relative) and modifier types of NPs (possessive, demonstrative, definite article, or none for proper names), see BIBREF46 for details. The mentions referring to the same discourse item are linked between each other. We use the annotation tool MMAX2 BIBREF48 which was also used for the annotation of ParCorFull. In the next step, chain members are annotated for their correctness. For the incorrect translations of mentions, we include the following error categories: gender, number, case, ambiguous and other. The latter category is open, which means that the annotators can add their own error types during the annotation process. With this, the final typology of errors also considered wrong named entity, wrong word, missing word, wrong syntactic structure, spelling error and addressee reference. The annotation of machine-translated texts was integrated into a university course on discourse phenomena. Our annotators, well-trained students of linguistics, worked in small groups on the assigned annotation tasks (4-5 texts, i.e. 12-15 translations per group). At the beginning of the annotation process, the categories under analysis were discussed within the small groups and also in the class. The final versions of the annotation were then corrected by the instructor. Results and Analyses ::: Chain features First, we compare the distribution of several chain features in the three MT outputs, their source texts and the corresponding human translations. Table TABREF20 shows that, overall, all machine translations contain a greater number of annotated mentions in both news texts and TED talks than in the annotated source (src and src$_{\rm CoreNLP}$) and reference (ref) texts. Notice that src$_{\rm CoreNLP}$ —where coreferences are not manually but automatically annotated with CoreNLP— counts also the tokens that the mentions add to the sentences, but not the tags. 
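Since the comparison in Table TABREF20 rests on a handful of chain-level statistics, the following self-contained sketch shows how the four features (total mentions, number of chains, average chain length, longest chain) can be computed once a document's annotation has been reduced to a list of chains, each chain given as the list of its mentions. The toy chains below are purely illustrative and do not reproduce any figure from the table.

def chain_features(chains):
    # chains: list of coreference chains, each a list of mention strings
    lengths = [len(chain) for chain in chains]
    return {
        'mentions': sum(lengths),
        'chains': len(chains),
        'avg_chain_length': round(sum(lengths) / len(chains), 2) if chains else 0.0,
        'longest_chain': max(lengths, default=0),
    }

toy_document = [
    ['a 6-year-old boy', 'The child', 'him', 'Mahaj Brown'],
    ['the Stasi', 'its'],
]
print(chain_features(toy_document))
# {'mentions': 6, 'chains': 2, 'avg_chain_length': 3.0, 'longest_chain': 4}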
The larger number of mentions may indicate a strong explicitation effect observed in machine-translated texts. Interestingly, CoreNLP detects a similar number of mentions in both genres, while human annotators clearly marked more chains for TED than for news. Both genres are in fact quite different in nature; whereas only $37\%$ of the mentions are pronominal in news texts (343 out of 915), the number grows to $58\%$ for TED (577 out of 989), and this could be an indicator of the difficulty of the genres for NMT systems. There is also a variation in terms of chain number between translations of TED talks and news. While automatic translations of news texts contain more chains than the corresponding human annotated sources and references, machine-translated TED talks contain less chains than the sources and human translations. However, there is not much variation between the chain features of the three MT outputs. The chains are also longer in machine-translated output than in reference translations as can be seen by the number of mentions per chain and the length of the longest chain. Results and Analyses ::: MT quality at system level We evaluate the quality of the three transformer engines with two automatic metrics, BLEU BIBREF49 and METEOR BIBREF50. Table TABREF25 shows the scores in two cases: all, when the complete texts are evaluated and coref, when only the subset of sentences that have been augmented in S3 are considered – 265 out of 494 for news and 239 out of 518 for TED. For news, the best system is that trained on more data, S2; but for TED talks S3 with less data has the best performance. The difference between the behaviour of the systems can be related to the different genres. We have seen that news are dominated by nominal mentions while TED is dominated by pronominal ones. Pronouns mostly need coreference information to be properly translated, while noun phrases can be improved simply because more instances of the nouns appear in the training data. With this, S3 improves the baseline S1 in +1.1 BLEU points for TED$_{coref}$ but -0.2 BLEU points for news$_{coref}$. However, even if the systems differ in the overall performance, the change is not related to the number of errors in coreference chains. Table TABREF25 also reports the number of mistakes in the translation of coreferent mentions. Whereas the number of errors correlates with translation quality (as measured by BLEU) for news$_{coref}$ this is not the case of TED$_{coref}$. Results and Analyses ::: Error analysis The total distribution for the 10 categories of errors defined in Section SECREF23 can be seen in Figure FIGREF29. Globally, the proportion of errors due to our closed categories (gender, number, case and ambiguous) is larger for TED talks than for news (see analysis in Section SECREF28). Gender is an issue with all systems and genres which does not get solved by the addition of more data. Additionally, news struggle with wrong words and named entities; for this genre the additional error types (see analysis in Section SECREF30) represent around 60% of the errors of S1/S3 to be compared to the 40% of TED talks. Results and Analyses ::: Error analysis ::: Predefined error categories 0.4em 0.4Within our predefined closed categories (gender, number, case and ambiguous), the gender errors belong to the most frequent errors. 
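As a side note on the system-level scores above, the two evaluation conditions (all and coref) can be approximated with a few lines of Python around sacrebleu. This is a hedged sketch rather than the evaluation setup actually used in the paper: the file names and the index file listing the sentences augmented for S3 are hypothetical, and METEOR would require a separate tool.

import sacrebleu

def read_lines(path):
    with open(path, encoding='utf-8') as f:
        return [line.rstrip('\n') for line in f]

hyp = read_lines('ted.s3.de')       # detokenised system output, one sentence per line
ref = read_lines('ted.ref.de')      # human reference translation
coref_ids = {int(i) for i in read_lines('ted.coref_ids.txt')}   # indices of augmented sentences

print('BLEU all  :', sacrebleu.corpus_bleu(hyp, [ref]).score)

hyp_coref = [h for i, h in enumerate(hyp) if i in coref_ids]
ref_coref = [r for i, r in enumerate(ref) if i in coref_ids]
print('BLEU coref:', sacrebleu.corpus_bleu(hyp_coref, [ref_coref]).score)

The gender errors identified above as the most frequent category are illustrated next.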
They include wrong gender translation of both pronouns, as sie (“her”) instead of ihn (“him”) in example SECREF28 referring to the masculine noun Mindestlohn, and nominal phrases, as der Stasi instead of die Stasi, where a masculine form of the definite article is used instead of a feminine one, in example SECREF28. .src: [The current minimum wage] of 7.25 US dollars is a pittance... She wants to raise [it] to 15 dollars an hour. S3: [Der aktuelle Mindestlohn] von 7,25 US-Dollar sei Almosen... Sie möchte [sie] auf 15 Dollar pro Stunde erhöhen. . src: ...let's have a short look at the history of [the Stasi], because it is really important for understanding [its] self-conception. S2: Lassen sie uns... einen kurzen Blick auf die Geschichte [des Stasi] werfen denn es wirklich wichtig, [seine] Selbstauffassung zu verstehen. The gender-related errors are common to all the automatic translations. Interestingly, systems S1 and S3 have more problems with gender in translations of TED talks, whereas they do better in translating news, which leads us to assume that this is a data-dependent issue: while the antecedent for news is in the same sentence it is not for TED talks. A closer look at the texts with a high number of gender problems confirms this assumption —they contain references to females who were translated with male forms of nouns and pronouns (e.g. Mannschaftskapitän instead of Mannschaftskapitänin). We also observe errors related to gender for the cases of explicitation in translation. Some impersonal English constructions not having direct equivalents in German are translated with personal constructions, which requires an addition of a pronoun. Such cases of explicitation were automatically detected in parallel data in BIBREF21, BIBREF2. They belong to the category of obligatory explicitation, i.e. explicitation dictated by differences in the syntactic and semantic structure of languages, as defined by BIBREF51. An MT system tends to insert a male form instead of a female one even if it's marked as feminine (S3 adds the feminine form she as markup), as illustrated in example SECREF28 where the automatic translation contains the masculine pronoun er (“he”) instead of sie (“she”). . src: [Biles] earned the first one on Tuesday while serving as the exclamation point to retiring national team coordinator Martha Karolyi's going away party. ref: [Biles] holte die erste Medaille am Dienstag, während [sie] auf der Abschiedsfeier der sich in Ruhestand begehenden Mannschaftskoordinatorin Martha Karolyi als Ausrufezeichen diente. S2: [Biles] verdiente den ersten am Dienstag, während [er] als Ausrufezeichen für den pensionierten Koordinator der Nationalmannschaft, Martha Karolyi, diente. Another interesting case of a problem related to gender is the dependence of the referring expressions on grammatical restrictions in German. In example SECREF28, the source chain contains the pronoun him referring to both a 6-year-old boy and The child. In German, these two nominal phrases have different gender (masculine vs. neutral). The pronoun has grammatical agreement with the second noun of the chain (des Kindes) and not its head (ein 6 Jahre alter Junge). . src: Police say [a 6-year-old boy] has been shot in Philadelphia... [The child]'s grandparents identified [him] to CBS Philadelphia as [Mahaj Brown]. S1: Die Polizei behauptet, [ein 6 Jahre alter Junge] sei in Philadelphia erschossen worden... Die Großeltern [des Kindes] identifizierten [ihn] mit CBS Philadelphia als [Mahaj Brown]. 
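The kind of mismatch shown in these examples lends itself to a very simple automatic check once the chains are annotated: given the grammatical gender of the German antecedent head, the expected accusative pronoun can be compared with the one the system produced. The small lexicon and pronoun table below are illustrative assumptions, not a resource used in the paper.

GENDER_OF_HEAD = {'Mindestlohn': 'm', 'Stasi': 'f', 'Kind': 'n', 'Kosten': 'pl'}
ACCUSATIVE_PRONOUN = {'m': 'ihn', 'f': 'sie', 'n': 'es', 'pl': 'sie'}

def check_accusative_pronoun(head: str, pronoun: str):
    expected = ACCUSATIVE_PRONOUN.get(GENDER_OF_HEAD.get(head, ''))
    return pronoun == expected, expected

ok, expected = check_accusative_pronoun('Mindestlohn', 'sie')
print(ok, expected)   # False ihn -> the sie/ihn error from the first example above

A full check would of course also have to take case and number into account, which leads to the next group of errors.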
Case- and number-related errors are less frequent in our data. However, translations of TED talks with S2 contain much more number-related errors than other outputs. Example SECREF28 illustrates this error type which occurs within a sentence. The English source contains the nominal chain in singular the cost – it, whereas the German correspondence Kosten has a plural form and requires a plural pronoun (sie). However, the automatic translation contains the singular pronoun es. . src: ...to the point where [the cost] is now below 1,000 dollars, and it's confidently predicted that by the year 2015 [it] will be below 100 dollars... S2: bis zu dem Punkt, wo [die Kosten] jetzt unter 1.000 Dollar liegen, und es ist zuversichtlich, dass [es] bis zum Jahr 2015 unter 100 Dollar liegen wird... Ambiguous cases often contain a combination of errors or they are difficult to categorise due to the ambiguity of the source pronouns, as the pronoun it in example SECREF28 which may refer either to the noun trouble or even the clause Democracy is in trouble is translated with the pronoun sie (feminine). In case of the first meaning, the pronoun would be correct, but the form of the following verb should be in plural. In case of a singular form, we would need to use a demonstrative pronoun dies (or possibly the personal pronoun es). . src: Democracy is in trouble... and [it] comes in part from a deep dilemma... S2: Die Demokratie steckt in Schwierigkeiten ... und [sie] rührt teilweise aus einem tiefen Dilemma her... Results and Analyses ::: Error analysis ::: Additional error types At first glance, the error types discussed in this section do not seem to be related to coreference —a wrong translation of a noun can be traced back to the training data available and the way NMT deals with unknown words. However, a wrong translation of a noun may result in its invalidity to be a referring expression for a certain discourse item. As a consequence, a coreference chain is damaged. We illustrate a chain with a wrong named entity translation in example SECREF30. The source chain contains five nominal mentions referring to an American gymnast Aly Raisman: silver medalist – “Final Five” teammate – Aly Raisman – Aly Raisman – Raisman. All the three systems used different names. Example SECREF30 illustrates the translation with S2, where Aly Donovan and Aly Encence were used instead of Aly Raisman, and the mention Raisman disappears completely from the chain. . src: Her total of 62.198 was well clear of [silver medalist] and [“Final Five” teammate] [Aly Raisman]...United States' Simone Biles, left, and [Aly Raisman] embrace after winning gold and silver respectively... [Raisman]'s performance was a bit of revenge from four years ago, when [she] tied... S2: Ihre Gesamtmenge von 62.198 war deutlich von [Silbermedaillengewinner] und [“Final Five” Teamkollegen] [Aly Donovan]... Die Vereinigten Staaten Simone Biles, links und [Aly Encence] Umarmung nach dem Gewinn von Gold und Silber... Vor vier Jahren, als [sie]... Example SECREF30 illustrates translation of the chain The scaling in the opposite direction – that scale. The noun phrases Die Verlagerung in die entgegengesetzte Richtung (“the shift in the opposite direction”) and dieses Ausmaß (“extent/scale”) used in the S1 output do not corefer (cf. Wachstum in die entgegengesetzte Richtung and Wachstum in the reference translation). Notice that these cases with long noun phrases are not tackled by S3 either. . 
src: [The scaling in the opposite direction]...drive the structure of business towards the creation of new kinds of institutions that can achieve [that scale]. ref: [Wachstum in die entgegengesetzte Richtung]... steuert die Struktur der Geschäfte in Richtung Erschaffung von neuen Institutionen, die [dieses Wachstum] erreichen können. S1: [Die Verlagerung in die entgegengesetzte Richtung]... treibt die Struktur der Unternehmen in Richtung der Schaffung neuer Arten von Institutionen, die [dieses Ausmaß] erreichen können. Results and Analyses ::: Error analysis ::: Types of erroneous mentions Finally, we also analyse the types of the mentions marked as errors. They include either nominal phrases or pronouns. Table TABREF32 shows that there is a variation between the news texts and TED talks in terms of these features. News contain more erroneous nominal phrases, whereas TED talks contain more pronoun-related errors. Whereas both the news and the TED talks have more errors in translating anaphors, there is a higher proportion of erroneous antecedents in the news than in the TED talks. It is also interesting to see that S3 reduces the percentage of errors in anaphors for TED, but has a similar performance to S2 on news. Summary and Conclusions We analysed coreferences in the translation outputs of three transformer systems that differ in the training data and in whether they have access to explicit intra- and cross-sentential anaphoric information (S3) or not (S1, S2). We see that the translation errors are more dependent on the genre than on the nature of the specific NMT system: whereas news (with mainly NP mentions) contain a majority of errors related to wrong word selection, TED talks (with mainly pronominal mentions) are prone to accumulate errors on gender and number. System S3 was specifically designed to solve this issue, but we cannot trace the improvement from S1 to S3 by just counting the errors and error types, as some errors disappear and others emerge: coreference quality and automatic translation quality do not correlate in our analysis on TED talks. As a further improvement to address the issue, we could add more parallel data to our training corpus with a higher density of coreference chains such as movie subtitles or parallel TED talks. We also characterised the originals and translations according to coreference features such as total number of chains and mentions, average chain length and size of the longest chain. We see how NMT translations increase the number of mentions about $30\%$ with respect to human references showing even a more marked explicitation effect than human translations do. As future work, we consider a more detailed comparison of the human and machine translations, and analyse the purpose of the additional mentions added by the NMT systems. It would be also interesting to evaluate of the quality of the automatically computed coreferences chains used for S3. Acknowledgments The annotation work was performed at Saarland University. We thank Anna Felsing, Francesco Fernicola, Viktoria Henn, Johanna Irsch, Kira Janine Jebing, Alicia Lauer, Friederike Lessau and Christina Pollkläsener for performing the manual annotation of the NMT outputs. The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee) and by the German Research Foundation (DFG) as part of SFB 1102 Information Density and Linguistic Encoding. 
Responsibility for the content of this publication is with the authors.
allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause), The mentions referring to the same discourse item are linked between each other, chain members are annotated for their correctness
1ebd6f703458eb6690421398c79abf3fc114148f
1ebd6f703458eb6690421398c79abf3fc114148f_0
Q: Which three neural machine translation systems are analyzed? Text: Introduction In the present paper, we analyse coreference in the output of three neural machine translation systems (NMT) that were trained under different settings. We use a transformer architecture BIBREF0 and train it on corpora of different sizes with and without the specific coreference information. Transformers are the current state-of-the-art in NMT BIBREF1 and are solely based on attention, therefore, the kind of errors they produce might be different from other architectures such as CNN or RNN-based ones. Here we focus on one architecture to study the different errors produced only under different data configurations. Coreference is an important component of discourse coherence which is achieved in how discourse entities (and events) are introduced and discussed. Coreference chains contain mentions of one and the same discourse element throughout a text. These mentions are realised by a variety of linguistic devices such as pronouns, nominal phrases (NPs) and other linguistic means. As languages differ in the range of such linguistic means BIBREF2, BIBREF3, BIBREF4, BIBREF5 and in their contextual restrictions BIBREF6, these differences give rise to problems that may result in incoherent (automatic) translations. We focus on coreference chains in English-German translations belonging to two different genres. In German, pronouns, articles and adjectives (and some nouns) are subject to grammatical gender agreement, whereas in English, only person pronouns carry gender marking. An incorrect translation of a pronoun or a nominal phrase may lead to an incorrect relation in a discourse and will destroy a coreference chain. Recent studies in automatic coreference translation have shown that dedicated systems can lead to improvements in pronoun translation BIBREF7, BIBREF8. However, standard NMT systems work at sentence level, so improvements in NMT translate into improvements on pronouns with intra-sentential antecedents, but the phenomenon of coreference is not limited to anaphoric pronouns, and even less to a subset of them. Document-level machine translation (MT) systems are needed to deal with coreference as a whole. Although some attempts to include extra-sentential information exist BIBREF9, BIBREF10, BIBREF11, BIBREF12, the problem is far from being solved. Besides that, some further problems of NMT that do not seem to be related to coreference at first glance (such as translation of unknown words and proper names or the hallucination of additional words) cause coreference-related errors. In our work, we focus on the analysis of complete coreference chains, manually annotating them in the three translation variants. We also evaluate them from the point of view of coreference chain translation. The goal of this paper is two-fold. On the one hand, we are interested in various properties of coreference chains in these translations. They include total number of chains, average chain length, the size of the longest chain and the total number of annotated mentions. These features are compared to those of the underlying source texts and also the corresponding human translation reference. On the other hand, we are also interested in the quality of coreference translations. Therefore, we define a typology of errors, and and chain members in MT output are annotated as to whether or not they are correct. 
The main focus is on such errors as gender, number and case of the mentions, but we also consider wrong word selection or missing words in a chain. Unlike previous work, we do not restrict ourselves to pronouns. Our analyses show that there are further errors that are not directly related to coreference but consequently have an influence on the correctness of coreference chains. The remainder of the paper is organised as follows. Section SECREF2 introduces the main concepts and presents an overview of related MT studies. Section SECREF3 provides details on the data, systems used and annotation procedures. Section SECREF4 analyses the performance of our transformer systems on coreferent mentions. Finally we summarise and draw conclusions in Section SECREF5. Background and Related Work ::: Coreference Coreference is related to cohesion and coherence. The latter is the logical flow of inter-related ideas in a text, whereas cohesion refers to the text-internal relationship of linguistic elements that are overtly connected via lexico-grammatical devices across sentences BIBREF13. As stated by BIBREF14, this connectedness of texts implies dependencies between sentences. And if these dependencies are neglected in translation, the output text no longer has the property of connectedness which makes a sequence of sentences a text. Coreference expresses identity to a referent mentioned in another textual part (not necessarily in neighbouring sentences) contributing to text connectedness. An addressee is following the mentioned referents and identifies them when they are repeated. Identification of certain referents depends not only on a lexical form, but also on other linguistic means, e.g. articles or modifying pronouns BIBREF15. The use of these is influenced by various factors which can be language-dependent (range of linguistic means available in grammar) and also context-independent (pragmatic situation, genre). Thus, the means of expressing reference differ across languages and genres. This has been shown by some studies in the area of contrastive linguistics BIBREF6, BIBREF3, BIBREF5. Analyses in cross-lingual coreference resolution BIBREF16, BIBREF17, BIBREF18, BIBREF19 show that there are still unsolved problems that should be addressed. Background and Related Work ::: Translation studies Differences between languages and genres in the linguistic means expressing reference are important for translation, as the choice of an appropriate referring expression in the target language poses challenges for both human and machine translation. In translation studies, there is a number of corpus-based works analysing these differences in translation. However, most of them are restricted to individual phenomena within coreference. For instance, BIBREF20 analyse abstract anaphors in English-German translations. To our knowledge, they do not consider chains. BIBREF21 in their contrastive analysis of potential coreference chain members in English-German translations, describe transformation patterns that contain different types of referring expressions. However, the authors rely on automatic tagging and parsing procedures and do not include chains into their analysis. The data used by BIBREF4 and BIBREF22 contain manual chain annotations. The authors focus on different categories of anaphoric pronouns in English-Czech translations, though not paying attention to chain features (e.g. their number or size). Chain features are considered in a contrastive analysis by BIBREF6. 
Their study concerns different phenomena in a variety of genres in English and German comparable texts. Using contrastive interpretations, they suggest preferred translation strategies from English into German, i.e. translators should use demonstrative pronouns instead of personal pronouns (e.g. dies/das instead of es/it) when translating from English into German and vice versa. However, corpus-based studies show that translators do not necessarily apply such strategies. Instead, they often preserve the source language anaphor's categories BIBREF20 which results in the shining through effects BIBREF23. Moreover, due to the tendency of translators to explicitly realise meanings in translations that were implicit in the source texts BIBREF24, translations are believed to contain more (explicit) referring expressions, and subsequently, more (and longer) coreference chains. Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation. These features include number of mentions, number of chains, average chain length and the longest chain size. Machine-translated texts are compared to their sources and the corresponding human translations in terms of these features. We expect to find shining through and explicitation effects in automatic translations. Background and Related Work ::: Coreference in MT As explained in the introduction, several recent works tackle the automatic translation of pronouns and also coreference BIBREF25, BIBREF26 and this has, in part, motivated the creation of devoted shared tasks and test sets to evaluate the quality of pronoun translation BIBREF7, BIBREF27, BIBREF28, BIBREF29. But coreference is a wider phenomenon that affects more linguistic elements. Noun phrases also appear in coreference chains but they are usually studied under coherence and consistency in MT. BIBREF30 use topic modelling to extract coherence chains in the source, predict them in the target and then promote them as translations. BIBREF31 use word embeddings to enforce consistency within documents. Before these works, several methods to post-process the translations and even including a second decoding pass were used BIBREF32, BIBREF33, BIBREF34, BIBREF35. Recent NMT systems that include context deal with both phenomena, coreference and coherence, but usually context is limited to the previous sentence, so chains as a whole are never considered. BIBREF10 encode both a source and a context sentence and then combine them to obtain a context-aware input. The same idea was implemented before by BIBREF36 where they concatenate a source sentence with the previous one to include context. Caches BIBREF37, memory networks BIBREF38 and hierarchical attention methods BIBREF39 allow to use a wider context. Finally, our work is also related to BIBREF40 and BIBREF41 where their oracle translations are similar to the data-based approach we introduce in Section SECREF4. Systems, Methods and Resources ::: State-of-the-art NMT Our NMT systems are based on a transformer architecture BIBREF0 as implemented in the Marian toolkit BIBREF42 using the transformer big configuration. We train three systems (S1, S2 and S3) with the corpora summarised in Table TABREF5. The first two systems are transformer models trained on different amounts of data (6M vs. 18M parallel sentences as seen in the Table). 
The third system includes a modification to consider the information of full coreference chains throughout a document augmenting the sentence to be translated with this information and it is trained with the same amount of sentence pairs as S1. A variant of the S3 system participated in the news machine translation of the shared task held at WMT 2019 BIBREF43. Systems, Methods and Resources ::: State-of-the-art NMT ::: S1 is trained with the concatenation of Common Crawl, Europarl, a cleaned version of Rapid and the News Commentary corpus. We oversample the latter in order to have a significant representation of data close to the news genre in the final corpus. Systems, Methods and Resources ::: State-of-the-art NMT ::: S2 uses the same data as S1 with the addition of a filtered portion of Paracrawl. This corpus is known to be noisy, so we use it to create a larger training corpus but it is diluted by a factor 4 to give more importance to high quality translations. Systems, Methods and Resources ::: State-of-the-art NMT ::: S3 S3 uses the same data as S1, but this time enriched with the cross- and intra-sentential coreference chain markup as described below. The information is included as follows. Source documents are annotated with coreference chains using the neural annotator of Stanford CoreNLP BIBREF44. The tool detects pronouns, nominal phrases and proper names as mentions in a chain. For every mention, CoreNLP extracts its gender (male, female, neutral, unknown), number (singular, plural, unknown), and animacy (animate, inanimate, unknown). This information is not added directly but used to enrich the single sentence-based MT training data by applying a set of heuristics implemented in DocTrans: We enrich pronominal mentions with the exception of "I" with the head (main noun phrase) of the chain. The head is cleaned by removing articles and Saxon genitives and we only consider heads with less than 4 tokens in order to avoid enriching a word with a full sentence We enrich nominal mentions including proper names with the gender of the head The head itself is enriched with she/he/it/they depending on its gender and animacy The enrichment is done with the addition of tags as shown in the examples: I never cook with $<$b_crf$>$ salt $<$e_crf$>$ it. $<$b_crf$>$ she $<$e_crf$>$ Biles arrived late. In the first case heuristic 1 is used, salt is the head of the chain and it is prepended to the pronoun. The second example shows a sentence where heuristic 2 has been used and the proper name Biles has now information about the gender of the person it is referring to. Afterwards, the NMT system is trained at sentence level in the usual way. The data used for the three systems is cleaned, tokenised, truecased with Moses scripts and BPEd with subword-nmt using separated vocabularies with 50 k subword units each. The validation set ($news2014$) and the test sets described in the following section are pre-processed in the same way. Systems, Methods and Resources ::: Test data under analysis As one of our aims is to compare coreference chain properties in automatic translation with those of the source texts and human reference, we derive data from ParCorFull, an English-German corpus annotated with full coreference chains BIBREF46. The corpus contains ca. 160.7 thousand tokens manually annotated with about 14.9 thousand mentions and 4.7 thousand coreference chains. 
For our analysis, we select a portion of English news texts and TED talks from ParCorFull and translate them with the three NMT systems described in SECREF4 above. As texts considerably differ in their length, we select 17 news texts (494 sentences) and four TED talks (518 sentences). The size (in tokens) of the total data set under analysis – source (src) and human translations (ref) from ParCorFull and the automatic translations produced within this study (S1, S2 and S3) are presented in Table TABREF20. Notably, automatic translations of TED talks contain more words than the corresponding reference translation, which means that machine-translated texts of this type have also more potential tokens to enter in a coreference relation, and potentially indicating a shining through effect. The same does not happen with the news test set. Systems, Methods and Resources ::: Manual annotation process The English sources and their corresponding human translations into German were already manually annotated for coreference chains. We follow the same scheme as BIBREF47 to annotate the MT outputs with coreference chains. This scheme allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause). The mentions can be defined further in terms of their cohesive function (antecedent, anaphoric, cataphoric, comparative, substitution, ellipsis, apposition). Antecedents can either be marked as simple or split or as entity or event. The annotation scheme also includes pronoun type (personal, possessive, demonstrative, reflexive, relative) and modifier types of NPs (possessive, demonstrative, definite article, or none for proper names), see BIBREF46 for details. The mentions referring to the same discourse item are linked between each other. We use the annotation tool MMAX2 BIBREF48 which was also used for the annotation of ParCorFull. In the next step, chain members are annotated for their correctness. For the incorrect translations of mentions, we include the following error categories: gender, number, case, ambiguous and other. The latter category is open, which means that the annotators can add their own error types during the annotation process. With this, the final typology of errors also considered wrong named entity, wrong word, missing word, wrong syntactic structure, spelling error and addressee reference. The annotation of machine-translated texts was integrated into a university course on discourse phenomena. Our annotators, well-trained students of linguistics, worked in small groups on the assigned annotation tasks (4-5 texts, i.e. 12-15 translations per group). At the beginning of the annotation process, the categories under analysis were discussed within the small groups and also in the class. The final versions of the annotation were then corrected by the instructor. Results and Analyses ::: Chain features First, we compare the distribution of several chain features in the three MT outputs, their source texts and the corresponding human translations. Table TABREF20 shows that, overall, all machine translations contain a greater number of annotated mentions in both news texts and TED talks than in the annotated source (src and src$_{\rm CoreNLP}$) and reference (ref) texts. Notice that src$_{\rm CoreNLP}$ —where coreferences are not manually but automatically annotated with CoreNLP— counts also the tokens that the mentions add to the sentences, but not the tags. 
The larger number of mentions may indicate a strong explicitation effect observed in machine-translated texts. Interestingly, CoreNLP detects a similar number of mentions in both genres, while human annotators clearly marked more chains for TED than for news. Both genres are in fact quite different in nature; whereas only $37\%$ of the mentions are pronominal in news texts (343 out of 915), the number grows to $58\%$ for TED (577 out of 989), and this could be an indicator of the difficulty of the genres for NMT systems. There is also a variation in terms of chain number between translations of TED talks and news. While automatic translations of news texts contain more chains than the corresponding human annotated sources and references, machine-translated TED talks contain less chains than the sources and human translations. However, there is not much variation between the chain features of the three MT outputs. The chains are also longer in machine-translated output than in reference translations as can be seen by the number of mentions per chain and the length of the longest chain. Results and Analyses ::: MT quality at system level We evaluate the quality of the three transformer engines with two automatic metrics, BLEU BIBREF49 and METEOR BIBREF50. Table TABREF25 shows the scores in two cases: all, when the complete texts are evaluated and coref, when only the subset of sentences that have been augmented in S3 are considered – 265 out of 494 for news and 239 out of 518 for TED. For news, the best system is that trained on more data, S2; but for TED talks S3 with less data has the best performance. The difference between the behaviour of the systems can be related to the different genres. We have seen that news are dominated by nominal mentions while TED is dominated by pronominal ones. Pronouns mostly need coreference information to be properly translated, while noun phrases can be improved simply because more instances of the nouns appear in the training data. With this, S3 improves the baseline S1 in +1.1 BLEU points for TED$_{coref}$ but -0.2 BLEU points for news$_{coref}$. However, even if the systems differ in the overall performance, the change is not related to the number of errors in coreference chains. Table TABREF25 also reports the number of mistakes in the translation of coreferent mentions. Whereas the number of errors correlates with translation quality (as measured by BLEU) for news$_{coref}$ this is not the case of TED$_{coref}$. Results and Analyses ::: Error analysis The total distribution for the 10 categories of errors defined in Section SECREF23 can be seen in Figure FIGREF29. Globally, the proportion of errors due to our closed categories (gender, number, case and ambiguous) is larger for TED talks than for news (see analysis in Section SECREF28). Gender is an issue with all systems and genres which does not get solved by the addition of more data. Additionally, news struggle with wrong words and named entities; for this genre the additional error types (see analysis in Section SECREF30) represent around 60% of the errors of S1/S3 to be compared to the 40% of TED talks. Results and Analyses ::: Error analysis ::: Predefined error categories 0.4em 0.4Within our predefined closed categories (gender, number, case and ambiguous), the gender errors belong to the most frequent errors. 
They include wrong gender translation of both pronouns, as sie (“her”) instead of ihn (“him”) in example SECREF28 referring to the masculine noun Mindestlohn, and nominal phrases, as der Stasi instead of die Stasi, where a masculine form of the definite article is used instead of a feminine one, in example SECREF28. .src: [The current minimum wage] of 7.25 US dollars is a pittance... She wants to raise [it] to 15 dollars an hour. S3: [Der aktuelle Mindestlohn] von 7,25 US-Dollar sei Almosen... Sie möchte [sie] auf 15 Dollar pro Stunde erhöhen. . src: ...let's have a short look at the history of [the Stasi], because it is really important for understanding [its] self-conception. S2: Lassen sie uns... einen kurzen Blick auf die Geschichte [des Stasi] werfen denn es wirklich wichtig, [seine] Selbstauffassung zu verstehen. The gender-related errors are common to all the automatic translations. Interestingly, systems S1 and S3 have more problems with gender in translations of TED talks, whereas they do better in translating news, which leads us to assume that this is a data-dependent issue: while the antecedent for news is in the same sentence it is not for TED talks. A closer look at the texts with a high number of gender problems confirms this assumption —they contain references to females who were translated with male forms of nouns and pronouns (e.g. Mannschaftskapitän instead of Mannschaftskapitänin). We also observe errors related to gender for the cases of explicitation in translation. Some impersonal English constructions not having direct equivalents in German are translated with personal constructions, which requires an addition of a pronoun. Such cases of explicitation were automatically detected in parallel data in BIBREF21, BIBREF2. They belong to the category of obligatory explicitation, i.e. explicitation dictated by differences in the syntactic and semantic structure of languages, as defined by BIBREF51. An MT system tends to insert a male form instead of a female one even if it's marked as feminine (S3 adds the feminine form she as markup), as illustrated in example SECREF28 where the automatic translation contains the masculine pronoun er (“he”) instead of sie (“she”). . src: [Biles] earned the first one on Tuesday while serving as the exclamation point to retiring national team coordinator Martha Karolyi's going away party. ref: [Biles] holte die erste Medaille am Dienstag, während [sie] auf der Abschiedsfeier der sich in Ruhestand begehenden Mannschaftskoordinatorin Martha Karolyi als Ausrufezeichen diente. S2: [Biles] verdiente den ersten am Dienstag, während [er] als Ausrufezeichen für den pensionierten Koordinator der Nationalmannschaft, Martha Karolyi, diente. Another interesting case of a problem related to gender is the dependence of the referring expressions on grammatical restrictions in German. In example SECREF28, the source chain contains the pronoun him referring to both a 6-year-old boy and The child. In German, these two nominal phrases have different gender (masculine vs. neutral). The pronoun has grammatical agreement with the second noun of the chain (des Kindes) and not its head (ein 6 Jahre alter Junge). . src: Police say [a 6-year-old boy] has been shot in Philadelphia... [The child]'s grandparents identified [him] to CBS Philadelphia as [Mahaj Brown]. S1: Die Polizei behauptet, [ein 6 Jahre alter Junge] sei in Philadelphia erschossen worden... Die Großeltern [des Kindes] identifizierten [ihn] mit CBS Philadelphia als [Mahaj Brown]. 
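Before moving on to the less frequent categories, the following toy illustration shows how the per-category counts behind Figure FIGREF29 could be tallied, under the assumption that every manually annotated mention is exported as a (system, genre, category) record with the category left empty for correct mentions; both the export format and the records below are hypothetical.

from collections import Counter

annotations = [                      # hypothetical export of a few annotated mentions
    ('S1', 'TED', 'gender'), ('S1', 'TED', None), ('S2', 'TED', 'number'),
    ('S2', 'news', 'wrong named entity'), ('S3', 'news', None), ('S3', 'TED', 'gender'),
]

errors = Counter((system, genre, category)
                 for system, genre, category in annotations
                 if category is not None)
for (system, genre, category), count in sorted(errors.items()):
    print(f'{system:3} {genre:4} {category:20} {count}')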
Case- and number-related errors are less frequent in our data. However, translations of TED talks with S2 contain much more number-related errors than other outputs. Example SECREF28 illustrates this error type which occurs within a sentence. The English source contains the nominal chain in singular the cost – it, whereas the German correspondence Kosten has a plural form and requires a plural pronoun (sie). However, the automatic translation contains the singular pronoun es. . src: ...to the point where [the cost] is now below 1,000 dollars, and it's confidently predicted that by the year 2015 [it] will be below 100 dollars... S2: bis zu dem Punkt, wo [die Kosten] jetzt unter 1.000 Dollar liegen, und es ist zuversichtlich, dass [es] bis zum Jahr 2015 unter 100 Dollar liegen wird... Ambiguous cases often contain a combination of errors or they are difficult to categorise due to the ambiguity of the source pronouns, as the pronoun it in example SECREF28 which may refer either to the noun trouble or even the clause Democracy is in trouble is translated with the pronoun sie (feminine). In case of the first meaning, the pronoun would be correct, but the form of the following verb should be in plural. In case of a singular form, we would need to use a demonstrative pronoun dies (or possibly the personal pronoun es). . src: Democracy is in trouble... and [it] comes in part from a deep dilemma... S2: Die Demokratie steckt in Schwierigkeiten ... und [sie] rührt teilweise aus einem tiefen Dilemma her... Results and Analyses ::: Error analysis ::: Additional error types At first glance, the error types discussed in this section do not seem to be related to coreference —a wrong translation of a noun can be traced back to the training data available and the way NMT deals with unknown words. However, a wrong translation of a noun may result in its invalidity to be a referring expression for a certain discourse item. As a consequence, a coreference chain is damaged. We illustrate a chain with a wrong named entity translation in example SECREF30. The source chain contains five nominal mentions referring to an American gymnast Aly Raisman: silver medalist – “Final Five” teammate – Aly Raisman – Aly Raisman – Raisman. All the three systems used different names. Example SECREF30 illustrates the translation with S2, where Aly Donovan and Aly Encence were used instead of Aly Raisman, and the mention Raisman disappears completely from the chain. . src: Her total of 62.198 was well clear of [silver medalist] and [“Final Five” teammate] [Aly Raisman]...United States' Simone Biles, left, and [Aly Raisman] embrace after winning gold and silver respectively... [Raisman]'s performance was a bit of revenge from four years ago, when [she] tied... S2: Ihre Gesamtmenge von 62.198 war deutlich von [Silbermedaillengewinner] und [“Final Five” Teamkollegen] [Aly Donovan]... Die Vereinigten Staaten Simone Biles, links und [Aly Encence] Umarmung nach dem Gewinn von Gold und Silber... Vor vier Jahren, als [sie]... Example SECREF30 illustrates translation of the chain The scaling in the opposite direction – that scale. The noun phrases Die Verlagerung in die entgegengesetzte Richtung (“the shift in the opposite direction”) and dieses Ausmaß (“extent/scale”) used in the S1 output do not corefer (cf. Wachstum in die entgegengesetzte Richtung and Wachstum in the reference translation). Notice that these cases with long noun phrases are not tackled by S3 either. . 
src: [The scaling in the opposite direction]...drive the structure of business towards the creation of new kinds of institutions that can achieve [that scale]. ref: [Wachstum in die entgegengesetzte Richtung]... steuert die Struktur der Geschäfte in Richtung Erschaffung von neuen Institutionen, die [dieses Wachstum] erreichen können. S1: [Die Verlagerung in die entgegengesetzte Richtung]... treibt die Struktur der Unternehmen in Richtung der Schaffung neuer Arten von Institutionen, die [dieses Ausmaß] erreichen können. Results and Analyses ::: Error analysis ::: Types of erroneous mentions Finally, we also analyse the types of the mentions marked as errors. They include either nominal phrases or pronouns. Table TABREF32 shows that there is a variation between the news texts and TED talks in terms of these features. News contain more erroneous nominal phrases, whereas TED talks contain more pronoun-related errors. Whereas both the news and the TED talks have more errors in translating anaphors, there is a higher proportion of erroneous antecedents in the news than in the TED talks. It is also interesting to see that S3 reduces the percentage of errors in anaphors for TED, but has a similar performance to S2 on news. Summary and Conclusions We analysed coreferences in the translation outputs of three transformer systems that differ in the training data and in whether they have access to explicit intra- and cross-sentential anaphoric information (S3) or not (S1, S2). We see that the translation errors are more dependent on the genre than on the nature of the specific NMT system: whereas news (with mainly NP mentions) contain a majority of errors related to wrong word selection, TED talks (with mainly pronominal mentions) are prone to accumulate errors on gender and number. System S3 was specifically designed to solve this issue, but we cannot trace the improvement from S1 to S3 by just counting the errors and error types, as some errors disappear and others emerge: coreference quality and automatic translation quality do not correlate in our analysis on TED talks. As a further improvement to address the issue, we could add more parallel data to our training corpus with a higher density of coreference chains such as movie subtitles or parallel TED talks. We also characterised the originals and translations according to coreference features such as total number of chains and mentions, average chain length and size of the longest chain. We see how NMT translations increase the number of mentions about $30\%$ with respect to human references showing even a more marked explicitation effect than human translations do. As future work, we consider a more detailed comparison of the human and machine translations, and analyse the purpose of the additional mentions added by the NMT systems. It would be also interesting to evaluate of the quality of the automatically computed coreferences chains used for S3. Acknowledgments The annotation work was performed at Saarland University. We thank Anna Felsing, Francesco Fernicola, Viktoria Henn, Johanna Irsch, Kira Janine Jebing, Alicia Lauer, Friederike Lessau and Christina Pollkläsener for performing the manual annotation of the NMT outputs. The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee) and by the German Research Foundation (DFG) as part of SFB 1102 Information Density and Linguistic Encoding. 
Responsibility for the content of this publication is with the authors.
The first two systems are transformer models trained on different amounts of data; the third system includes a modification to consider the information of full coreference chains
15a1df59ed20aa415a4daf0acb256747f6766f77
15a1df59ed20aa415a4daf0acb256747f6766f77_0
Q: Which coreference phenomena are analyzed? Text: Introduction In the present paper, we analyse coreference in the output of three neural machine translation systems (NMT) that were trained under different settings. We use a transformer architecture BIBREF0 and train it on corpora of different sizes with and without the specific coreference information. Transformers are the current state-of-the-art in NMT BIBREF1 and are solely based on attention, therefore, the kind of errors they produce might be different from other architectures such as CNN or RNN-based ones. Here we focus on one architecture to study the different errors produced only under different data configurations. Coreference is an important component of discourse coherence which is achieved in how discourse entities (and events) are introduced and discussed. Coreference chains contain mentions of one and the same discourse element throughout a text. These mentions are realised by a variety of linguistic devices such as pronouns, nominal phrases (NPs) and other linguistic means. As languages differ in the range of such linguistic means BIBREF2, BIBREF3, BIBREF4, BIBREF5 and in their contextual restrictions BIBREF6, these differences give rise to problems that may result in incoherent (automatic) translations. We focus on coreference chains in English-German translations belonging to two different genres. In German, pronouns, articles and adjectives (and some nouns) are subject to grammatical gender agreement, whereas in English, only person pronouns carry gender marking. An incorrect translation of a pronoun or a nominal phrase may lead to an incorrect relation in a discourse and will destroy a coreference chain. Recent studies in automatic coreference translation have shown that dedicated systems can lead to improvements in pronoun translation BIBREF7, BIBREF8. However, standard NMT systems work at sentence level, so improvements in NMT translate into improvements on pronouns with intra-sentential antecedents, but the phenomenon of coreference is not limited to anaphoric pronouns, and even less to a subset of them. Document-level machine translation (MT) systems are needed to deal with coreference as a whole. Although some attempts to include extra-sentential information exist BIBREF9, BIBREF10, BIBREF11, BIBREF12, the problem is far from being solved. Besides that, some further problems of NMT that do not seem to be related to coreference at first glance (such as translation of unknown words and proper names or the hallucination of additional words) cause coreference-related errors. In our work, we focus on the analysis of complete coreference chains, manually annotating them in the three translation variants. We also evaluate them from the point of view of coreference chain translation. The goal of this paper is two-fold. On the one hand, we are interested in various properties of coreference chains in these translations. They include total number of chains, average chain length, the size of the longest chain and the total number of annotated mentions. These features are compared to those of the underlying source texts and also the corresponding human translation reference. On the other hand, we are also interested in the quality of coreference translations. Therefore, we define a typology of errors, and and chain members in MT output are annotated as to whether or not they are correct. 
The main focus is on such errors as gender, number and case of the mentions, but we also consider wrong word selection or missing words in a chain. Unlike previous work, we do not restrict ourselves to pronouns. Our analyses show that there are further errors that are not directly related to coreference but consequently have an influence on the correctness of coreference chains. The remainder of the paper is organised as follows. Section SECREF2 introduces the main concepts and presents an overview of related MT studies. Section SECREF3 provides details on the data, systems used and annotation procedures. Section SECREF4 analyses the performance of our transformer systems on coreferent mentions. Finally we summarise and draw conclusions in Section SECREF5. Background and Related Work ::: Coreference Coreference is related to cohesion and coherence. The latter is the logical flow of inter-related ideas in a text, whereas cohesion refers to the text-internal relationship of linguistic elements that are overtly connected via lexico-grammatical devices across sentences BIBREF13. As stated by BIBREF14, this connectedness of texts implies dependencies between sentences. And if these dependencies are neglected in translation, the output text no longer has the property of connectedness which makes a sequence of sentences a text. Coreference expresses identity to a referent mentioned in another textual part (not necessarily in neighbouring sentences) contributing to text connectedness. An addressee is following the mentioned referents and identifies them when they are repeated. Identification of certain referents depends not only on a lexical form, but also on other linguistic means, e.g. articles or modifying pronouns BIBREF15. The use of these is influenced by various factors which can be language-dependent (range of linguistic means available in grammar) and also context-independent (pragmatic situation, genre). Thus, the means of expressing reference differ across languages and genres. This has been shown by some studies in the area of contrastive linguistics BIBREF6, BIBREF3, BIBREF5. Analyses in cross-lingual coreference resolution BIBREF16, BIBREF17, BIBREF18, BIBREF19 show that there are still unsolved problems that should be addressed. Background and Related Work ::: Translation studies Differences between languages and genres in the linguistic means expressing reference are important for translation, as the choice of an appropriate referring expression in the target language poses challenges for both human and machine translation. In translation studies, there is a number of corpus-based works analysing these differences in translation. However, most of them are restricted to individual phenomena within coreference. For instance, BIBREF20 analyse abstract anaphors in English-German translations. To our knowledge, they do not consider chains. BIBREF21 in their contrastive analysis of potential coreference chain members in English-German translations, describe transformation patterns that contain different types of referring expressions. However, the authors rely on automatic tagging and parsing procedures and do not include chains into their analysis. The data used by BIBREF4 and BIBREF22 contain manual chain annotations. The authors focus on different categories of anaphoric pronouns in English-Czech translations, though not paying attention to chain features (e.g. their number or size). Chain features are considered in a contrastive analysis by BIBREF6. 
Their study concerns different phenomena in a variety of genres in English and German comparable texts. Using contrastive interpretations, they suggest preferred translation strategies from English into German, i.e. translators should use demonstrative pronouns instead of personal pronouns (e.g. dies/das instead of es/it) when translating from English into German and vice versa. However, corpus-based studies show that translators do not necessarily apply such strategies. Instead, they often preserve the source language anaphor's categories BIBREF20 which results in the shining through effects BIBREF23. Moreover, due to the tendency of translators to explicitly realise meanings in translations that were implicit in the source texts BIBREF24, translations are believed to contain more (explicit) referring expressions, and subsequently, more (and longer) coreference chains. Therefore, in our analysis, we focus on the chain features related to the phenomena of shining through and explicitation. These features include number of mentions, number of chains, average chain length and the longest chain size. Machine-translated texts are compared to their sources and the corresponding human translations in terms of these features. We expect to find shining through and explicitation effects in automatic translations. Background and Related Work ::: Coreference in MT As explained in the introduction, several recent works tackle the automatic translation of pronouns and also coreference BIBREF25, BIBREF26 and this has, in part, motivated the creation of devoted shared tasks and test sets to evaluate the quality of pronoun translation BIBREF7, BIBREF27, BIBREF28, BIBREF29. But coreference is a wider phenomenon that affects more linguistic elements. Noun phrases also appear in coreference chains but they are usually studied under coherence and consistency in MT. BIBREF30 use topic modelling to extract coherence chains in the source, predict them in the target and then promote them as translations. BIBREF31 use word embeddings to enforce consistency within documents. Before these works, several methods to post-process the translations and even including a second decoding pass were used BIBREF32, BIBREF33, BIBREF34, BIBREF35. Recent NMT systems that include context deal with both phenomena, coreference and coherence, but usually context is limited to the previous sentence, so chains as a whole are never considered. BIBREF10 encode both a source and a context sentence and then combine them to obtain a context-aware input. The same idea was implemented before by BIBREF36 where they concatenate a source sentence with the previous one to include context. Caches BIBREF37, memory networks BIBREF38 and hierarchical attention methods BIBREF39 allow to use a wider context. Finally, our work is also related to BIBREF40 and BIBREF41 where their oracle translations are similar to the data-based approach we introduce in Section SECREF4. Systems, Methods and Resources ::: State-of-the-art NMT Our NMT systems are based on a transformer architecture BIBREF0 as implemented in the Marian toolkit BIBREF42 using the transformer big configuration. We train three systems (S1, S2 and S3) with the corpora summarised in Table TABREF5. The first two systems are transformer models trained on different amounts of data (6M vs. 18M parallel sentences as seen in the Table). 
The third system includes a modification to consider the information of full coreference chains throughout a document augmenting the sentence to be translated with this information and it is trained with the same amount of sentence pairs as S1. A variant of the S3 system participated in the news machine translation of the shared task held at WMT 2019 BIBREF43. Systems, Methods and Resources ::: State-of-the-art NMT ::: S1 is trained with the concatenation of Common Crawl, Europarl, a cleaned version of Rapid and the News Commentary corpus. We oversample the latter in order to have a significant representation of data close to the news genre in the final corpus. Systems, Methods and Resources ::: State-of-the-art NMT ::: S2 uses the same data as S1 with the addition of a filtered portion of Paracrawl. This corpus is known to be noisy, so we use it to create a larger training corpus but it is diluted by a factor 4 to give more importance to high quality translations. Systems, Methods and Resources ::: State-of-the-art NMT ::: S3 S3 uses the same data as S1, but this time enriched with the cross- and intra-sentential coreference chain markup as described below. The information is included as follows. Source documents are annotated with coreference chains using the neural annotator of Stanford CoreNLP BIBREF44. The tool detects pronouns, nominal phrases and proper names as mentions in a chain. For every mention, CoreNLP extracts its gender (male, female, neutral, unknown), number (singular, plural, unknown), and animacy (animate, inanimate, unknown). This information is not added directly but used to enrich the single sentence-based MT training data by applying a set of heuristics implemented in DocTrans: We enrich pronominal mentions with the exception of "I" with the head (main noun phrase) of the chain. The head is cleaned by removing articles and Saxon genitives and we only consider heads with less than 4 tokens in order to avoid enriching a word with a full sentence We enrich nominal mentions including proper names with the gender of the head The head itself is enriched with she/he/it/they depending on its gender and animacy The enrichment is done with the addition of tags as shown in the examples: I never cook with $<$b_crf$>$ salt $<$e_crf$>$ it. $<$b_crf$>$ she $<$e_crf$>$ Biles arrived late. In the first case heuristic 1 is used, salt is the head of the chain and it is prepended to the pronoun. The second example shows a sentence where heuristic 2 has been used and the proper name Biles has now information about the gender of the person it is referring to. Afterwards, the NMT system is trained at sentence level in the usual way. The data used for the three systems is cleaned, tokenised, truecased with Moses scripts and BPEd with subword-nmt using separated vocabularies with 50 k subword units each. The validation set ($news2014$) and the test sets described in the following section are pre-processed in the same way. Systems, Methods and Resources ::: Test data under analysis As one of our aims is to compare coreference chain properties in automatic translation with those of the source texts and human reference, we derive data from ParCorFull, an English-German corpus annotated with full coreference chains BIBREF46. The corpus contains ca. 160.7 thousand tokens manually annotated with about 14.9 thousand mentions and 4.7 thousand coreference chains. 
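The enrichment heuristics used for S3's training data (pronouns enriched with the chain head, nominal mentions and heads enriched with a gender marker, all wrapped in $<$b_crf$>$ ... $<$e_crf$>$ tags) can be summarised in a compact sketch. The snippet below is only a minimal illustration under assumptions: the function names, the mention attributes (type, gender, animacy, number) and the `clean_head` helper are ours, not the DocTrans implementation; only the three heuristics and the tagging scheme follow the description in the text.

```python
B_TAG, E_TAG = "<b_crf>", "<e_crf>"
ARTICLES = {"a", "an", "the"}

def clean_head(head_tokens):
    # Drop articles and Saxon genitives; only keep heads shorter than 4 tokens.
    tokens = [t for t in head_tokens if t.lower() not in ARTICLES and t != "'s"]
    return tokens if 0 < len(tokens) < 4 else None

def gender_pronoun(gender, animacy, number):
    # Map CoreNLP gender/animacy/number to she/he/it/they (assumed mapping).
    if number == "plural":
        return "they"
    if animacy == "animate" and gender == "female":
        return "she"
    if animacy == "animate" and gender == "male":
        return "he"
    return "it"

def enrich(mention_text, mention_type, chain_head, gender, animacy, number):
    """Return the tagged prefix for one coreferent mention, or an empty string."""
    if mention_type == "pronoun" and mention_text.lower() != "i":
        # Heuristic 1: pronouns (except "I") are enriched with the cleaned chain head.
        head = clean_head(chain_head)
        enrichment = " ".join(head) if head else None
    elif mention_type == "np":
        # Heuristics 2 and 3: nominal mentions (incl. proper names) and the head
        # itself are enriched with a gender marker (she/he/it/they).
        enrichment = gender_pronoun(gender, animacy, number)
    else:
        enrichment = None
    return f"{B_TAG} {enrichment} {E_TAG} " if enrichment else ""

# Mirrors the two examples in the text:
print(enrich("it", "pronoun", ["salt"], "neutral", "inanimate", "singular") + "it.")
print(enrich("Biles", "np", ["Biles"], "female", "animate", "singular") + "Biles arrived late.")
```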
For our analysis, we select a portion of English news texts and TED talks from ParCorFull and translate them with the three NMT systems described in SECREF4 above. As texts considerably differ in their length, we select 17 news texts (494 sentences) and four TED talks (518 sentences). The size (in tokens) of the total data set under analysis – source (src) and human translations (ref) from ParCorFull and the automatic translations produced within this study (S1, S2 and S3) are presented in Table TABREF20. Notably, automatic translations of TED talks contain more words than the corresponding reference translation, which means that machine-translated texts of this type have also more potential tokens to enter in a coreference relation, and potentially indicating a shining through effect. The same does not happen with the news test set. Systems, Methods and Resources ::: Manual annotation process The English sources and their corresponding human translations into German were already manually annotated for coreference chains. We follow the same scheme as BIBREF47 to annotate the MT outputs with coreference chains. This scheme allows the annotator to define each markable as a certain mention type (pronoun, NP, VP or clause). The mentions can be defined further in terms of their cohesive function (antecedent, anaphoric, cataphoric, comparative, substitution, ellipsis, apposition). Antecedents can either be marked as simple or split or as entity or event. The annotation scheme also includes pronoun type (personal, possessive, demonstrative, reflexive, relative) and modifier types of NPs (possessive, demonstrative, definite article, or none for proper names), see BIBREF46 for details. The mentions referring to the same discourse item are linked between each other. We use the annotation tool MMAX2 BIBREF48 which was also used for the annotation of ParCorFull. In the next step, chain members are annotated for their correctness. For the incorrect translations of mentions, we include the following error categories: gender, number, case, ambiguous and other. The latter category is open, which means that the annotators can add their own error types during the annotation process. With this, the final typology of errors also considered wrong named entity, wrong word, missing word, wrong syntactic structure, spelling error and addressee reference. The annotation of machine-translated texts was integrated into a university course on discourse phenomena. Our annotators, well-trained students of linguistics, worked in small groups on the assigned annotation tasks (4-5 texts, i.e. 12-15 translations per group). At the beginning of the annotation process, the categories under analysis were discussed within the small groups and also in the class. The final versions of the annotation were then corrected by the instructor. Results and Analyses ::: Chain features First, we compare the distribution of several chain features in the three MT outputs, their source texts and the corresponding human translations. Table TABREF20 shows that, overall, all machine translations contain a greater number of annotated mentions in both news texts and TED talks than in the annotated source (src and src$_{\rm CoreNLP}$) and reference (ref) texts. Notice that src$_{\rm CoreNLP}$ —where coreferences are not manually but automatically annotated with CoreNLP— counts also the tokens that the mentions add to the sentences, but not the tags. 
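The chain features compared here and in Table TABREF20 (number of chains, number of mentions, mentions per chain, longest chain) reduce to simple counts once the annotation is available. The sketch below assumes a hypothetical representation of a document as a list of chains, each a list of mention strings; it only makes the compared quantities explicit and is not the authors' tooling.

```python
from statistics import mean

def chain_features(chains):
    """chains: a document's coreference chains, each a list of mentions (assumed format)."""
    lengths = [len(chain) for chain in chains]
    return {
        "chains": len(chains),
        "mentions": sum(lengths),
        "mean_chain_length": float(mean(lengths)) if lengths else 0.0,
        "longest_chain": max(lengths, default=0),
    }

# Toy document with two chains:
doc = [["the cost", "it"], ["Biles", "she", "her", "the gymnast"]]
print(chain_features(doc))
# {'chains': 2, 'mentions': 6, 'mean_chain_length': 3.0, 'longest_chain': 4}
```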
The larger number of mentions may indicate a strong explicitation effect observed in machine-translated texts. Interestingly, CoreNLP detects a similar number of mentions in both genres, while human annotators clearly marked more chains for TED than for news. Both genres are in fact quite different in nature; whereas only $37\%$ of the mentions are pronominal in news texts (343 out of 915), the number grows to $58\%$ for TED (577 out of 989), and this could be an indicator of the difficulty of the genres for NMT systems. There is also a variation in terms of chain number between translations of TED talks and news. While automatic translations of news texts contain more chains than the corresponding human annotated sources and references, machine-translated TED talks contain less chains than the sources and human translations. However, there is not much variation between the chain features of the three MT outputs. The chains are also longer in machine-translated output than in reference translations as can be seen by the number of mentions per chain and the length of the longest chain. Results and Analyses ::: MT quality at system level We evaluate the quality of the three transformer engines with two automatic metrics, BLEU BIBREF49 and METEOR BIBREF50. Table TABREF25 shows the scores in two cases: all, when the complete texts are evaluated and coref, when only the subset of sentences that have been augmented in S3 are considered – 265 out of 494 for news and 239 out of 518 for TED. For news, the best system is that trained on more data, S2; but for TED talks S3 with less data has the best performance. The difference between the behaviour of the systems can be related to the different genres. We have seen that news are dominated by nominal mentions while TED is dominated by pronominal ones. Pronouns mostly need coreference information to be properly translated, while noun phrases can be improved simply because more instances of the nouns appear in the training data. With this, S3 improves the baseline S1 in +1.1 BLEU points for TED$_{coref}$ but -0.2 BLEU points for news$_{coref}$. However, even if the systems differ in the overall performance, the change is not related to the number of errors in coreference chains. Table TABREF25 also reports the number of mistakes in the translation of coreferent mentions. Whereas the number of errors correlates with translation quality (as measured by BLEU) for news$_{coref}$ this is not the case of TED$_{coref}$. Results and Analyses ::: Error analysis The total distribution for the 10 categories of errors defined in Section SECREF23 can be seen in Figure FIGREF29. Globally, the proportion of errors due to our closed categories (gender, number, case and ambiguous) is larger for TED talks than for news (see analysis in Section SECREF28). Gender is an issue with all systems and genres which does not get solved by the addition of more data. Additionally, news struggle with wrong words and named entities; for this genre the additional error types (see analysis in Section SECREF30) represent around 60% of the errors of S1/S3 to be compared to the 40% of TED talks. Results and Analyses ::: Error analysis ::: Predefined error categories 0.4em 0.4Within our predefined closed categories (gender, number, case and ambiguous), the gender errors belong to the most frequent errors. 
They include wrong gender translation of both pronouns, as sie (“her”) instead of ihn (“him”) in example SECREF28 referring to the masculine noun Mindestlohn, and nominal phrases, as der Stasi instead of die Stasi, where a masculine form of the definite article is used instead of a feminine one, in example SECREF28. .src: [The current minimum wage] of 7.25 US dollars is a pittance... She wants to raise [it] to 15 dollars an hour. S3: [Der aktuelle Mindestlohn] von 7,25 US-Dollar sei Almosen... Sie möchte [sie] auf 15 Dollar pro Stunde erhöhen. . src: ...let's have a short look at the history of [the Stasi], because it is really important for understanding [its] self-conception. S2: Lassen sie uns... einen kurzen Blick auf die Geschichte [des Stasi] werfen denn es wirklich wichtig, [seine] Selbstauffassung zu verstehen. The gender-related errors are common to all the automatic translations. Interestingly, systems S1 and S3 have more problems with gender in translations of TED talks, whereas they do better in translating news, which leads us to assume that this is a data-dependent issue: while the antecedent for news is in the same sentence it is not for TED talks. A closer look at the texts with a high number of gender problems confirms this assumption —they contain references to females who were translated with male forms of nouns and pronouns (e.g. Mannschaftskapitän instead of Mannschaftskapitänin). We also observe errors related to gender for the cases of explicitation in translation. Some impersonal English constructions not having direct equivalents in German are translated with personal constructions, which requires an addition of a pronoun. Such cases of explicitation were automatically detected in parallel data in BIBREF21, BIBREF2. They belong to the category of obligatory explicitation, i.e. explicitation dictated by differences in the syntactic and semantic structure of languages, as defined by BIBREF51. An MT system tends to insert a male form instead of a female one even if it's marked as feminine (S3 adds the feminine form she as markup), as illustrated in example SECREF28 where the automatic translation contains the masculine pronoun er (“he”) instead of sie (“she”). . src: [Biles] earned the first one on Tuesday while serving as the exclamation point to retiring national team coordinator Martha Karolyi's going away party. ref: [Biles] holte die erste Medaille am Dienstag, während [sie] auf der Abschiedsfeier der sich in Ruhestand begehenden Mannschaftskoordinatorin Martha Karolyi als Ausrufezeichen diente. S2: [Biles] verdiente den ersten am Dienstag, während [er] als Ausrufezeichen für den pensionierten Koordinator der Nationalmannschaft, Martha Karolyi, diente. Another interesting case of a problem related to gender is the dependence of the referring expressions on grammatical restrictions in German. In example SECREF28, the source chain contains the pronoun him referring to both a 6-year-old boy and The child. In German, these two nominal phrases have different gender (masculine vs. neutral). The pronoun has grammatical agreement with the second noun of the chain (des Kindes) and not its head (ein 6 Jahre alter Junge). . src: Police say [a 6-year-old boy] has been shot in Philadelphia... [The child]'s grandparents identified [him] to CBS Philadelphia as [Mahaj Brown]. S1: Die Polizei behauptet, [ein 6 Jahre alter Junge] sei in Philadelphia erschossen worden... Die Großeltern [des Kindes] identifizierten [ihn] mit CBS Philadelphia als [Mahaj Brown]. 
Case- and number-related errors are less frequent in our data. However, translations of TED talks with S2 contain much more number-related errors than other outputs. Example SECREF28 illustrates this error type which occurs within a sentence. The English source contains the nominal chain in singular the cost – it, whereas the German correspondence Kosten has a plural form and requires a plural pronoun (sie). However, the automatic translation contains the singular pronoun es. . src: ...to the point where [the cost] is now below 1,000 dollars, and it's confidently predicted that by the year 2015 [it] will be below 100 dollars... S2: bis zu dem Punkt, wo [die Kosten] jetzt unter 1.000 Dollar liegen, und es ist zuversichtlich, dass [es] bis zum Jahr 2015 unter 100 Dollar liegen wird... Ambiguous cases often contain a combination of errors or they are difficult to categorise due to the ambiguity of the source pronouns, as the pronoun it in example SECREF28 which may refer either to the noun trouble or even the clause Democracy is in trouble is translated with the pronoun sie (feminine). In case of the first meaning, the pronoun would be correct, but the form of the following verb should be in plural. In case of a singular form, we would need to use a demonstrative pronoun dies (or possibly the personal pronoun es). . src: Democracy is in trouble... and [it] comes in part from a deep dilemma... S2: Die Demokratie steckt in Schwierigkeiten ... und [sie] rührt teilweise aus einem tiefen Dilemma her... Results and Analyses ::: Error analysis ::: Additional error types At first glance, the error types discussed in this section do not seem to be related to coreference —a wrong translation of a noun can be traced back to the training data available and the way NMT deals with unknown words. However, a wrong translation of a noun may result in its invalidity to be a referring expression for a certain discourse item. As a consequence, a coreference chain is damaged. We illustrate a chain with a wrong named entity translation in example SECREF30. The source chain contains five nominal mentions referring to an American gymnast Aly Raisman: silver medalist – “Final Five” teammate – Aly Raisman – Aly Raisman – Raisman. All the three systems used different names. Example SECREF30 illustrates the translation with S2, where Aly Donovan and Aly Encence were used instead of Aly Raisman, and the mention Raisman disappears completely from the chain. . src: Her total of 62.198 was well clear of [silver medalist] and [“Final Five” teammate] [Aly Raisman]...United States' Simone Biles, left, and [Aly Raisman] embrace after winning gold and silver respectively... [Raisman]'s performance was a bit of revenge from four years ago, when [she] tied... S2: Ihre Gesamtmenge von 62.198 war deutlich von [Silbermedaillengewinner] und [“Final Five” Teamkollegen] [Aly Donovan]... Die Vereinigten Staaten Simone Biles, links und [Aly Encence] Umarmung nach dem Gewinn von Gold und Silber... Vor vier Jahren, als [sie]... Example SECREF30 illustrates translation of the chain The scaling in the opposite direction – that scale. The noun phrases Die Verlagerung in die entgegengesetzte Richtung (“the shift in the opposite direction”) and dieses Ausmaß (“extent/scale”) used in the S1 output do not corefer (cf. Wachstum in die entgegengesetzte Richtung and Wachstum in the reference translation). Notice that these cases with long noun phrases are not tackled by S3 either. . 
src: [The scaling in the opposite direction]...drive the structure of business towards the creation of new kinds of institutions that can achieve [that scale]. ref: [Wachstum in die entgegengesetzte Richtung]... steuert die Struktur der Geschäfte in Richtung Erschaffung von neuen Institutionen, die [dieses Wachstum] erreichen können. S1: [Die Verlagerung in die entgegengesetzte Richtung]... treibt die Struktur der Unternehmen in Richtung der Schaffung neuer Arten von Institutionen, die [dieses Ausmaß] erreichen können. Results and Analyses ::: Error analysis ::: Types of erroneous mentions Finally, we also analyse the types of the mentions marked as errors. They include either nominal phrases or pronouns. Table TABREF32 shows that there is a variation between the news texts and TED talks in terms of these features. News contain more erroneous nominal phrases, whereas TED talks contain more pronoun-related errors. Whereas both the news and the TED talks have more errors in translating anaphors, there is a higher proportion of erroneous antecedents in the news than in the TED talks. It is also interesting to see that S3 reduces the percentage of errors in anaphors for TED, but has a similar performance to S2 on news. Summary and Conclusions We analysed coreferences in the translation outputs of three transformer systems that differ in the training data and in whether they have access to explicit intra- and cross-sentential anaphoric information (S3) or not (S1, S2). We see that the translation errors are more dependent on the genre than on the nature of the specific NMT system: whereas news (with mainly NP mentions) contain a majority of errors related to wrong word selection, TED talks (with mainly pronominal mentions) are prone to accumulate errors on gender and number. System S3 was specifically designed to solve this issue, but we cannot trace the improvement from S1 to S3 by just counting the errors and error types, as some errors disappear and others emerge: coreference quality and automatic translation quality do not correlate in our analysis on TED talks. As a further improvement to address the issue, we could add more parallel data to our training corpus with a higher density of coreference chains such as movie subtitles or parallel TED talks. We also characterised the originals and translations according to coreference features such as total number of chains and mentions, average chain length and size of the longest chain. We see how NMT translations increase the number of mentions by about $30\%$ with respect to human references, showing an even more marked explicitation effect than human translations do. As future work, we consider a more detailed comparison of the human and machine translations, and analyse the purpose of the additional mentions added by the NMT systems. It would also be interesting to evaluate the quality of the automatically computed coreference chains used for S3. Acknowledgments The annotation work was performed at Saarland University. We thank Anna Felsing, Francesco Fernicola, Viktoria Henn, Johanna Irsch, Kira Janine Jebing, Alicia Lauer, Friederike Lessau and Christina Pollkläsener for performing the manual annotation of the NMT outputs. The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee) and by the German Research Foundation (DFG) as part of SFB 1102 Information Density and Linguistic Encoding.
Responsibility for the content of this publication is with the authors.
shining through, explicitation
b124137e62178a2bd3b5570d73b1652dfefa2457
b124137e62178a2bd3b5570d73b1652dfefa2457_0
Q: What new interesting tasks can be solved based on the uncanny semantic structures of the embedding space? Text: Introduction In recent years, digital libraries have moved towards open science and open access with several large scholarly datasets being constructed. Most popular datasets include millions of papers, authors, venues, and other information. Their large size and heterogeneous contents make it very challenging to effectively manage, explore, and utilize these datasets. The knowledge graph has emerged as a universal data format for representing knowledge about entities and their relationships in such complicated data. The main part of a knowledge graph is a collection of triples, with each triple $ (h, t, r) $ denoting the fact that relation $ r $ exists between head entity $ h $ and tail entity $ t $. This can also be formalized as a labeled directed multigraph where each triple $ (h, t, r) $ represents a directed edge from node $ h $ to node $ t $ with label $ r $. Therefore, it is straightforward to build knowledge graphs for scholarly data by representing natural connections between scholarly entities with triples such as (AuthorA, Paper1, write) and (Paper1, Paper2, cite). Notably, instead of using knowledge graphs directly in some tasks, we can model them by knowledge graph embedding methods, which represent entities and relations as embedding vectors in semantic space, then model the interactions between them to solve the knowledge graph completion task. There are many approaches BIBREF0 to modeling the interactions between embedding vectors resulting in many knowledge graph embedding methods such as ComplEx BIBREF1 and CP$ _h $ BIBREF2. In the case of word embedding methods such as word2vec, embedding vectors are known to contain rich semantic information that enables them to be used in many semantic applications BIBREF3. However, the semantic structures in the knowledge graph embedding space are not well-studied, thus knowledge graph embeddings are only used for knowledge graph completion but remain absent in the toolbox for data analysis of heterogeneous data in general and scholarly data in particular, although they have the potential to be highly effective and efficient. In this paper, we address these issues by providing a theoretical understanding of their semantic structures and designing a general semantic query framework to support data exploration. For theoretical analysis, we first analyze the state-of-the-art knowledge graph embedding model CP$ _h $ BIBREF2 in comparison to the popular word embedding model word2vec skipgram BIBREF3 to explain its components and provide understandings to its semantic structures. We then define the semantic queries on the knowledge graph embedding spaces, which are algebraic operations between the embedding vectors in the knowledge graph embedding space to solve queries such as similarity and analogy between the entities on the original datasets. Based on our theoretical results, we design a general framework for data exploration on scholarly data by semantic queries on knowledge graph embedding space. The main component in this framework is the conversion between the data exploration tasks and the semantic queries. We first outline the semantic query solutions to some traditional data exploration tasks, such as similar paper prediction and similar author prediction. We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries. 
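To make the triple format concrete, the short sketch below represents a toy scholarly knowledge graph as a list of (head, tail, relation) triples and its labelled multigraph view; the entity names follow the illustrative examples in the text, while VenueX and the relation name in_venue are hypothetical additions.

```python
from collections import defaultdict

# Each triple (h, t, r) is a directed edge from head entity h to tail entity t with label r.
triples = [
    ("AuthorA", "Paper1", "write"),
    ("Paper1", "Paper2", "cite"),
    ("Paper1", "VenueX", "in_venue"),  # hypothetical venue entity and relation name
]

# Labelled directed multigraph view: head entity -> list of (tail entity, relation).
graph = defaultdict(list)
for h, t, r in triples:
    graph[h].append((t, r))

print(graph["Paper1"])  # [('Paper2', 'cite'), ('VenueX', 'in_venue')]
```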
Related Work ::: Knowledge graph for scholarly data Knowledge graph has gradually become the standard data format for heterogeneous and complicated datasets BIBREF4. There have been several attempts to build knowledge graph for scholarly data, either adopting the scholarly network directly BIBREF5, or deriving the knowledge graph from some similarity measures BIBREF6 BIBREF7, or constructing the knowledge graph from survey papers BIBREF8. However, they mostly focus on the data format or graph inference aspects of knowledge graph. In this paper, we instead focus on the knowledge graph embedding methods and especially the application of embedding vectors in data exploration. Related Work ::: Knowledge graph embedding For a more in-depth survey of knowledge graph embedding methods, please refer to BIBREF0, which defines their architecture, categorization, and interaction mechanisms. In this paper, we only focus on the semantic structures of the state-of-the-art model CP$ _h $ BIBREF2, which is an extension of CP BIBREF9. In CP, each entity $ e $ has two embedding vectors $ \mathbf{e} $ and $ \mathbf{e}^{(2)} $ depending on its role in a triple as head or as tail, respectively. CP$ _h $ augments the data by making an inverse triple $ (t, h, r^{(a)}) $ for each existing triple $ (h, t, r) $, where $ r^{(a)} $ is the augmented relation corresponding to $ r $. When maximizing the likelihood by stochastic gradient descent, its score function is the sum: where $ \mathbf{h}, \mathbf{h}^{(2)}, \mathbf{t}, \mathbf{t}^{(2)}, \mathbf{r}, \mathbf{r}^{(a)} \in \mathbb{R}^{D} $ are the embedding vectors of $ h $, $ t $, and $ r $, respectively, and the trilinear-product $ \langle \cdot , \cdot , \cdot \rangle $ is defined as: where $ D $ is the embedding size and $ d $ is the dimension for which $ h_d $, $ t_d $, and $ r_d $ are the scalar entries. The validity of each triple is modeled as a Bernoulli distribution and its validity probability is computed by the standard logistic function $ \sigma (\cdot ) $ as: Related Work ::: Word embedding The most popular word embedding models in recent years are word2vec variants such as word2vec skipgram BIBREF3, which predicts the context-words $ c_i $ independently given the target-word $ w $, that is: In practice, the expensive softmax functions in these multinoulli distributions are avoided by approximating them with negative sampling and solving for the Bernoulli distributions by using the standard logistic function $ \sigma (\cdot ) $: where $ \mathbf{u}_{c_i} $ is the context-embedding vector of context-word $ c_i $ and $ \mathbf{v}_w $ is the word-embedding vector of target-word $ w $. Theoretical analysis Word2vec skipgram and its semantic structures are well-studied both theoretically and empirically BIBREF3. CP$ _h $ is a new state of the art among many knowledge graph embedding models. We first ground the theoretical basis of CP$ _h $ on word2vec skipgram to explain its components and understand its semantic structures. We then define semantic queries on knowledge graph embedding space. Theoretical analysis ::: The semantic structures of CP$ _h $ We first look at Eq. DISPLAY_FORM8 of word2vec skipgram and consider only one context-word $ c $ for simplicity. We can write the probability in proportional format as: Note that the context-word $ c $ and target-word $ w $ are ordered and, in word2vec skipgram, the target-word is the central word in a sliding window, e.g., $ w_i $ is the target-word and $ w_{i-k}, \dots , w_{i-1}, w_{i+1}, \dots , w_{i+k} $ are context-words. Therefore, the roles in each word pair are symmetric over the whole dataset.
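Since the score equation itself is elided above (the text only says it is the sum of trilinear products over the original and the augmented inverse triple), the sketch below spells out one plausible reading of that sum; the pairing of role-based vectors inside `cp_h_score` is therefore an assumption rather than the authors' exact definition.

```python
import numpy as np

def trilinear(a, b, c):
    # <a, b, c> = sum_d a_d * b_d * c_d
    return float(np.sum(a * b * c))

def cp_h_score(h, h2, t, t2, r, r_a):
    """Assumed CP_h score: one trilinear term for (h, t, r) plus one for the
    augmented inverse triple (t, h, r^(a)), pairing head-role with tail-role vectors."""
    return trilinear(h, t2, r) + trilinear(t, h2, r_a)

def validity_probability(score):
    # Triple validity modelled as a Bernoulli variable via the standard logistic function.
    return 1.0 / (1.0 + np.exp(-score))

# Toy check with random D-dimensional embeddings.
D = 4
rng = np.random.default_rng(0)
h, h2, t, t2, r, r_a = (rng.normal(size=D) for _ in range(6))
print(validity_probability(cp_h_score(h, h2, t, t2, r, r_a)))
```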
When maximizing the likelihood by stochastic gradient descent, we can write the approximate probability of an unordered word pair and expand the dot products as: where $ \mathbf{u}_c $ and $ \mathbf{v}_c $ are the context-embedding and word-embedding vectors of $ c $, respectively, $ \mathbf{u}_w $ and $ \mathbf{v}_w $ are the context-embedding and word-embedding vectors of $ w $, respectively, and $ {u_c}_d, {v_c}_d, {u_w}_d $, and $ {v_w}_d $ are their scalar entries, respectively. We now return to Eq. DISPLAY_FORM3 of CP$ _h $ to also write the probability in Eq. DISPLAY_FORM5 in proportional format and expand the trilinear products according to Eq. DISPLAY_FORM4 as: where $ \mathbf{h}, \mathbf{h}^{(2)} $, $ \mathbf{t}, \mathbf{t}^{(2)} $, $ \mathbf{r}, \mathbf{r}^{(a)} $ are knowledge graph embedding vectors and $ h_d, h^{(2)}_d $, $ t_d, t^{(2)}_d $, $ r_d, r^{(a)}_d $ are the scalar entries. Comparing the above equations of word2vec skipgram and CP$ _h $, we can see they have essentially the same form and mechanism. Note that the embedding vectors in word2vec skipgram are learned by aligning each target-word to different context-words and vice versa, which is essentially the same for CP$ _h $ by aligning each head entity to different tail entities in different triples and vice versa, with regard to the dimensions weighted by each relation. This result suggests that the semantic structures of CP$ _h $ are similar to those in word2vec skipgram and we can use the head-role-based entity embedding vectors, such as $ \mathbf{e} $, for semantic applications similarly to word embedding vectors. The tail-role-based entity embedding vectors, such as $ \mathbf{e}^{(2)} $, contain almost the same information due to their symmetric roles and can thus be discarded in semantic tasks, which justifies this common practice in word embedding applications BIBREF3. Theoretical analysis ::: Semantic query We are mainly concerned with the following two structures of the embedding space. Semantic similarity structure: Semantically similar entities are close to each other in the embedding space, and vice versa. This structure can be identified by a vector similarity measure, such as the dot product between two embedding vectors. The similarity between two embedding vectors is computed as: Semantic direction structure: There exist semantic directions in the embedding space, by which only one semantic aspect changes while all other aspects stay the same. It can be identified by a vector difference, such as the subtraction between two embedding vectors. The semantic direction between two embedding vectors is computed as: The algebraic operations, which include the above dot product and vector subtraction, or their combinations, can be used to approximate some important tasks on the original data. To do this, we first need to convert the data exploration task to the appropriate operations. We then conduct the operations on the embedding vectors and obtain the results. This process is defined as follows. Definition 1 Semantic queries on knowledge graph embedding space are defined as the algebraic operations between the knowledge graph embedding vectors to approximate a given data exploration task on the original dataset. Semantic query framework Given the theoretical results, here we design a general framework for scholarly data exploration by using semantic queries on knowledge graph embedding space. Figure FIGREF19 shows the architecture of the proposed framework. There are three main components, namely data processing, task processing, and query processing.
Data processing: with two steps, (1) constructing the knowledge graph from scholarly data by using the scholarly graph directly with entities such as authors, papers, venues, and relations such as author-write-paper, paper-cite-paper, paper-in-venue, and (2) learning the knowledge graph embeddings as in BIBREF0. Task processing: converting data exploration tasks to algebraic operations on the embedding space by following task-specific conversion templates. Some important tasks and their conversion templates are discussed in Section SECREF5. Query processing: executing the semantic query on the embedding space and returning the results. Note that the algebraic operations on embedding vectors are linear and can be performed in parallel. Therefore, the semantic query is efficient. Note that the proposed semantic query framework makes no assumption on the specific knowledge graph embedding models and the induced embedding spaces. Any embedding space that contains rich semantic information such as the listed semantic structures can be applied in this framework. Exploration tasks and semantic queries conversion Here we present and discuss the semantic queries for some traditional and newly proposed data exploration tasks on scholarly data. Exploration tasks and semantic queries conversion ::: Similar entities Tasks Given an entity $ e \in \mathcal{E} $, where $ \mathcal{E} $ denotes the set of entities, find entities that are similar to $ e $. For example, given AuthorA, find authors, papers, and venues that are similar to AuthorA. Note that we can restrict the search to specific entity types. This is a traditional task in scholarly data exploration, whereas the other tasks below are new. Semantic query We can solve this task by looking for the entities with the highest similarity to $ e $. For example, the first result is: Exploration tasks and semantic queries conversion ::: Similar entities with bias Tasks Given an entity $ e \in \mathcal{E} $ and some positive bias entities $ A = \lbrace a_1, \dots , a_k\rbrace $ known as expected results, find entities that are similar to $ e $ following the bias in $ A $. For example, given AuthorA and some successfully collaborating authors, find other similar authors that may also result in good collaborations with AuthorA. Semantic query We can solve this task by looking for the entities with the highest similarity to both $ e $ and $ A $. For example, denoting the arithmetic mean of embedding vectors in $ A $ as $ \bar{A} $, the first result is: Exploration tasks and semantic queries conversion ::: Analogy query Tasks Given an entity $ e \in \mathcal{E} $, positive bias $ A = \lbrace a_1, \dots , a_k\rbrace $, and negative bias $ B = \lbrace b_1, \dots , b_k\rbrace $, find entities that are similar to $ e $ following the biases in $ A $ and $ B $. The essence of this task is tracing along a semantic direction defined by the positive and negative biases. For example, starting with AuthorA, we can trace along the expertise direction to find authors that are similar to AuthorA but with higher or lower expertise. Semantic query We can solve this task by looking for the entities with the highest similarity to $ e $ and $ A $ but not $ B $.
For example, denoting the arithmetic mean of embedding vectors in $ A $ and $ B $ as $ \bar{A} $ and $ \bar{B} $, respectively, and noting that $ \bar{A} - \bar{B} $ defines the semantic direction along the positive and negative biases, the first result is: Exploration tasks and semantic queries conversion ::: Analogy browsing Tasks This task is an extension of the above analogy query task, obtained by tracing along multiple semantic directions defined by multiple pairs of positive and negative biases. This task can be implemented as an interactive data analysis tool. For example, starting with AuthorA, we can trace to authors with higher expertise, then continue tracing to new domains to find all authors similar to AuthorA with high expertise in the new domain. For another example, starting with Paper1, we can trace to papers with higher quality, then continue tracing to a new domain to look for papers similar to Paper1 with high quality in that domain. Semantic query We can solve this task by simply repeating the semantic query for analogy query with each pair of positive and negative biases. Note that we can also combine different operations in different orders to support flexible browsing. Conclusion In this paper, we studied the application of knowledge graph embedding in exploratory data analysis. We analyzed the CP$ _h $ model and provided an understanding of its semantic structures. We then defined the semantic queries on knowledge graph embedding space to efficiently approximate some operations on heterogeneous data such as scholarly data. We designed a general framework to systematically apply semantic queries to solve scholarly data exploration tasks. Finally, we outlined and discussed the solutions to some traditional and pioneering exploration tasks that emerge from the semantic structures of the knowledge graph embedding space. This paper is dedicated to the theoretical foundation of a new approach and discussions of emerging tasks, whereas experiments and evaluations are left for future work. There are several other promising directions for future research. One direction is to explore new tasks or new solutions of traditional tasks using the proposed method. Another direction is to implement the proposed exploration tasks on real-life digital libraries for online evaluation. Acknowledgments This work was supported by “Cross-ministerial Strategic Innovation Promotion Program (SIP) Second Phase, Big-data and AI-enabled Cyberspace Technologies” by New Energy and Industrial Technology Development Organization (NEDO).
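Taken together, the queries in this and the preceding subsections reduce to dot products and vector differences over the head-role entity embeddings. The class below is a minimal NumPy sketch under that reading: the class and method names, the aggregation of similarities via a summed query vector, and the (N, D) embedding-matrix interface are our assumptions, not the framework's actual API.

```python
import numpy as np

class SemanticQuery:
    def __init__(self, embeddings, entity_ids):
        # embeddings: (N, D) matrix of head-role entity embedding vectors (assumed given).
        self.E = np.asarray(embeddings, dtype=float)
        self.ids = list(entity_ids)

    def _vec(self, entity):
        return self.E[self.ids.index(entity)]

    def _top(self, query_vec, exclude, k):
        scores = self.E @ query_vec            # dot-product (semantic similarity structure)
        order = np.argsort(-scores)
        return [self.ids[i] for i in order if self.ids[i] not in exclude][:k]

    def similar(self, e, k=5):
        # Similar entities: highest similarity to e.
        return self._top(self._vec(e), {e}, k)

    def similar_with_bias(self, e, positives, k=5):
        # Similar entities with bias: highest similarity to both e and the mean of A.
        a_bar = np.mean([self._vec(a) for a in positives], axis=0)
        return self._top(self._vec(e) + a_bar, {e, *positives}, k)

    def analogy(self, e, positives, negatives, k=5):
        # Analogy query: trace along the semantic direction A_bar - B_bar starting from e.
        a_bar = np.mean([self._vec(a) for a in positives], axis=0)
        b_bar = np.mean([self._vec(b) for b in negatives], axis=0)
        return self._top(self._vec(e) + a_bar - b_bar, {e, *positives, *negatives}, k)
```

Analogy browsing then amounts to calling `analogy` repeatedly with successive bias pairs, feeding each result back as the new starting entity.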
analogy query, analogy browsing
c6aa8a02597fea802890945f0b4be8d631e4d5cd
c6aa8a02597fea802890945f0b4be8d631e4d5cd_0
Q: What are the uncanny semantic structures of the embedding space? Text: Introduction In recent years, digital libraries have moved towards open science and open access with several large scholarly datasets being constructed. Most popular datasets include millions of papers, authors, venues, and other information. Their large size and heterogeneous contents make it very challenging to effectively manage, explore, and utilize these datasets. The knowledge graph has emerged as a universal data format for representing knowledge about entities and their relationships in such complicated data. The main part of a knowledge graph is a collection of triples, with each triple $ (h, t, r) $ denoting the fact that relation $ r $ exists between head entity $ h $ and tail entity $ t $. This can also be formalized as a labeled directed multigraph where each triple $ (h, t, r) $ represents a directed edge from node $ h $ to node $ t $ with label $ r $. Therefore, it is straightforward to build knowledge graphs for scholarly data by representing natural connections between scholarly entities with triples such as (AuthorA, Paper1, write) and (Paper1, Paper2, cite). Notably, instead of using knowledge graphs directly in some tasks, we can model them by knowledge graph embedding methods, which represent entities and relations as embedding vectors in semantic space, then model the interactions between them to solve the knowledge graph completion task. There are many approaches BIBREF0 to modeling the interactions between embedding vectors resulting in many knowledge graph embedding methods such as ComplEx BIBREF1 and CP$ _h $ BIBREF2. In the case of word embedding methods such as word2vec, embedding vectors are known to contain rich semantic information that enables them to be used in many semantic applications BIBREF3. However, the semantic structures in the knowledge graph embedding space are not well-studied, thus knowledge graph embeddings are only used for knowledge graph completion but remain absent in the toolbox for data analysis of heterogeneous data in general and scholarly data in particular, although they have the potential to be highly effective and efficient. In this paper, we address these issues by providing a theoretical understanding of their semantic structures and designing a general semantic query framework to support data exploration. For theoretical analysis, we first analyze the state-of-the-art knowledge graph embedding model CP$ _h $ BIBREF2 in comparison to the popular word embedding model word2vec skipgram BIBREF3 to explain its components and provide understandings to its semantic structures. We then define the semantic queries on the knowledge graph embedding spaces, which are algebraic operations between the embedding vectors in the knowledge graph embedding space to solve queries such as similarity and analogy between the entities on the original datasets. Based on our theoretical results, we design a general framework for data exploration on scholarly data by semantic queries on knowledge graph embedding space. The main component in this framework is the conversion between the data exploration tasks and the semantic queries. We first outline the semantic query solutions to some traditional data exploration tasks, such as similar paper prediction and similar author prediction. We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries. 
Related Work ::: Knowledge graph for scholarly data Knowledge graph has gradually become the standard data format for heterogeneous and complicated datasets BIBREF4. There have been several attempts to build knowledge graph for scholarly data, either adopting the scholarly network directly BIBREF5, or deriving the knowledge graph from some similarity measures BIBREF6 BIBREF7, or constructing the knowledge graph from survey papers BIBREF8. However, they mostly focus on the data format or graph inference aspects of knowledge graph. In this paper, we instead focus on the knowledge graph embedding methods and especially the application of embedding vectors in data exploration. Related Work ::: Knowledge graph embedding For a more in-depth survey of knowledge graph embedding methods, please refer to BIBREF0, which defines their architecture, categorization, and interaction mechanisms. In this paper, we only focus on the semantic structures of the state-of-the-art model CP$ _h $ BIBREF2, which is an extension of CP BIBREF9. In CP, each entity $ e $ has two embedding vectors $ \mathbf{e} $ and $ \mathbf{e}^{(2)} $ depending on its role in a triple as head or as tail, respectively. CP$ _h $ augments the data by making an inverse triple $ (t, h, r^{(a)}) $ for each existing triple $ (h, t, r) $, where $ r^{(a)} $ is the augmented relation corresponding to $ r $. When maximizing the likelihood by stochastic gradient descent, its score function is the sum: where $ \mathbf{h}, \mathbf{h}^{(2)}, \mathbf{t}, \mathbf{t}^{(2)}, \mathbf{r}, \mathbf{r}^{(a)} \in \mathbb{R}^{D} $ are the embedding vectors of $ h $, $ t $, and $ r $, respectively, and the trilinear-product $ \langle \cdot , \cdot , \cdot \rangle $ is defined as: where $ D $ is the embedding size and $ d $ is the dimension for which $ h_d $, $ t_d $, and $ r_d $ are the scalar entries. The validity of each triple is modeled as a Bernoulli distribution and its validity probability is computed by the standard logistic function $ \sigma (\cdot ) $ as: Related Work ::: Word embedding The most popular word embedding models in recent years are word2vec variants such as word2vec skipgram BIBREF3, which predicts the context-words $ c_i $ independently given the target-word $ w $, that is: In practice, the expensive softmax functions in these multinoulli distributions are avoided by approximating them with negative sampling and solving for the Bernoulli distributions by using the standard logistic function $ \sigma (\cdot ) $: where $ \mathbf{u}_{c_i} $ is the context-embedding vector of context-word $ c_i $ and $ \mathbf{v}_w $ is the word-embedding vector of target-word $ w $. Theoretical analysis Word2vec skipgram and its semantic structures are well-studied both theoretically and empirically BIBREF3. CP$ _h $ is a new state of the art among many knowledge graph embedding models. We first ground the theoretical basis of CP$ _h $ on word2vec skipgram to explain its components and understand its semantic structures. We then define semantic queries on knowledge graph embedding space. Theoretical analysis ::: The semantic structures of CP$ _h $ We first look at Eq. DISPLAY_FORM8 of word2vec skipgram and consider only one context-word $ c $ for simplicity. We can write the probability in proportional format as: Note that the context-word $ c $ and target-word $ w $ are ordered and, in word2vec skipgram, the target-word is the central word in a sliding window, e.g., $ w_i $ is the target-word and $ w_{i-k}, \dots , w_{i-1}, w_{i+1}, \dots , w_{i+k} $ are context-words. Therefore, the roles in each word pair are symmetric over the whole dataset.
When maximizing the likelihood by stochastic gradient descent, we can write the approximate probability of an unordered word pair and expand the dot products as: where $ \mathbf{u}_c $ and $ \mathbf{v}_c $ are the context-embedding and word-embedding vectors of $ c $, respectively, $ \mathbf{u}_w $ and $ \mathbf{v}_w $ are the context-embedding and word-embedding vectors of $ w $, respectively, and $ {u_c}_d, {v_c}_d, {u_w}_d $, and $ {v_w}_d $ are their scalar entries, respectively. We now return to Eq. DISPLAY_FORM3 of CP$ _h $ to also write the probability in Eq. DISPLAY_FORM5 in proportional format and expand the trilinear products according to Eq. DISPLAY_FORM4 as: where $ \mathbf{h}, \mathbf{h}^{(2)} $, $ \mathbf{t}, \mathbf{t}^{(2)} $, $ \mathbf{r}, \mathbf{r}^{(a)} $ are knowledge graph embedding vectors and $ h_d, h^{(2)}_d $, $ t_d, t^{(2)}_d $, $ r_d, r^{(a)}_d $ are the scalar entries. Comparing the above equations of word2vec skipgram and CP$ _h $, we can see they have essentially the same form and mechanism. Note that the embedding vectors in word2vec skipgram are learned by aligning each target-word to different context-words and vice versa, which is essentially the same for CP$ _h $ by aligning each head entity to different tail entities in different triples and vice versa, with regard to the dimensions weighted by each relation. This result suggests that the semantic structures of CP$ _h $ are similar to those in word2vec skipgram and we can use the head-role-based entity embedding vectors, such as $ \mathbf{e} $, for semantic applications similarly to word embedding vectors. The tail-role-based entity embedding vectors, such as $ \mathbf{e}^{(2)} $, contain almost the same information due to their symmetric roles and can thus be discarded in semantic tasks, which justifies this common practice in word embedding applications BIBREF3. Theoretical analysis ::: Semantic query We are mainly concerned with the following two structures of the embedding space. Semantic similarity structure: Semantically similar entities are close to each other in the embedding space, and vice versa. This structure can be identified by a vector similarity measure, such as the dot product between two embedding vectors. The similarity between two embedding vectors is computed as: Semantic direction structure: There exist semantic directions in the embedding space, by which only one semantic aspect changes while all other aspects stay the same. It can be identified by a vector difference, such as the subtraction between two embedding vectors. The semantic direction between two embedding vectors is computed as: The algebraic operations, which include the above dot product and vector subtraction, or their combinations, can be used to approximate some important tasks on the original data. To do this, we first need to convert the data exploration task to the appropriate operations. We then conduct the operations on the embedding vectors and obtain the results. This process is defined as follows. Definition 1 Semantic queries on knowledge graph embedding space are defined as the algebraic operations between the knowledge graph embedding vectors to approximate a given data exploration task on the original dataset. Semantic query framework Given the theoretical results, here we design a general framework for scholarly data exploration by using semantic queries on knowledge graph embedding space. Figure FIGREF19 shows the architecture of the proposed framework. There are three main components, namely data processing, task processing, and query processing.
Data processing: with two steps, (1) constructing the knowledge graph from scholarly data by using the scholarly graph directly with entities such as authors, papers, venues, and relations such as author-write-paper, paper-cite-paper, paper-in-venue, and (2) learning the knowledge graph embeddings as in BIBREF0. Task processing: converting data exploration tasks to algebraic operations on the embedding space by following task-specific conversion templates. Some important tasks and their conversion templates are discussed in Section SECREF5. Query processing: executing semantic query on the embedding space and return results. Note that the algebraic operations on embedding vectors are linear and can be performed in parallel. Therefore, the semantic query is efficient. Note that the proposed semantic query framework makes no assumption on the specific knowledge graph embedding models and the induced embedding spaces. Any embedding space that contains rich semantic information such as the listed semantic structures can be applied in this framework. Exploration tasks and semantic queries conversion Here we present and discuss the semantic queries for some traditional and newly proposed data exploration tasks on scholarly data. Exploration tasks and semantic queries conversion ::: Similar entities Tasks Given an entity $ e \in $, find entities that are similar to $ e $. For example, given AuthorA, find authors, papers, and venues that are similar to AuthorA. Note that we can restrict to find specific entity types. This is a traditional tasks in scholarly data exploration, whereas other below tasks are new. Semantic query We can solve this task by looking for the entities with highest similarity to $ e $. For example, the first result is: Exploration tasks and semantic queries conversion ::: Similar entities with bias Tasks Given an entity $ e \in $ and some positive bias entities $ A = \lbrace a_1, \dots , a_k\rbrace $ known as expected results, find entities that are similar to $ e $ following the bias in $ A $. For example, given AuthorA and some successfully collaborating authors, find other similar authors that may also result in good collaborations with AuthorA. Semantic query We can solve this task by looking for the entities with highest similarity to both $ e $ and $ A $. For example, denoting the arithmetic mean of embedding vectors in $ A $ as $ \bar{A} $, the first result is: Exploration tasks and semantic queries conversion ::: Analogy query Tasks Given an entity $ e \in $, positive bias $ A = \lbrace a_1, \dots , a_k\rbrace $, and negative bias $ B = \lbrace b_1, \dots , b_k\rbrace $, find entities that are similar to $ e $ following the biases in $ A $ and $ B $. The essence of this task is tracing along a semantic direction defined by the positive and negative biases. For example, start with AuthorA, we can trace along the expertise direction to find authors that are similar to AuthorA but with higher or lower expertise. Semantic query We can solve this task by looking for the entities with highest similarity to $ e $ and $ A $ but not $ B $. 
For example, denoting the arithmetic mean of embedding vectors in $ A $ and $ B $ as $ \bar{A} $ and $ \bar{B} $, respectively, note that $ \bar{A} - \bar{B} $ defines the semantic direction along the positive and negative biases, the first result is: Exploration tasks and semantic queries conversion ::: Analogy browsing Tasks This task is an extension of the above analogy query task, by tracing along multiple semantic directions defined by multiple pairs of positive and negative biases. This task can be implemented as an interactive data analysis tool. For example, start with AuthorA, we can trace to authors with higher expertise, then continue tracing to new domains to find all authors similar to AuthorA with high expertise in the new domain. For another example, start with Paper1, we can trace to papers with higher quality, then continue tracing to new domain to look for papers similar to Paper1 with high quality in the new domain. Semantic query We can solve this task by simply repeating the semantic query for analogy query with each pair of positive and negative bias. Note that we can also combine different operations in different order to support flexible browsing. Conclusion In this paper, we studied the application of knowledge graph embedding in exploratory data analysis. We analyzed the CP$ _h $ model and provided understandings to its semantic structures. We then defined the semantic queries on knowledge graph embedding space to efficiently approximate some operations on heterogeneous data such as scholarly data. We designed a general framework to systematically apply semantic queries to solve scholarly data exploration tasks. Finally, we outlined and discussed the solutions to some traditional and pioneering exploration tasks emerged from the semantic structures of the knowledge graph embedding space. This paper is dedicated to the theoretical foundation of a new approach and discussions of emerging tasks, whereas experiments and evaluations are left for the future work. There are several other promising directions for future research. One direction is to explore new tasks or new solutions of traditional tasks using the proposed method. Another direction is to implement the proposed exploration tasks on real-life digital libraries for online evaluation. Acknowledgments This work was supported by “Cross-ministerial Strategic Innovation Promotion Program (SIP) Second Phase, Big-data and AI-enabled Cyberspace Technologies” by New Energy and Industrial Technology Development Organization (NEDO). 1.0
Semantic similarity structure, Semantic direction structure
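The two structures named in the answer above, semantic similarity and semantic direction, reduce to a dot product and a vector difference respectively; the numpy sketch below shows both operations on toy embeddings, with entity names and vectors invented purely for illustration.

```python
import numpy as np

def similarity(x, y):
    """Semantic similarity: dot product between two embedding vectors."""
    return float(np.dot(x, y))

def direction(x, y):
    """Semantic direction: vector difference between two embedding vectors."""
    return x - y

def most_similar(query_vec, entity_vecs, top_k=3):
    """Rank entities by dot-product similarity to a query vector."""
    scores = {name: similarity(query_vec, v) for name, v in entity_vecs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy head-role entity embeddings; names are purely illustrative.
    entities = {name: rng.normal(size=16)
                for name in ["AuthorA", "AuthorB", "Paper1", "Paper2", "Venue1"]}
    others = {n: v for n, v in entities.items() if n != "AuthorA"}
    print(most_similar(entities["AuthorA"], others))
    print(direction(entities["AuthorA"], entities["AuthorB"])[:4])
```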
bfad30f51ce3deea8a178944fa4c6e8acdd83a48
bfad30f51ce3deea8a178944fa4c6e8acdd83a48_0
Q: What is the general framework for data exploration by semantic queries? Text: Introduction In recent years, digital libraries have moved towards open science and open access with several large scholarly datasets being constructed. Most popular datasets include millions of papers, authors, venues, and other information. Their large size and heterogeneous contents make it very challenging to effectively manage, explore, and utilize these datasets. The knowledge graph has emerged as a universal data format for representing knowledge about entities and their relationships in such complicated data. The main part of a knowledge graph is a collection of triples, with each triple $ (h, t, r) $ denoting the fact that relation $ r $ exists between head entity $ h $ and tail entity $ t $. This can also be formalized as a labeled directed multigraph where each triple $ (h, t, r) $ represents a directed edge from node $ h $ to node $ t $ with label $ r $. Therefore, it is straightforward to build knowledge graphs for scholarly data by representing natural connections between scholarly entities with triples such as (AuthorA, Paper1, write) and (Paper1, Paper2, cite). Notably, instead of using knowledge graphs directly in some tasks, we can model them by knowledge graph embedding methods, which represent entities and relations as embedding vectors in semantic space, then model the interactions between them to solve the knowledge graph completion task. There are many approaches BIBREF0 to modeling the interactions between embedding vectors resulting in many knowledge graph embedding methods such as ComplEx BIBREF1 and CP$ _h $ BIBREF2. In the case of word embedding methods such as word2vec, embedding vectors are known to contain rich semantic information that enables them to be used in many semantic applications BIBREF3. However, the semantic structures in the knowledge graph embedding space are not well-studied, thus knowledge graph embeddings are only used for knowledge graph completion but remain absent in the toolbox for data analysis of heterogeneous data in general and scholarly data in particular, although they have the potential to be highly effective and efficient. In this paper, we address these issues by providing a theoretical understanding of their semantic structures and designing a general semantic query framework to support data exploration. For theoretical analysis, we first analyze the state-of-the-art knowledge graph embedding model CP$ _h $ BIBREF2 in comparison to the popular word embedding model word2vec skipgram BIBREF3 to explain its components and provide understandings to its semantic structures. We then define the semantic queries on the knowledge graph embedding spaces, which are algebraic operations between the embedding vectors in the knowledge graph embedding space to solve queries such as similarity and analogy between the entities on the original datasets. Based on our theoretical results, we design a general framework for data exploration on scholarly data by semantic queries on knowledge graph embedding space. The main component in this framework is the conversion between the data exploration tasks and the semantic queries. We first outline the semantic query solutions to some traditional data exploration tasks, such as similar paper prediction and similar author prediction. We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries. 
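To make the triple representation described in this introduction concrete, the short sketch below stores a few scholarly facts as (head, tail, relation) triples and indexes them as a labeled directed multigraph; the example entities and relations are illustrative assumptions only.

```python
from collections import defaultdict

# A few scholarly facts as (head, tail, relation) triples.
triples = [
    ("AuthorA", "Paper1", "write"),
    ("Paper1", "Paper2", "cite"),
    ("Paper1", "Venue1", "in_venue"),
]

# Labeled directed multigraph: head entity -> list of (tail, relation) edges.
graph = defaultdict(list)
for h, t, r in triples:
    graph[h].append((t, r))

def outgoing(entity):
    """Return the outgoing labeled edges of an entity."""
    return graph.get(entity, [])

if __name__ == "__main__":
    print(outgoing("Paper1"))  # [('Paper2', 'cite'), ('Venue1', 'in_venue')]
```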
Related Work ::: Knowledge graph for scholarly data Knowledge graph has gradually become the standard data format for heterogeneous and complicated datasets BIBREF4. There have been several attempts to build knowledge graph for scholarly data, either adopting the scholarly network directly BIBREF5, or deriving the knowledge graph from some similarity measures BIBREF6 BIBREF7, or constructing the knowledge graph from survey papers BIBREF8. However, they mostly focus on the data format or graph inference aspects of knowledge graph. In this paper, we instead focus on the knowledge graph embedding methods and especially the application of embedding vectors in data exploration. Related Work ::: Knowledge graph embedding For a more in depth survey of knowledge graph embedding methods, please refer to BIBREF0, which defines their architecture, categorization, and interaction mechanisms. In this paper, we only focus on the semantic structures of the state-of-the-art model CP$ _h $ BIBREF2, which is an extension of CP BIBREF9. In CP, each entity $ e $ has two embedding vectors $ $ and $ ^{(2)} $ depending on its role in a triple as head or as tail, respectively. CP$ _h $ augments the data by making an inverse triple $ (t, h, r^{(a)}) $ for each existing triple $ (h, t, r) $, where $ r^{(a)} $ is the augmented relation corresponding to $ r $. When maximizing the likelihood by stochastic gradient descent, its score function is the sum: where $ , ^{(2)}, , ^{(2)}, , ^{(a)} \in ^{D} $ are the embedding vectors of $ h $, $ t $, and $ r $, respectively, and the trilinear-product $ \langle \cdot , \cdot , \cdot \rangle $ is defined as: where $ D $ is the embedding size and $ d $ is the dimension for which $ h_d $, $ t_d $, and $ r_d $ are the scalar entries. The validity of each triple is modeled as a Bernoulli distribution and its validity probability is computed by the standard logistic function $ \sigma (\cdot ) $ as: Related Work ::: Word embedding The most popular word embedding models in recent years are word2vec variants such as word2vec skipgram BIBREF3, which predicts the context-words $ c_i $ independently given the target-word $ w $, that is: In practice, the expensive softmax functions in these multinoulli distributions are avoided by approximating them with negative sampling and solve for the Bernoulli distributions by using the standard logistic function $ \sigma (\cdot ) $: where $ _{c_i} $ is the context-embedding vector of context-word $ c_i $ and $ _w $ is the word-embedding vector of target-word $ w $. Theoretical analysis Word2vec skipgram and its semantic structures are well-studied both theoretically and empirically BIBREF3. CP$ _h $ is a new state of the art among many knowledge graph embedding models. We first ground the theoretical basis of CP$ _h $ on word2vec skipgram to explain its components and understand its semantic structures. We then define semantic queries on knowledge graph embedding space. Theoretical analysis ::: The semantic structures of CP@!START@$ _h $@!END@ We first look at Eq. DISPLAY_FORM8 of word2vec skipgram and consider only one context-word $ c $ for simplicity. We can write the probability in proportional format as: Note that the context-word $ c $ and target-word $ w $ are ordered and in word2vec skipgram, the target-word is the central word in a sliding window, e.g., $ w_i $ is the target-word and $ w_{i-k}, \dots , w_{i-1}, w_{i+1}, \dots , w_{i+k} $ are context-words. Therefore, the roles in each word pair are symmetric over the whole dataset. 
When maximizing the likelihood by stochastic gradient descent, we can write the approximate probability of unordered word pair and expand the dot products as: where $ _c $ and $ _c $ are the context-embedding and word-embedding vectors of $ c $, respectively, $ _w $ and $ _w $ are the context-embedding and word-embedding vectors of $ w $, respectively, and $ {u_c}_d, {v_c}_d, {u_w}_d $, and $ {v_w}_d $ are their scalar entries, respectively. We now return to Eq. DISPLAY_FORM3 of CP$ _h $ to also write the probability in Eq. DISPLAY_FORM5 in proportional format and expand the trilinear products according to Eq. DISPLAY_FORM4 as: where $ , ^{(2)} $, $ , ^{(2)} $, $ , ^{(a)} $ are knowledge graph embedding vectors and $ h_d, h^{(2)}_d $, $ t_d, t^{(2)}_d $, $ r_d, r^{(a)}_d $ are the scalar entries. Comparing Eq. of word2vec skipgram and Eq. of CP$ _h $, we can see they have essentially the same form and mechanism. Note that the embedding vectors in word2vec skipgram are learned by aligning each target-word to different context-words and vice versa, which is essentially the same for CP$ _h $ by aligning each head entity to different tail entities in different triples and vice versa, with regards to the dimensions weighted by each relation. This result suggests that the semantic structures of CP$ _h $ are similar to those in word2vec skipgram and we can use the head-role-based entity embedding vectors, such as $ $, for semantic applications similarly to word embedding vectors. The tail-role-based entity embedding vectors, such as $ ^{(2)} $, contain almost the same information due to their symmetric roles, thus can be discarded in semantic tasks, which justifies this common practices in word embedding applications BIBREF3. Theoretical analysis ::: Semantic query We mainly concern with the two following structures of the embedding space. Semantic similarity structure: Semantically similar entities are close to each other in the embedding space, and vice versa. This structure can be identified by a vector similarity measure, such as the dot product between two embedding vectors. The similarity between two embedding vectors is computed as: Semantic direction structure: There exist semantic directions in the embedding space, by which only one semantic aspect changes while all other aspects stay the same. It can be identified by a vector difference, such as the subtraction between two embedding vectors. The semantic direction between two embedding vectors is computed as: The algebraic operations, which include the above dot product and vector subtraction, or their combinations, can be used to approximate some important tasks on the original data. To do this, we first need to convert the data exploration task to the appropriate operations. We then conduct the operations on the embedding vectors and obtain the results. This process is defined as following. Definition 1 Semantic queries on knowledge graph embedding space are defined as the algebraic operations between the knowledge graph embedding vectors to approximate a given data exploration task on the original dataset. Semantic query framework Given the theoretical results, here we design a general framework for scholarly data exploration by using semantic queries on knowledge graph embedding space. Figure FIGREF19 shows the architecture of the proposed framework. There are three main components, namely data processing, task processing, and query processing. 
Data processing: with two steps, (1) constructing the knowledge graph from scholarly data by using the scholarly graph directly with entities such as authors, papers, venues, and relations such as author-write-paper, paper-cite-paper, paper-in-venue, and (2) learning the knowledge graph embeddings as in BIBREF0. Task processing: converting data exploration tasks to algebraic operations on the embedding space by following task-specific conversion templates. Some important tasks and their conversion templates are discussed in Section SECREF5. Query processing: executing semantic query on the embedding space and return results. Note that the algebraic operations on embedding vectors are linear and can be performed in parallel. Therefore, the semantic query is efficient. Note that the proposed semantic query framework makes no assumption on the specific knowledge graph embedding models and the induced embedding spaces. Any embedding space that contains rich semantic information such as the listed semantic structures can be applied in this framework. Exploration tasks and semantic queries conversion Here we present and discuss the semantic queries for some traditional and newly proposed data exploration tasks on scholarly data. Exploration tasks and semantic queries conversion ::: Similar entities Tasks Given an entity $ e \in $, find entities that are similar to $ e $. For example, given AuthorA, find authors, papers, and venues that are similar to AuthorA. Note that we can restrict to find specific entity types. This is a traditional tasks in scholarly data exploration, whereas other below tasks are new. Semantic query We can solve this task by looking for the entities with highest similarity to $ e $. For example, the first result is: Exploration tasks and semantic queries conversion ::: Similar entities with bias Tasks Given an entity $ e \in $ and some positive bias entities $ A = \lbrace a_1, \dots , a_k\rbrace $ known as expected results, find entities that are similar to $ e $ following the bias in $ A $. For example, given AuthorA and some successfully collaborating authors, find other similar authors that may also result in good collaborations with AuthorA. Semantic query We can solve this task by looking for the entities with highest similarity to both $ e $ and $ A $. For example, denoting the arithmetic mean of embedding vectors in $ A $ as $ \bar{A} $, the first result is: Exploration tasks and semantic queries conversion ::: Analogy query Tasks Given an entity $ e \in $, positive bias $ A = \lbrace a_1, \dots , a_k\rbrace $, and negative bias $ B = \lbrace b_1, \dots , b_k\rbrace $, find entities that are similar to $ e $ following the biases in $ A $ and $ B $. The essence of this task is tracing along a semantic direction defined by the positive and negative biases. For example, start with AuthorA, we can trace along the expertise direction to find authors that are similar to AuthorA but with higher or lower expertise. Semantic query We can solve this task by looking for the entities with highest similarity to $ e $ and $ A $ but not $ B $. 
For example, denoting the arithmetic mean of embedding vectors in $ A $ and $ B $ as $ \bar{A} $ and $ \bar{B} $, respectively, note that $ \bar{A} - \bar{B} $ defines the semantic direction along the positive and negative biases, the first result is: Exploration tasks and semantic queries conversion ::: Analogy browsing Tasks This task is an extension of the above analogy query task, by tracing along multiple semantic directions defined by multiple pairs of positive and negative biases. This task can be implemented as an interactive data analysis tool. For example, start with AuthorA, we can trace to authors with higher expertise, then continue tracing to new domains to find all authors similar to AuthorA with high expertise in the new domain. For another example, start with Paper1, we can trace to papers with higher quality, then continue tracing to new domain to look for papers similar to Paper1 with high quality in the new domain. Semantic query We can solve this task by simply repeating the semantic query for analogy query with each pair of positive and negative bias. Note that we can also combine different operations in different order to support flexible browsing. Conclusion In this paper, we studied the application of knowledge graph embedding in exploratory data analysis. We analyzed the CP$ _h $ model and provided understandings to its semantic structures. We then defined the semantic queries on knowledge graph embedding space to efficiently approximate some operations on heterogeneous data such as scholarly data. We designed a general framework to systematically apply semantic queries to solve scholarly data exploration tasks. Finally, we outlined and discussed the solutions to some traditional and pioneering exploration tasks emerged from the semantic structures of the knowledge graph embedding space. This paper is dedicated to the theoretical foundation of a new approach and discussions of emerging tasks, whereas experiments and evaluations are left for the future work. There are several other promising directions for future research. One direction is to explore new tasks or new solutions of traditional tasks using the proposed method. Another direction is to implement the proposed exploration tasks on real-life digital libraries for online evaluation. Acknowledgments This work was supported by “Cross-ministerial Strategic Innovation Promotion Program (SIP) Second Phase, Big-data and AI-enabled Cyberspace Technologies” by New Energy and Industrial Technology Development Organization (NEDO). 1.0
three main components, namely data processing, task processing, and query processing
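To make the three components listed in this answer concrete, here is a minimal, hedged sketch of such a pipeline; the class and method names are invented for illustration, and the embedding-learning step is stubbed out with random vectors instead of actual CP$ _h $ training.

```python
import numpy as np

class SemanticQueryFramework:
    """Toy sketch of the three-component framework:
    data processing -> task processing -> query processing."""

    def __init__(self, embedding_dim=16, seed=0):
        self.dim = embedding_dim
        self.rng = np.random.default_rng(seed)
        self.embeddings = {}

    def data_processing(self, triples):
        """(1) Build the KG from scholarly triples, (2) 'learn' embeddings.
        Real training (e.g., CP_h) is replaced by random vectors here."""
        entities = {e for h, t, _ in triples for e in (h, t)}
        self.embeddings = {e: self.rng.normal(size=self.dim) for e in entities}

    def task_processing(self, target, bias=()):
        """Convert an exploration task to algebraic operations:
        here, 'similar entities with an optional positive bias'."""
        q = self.embeddings[target].copy()
        if bias:
            q = q + np.mean([self.embeddings[b] for b in bias], axis=0)
        return q

    def query_processing(self, query_vec, exclude=(), top_k=3):
        """Execute the query: rank entities by dot-product similarity."""
        scores = {e: float(v @ query_vec)
                  for e, v in self.embeddings.items() if e not in exclude}
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

if __name__ == "__main__":
    fw = SemanticQueryFramework()
    fw.data_processing([("AuthorA", "Paper1", "write"),
                        ("AuthorB", "Paper1", "write"),
                        ("Paper1", "Paper2", "cite")])
    q = fw.task_processing("AuthorA", bias=("AuthorB",))
    print(fw.query_processing(q, exclude=("AuthorA",)))
```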
dd9883f4adf7be072d314d7ed13fe4518c5500e0
dd9883f4adf7be072d314d7ed13fe4518c5500e0_0
Q: What data exploration is supported by the analysis of these semantic structures? Text: Introduction In recent years, digital libraries have moved towards open science and open access with several large scholarly datasets being constructed. Most popular datasets include millions of papers, authors, venues, and other information. Their large size and heterogeneous contents make it very challenging to effectively manage, explore, and utilize these datasets. The knowledge graph has emerged as a universal data format for representing knowledge about entities and their relationships in such complicated data. The main part of a knowledge graph is a collection of triples, with each triple $ (h, t, r) $ denoting the fact that relation $ r $ exists between head entity $ h $ and tail entity $ t $. This can also be formalized as a labeled directed multigraph where each triple $ (h, t, r) $ represents a directed edge from node $ h $ to node $ t $ with label $ r $. Therefore, it is straightforward to build knowledge graphs for scholarly data by representing natural connections between scholarly entities with triples such as (AuthorA, Paper1, write) and (Paper1, Paper2, cite). Notably, instead of using knowledge graphs directly in some tasks, we can model them by knowledge graph embedding methods, which represent entities and relations as embedding vectors in semantic space, then model the interactions between them to solve the knowledge graph completion task. There are many approaches BIBREF0 to modeling the interactions between embedding vectors resulting in many knowledge graph embedding methods such as ComplEx BIBREF1 and CP$ _h $ BIBREF2. In the case of word embedding methods such as word2vec, embedding vectors are known to contain rich semantic information that enables them to be used in many semantic applications BIBREF3. However, the semantic structures in the knowledge graph embedding space are not well-studied, thus knowledge graph embeddings are only used for knowledge graph completion but remain absent in the toolbox for data analysis of heterogeneous data in general and scholarly data in particular, although they have the potential to be highly effective and efficient. In this paper, we address these issues by providing a theoretical understanding of their semantic structures and designing a general semantic query framework to support data exploration. For theoretical analysis, we first analyze the state-of-the-art knowledge graph embedding model CP$ _h $ BIBREF2 in comparison to the popular word embedding model word2vec skipgram BIBREF3 to explain its components and provide understandings to its semantic structures. We then define the semantic queries on the knowledge graph embedding spaces, which are algebraic operations between the embedding vectors in the knowledge graph embedding space to solve queries such as similarity and analogy between the entities on the original datasets. Based on our theoretical results, we design a general framework for data exploration on scholarly data by semantic queries on knowledge graph embedding space. The main component in this framework is the conversion between the data exploration tasks and the semantic queries. We first outline the semantic query solutions to some traditional data exploration tasks, such as similar paper prediction and similar author prediction. We then propose a group of new interesting tasks, such as analogy query and analogy browsing, and discuss how they can be used in modern digital libraries. 
Related Work ::: Knowledge graph for scholarly data Knowledge graph has gradually become the standard data format for heterogeneous and complicated datasets BIBREF4. There have been several attempts to build knowledge graph for scholarly data, either adopting the scholarly network directly BIBREF5, or deriving the knowledge graph from some similarity measures BIBREF6 BIBREF7, or constructing the knowledge graph from survey papers BIBREF8. However, they mostly focus on the data format or graph inference aspects of knowledge graph. In this paper, we instead focus on the knowledge graph embedding methods and especially the application of embedding vectors in data exploration. Related Work ::: Knowledge graph embedding For a more in depth survey of knowledge graph embedding methods, please refer to BIBREF0, which defines their architecture, categorization, and interaction mechanisms. In this paper, we only focus on the semantic structures of the state-of-the-art model CP$ _h $ BIBREF2, which is an extension of CP BIBREF9. In CP, each entity $ e $ has two embedding vectors $ $ and $ ^{(2)} $ depending on its role in a triple as head or as tail, respectively. CP$ _h $ augments the data by making an inverse triple $ (t, h, r^{(a)}) $ for each existing triple $ (h, t, r) $, where $ r^{(a)} $ is the augmented relation corresponding to $ r $. When maximizing the likelihood by stochastic gradient descent, its score function is the sum: where $ , ^{(2)}, , ^{(2)}, , ^{(a)} \in ^{D} $ are the embedding vectors of $ h $, $ t $, and $ r $, respectively, and the trilinear-product $ \langle \cdot , \cdot , \cdot \rangle $ is defined as: where $ D $ is the embedding size and $ d $ is the dimension for which $ h_d $, $ t_d $, and $ r_d $ are the scalar entries. The validity of each triple is modeled as a Bernoulli distribution and its validity probability is computed by the standard logistic function $ \sigma (\cdot ) $ as: Related Work ::: Word embedding The most popular word embedding models in recent years are word2vec variants such as word2vec skipgram BIBREF3, which predicts the context-words $ c_i $ independently given the target-word $ w $, that is: In practice, the expensive softmax functions in these multinoulli distributions are avoided by approximating them with negative sampling and solve for the Bernoulli distributions by using the standard logistic function $ \sigma (\cdot ) $: where $ _{c_i} $ is the context-embedding vector of context-word $ c_i $ and $ _w $ is the word-embedding vector of target-word $ w $. Theoretical analysis Word2vec skipgram and its semantic structures are well-studied both theoretically and empirically BIBREF3. CP$ _h $ is a new state of the art among many knowledge graph embedding models. We first ground the theoretical basis of CP$ _h $ on word2vec skipgram to explain its components and understand its semantic structures. We then define semantic queries on knowledge graph embedding space. Theoretical analysis ::: The semantic structures of CP@!START@$ _h $@!END@ We first look at Eq. DISPLAY_FORM8 of word2vec skipgram and consider only one context-word $ c $ for simplicity. We can write the probability in proportional format as: Note that the context-word $ c $ and target-word $ w $ are ordered and in word2vec skipgram, the target-word is the central word in a sliding window, e.g., $ w_i $ is the target-word and $ w_{i-k}, \dots , w_{i-1}, w_{i+1}, \dots , w_{i+k} $ are context-words. Therefore, the roles in each word pair are symmetric over the whole dataset. 
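For the word2vec skipgram objective with negative sampling described in the word embedding discussion above, a minimal numpy sketch of the per-pair loss follows before the derivation continues; the toy vectors and the number of negative samples are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def skipgram_ns_loss(v_w, u_pos, u_negs):
    """Negative-sampling loss for one (target-word, context-word) pair.
    v_w   : word-embedding vector of the target word w
    u_pos : context-embedding vector of the observed context word
    u_negs: context-embedding vectors of sampled negative words
    """
    loss = -np.log(sigmoid(np.dot(u_pos, v_w)))
    for u_neg in u_negs:
        loss -= np.log(sigmoid(-np.dot(u_neg, v_w)))
    return float(loss)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    D = 16
    v_w = rng.normal(size=D)
    u_pos = rng.normal(size=D)
    u_negs = [rng.normal(size=D) for _ in range(5)]  # 5 negative samples
    print(f"loss = {skipgram_ns_loss(v_w, u_pos, u_negs):.3f}")
```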
When maximizing the likelihood by stochastic gradient descent, we can write the approximate probability of unordered word pair and expand the dot products as: where $ _c $ and $ _c $ are the context-embedding and word-embedding vectors of $ c $, respectively, $ _w $ and $ _w $ are the context-embedding and word-embedding vectors of $ w $, respectively, and $ {u_c}_d, {v_c}_d, {u_w}_d $, and $ {v_w}_d $ are their scalar entries, respectively. We now return to Eq. DISPLAY_FORM3 of CP$ _h $ to also write the probability in Eq. DISPLAY_FORM5 in proportional format and expand the trilinear products according to Eq. DISPLAY_FORM4 as: where $ , ^{(2)} $, $ , ^{(2)} $, $ , ^{(a)} $ are knowledge graph embedding vectors and $ h_d, h^{(2)}_d $, $ t_d, t^{(2)}_d $, $ r_d, r^{(a)}_d $ are the scalar entries. Comparing Eq. of word2vec skipgram and Eq. of CP$ _h $, we can see they have essentially the same form and mechanism. Note that the embedding vectors in word2vec skipgram are learned by aligning each target-word to different context-words and vice versa, which is essentially the same for CP$ _h $ by aligning each head entity to different tail entities in different triples and vice versa, with regards to the dimensions weighted by each relation. This result suggests that the semantic structures of CP$ _h $ are similar to those in word2vec skipgram and we can use the head-role-based entity embedding vectors, such as $ $, for semantic applications similarly to word embedding vectors. The tail-role-based entity embedding vectors, such as $ ^{(2)} $, contain almost the same information due to their symmetric roles, thus can be discarded in semantic tasks, which justifies this common practices in word embedding applications BIBREF3. Theoretical analysis ::: Semantic query We mainly concern with the two following structures of the embedding space. Semantic similarity structure: Semantically similar entities are close to each other in the embedding space, and vice versa. This structure can be identified by a vector similarity measure, such as the dot product between two embedding vectors. The similarity between two embedding vectors is computed as: Semantic direction structure: There exist semantic directions in the embedding space, by which only one semantic aspect changes while all other aspects stay the same. It can be identified by a vector difference, such as the subtraction between two embedding vectors. The semantic direction between two embedding vectors is computed as: The algebraic operations, which include the above dot product and vector subtraction, or their combinations, can be used to approximate some important tasks on the original data. To do this, we first need to convert the data exploration task to the appropriate operations. We then conduct the operations on the embedding vectors and obtain the results. This process is defined as following. Definition 1 Semantic queries on knowledge graph embedding space are defined as the algebraic operations between the knowledge graph embedding vectors to approximate a given data exploration task on the original dataset. Semantic query framework Given the theoretical results, here we design a general framework for scholarly data exploration by using semantic queries on knowledge graph embedding space. Figure FIGREF19 shows the architecture of the proposed framework. There are three main components, namely data processing, task processing, and query processing. 
Data processing: with two steps, (1) constructing the knowledge graph from scholarly data by using the scholarly graph directly with entities such as authors, papers, venues, and relations such as author-write-paper, paper-cite-paper, paper-in-venue, and (2) learning the knowledge graph embeddings as in BIBREF0. Task processing: converting data exploration tasks to algebraic operations on the embedding space by following task-specific conversion templates. Some important tasks and their conversion templates are discussed in Section SECREF5. Query processing: executing semantic query on the embedding space and return results. Note that the algebraic operations on embedding vectors are linear and can be performed in parallel. Therefore, the semantic query is efficient. Note that the proposed semantic query framework makes no assumption on the specific knowledge graph embedding models and the induced embedding spaces. Any embedding space that contains rich semantic information such as the listed semantic structures can be applied in this framework. Exploration tasks and semantic queries conversion Here we present and discuss the semantic queries for some traditional and newly proposed data exploration tasks on scholarly data. Exploration tasks and semantic queries conversion ::: Similar entities Tasks Given an entity $ e \in $, find entities that are similar to $ e $. For example, given AuthorA, find authors, papers, and venues that are similar to AuthorA. Note that we can restrict to find specific entity types. This is a traditional tasks in scholarly data exploration, whereas other below tasks are new. Semantic query We can solve this task by looking for the entities with highest similarity to $ e $. For example, the first result is: Exploration tasks and semantic queries conversion ::: Similar entities with bias Tasks Given an entity $ e \in $ and some positive bias entities $ A = \lbrace a_1, \dots , a_k\rbrace $ known as expected results, find entities that are similar to $ e $ following the bias in $ A $. For example, given AuthorA and some successfully collaborating authors, find other similar authors that may also result in good collaborations with AuthorA. Semantic query We can solve this task by looking for the entities with highest similarity to both $ e $ and $ A $. For example, denoting the arithmetic mean of embedding vectors in $ A $ as $ \bar{A} $, the first result is: Exploration tasks and semantic queries conversion ::: Analogy query Tasks Given an entity $ e \in $, positive bias $ A = \lbrace a_1, \dots , a_k\rbrace $, and negative bias $ B = \lbrace b_1, \dots , b_k\rbrace $, find entities that are similar to $ e $ following the biases in $ A $ and $ B $. The essence of this task is tracing along a semantic direction defined by the positive and negative biases. For example, start with AuthorA, we can trace along the expertise direction to find authors that are similar to AuthorA but with higher or lower expertise. Semantic query We can solve this task by looking for the entities with highest similarity to $ e $ and $ A $ but not $ B $. 
For example, denoting the arithmetic mean of embedding vectors in $ A $ and $ B $ as $ \bar{A} $ and $ \bar{B} $, respectively, note that $ \bar{A} - \bar{B} $ defines the semantic direction along the positive and negative biases, the first result is: Exploration tasks and semantic queries conversion ::: Analogy browsing Tasks This task is an extension of the above analogy query task, by tracing along multiple semantic directions defined by multiple pairs of positive and negative biases. This task can be implemented as an interactive data analysis tool. For example, start with AuthorA, we can trace to authors with higher expertise, then continue tracing to new domains to find all authors similar to AuthorA with high expertise in the new domain. For another example, start with Paper1, we can trace to papers with higher quality, then continue tracing to new domain to look for papers similar to Paper1 with high quality in the new domain. Semantic query We can solve this task by simply repeating the semantic query for analogy query with each pair of positive and negative bias. Note that we can also combine different operations in different order to support flexible browsing. Conclusion In this paper, we studied the application of knowledge graph embedding in exploratory data analysis. We analyzed the CP$ _h $ model and provided understandings to its semantic structures. We then defined the semantic queries on knowledge graph embedding space to efficiently approximate some operations on heterogeneous data such as scholarly data. We designed a general framework to systematically apply semantic queries to solve scholarly data exploration tasks. Finally, we outlined and discussed the solutions to some traditional and pioneering exploration tasks emerged from the semantic structures of the knowledge graph embedding space. This paper is dedicated to the theoretical foundation of a new approach and discussions of emerging tasks, whereas experiments and evaluations are left for the future work. There are several other promising directions for future research. One direction is to explore new tasks or new solutions of traditional tasks using the proposed method. Another direction is to implement the proposed exploration tasks on real-life digital libraries for online evaluation. Acknowledgments This work was supported by “Cross-ministerial Strategic Innovation Promotion Program (SIP) Second Phase, Big-data and AI-enabled Cyberspace Technologies” by New Energy and Industrial Technology Development Organization (NEDO). 1.0
Task processing: converting data exploration tasks to algebraic operations on the embedding space, Query processing: executing semantic query on the embedding space and return results
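As an example of the task-to-query conversion this answer refers to, the sketch below turns the analogy query and analogy browsing tasks into vector operations (the mean of the positive bias minus the mean of the negative bias, added to the starting entity's vector); all entity names are invented for illustration.

```python
import numpy as np

def analogy_query(embeddings, start, positives, negatives, top_k=3):
    """Trace along the semantic direction mean(A) - mean(B) from `start`
    and rank the remaining entities by dot-product similarity."""
    semantic_dir = (np.mean([embeddings[a] for a in positives], axis=0)
                    - np.mean([embeddings[b] for b in negatives], axis=0))
    query = embeddings[start] + semantic_dir
    exclude = {start, *positives, *negatives}
    scores = {e: float(v @ query) for e, v in embeddings.items()
              if e not in exclude}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def analogy_browsing(embeddings, start, steps, top_k=1):
    """Repeat the analogy query along several (positives, negatives) pairs,
    using the top result of each step as the next starting entity."""
    current = start
    for positives, negatives in steps:
        current = analogy_query(embeddings, current, positives, negatives,
                                top_k=top_k)[0]
    return current

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    names = ["AuthorA", "AuthorB", "AuthorC", "AuthorD", "AuthorE"]
    embeddings = {n: rng.normal(size=16) for n in names}
    print(analogy_query(embeddings, "AuthorA", ["AuthorB"], ["AuthorC"]))
    print(analogy_browsing(embeddings, "AuthorA",
                           [(["AuthorB"], ["AuthorC"]),
                            (["AuthorD"], ["AuthorE"])]))
```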
81669c550d32d756f516dab5d2b76ff5f21c0f36
81669c550d32d756f516dab5d2b76ff5f21c0f36_0
Q: what are the existing models they compared with? Text: Introduction Machine reading comprehension (MRC) aims to infer the answer to a question given the document. In recent years, researchers have proposed lots of MRC models BIBREF0, BIBREF1, BIBREF2, BIBREF3 and these models have achieved remarkable results in various public benchmarks such as SQuAD BIBREF4 and RACE BIBREF5. The success of these models is due to two reasons: (1) Multi-layer architectures which allow these models to read the document and the question iteratively for reasoning; (2) Attention mechanisms which would enable these models to focus on the part related to the question in the document. However, most of existing MRC models are still weak in numerical reasoning such as addition, subtraction, sorting and counting BIBREF6, which are naturally required when reading financial news, scientific articles, etc. BIBREF6 proposed a numerically-aware QANet (NAQANet) model, which divides the answer generation for numerical MRC into three types: (1) extracting spans; (2) counting; (3) addition or subtraction over numbers. NAQANet makes a pioneering attempt to answer numerical questions but still does not explicitly consider numerical reasoning. To tackle this problem, we introduce a novel model NumNet that integrates numerical reasoning into existing MRC models. A key problem to answer questions requiring numerical reasoning is how to perform numerical comparison in MRC systems, which is crucial for two common types of questions: (1) Numerical Comparison: The answers of the questions can be directly obtained via performing numerical comparison, such as sorting and comparison, in the documents. For example, in Table TABREF1, for the first question, if the MRC system knows the fact that “$49>47>36>31>22$”, it could easily extract that the second longest field goal is 47-yard. (2) Numerical Condition: The answers of the questions cannot be directly obtained through simple numerical comparison in the documents, but often require numerical comparison for understanding the text. For example, for the second question in Table TABREF1, an MRC system needs to know which age group made up more than 7% of the population to count the group number. Hence, our NumNet model considers numerical comparing information among numbers when answering numerical questions. As shown in Figure FIGREF3, NumNet first encodes both the question and passages through an encoding module consisting of convolution layers, self-attention layers and feed-forward layers as well as a passage-question attention layer. After that, we feed the question and passage representations into a numerically-aware graph neural network (NumGNN) to further integrate the comparison information among numbers into their representations. Finally, we utilize the numerically-aware representation of passages to infer the answer to the question. The experimental results on a public numerical MRC dataset DROP BIBREF6 show that our NumNet model achieves significant and consistent improvement as compared to all baseline methods by explicitly performing numerical reasoning over numbers in the question and passage. In particular, we show that our model could effectively deal with questions requiring sorting with multi-layer NumGNN. The source code of our paper is available at https://github.com/ranqiu92/NumNet. Related Work ::: Machine Reading Comprehension Machine reading comprehension (MRC) has become an important research area in NLP. 
In recent years, researchers have published a large number of annotated MRC datasets such as CNN/Daily Mail BIBREF7, SQuAD BIBREF4, RACE BIBREF5, TriviaQA BIBREF8 and so on. With the blooming of available large-scale MRC datasets, a great number of neural network-based MRC models have been proposed to answer questions for a given document including Attentive Reader BIBREF9, BiDAF BIBREF3, Interactive AoA Reader BIBREF2, Gated Attention Reader BIBREF1, R-Net BIBREF10, DCN BIBREF11, QANet BIBREF12, and achieve promising results in most existing public MRC datasets. Despite the success of neural network-based MRC models, researchers began to analyze the data and rethink to what extent we have solved the problem of MRC. Some works BIBREF0, BIBREF13, BIBREF14 classify the reasoning skills required to answer the questions into the following types: (1) Exact matching/Paraphrasing; (2) Summary; (3) Logic reasoning; (4) Utilizing external knowledge; (5) Numerical reasoning. They found that most existing MRC models are focusing on dealing with the first three types of questions. However, all these models suffer from problems when answering the questions requiring numerical reasoning. To the best of our knowledge, our work is the first one that explicitly incorporates numerical reasoning into the MRC system. The most relevant work to ours is NAQANet BIBREF6, which adapts the output layer of QANet BIBREF12 to support predicting answers based on counting and addition/subtraction over numbers. However, it does not consider numerical reasoning explicitly during encoding or inference. Related Work ::: Arithmetic Word Problem Solving Recently, understanding and solving arithmetic word problems (AWP) has attracted the growing interest of NLP researchers. BIBREF15 proposed a simple method to address arithmetic word problems, but mostly focusing on subsets of problems which only require addition and subtraction. After that, BIBREF16 proposed an algorithmic approach which could handle arithmetic word problems with multiple steps and operations. BIBREF17 further formalized the AWP problem as that of generating and scoring equation trees via integer linear programming. BIBREF18 and BIBREF19 proposed sequence to sequence solvers for the AWP problems, which are capable of generating unseen expressions and do not rely on sophisticated manual features. BIBREF20 leveraged deep Q-network to solve the AWP problems, achieving a good balance between effectiveness and efficiency. However, all the existing AWP systems are only trained and validated on small benchmark datasets. BIBREF21 found that the performance of these AWP systems sharply degrades on larger datasets. Moreover, from the perspective of NLP, MRC problems are more challenging than AWP since the passages in MRC are mostly real-world texts which require more complex skills to be understood. Above all, it is nontrivial to adapt most existing AWP models to the MRC scenario. Therefore, we focus on enhancing MRC models with numerical reasoning abilities in this work. Methodology In this section, we will introduce the framework of our model NumNet and provide the details of the proposed numerically-aware graph neural network (NumGNN) for numerical reasoning. Methodology ::: Framework An overview of our model NumNet is shown in Figure FIGREF3. We compose our model with encoding module, reasoning module and prediction module. 
Our major contribution is the reasoning module, which leverages a NumGNN between the encoding module and prediction module to explicitly consider the numerical comparison information and perform numerical reasoning. As NAQANet has been shown effective for handling numerical MRC problem BIBREF6, we leverage it as our base model and mainly focus on the design and integration of the NumGNN in this work. Methodology ::: Framework ::: Encoding Module Without loss of generality, we use the encoding components of QANet and NAQANet to encode the question and passage into vector-space representations. Formally, the question $Q$ and passage $P$ are first encoded as: and then the passage-aware question representation and the question-aware passage representation are computed as: where $\texttt {QANet-Emb-Enc}(\cdot )$ and $\texttt {QANet-Att}(\cdot )$ denote the “stacked embedding encoder layer” and “context-query attention layer” of QANet respectively. The former consists of convolution, self-attention and feed-forward layers. The latter is a passage-question attention layer. $\bar{\mathbf {Q}}$ and $\bar{\mathbf {P}}$ are used by the following components. Methodology ::: Framework ::: Reasoning Module First we build a heterogeneous directed graph $\mathcal {G}=(\mathbf {V};\mathbf {E})$, whose nodes ($\mathbf {V}$) are corresponding to the numbers in the question and passage, and edges ($\mathbf {E}$) are used to encode numerical relationships among the numbers. The details will be explained in Sec. SECREF19. Then we perform reasoning on the graph based on a graph neural network, which can be formally denoted as: where $\mathbf {W}^M$ is a shared weight matrix, $\mathbf {U}$ is the representations of the nodes corresponding to the numbers, $\texttt {QANet-Mod-Enc}(\cdot )$ is the “model encoder layer” defined in QANet which is similar to $\texttt {QANet-Emb-Enc}(\cdot )$, and the definition of $\texttt {Reasoning}(\cdot )$ will be given in Sec. SECREF23. Finally, as $\mathbf {U}$ only contains the representations of numbers, to tackle span-style answers containing non-numerical words, we concatenate $\mathbf {U}$ with $\mathbf {M}^P$ to produce numerically-aware passage representation $\mathbf {M}_0$. Formally, where $[\cdot ;\cdot ]$ denotes matrix concatenation, $\mathbf {W}[k]$ denotes the $k$-th column of a matrix $\mathbf {W}$, $\mathbf {0}$ is a zero vector, $I(i)$ denotes the node index corresponding to the passage word $w_i^p$ which is a number, $\mathbf {W}_0$ is a weight matrix, and $\mathbf {b}_0$ is a bias vector. Methodology ::: Framework ::: Prediction Module Following NAQANet BIBREF6, we divide the answers into four types and use a unique output layer to calculate the conditional answer probability $\Pr (\text{answer}|\text{type})$ for each type : Passage span: The answer is a span of the passage, and the answer probability is defined as the product of the probabilities of the start and end positions. Question span: The answer is a span of the question, and the answer probability is also defined as the product of the probabilities of the start and end positions. Count: The answer is obtained by counting, and it is treated as a multi-class classification problem over ten numbers (0-9), which covers most of the Count type answers in the DROP dataset. Arithmetic expression: The answer is the result of an arithmetic expression. 
The expression is obtained in three steps: (1) extract all numbers from the passage; (2) assign a sign (plus, minus or zero) for each number; (3) sum the signed numbers . Meanwhile, an extra output layer is also used to predict the probability $\Pr (\text{type})$ of each answer type. At training time, the final answer probability is defined as the joint probability over all feasible answer types, i.e., $\sum _{\text{type}}\Pr (\text{type})\Pr (\text{answer}|\text{type})$. Here, the answer type annotation is not required and the probability $\Pr (\text{type})$ is learnt by the model. At test time, the model first selects the most probable answer type greedily and then predicts the best answer accordingly. Without loss of generality, we leverage the definition of the five output layers in BIBREF6, with $\mathbf {M_0}$ and $\mathbf {Q}$ as inputs. Please refer to the paper for more details due to space limitation. Methodology ::: Framework ::: Comparison with NAQANet The major difference between our model and NAQANet is that NAQANet does not have the reasoning module, i.e., $\mathbf {M}_0$ is simply set as $\mathbf {M}^P$. As a result, numbers are treated as common words in NAQANet except in the prediction module, thus NAQANet may struggle to learn the numerical relationships between numbers, and potentially cannot well generalize to unseen numbers. However, as discussed in Sec. SECREF1, the numerical comparison is essential for answering questions requiring numerical reasoning. In our model, the numerical relationships are explicitly represented with the topology of the graph and a NumGNN is used to perform numerical reasoning. Therefore, our NumNet model can handle questions requiring numerical reasoning more effectively, which is verified by the experiments in Sec. SECREF4. Methodology ::: Numerically-aware Graph Construction We regard all numbers from the question and passage as nodes in the graph for reasoning . The set of nodes corresponding to the numbers occurring in question and passage are denoted as $\mathbf {V}^Q$ and $\mathbf {V}^P$ respectively. And we denote all the nodes as $\mathbf {V}=\mathbf {V}^Q\cup \mathbf {V}^P$, and the number corresponding to a node $v\in \mathbf {V}$ as $n(v)$. Two sets of edges are considered in this work: Greater Relation Edge ($\overrightarrow{\mathbf {E}}$): For two nodes $v_i, v_j\in \mathbf {V}$, a directed edge $\overrightarrow{e}_{ij}=(v_i, v_j)$ pointing from $v_i$ to $v_j$ will be added to the graph if $n(v_i)>n(v_j)$, which is denoted as solid arrow in Figure FIGREF3. Lower or Equal Relation Edge ($\overleftarrow{\mathbf {E}}$): For two nodes $v_i, v_j\in \mathbf {V}$, a directed edge $\overleftarrow{e}_{ij}=(v_j, v_i)$ will be added to the graph if $n(v_i)\le n(v_j)$, which is denoted as dashed arrow in Figure FIGREF3. Theoretically, $\overrightarrow{\mathbf {E}}$ and $\overleftarrow{\mathbf {E}}$ are complement to each other . However, as a number may occur several times and represent different facts in a document, we add a distinct node for each occurrence in the graph to prevent potential ambiguity. Therefore, it is more reasonable to use both $\overrightarrow{\mathbf {E}}$ and $\overleftarrow{\mathbf {E}}$ in order to encode the equal information among nodes. Methodology ::: Numerical Reasoning As we built the graph $\mathcal {G}=(\mathbf {V},\mathbf {E})$, we leverage NumGNN to perform reasoning, which is corresponding to the function $\texttt {Reasoning}(\cdot )$ in Eq. DISPLAY_FORM10. 
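Before the reasoning steps spelled out in the next passage, here is a minimal sketch of the numerically-aware graph construction just described: one node per number occurrence in the question and passage, and directed edges labeled with the '>' or '<=' relation together with the node-type pair. The exact edge-direction convention and data encoding here are illustrative assumptions, not the released implementation.

```python
def build_numerically_aware_graph(question_numbers, passage_numbers):
    """Build the numerically-aware graph.
    Each number occurrence becomes a node (source, index, value); for every
    ordered pair of distinct nodes, one edge is added and labeled '>' if the
    source number is greater, '<=' otherwise, together with the node-type
    pair (q-q, p-p, q-p or p-q).
    """
    nodes = ([("q", i, n) for i, n in enumerate(question_numbers)]
             + [("p", i, n) for i, n in enumerate(passage_numbers)])
    edges = []
    for i, (type_i, _, n_i) in enumerate(nodes):
        for j, (type_j, _, n_j) in enumerate(nodes):
            if i == j:
                continue
            relation = ">" if n_i > n_j else "<="
            edges.append((i, j, relation, f"{type_i}-{type_j}"))
    return nodes, edges

if __name__ == "__main__":
    # Toy numbers, e.g. extracted from a question and a passage.
    nodes, edges = build_numerically_aware_graph([7.0], [49.0, 47.0, 36.0, 22.0])
    print(len(nodes), "nodes,", len(edges), "edges")
    print(edges[:3])
```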
The reasoning process is as follows: Methodology ::: Numerical Reasoning ::: Initialization For each node $v^P_i\in \mathbf {V}^P$, its representation is initialized as the corresponding column vector of $\mathbf {M}^P$. Formally, the initial representation is $\mathbf {v}_i^P=\mathbf {M}^P[I^P(v_i^P)]$, where $I^P(v^P_i)$ denotes the word index corresponding to $v_i^P$. Similarly, the initial representation $\mathbf {v}_j^Q$ for a node $v^Q_j\in \mathbf {V}^Q$ is set as the corresponding column vector of $\mathbf {M}^Q$. We denote all the initial node representations as $\mathbf {v}^0=\lbrace \mathbf {v}_i^P\rbrace \cup \lbrace \mathbf {v}_j^Q\rbrace $. Methodology ::: Numerical Reasoning ::: One-step Reasoning Given the graph $\mathcal {G}$ and the node representations $\mathbf {v}$, we use a GNN to perform reasoning in three steps: (1) Node Relatedness Measure: As only a few numbers are relevant for answering a question generally, we compute a weight for each node to by-pass irrelevant numbers in reasoning. Formally, the weight for node $v_i$ is computed as: where $\mathbf {W}_v$ is a weight matrix, and $b_v$ is a bias. (2) Message Propagation: As the role a number plays in reasoning is not only decided by itself, but also related to the context, we propagate messages from each node to its neighbors to help to perform reasoning. As numbers in question and passage may play different roles in reasoning and edges corresponding to different numerical relations should be distinguished, we use relation-specific transform matrices in the message propagation. Formally, we define the following propagation function for calculating the forward-pass update of a node: where $\widetilde{\mathbf {v}}^{\prime }_i$ is the message representation of node $v_i$, $\texttt {r}_{ji}$ is the relation assigned to edge $e_{ji}$, $\mathbf {W}^{\texttt {r}_{ji}}$ are relation-specific transform matrices, and $\mathcal {N}_i=\lbrace j|(v_j,v_i)\in \mathbf {E}\rbrace $ is the neighbors of node $v_i$. For each edge $e_{ji}$, $\texttt {r}_{ji}$ is determined by the following two attributes: Number relation: $>$ or $\le $; Node types: the two nodes of the edge corresponding to two numbers that: (1) both from the question ($\text{q-q}$); (2) both from the passage ($\text{p-p}$); (3) from the question and the passage respectively ($\text{q-p}$); (4) from the passage and the question respectively ($\text{p-q}$). Formally, $\texttt {r}_{ij}\in \lbrace >,\le \rbrace \times \lbrace \text{q-q},\text{p-p},\text{q-p},\text{p-q}\rbrace $. (3) Node Representation Update: As the message representation obtained in the previous step only contains information from the neighbors, it needs to be fused with the node representation to combine with the information carried by the node itself, which is performed as: where $\mathbf {W}_f$ is a weight matrix, and $\mathbf {b}_f$ is a bias vector. We denote the entire one-step reasoning process (Eq. DISPLAY_FORM26-DISPLAY_FORM30) as a single function As the graph $\mathcal {G}$ constructed in Sec. SECREF19 has encoded the numerical relations via its topology, the reasoning process is numerically-aware. Methodology ::: Numerical Reasoning ::: Multi-step Reasoning By single-step reasoning, we can only infer relations between adjacent nodes. However, relations between multiple nodes may be required for certain tasks, e.g., sorting. Therefore, it is essential to perform multi-step reasoning, which can be done as follows: where $t\ge 1$. 
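A hedged numpy sketch of the one-step reasoning just described (relatedness gating, relation-specific message propagation averaged over incoming edges, and a fused node update), repeated K times for multi-step reasoning; the activation choices, the collapsed relation set, and the random weights are simplifications rather than the paper's exact parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reasoning_step(v, edges, W_rel, W_v, b_v, W_f, b_f):
    """One NumGNN reasoning step over node representations v.
    v     : (num_nodes, D) array of node representations
    edges : list of (src, dst, relation) tuples
    W_rel : dict mapping relation label -> (D, D) transform matrix
    """
    num_nodes, D = v.shape
    # (1) Node relatedness measure: a scalar gate per node.
    alpha = sigmoid(v @ W_v + b_v)                    # (num_nodes, 1)
    # (2) Message propagation with relation-specific transforms,
    #     averaged over each node's incoming edges.
    messages = np.zeros_like(v)
    in_degree = np.zeros((num_nodes, 1))
    for src, dst, rel in edges:
        messages[dst] += alpha[src] * (W_rel[rel] @ v[src])
        in_degree[dst] += 1
    messages /= np.maximum(in_degree, 1.0)
    # (3) Node representation update: fuse node and message representations.
    return np.tanh(np.concatenate([v, messages], axis=1) @ W_f + b_f)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    num_nodes, D, K = 4, 8, 3
    v = rng.normal(size=(num_nodes, D))
    edges = [(0, 1, ">"), (1, 0, "<="), (2, 3, ">"), (3, 2, "<=")]
    W_rel = {rel: 0.1 * rng.normal(size=(D, D)) for rel in (">", "<=")}
    W_v, b_v = 0.1 * rng.normal(size=(D, 1)), 0.0
    W_f, b_f = 0.1 * rng.normal(size=(2 * D, D)), 0.0
    for _ in range(K):  # K-step reasoning
        v = reasoning_step(v, edges, W_rel, W_v, b_v, W_f, b_f)
    print(v.shape)  # final node representations, used as U
```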
Suppose we perform $K$ steps of reasoning, $\mathbf {v}^K$ is used as $\mathbf {U}$ in Eq. DISPLAY_FORM10. Experiments ::: Dataset and Evaluation Metrics We evaluate our proposed model on DROP dataset BIBREF6, which is a public numerical MRC dataset. The DROP dataset is constructed by crowd-sourcing, which asks the annotators to generate question-answer pairs according to the given Wikipedia passages, which require numerical reasoning such as addition, counting, or sorting over numbers in the passages. There are $77,409$ training samples, $9,536$ development samples and $9,622$ testing samples in the dataset. In this paper, we adopt two metrics including Exact Match (EM) and numerically-focused F1 scores to evaluate our model following BIBREF6. The numerically-focused F1 is set to be 0 when the predicted answer is mismatched for those questions with the numeric golden answer. Experiments ::: Baselines For comparison, we select several public models as baselines including semantic parsing models: [topsep=2pt, itemsep=0pt] Syn Dep BIBREF6, the neural semantic parsing model (KDG) BIBREF22 with Stanford dependencies based sentence representations; OpenIE BIBREF6, KDG with open information extraction based sentence representations; SRL BIBREF6, KDG with semantic role labeling based sentence representations; and traditional MRC models: [topsep=2pt, itemsep=0pt] BiDAF BIBREF3, an MRC model which utilizes a bi-directional attention flow network to encode the question and passage; QANet BIBREF12, which utilizes convolutions and self-attentions as the building blocks of encoders to represent the question and passage; BERT BIBREF23, a pre-trained bidirectional Transformer-based language model which achieves state-of-the-art performance on lots of public MRC datasets recently; and numerical MRC models: [topsep=2pt, itemsep=0pt] NAQANet BIBREF6, a numerical version of QANet model. NAQANet+, an enhanced version of NAQANet implemented by ourselves, which further considers real number (e.g. “2.5”), richer arithmetic expression, data augmentation, etc. The enhancements are also used in our NumNet model and the details are given in the Appendix. Experiments ::: Experimental Settings In this paper, we tune our model on the development set and use a grid search to determine the optimal parameters. The dimensions of all the representations (e.g., $\mathbf {Q}$, $\mathbf {P}$, $\mathbf {M}^Q$, $\mathbf {M}^P$, $\mathbf {U}$, $\mathbf {M}_0^{\prime }$, $\mathbf {M}_0$ and $\mathbf {v}$) are set to 128. If not specified, the reasoning step $K$ is set to 3. Since other parameters have little effect on the results, we simply follow the settings used in BIBREF6. We use the Adam optimizer BIBREF24 with $\beta _1=0.8$, $\beta _2=0.999$, $\epsilon =10^{-7}$ to minimize the objective function. The learning rate is $5 \times 10^{-4}$, L2 weight decay $\lambda $ is $10^{-7}$ and the maximum norm value of gradient clipping is 5. We also apply exponential moving average with a decay rate $0.9999$ on all trainable variables. The model is trained with a batch size of 16 for 40 epochs. Passages and questions are trimmed to 400 and 50 tokens respectively during training, and trimmed to $1,000$ and 100 tokens respectively during prediction . Experiments ::: Overall Results The performance of our NumNet model and other baselines on DROP dataset are shown in Table TABREF47. 
From the results, we can observe that: (1) Our NumNet model achieves better results on both the development and testing sets of the DROP dataset compared to semantic parsing-based models, traditional MRC models, and even the numerical MRC models NAQANet and NAQANet+. The reason is that our NumNet model can make full use of the numerical comparison information over numbers in both the question and the passage via the proposed NumGNN module. (2) Our implemented NAQANet+ performs much better than the original version of NAQANet, which verifies the effectiveness of our proposed baseline enhancements. Experiments ::: Effect of GNN Structure In this part, we investigate the effect of different GNN structures on the DROP development set. The results are shown in Table TABREF51. “Comparison”, “Number” and “ALL” correspond to the comparing question subset, the number-type answer subset, and the entire development set, respectively. If we replace the proposed numerically-aware graph (Sec. SECREF19) with a fully connected graph, our model falls back to a traditional GNN, denoted as “GNN” in the table. Moreover, “- question num” denotes that the numbers in the question are not included in the graph, and “- $\le $ type edge” and “- $>$ type edge” denote that edges of the $\le $ and $>$ types are not adopted, respectively. As shown in Table TABREF51, our proposed NumGNN leads to statistically significant improvements over the traditional GNN on both EM and F1 scores, especially for comparing questions. This indicates that considering the comparing information over numbers could effectively help numerical reasoning for comparing questions. Moreover, we find that the numbers in the question are often involved in the numerical reasoning needed to answer the question, so including question numbers in NumGNN achieves better performance. The results also justify that encoding the “greater relation” and “lower or equal relation” simultaneously in the graph benefits our model. Experiments ::: Effect of GNN Layer Number The number of NumGNN layers reflects the numerical reasoning ability of our models: a $K$-layer version has the ability to perform $K$-step numerical inference. In this part, we additionally perform experiments to understand the effect of the number of NumGNN layers. From Figure FIGREF52, we can observe that: (1) The 2-layer version of NumNet achieves the best performance for comparing questions. From careful analysis, we find that most comparing questions require at most 2-step reasoning (e.g., “Who was the second oldest player in the MLB, Clemens or Franco?”), and therefore the 3-layer version of NumNet is more complex but brings no gains for these questions. (2) The performance of our NumNet model on the overall development set improves consistently as the number of GNN layers increases. The reason is that some numerical questions require reasoning over many numbers in the passage, which benefits from the multi-step reasoning ability of a multi-layer GNN. However, further investigation shows that the performance gain is not stable when $K\ge 4$. We believe this is due to the intrinsic over-smoothing problem of GNNs BIBREF25. Experiments ::: Case Study In Table TABREF53, we give some examples to show why incorporating comparing information over numbers in the passage could help numerical reasoning in MRC.
For the first case, we observe that NAQANet+ gives a wrong prediction, and we find that NAQANet+ gives the same prediction for the question “Which age group is smaller: under the age of 18 or 18 and 24?”. The reason is that NAQANet+ cannot distinguish which of $10.1\%$ and $56.2\%$ is larger. For the second case, NAQANet+ cannot recognize that the second longest field goal is the 22-yard one and also gives a wrong prediction. For both cases, our NumNet model gives the correct answer through numerical reasoning, which indicates its effectiveness. Experiments ::: Error Analysis To investigate how well our NumNet model handles sorting/comparison questions and to better understand the remaining challenges, we perform an error analysis on a random sample of NumNet predictions. We find that: (1) Our NumNet model answers about 76% of sorting/comparison questions correctly, which indicates that it has achieved numerical reasoning ability to some extent. (2) Among the incorrectly answered sorting/comparison questions, the most frequent errors (26%) are those whose golden answers are multiple nonadjacent spans (row 1 in Table TABREF54), and the second most frequent (19%) are those involving comparison with an intermediate number that does not literally occur in the document/question but has to be derived by counting or an arithmetic operation (row 1 in Table TABREF54). Experiments ::: Discussion By combining the numerically-aware graph and the NumGNN, our NumNet model achieves numerical reasoning ability. On the one hand, the numerically-aware graph encodes numbers as nodes and the relationships between them as edges, which is required for numerical comparison. On the other hand, through one-step reasoning, our NumGNN can perform comparison and identify the numerical condition; after multi-step reasoning, it can further perform sorting. However, since the numerically-aware graph is pre-defined, our NumNet is not applicable to cases where an intermediate number has to be derived (e.g., from an arithmetic operation) during the reasoning process, which is a major limitation of our model. Conclusion and Future Work Numerical reasoning skills such as addition, subtraction, sorting and counting are naturally required by machine reading comprehension (MRC) problems in practice. Nevertheless, these skills are not explicitly taken into account by most existing MRC models. In this work, we propose a numerical MRC model named NumNet which performs explicit numerical reasoning while reading the passages. Specifically, NumNet encodes the numerical relations among numbers in the question and passage into the topology of a graph, and leverages a numerically-aware graph neural network to perform numerical reasoning on the graph. Our NumNet model outperforms strong baselines by a large margin on the DROP dataset. In the future, we will explore the following directions: (1) As we use a pre-defined reasoning graph in our model, it is incapable of handling reasoning processes that involve intermediate numbers not present in the graph. How to incorporate a dynamic graph into our model is an interesting problem. (2) Compared with methods proposed for arithmetic word problems (AWPs), our model has better natural language understanding ability. However, the methods for AWPs can handle much richer arithmetic expressions. Therefore, how to combine both abilities to develop a more powerful numerical MRC model is an interesting future direction.
(3) Symbolic reasoning plays a crucial role in human reading comprehension. Our work integrates numerical reasoning, which is a special case of symbolic reasoning, into traditional MRC systems. How to incorporate more sophisticated symbolic reasoning abilities into MRC systems is also a valuable future direction. Acknowledgments We would like to thank all anonymous reviewers for their insightful comments, and thank Yan Zhang for her help in improving the presentation of Figure FIGREF3. Appendix: Baseline Enhancements The major enhancements leveraged by our implemented NAQANet+ model include: (1) “real number”: Unlike NAQANet, which only considers integer numbers, we also consider real numbers. (2) “richer arithmetic expression”: We conceptually append an extra number “100” to the passage to support arithmetic expressions like “100-25”, which is required for answering questions such as “How many percent were not American?”. (3) “passage-preferred”: If an answer is a span of both the question and the passage, we only propagate gradients through the output layer for processing “Passage span” type answers. (4) “data augmentation”: The original questions in the DROP dataset are generated by crowdsourced workers. For the comparing questions which contain answer candidates, we observe that the workers frequently only change the incorrect answer candidate to generate a new question. For example, “How many from the census is bigger: Germans or English?”, whose golden answer is “Germans”, is modified to “How many from the census is bigger: Germans or Irish?”. This may introduce undesired inductive bias into the model. Therefore, we propose to augment the training dataset with new questions automatically generated by swapping the candidate answers, e.g., “How many from the census is bigger: English or Germans?” is added to the training dataset. We further conduct ablation studies on the enhancements, and the validation scores on the development set are shown in Table TABREF59. As can be seen from Table TABREF59: (1) The use of real numbers and richer arithmetic expressions is crucial for answering numerical questions: both EM and F1 drop drastically by up to $15-21$ points if they are removed. (2) The passage-preferred strategy and data augmentation are also necessary components that contribute significant improvements for comparing questions.
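To make the candidate-swapping augmentation in (4) above concrete, a small sketch is given below. The regular-expression heuristic for locating the two candidates and the example record layout are assumptions for illustration, not the authors' actual preprocessing code.

```python
import re

def augment_comparing_questions(examples):
    """Add a swapped-candidate copy of each comparing question.

    `examples` is assumed to be a list of dicts with "question" and "answer"
    keys; the "...: X or Y?" pattern match is a simple heuristic.
    """
    augmented = list(examples)
    pattern = re.compile(r"^(?P<stem>.*:\s*)(?P<a>.+?) or (?P<b>.+?)\?$")
    for ex in examples:
        m = pattern.match(ex["question"])
        if m is None:
            continue
        swapped = f"{m.group('stem')}{m.group('b')} or {m.group('a')}?"
        augmented.append({"question": swapped, "answer": ex["answer"]})
    return augmented

# Example from the appendix:
data = [{"question": "How many from the census is bigger: Germans or English?",
         "answer": "Germans"}]
print(augment_comparing_questions(data)[-1]["question"])
# How many from the census is bigger: English or Germans?
```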
Syn Dep, OpenIE, SRL, BiDAF, QANet, BERT, NAQANet, NAQANet+
b0b1ff2d6515fb40d74a4538614a0db537e020ea
b0b1ff2d6515fb40d74a4538614a0db537e020ea_0
Q: Do they report results only on English data? Text: Introduction Event temporal relation understanding is a major component of story/narrative comprehension. It is an important natural language understanding (NLU) task with broad applications to downstream tasks such as story understanding BIBREF0 , BIBREF1 , BIBREF2 , question answering BIBREF3 , BIBREF4 , and text summarization BIBREF5 , BIBREF6 . The goal of event temporal relation extraction is to build a directed graph where nodes correspond to events, and edges reflect temporal relations between the events. Figure FIGREF1 illustrates an example of such a graph for the text shown above. Different types of edges specify different temporal relations: the event assassination is before slaughtered, slaughtered is included in rampage, and the relation between rampage and war is vague. Modeling event temporal relations is crucial for story/narrative understanding and storytelling, because a story is typically composed of a sequence of events BIBREF7 . Several story corpora are thus annotated with various event-event relations to understand commonsense event knowledge. CaTeRS BIBREF8 is created by annotating 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. RED BIBREF9 contains annotations of rich relations between event pairs for storyline understanding, including co-reference and partial co-reference relations, temporal; causal, and sub-event relations. Despite multiple productive research threads on temporal and causal relation modeling among events BIBREF10 , BIBREF11 , BIBREF12 and event relation annotation for story understanding BIBREF8 , the intersection of these two threads seems flimsy. To the best of our knowledge, no event relation extraction results have been reported on CaTeRS and RED. We apply neural network models that leverage recent advances in contextualized embeddings (BERT BIBREF13 ) to event-event relation extraction tasks for CaTeRS and RED. Our goal in this paper is to increase understanding of how well the state-of-the-art event relation models work for story/narrative comprehension. In this paper, we report the first results of event temporal relation extraction on two under-explored story comprehension datasets: CaTeRS and RED. We establish strong baselines with neural network models enhanced by recent breakthrough of contextualized embeddings, BERT BIBREF13 . We summarize the contributions of the paper as follows: Models We investigate both neural network-based models and traditional feature-based models. We briefly introduce them in this section. Data is created by annotating 1600 sentences of 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. CaTeRS contains both temporal and causal relations in an effort to understand and predict commonsense relations between events. As demonstrated in Table TABREF16 , we split all stories into 220 training and 80 test. We do not construct the development set because the dataset is small. Note that some relations have compounded labels such as “CAUSE_BEFORE”, “ENABLE_BEFORE”, etc. We only take the temporal portion of the annotations. annotates a wide range of relations of event pairs including their coreference and partial coreference relations, and temporal, causal and subevent relationships. We split data according to the standard train, development, test sets, and only focus on the temporal relations. The common issue of these two datasets is that they are not densely annotated – not every pair of events is annotated with a relation. 
We provide one way to handle negative (unannotated) pairs in this paper. When constructing negative examples, we take all event pairs that occur within the same or neighboring sentences with no annotations, labeling them as “NONE”. The negative to positive samples ratio is 1.00 and 11.5 for CaTeRS and RED respectively. Note that RED data has much higher negative ratio (as shown in Table TABREF16 ) because it contains longer articles, more complicated sentence structures, and richer entity types than CaTeRS where all stories consist of 5 (mostly short) sentences. In both the development and test sets, we add all negative pairs as candidates for the relation prediction. During training, the number of negative pairs we add is based on a hyper-parameter that we tune to control the negative-to-positive sample ratio. To justify our decision of selecting negative pairs within the same or neighboring sentences, we show the distribution of distances across positive sentence pairs in Table TABREF18 . Although CaTeRS data has pair distance more evenly distributed than RED, we observe that the vast majority (85.87% and 93.99% respectively) of positive pairs have sentence distance less than or equal to one. To handle negative pairs that are more than two sentences away, we automatically predict all out-of-window pairs as “NONE”. This means that some positive pairs will be automatically labeled as negative pairs. Since the percentage of out-of-window positive pairs is small, we believe the impact on performance is small. We can investigate expanding the prediction window in future research, but the trade-off is that we will get more negative pairs that are hard to predict. Implementation Details CAEVO consists of both linguistic-rule-based sieves and feature-based trainable sieves. We train CAEVO sieves with our train set and evaluate them on both dev and test sets. CAEVO is an end-to-end system that automatically annotates both events and relations. In order to resolve label annotation mismatch between CAEVO and our gold data, we create our own final input files to CAEVO system. Default parameter settings are used when running the CAEVO system. In an effort of building a general model and reducing the number of hand-crafted features, we leverage pre-trained (GloVe 300) embeddings in place of linguistic features. The only linguistic feature we use in our experiment is token distance. We notice in our experiments that hidden layer size, dropout ratio and negative sample ratio impact model performance significantly. We conduct grid search to find the best hyper-parameter combination according to the performance of the development set. Note that since the CaTeRS data is small and there is no standard train, development, and test splits, we conduct cross-validation on training data to choose the best hyper-parameters and predict on test. For RED data, the standard train, development, test splits are used. As we mentioned briefly in the introduction, using BERT output as word embeddings could provide an additional performance boost in our NN architecture. We pre-process our raw data by feeding original sentences into a pre-trained BERT model and output the last layer of BERT as token representations. In this experiment, we fix the negative sample ratio according to the result obtained from the previous step and only search for the best hidden layer size and dropout ratio. 
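Below is a minimal sketch of the negative-pair recipe described above: unannotated event pairs within the same or neighbouring sentences are labelled “NONE”, and during training the number of such negatives is controlled by a negative-to-positive ratio hyper-parameter. The data structures (events carrying a sentence index, a dict of gold relations) and the uniform subsampling step are assumptions for illustration.

```python
import itertools
import random

def make_candidate_pairs(events, annotated, max_sent_dist=1):
    """Label unannotated event pairs within the sentence window as NONE.

    `events` is a list of (event_id, sentence_index) tuples and `annotated`
    maps (event_id, event_id) pairs to gold temporal relations.
    """
    pairs = dict(annotated)
    for (e1, s1), (e2, s2) in itertools.combinations(events, 2):
        if abs(s1 - s2) <= max_sent_dist and (e1, e2) not in pairs:
            pairs[(e1, e2)] = "NONE"
    return pairs

def subsample_negatives(pairs, neg_pos_ratio, seed=0):
    """Keep all positives and a ratio-controlled sample of NONE pairs (training only)."""
    positives = [p for p, r in pairs.items() if r != "NONE"]
    negatives = [p for p, r in pairs.items() if r == "NONE"]
    k = min(len(negatives), int(neg_pos_ratio * len(positives)))
    random.Random(seed).shuffle(negatives)
    kept = set(positives) | set(negatives[:k])
    return {p: r for p, r in pairs.items() if p in kept}
```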
Result and Analysis Table TABREF25 contains the best hyper-parameters and Table TABREF26 contains micro-average F1 scores for both datasets on dev and test sets. We only consider positive pairs, i.e. correct predictions on NONE pairs are excluded for evaluation. In general, the baseline model CAEVO is outperformed by both NN models, and NN model with BERT embedding achieves the greatest performance. We now provide more detailed analysis and discussion for each dataset. Temporal Relation Data Collecting dense TempRel corpora with event pairs fully annotated has been reported challenging since annotators could easily overlook some pairs BIBREF18 , BIBREF19 , BIBREF10 . TimeBank BIBREF20 is an example with events and their relations annotated sparsely. TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences. However, densely annotated datasets are relatively small both in terms of number of documents and event pairs, which restricts the complexity of machine learning models used in previous research. Feature-based Models The series of TempEval competitions BIBREF21 , BIBREF22 , BIBREF23 have attracted many research interests in predicting event temporal relations. Early attempts by BIBREF24 , BIBREF21 , BIBREF25 , BIBREF26 only use pair-wise classification models. State-of-the-art local methods, such as ClearTK BIBREF27 , UTTime BIBREF28 , and NavyTime BIBREF29 improve on earlier work by feature engineering with linguistic and syntactic rules. As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets. Additionally, several models BramsenDLB2006, ChambersJ2008, DoLuRo12, NingWuRo18, P18-1212 have successfully incorporated global inference to impose global prediction consistency such as temporal transitivity. Neural Network Model Neural network-based methods have been employed for event temporal relation extraction BIBREF14 , BIBREF15 , BIBREF16 , BIBREF12 which achieved impressive results. However, the dataset they focus on is TB-Dense. We have explored neural network models on CaTeRS and RED, which are more related to story narrative understanding and generation. In our NN model, we also leverage Bidrectional Encoder Representations from Transformers (BERT) BIBREF30 which has shown significant improvement in many NLP tasks by allowing fine-tuning of pre-trained language representations. Unlike the Generative Pre-trained Transformer (OpenAI GPT) BIBREF31 , BERT uses a biderctional Transformer BIBREF32 instead of a unidirectional (left-to-right) Transformer to incorporate context from both directions. As mentioned earlier, we do not fine-tune BERT in our experiments and simply leverage the last layer as our contextualized word representations. Conclusion We established strong baselines for two story narrative understanding datasets: CaTeRS and RED. We have shown that neural network-based models can outperform feature-based models with wide margins, and we conducted an ablation study to show that contextualized representation learning can boost performance of NN models. Further research can focus on more systematic study or build stronger NN models over the same datasets used in this work. 
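To make concrete the BERT usage described in the implementation details above — a frozen pre-trained encoder whose last layer serves as contextualized token representations, with no fine-tuning — here is a minimal sketch using the Hugging Face transformers API. The checkpoint name is an arbitrary choice, and mapping word-piece vectors back to the original tokens is omitted.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()  # frozen: no fine-tuning

@torch.no_grad()
def contextual_embeddings(sentence):
    """Return last-layer hidden states, one vector per word-piece token."""
    inputs = tokenizer(sentence, return_tensors="pt")
    outputs = model(**inputs)
    return outputs.last_hidden_state.squeeze(0)   # (num_tokens, 768)

emb = contextual_embeddings("The gunman was slaughtered during the rampage.")
print(emb.shape)
```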
Exploring possibilities to directly apply temporal relation extraction to enhance performance of story generation systems is another promising research direction. Acknowledgement We thank the anonymous reviewers for their constructive comments, as well as the members of the USC PLUS lab for their early feedback. This work is supported by Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).
Unanswerable
4266aacb575b4be7dbcdb8616766324f8790763c
4266aacb575b4be7dbcdb8616766324f8790763c_0
Q: What conclusions do the authors draw from their detailed analyses? Text: Introduction Event temporal relation understanding is a major component of story/narrative comprehension. It is an important natural language understanding (NLU) task with broad applications to downstream tasks such as story understanding BIBREF0 , BIBREF1 , BIBREF2 , question answering BIBREF3 , BIBREF4 , and text summarization BIBREF5 , BIBREF6 . The goal of event temporal relation extraction is to build a directed graph where nodes correspond to events, and edges reflect temporal relations between the events. Figure FIGREF1 illustrates an example of such a graph for the text shown above. Different types of edges specify different temporal relations: the event assassination is before slaughtered, slaughtered is included in rampage, and the relation between rampage and war is vague. Modeling event temporal relations is crucial for story/narrative understanding and storytelling, because a story is typically composed of a sequence of events BIBREF7 . Several story corpora are thus annotated with various event-event relations to understand commonsense event knowledge. CaTeRS BIBREF8 is created by annotating 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. RED BIBREF9 contains annotations of rich relations between event pairs for storyline understanding, including co-reference and partial co-reference relations, temporal; causal, and sub-event relations. Despite multiple productive research threads on temporal and causal relation modeling among events BIBREF10 , BIBREF11 , BIBREF12 and event relation annotation for story understanding BIBREF8 , the intersection of these two threads seems flimsy. To the best of our knowledge, no event relation extraction results have been reported on CaTeRS and RED. We apply neural network models that leverage recent advances in contextualized embeddings (BERT BIBREF13 ) to event-event relation extraction tasks for CaTeRS and RED. Our goal in this paper is to increase understanding of how well the state-of-the-art event relation models work for story/narrative comprehension. In this paper, we report the first results of event temporal relation extraction on two under-explored story comprehension datasets: CaTeRS and RED. We establish strong baselines with neural network models enhanced by recent breakthrough of contextualized embeddings, BERT BIBREF13 . We summarize the contributions of the paper as follows: Models We investigate both neural network-based models and traditional feature-based models. We briefly introduce them in this section. Data is created by annotating 1600 sentences of 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. CaTeRS contains both temporal and causal relations in an effort to understand and predict commonsense relations between events. As demonstrated in Table TABREF16 , we split all stories into 220 training and 80 test. We do not construct the development set because the dataset is small. Note that some relations have compounded labels such as “CAUSE_BEFORE”, “ENABLE_BEFORE”, etc. We only take the temporal portion of the annotations. annotates a wide range of relations of event pairs including their coreference and partial coreference relations, and temporal, causal and subevent relationships. We split data according to the standard train, development, test sets, and only focus on the temporal relations. 
The common issue of these two datasets is that they are not densely annotated – not every pair of events is annotated with a relation. We provide one way to handle negative (unannotated) pairs in this paper. When constructing negative examples, we take all event pairs that occur within the same or neighboring sentences with no annotations, labeling them as “NONE”. The negative to positive samples ratio is 1.00 and 11.5 for CaTeRS and RED respectively. Note that RED data has much higher negative ratio (as shown in Table TABREF16 ) because it contains longer articles, more complicated sentence structures, and richer entity types than CaTeRS where all stories consist of 5 (mostly short) sentences. In both the development and test sets, we add all negative pairs as candidates for the relation prediction. During training, the number of negative pairs we add is based on a hyper-parameter that we tune to control the negative-to-positive sample ratio. To justify our decision of selecting negative pairs within the same or neighboring sentences, we show the distribution of distances across positive sentence pairs in Table TABREF18 . Although CaTeRS data has pair distance more evenly distributed than RED, we observe that the vast majority (85.87% and 93.99% respectively) of positive pairs have sentence distance less than or equal to one. To handle negative pairs that are more than two sentences away, we automatically predict all out-of-window pairs as “NONE”. This means that some positive pairs will be automatically labeled as negative pairs. Since the percentage of out-of-window positive pairs is small, we believe the impact on performance is small. We can investigate expanding the prediction window in future research, but the trade-off is that we will get more negative pairs that are hard to predict. Implementation Details CAEVO consists of both linguistic-rule-based sieves and feature-based trainable sieves. We train CAEVO sieves with our train set and evaluate them on both dev and test sets. CAEVO is an end-to-end system that automatically annotates both events and relations. In order to resolve label annotation mismatch between CAEVO and our gold data, we create our own final input files to CAEVO system. Default parameter settings are used when running the CAEVO system. In an effort of building a general model and reducing the number of hand-crafted features, we leverage pre-trained (GloVe 300) embeddings in place of linguistic features. The only linguistic feature we use in our experiment is token distance. We notice in our experiments that hidden layer size, dropout ratio and negative sample ratio impact model performance significantly. We conduct grid search to find the best hyper-parameter combination according to the performance of the development set. Note that since the CaTeRS data is small and there is no standard train, development, and test splits, we conduct cross-validation on training data to choose the best hyper-parameters and predict on test. For RED data, the standard train, development, test splits are used. As we mentioned briefly in the introduction, using BERT output as word embeddings could provide an additional performance boost in our NN architecture. We pre-process our raw data by feeding original sentences into a pre-trained BERT model and output the last layer of BERT as token representations. In this experiment, we fix the negative sample ratio according to the result obtained from the previous step and only search for the best hidden layer size and dropout ratio. 
Result and Analysis Table TABREF25 contains the best hyper-parameters and Table TABREF26 contains micro-average F1 scores for both datasets on dev and test sets. We only consider positive pairs, i.e. correct predictions on NONE pairs are excluded for evaluation. In general, the baseline model CAEVO is outperformed by both NN models, and NN model with BERT embedding achieves the greatest performance. We now provide more detailed analysis and discussion for each dataset. Temporal Relation Data Collecting dense TempRel corpora with event pairs fully annotated has been reported challenging since annotators could easily overlook some pairs BIBREF18 , BIBREF19 , BIBREF10 . TimeBank BIBREF20 is an example with events and their relations annotated sparsely. TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences. However, densely annotated datasets are relatively small both in terms of number of documents and event pairs, which restricts the complexity of machine learning models used in previous research. Feature-based Models The series of TempEval competitions BIBREF21 , BIBREF22 , BIBREF23 have attracted many research interests in predicting event temporal relations. Early attempts by BIBREF24 , BIBREF21 , BIBREF25 , BIBREF26 only use pair-wise classification models. State-of-the-art local methods, such as ClearTK BIBREF27 , UTTime BIBREF28 , and NavyTime BIBREF29 improve on earlier work by feature engineering with linguistic and syntactic rules. As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets. Additionally, several models BramsenDLB2006, ChambersJ2008, DoLuRo12, NingWuRo18, P18-1212 have successfully incorporated global inference to impose global prediction consistency such as temporal transitivity. Neural Network Model Neural network-based methods have been employed for event temporal relation extraction BIBREF14 , BIBREF15 , BIBREF16 , BIBREF12 which achieved impressive results. However, the dataset they focus on is TB-Dense. We have explored neural network models on CaTeRS and RED, which are more related to story narrative understanding and generation. In our NN model, we also leverage Bidrectional Encoder Representations from Transformers (BERT) BIBREF30 which has shown significant improvement in many NLP tasks by allowing fine-tuning of pre-trained language representations. Unlike the Generative Pre-trained Transformer (OpenAI GPT) BIBREF31 , BERT uses a biderctional Transformer BIBREF32 instead of a unidirectional (left-to-right) Transformer to incorporate context from both directions. As mentioned earlier, we do not fine-tune BERT in our experiments and simply leverage the last layer as our contextualized word representations. Conclusion We established strong baselines for two story narrative understanding datasets: CaTeRS and RED. We have shown that neural network-based models can outperform feature-based models with wide margins, and we conducted an ablation study to show that contextualized representation learning can boost performance of NN models. Further research can focus on more systematic study or build stronger NN models over the same datasets used in this work. 
Exploring possibilities to directly apply temporal relation extraction to enhance performance of story generation systems is another promising research direction. Acknowledgement We thank the anonymous reviewers for their constructive comments, as well as the members of the USC PLUS lab for their early feedback. This work is supported by Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).
neural network-based models can outperform feature-based models with wide margins, contextualized representation learning can boost performance of NN models
191107cd112f7ee6d19c1dc43177e6899452a2c7
191107cd112f7ee6d19c1dc43177e6899452a2c7_0
Q: Do the BERT-based embeddings improve results? Text: Introduction Event temporal relation understanding is a major component of story/narrative comprehension. It is an important natural language understanding (NLU) task with broad applications to downstream tasks such as story understanding BIBREF0 , BIBREF1 , BIBREF2 , question answering BIBREF3 , BIBREF4 , and text summarization BIBREF5 , BIBREF6 . The goal of event temporal relation extraction is to build a directed graph where nodes correspond to events, and edges reflect temporal relations between the events. Figure FIGREF1 illustrates an example of such a graph for the text shown above. Different types of edges specify different temporal relations: the event assassination is before slaughtered, slaughtered is included in rampage, and the relation between rampage and war is vague. Modeling event temporal relations is crucial for story/narrative understanding and storytelling, because a story is typically composed of a sequence of events BIBREF7 . Several story corpora are thus annotated with various event-event relations to understand commonsense event knowledge. CaTeRS BIBREF8 is created by annotating 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. RED BIBREF9 contains annotations of rich relations between event pairs for storyline understanding, including co-reference and partial co-reference relations, temporal; causal, and sub-event relations. Despite multiple productive research threads on temporal and causal relation modeling among events BIBREF10 , BIBREF11 , BIBREF12 and event relation annotation for story understanding BIBREF8 , the intersection of these two threads seems flimsy. To the best of our knowledge, no event relation extraction results have been reported on CaTeRS and RED. We apply neural network models that leverage recent advances in contextualized embeddings (BERT BIBREF13 ) to event-event relation extraction tasks for CaTeRS and RED. Our goal in this paper is to increase understanding of how well the state-of-the-art event relation models work for story/narrative comprehension. In this paper, we report the first results of event temporal relation extraction on two under-explored story comprehension datasets: CaTeRS and RED. We establish strong baselines with neural network models enhanced by recent breakthrough of contextualized embeddings, BERT BIBREF13 . We summarize the contributions of the paper as follows: Models We investigate both neural network-based models and traditional feature-based models. We briefly introduce them in this section. Data is created by annotating 1600 sentences of 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. CaTeRS contains both temporal and causal relations in an effort to understand and predict commonsense relations between events. As demonstrated in Table TABREF16 , we split all stories into 220 training and 80 test. We do not construct the development set because the dataset is small. Note that some relations have compounded labels such as “CAUSE_BEFORE”, “ENABLE_BEFORE”, etc. We only take the temporal portion of the annotations. annotates a wide range of relations of event pairs including their coreference and partial coreference relations, and temporal, causal and subevent relationships. We split data according to the standard train, development, test sets, and only focus on the temporal relations. The common issue of these two datasets is that they are not densely annotated – not every pair of events is annotated with a relation. 
We provide one way to handle negative (unannotated) pairs in this paper. When constructing negative examples, we take all event pairs that occur within the same or neighboring sentences with no annotations, labeling them as “NONE”. The negative to positive samples ratio is 1.00 and 11.5 for CaTeRS and RED respectively. Note that RED data has much higher negative ratio (as shown in Table TABREF16 ) because it contains longer articles, more complicated sentence structures, and richer entity types than CaTeRS where all stories consist of 5 (mostly short) sentences. In both the development and test sets, we add all negative pairs as candidates for the relation prediction. During training, the number of negative pairs we add is based on a hyper-parameter that we tune to control the negative-to-positive sample ratio. To justify our decision of selecting negative pairs within the same or neighboring sentences, we show the distribution of distances across positive sentence pairs in Table TABREF18 . Although CaTeRS data has pair distance more evenly distributed than RED, we observe that the vast majority (85.87% and 93.99% respectively) of positive pairs have sentence distance less than or equal to one. To handle negative pairs that are more than two sentences away, we automatically predict all out-of-window pairs as “NONE”. This means that some positive pairs will be automatically labeled as negative pairs. Since the percentage of out-of-window positive pairs is small, we believe the impact on performance is small. We can investigate expanding the prediction window in future research, but the trade-off is that we will get more negative pairs that are hard to predict. Implementation Details CAEVO consists of both linguistic-rule-based sieves and feature-based trainable sieves. We train CAEVO sieves with our train set and evaluate them on both dev and test sets. CAEVO is an end-to-end system that automatically annotates both events and relations. In order to resolve label annotation mismatch between CAEVO and our gold data, we create our own final input files to CAEVO system. Default parameter settings are used when running the CAEVO system. In an effort of building a general model and reducing the number of hand-crafted features, we leverage pre-trained (GloVe 300) embeddings in place of linguistic features. The only linguistic feature we use in our experiment is token distance. We notice in our experiments that hidden layer size, dropout ratio and negative sample ratio impact model performance significantly. We conduct grid search to find the best hyper-parameter combination according to the performance of the development set. Note that since the CaTeRS data is small and there is no standard train, development, and test splits, we conduct cross-validation on training data to choose the best hyper-parameters and predict on test. For RED data, the standard train, development, test splits are used. As we mentioned briefly in the introduction, using BERT output as word embeddings could provide an additional performance boost in our NN architecture. We pre-process our raw data by feeding original sentences into a pre-trained BERT model and output the last layer of BERT as token representations. In this experiment, we fix the negative sample ratio according to the result obtained from the previous step and only search for the best hidden layer size and dropout ratio. 
Result and Analysis Table TABREF25 contains the best hyper-parameters and Table TABREF26 contains micro-average F1 scores for both datasets on dev and test sets. We only consider positive pairs, i.e. correct predictions on NONE pairs are excluded for evaluation. In general, the baseline model CAEVO is outperformed by both NN models, and NN model with BERT embedding achieves the greatest performance. We now provide more detailed analysis and discussion for each dataset. Temporal Relation Data Collecting dense TempRel corpora with event pairs fully annotated has been reported challenging since annotators could easily overlook some pairs BIBREF18 , BIBREF19 , BIBREF10 . TimeBank BIBREF20 is an example with events and their relations annotated sparsely. TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences. However, densely annotated datasets are relatively small both in terms of number of documents and event pairs, which restricts the complexity of machine learning models used in previous research. Feature-based Models The series of TempEval competitions BIBREF21 , BIBREF22 , BIBREF23 have attracted many research interests in predicting event temporal relations. Early attempts by BIBREF24 , BIBREF21 , BIBREF25 , BIBREF26 only use pair-wise classification models. State-of-the-art local methods, such as ClearTK BIBREF27 , UTTime BIBREF28 , and NavyTime BIBREF29 improve on earlier work by feature engineering with linguistic and syntactic rules. As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets. Additionally, several models BramsenDLB2006, ChambersJ2008, DoLuRo12, NingWuRo18, P18-1212 have successfully incorporated global inference to impose global prediction consistency such as temporal transitivity. Neural Network Model Neural network-based methods have been employed for event temporal relation extraction BIBREF14 , BIBREF15 , BIBREF16 , BIBREF12 which achieved impressive results. However, the dataset they focus on is TB-Dense. We have explored neural network models on CaTeRS and RED, which are more related to story narrative understanding and generation. In our NN model, we also leverage Bidrectional Encoder Representations from Transformers (BERT) BIBREF30 which has shown significant improvement in many NLP tasks by allowing fine-tuning of pre-trained language representations. Unlike the Generative Pre-trained Transformer (OpenAI GPT) BIBREF31 , BERT uses a biderctional Transformer BIBREF32 instead of a unidirectional (left-to-right) Transformer to incorporate context from both directions. As mentioned earlier, we do not fine-tune BERT in our experiments and simply leverage the last layer as our contextualized word representations. Conclusion We established strong baselines for two story narrative understanding datasets: CaTeRS and RED. We have shown that neural network-based models can outperform feature-based models with wide margins, and we conducted an ablation study to show that contextualized representation learning can boost performance of NN models. Further research can focus on more systematic study or build stronger NN models over the same datasets used in this work. 
Exploring possibilities to directly apply temporal relation extraction to enhance performance of story generation systems is another promising research direction. Acknowledgement We thank the anonymous reviewers for their constructive comments, as well as the members of the USC PLUS lab for their early feedback. This work is supported by Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).
Yes
b0dca7b74934f51ff3da0c074ad659c25d84174d
b0dca7b74934f51ff3da0c074ad659c25d84174d_0
Q: What were the traditional linguistic feature-based models? Text: Introduction Event temporal relation understanding is a major component of story/narrative comprehension. It is an important natural language understanding (NLU) task with broad applications to downstream tasks such as story understanding BIBREF0 , BIBREF1 , BIBREF2 , question answering BIBREF3 , BIBREF4 , and text summarization BIBREF5 , BIBREF6 . The goal of event temporal relation extraction is to build a directed graph where nodes correspond to events, and edges reflect temporal relations between the events. Figure FIGREF1 illustrates an example of such a graph for the text shown above. Different types of edges specify different temporal relations: the event assassination is before slaughtered, slaughtered is included in rampage, and the relation between rampage and war is vague. Modeling event temporal relations is crucial for story/narrative understanding and storytelling, because a story is typically composed of a sequence of events BIBREF7 . Several story corpora are thus annotated with various event-event relations to understand commonsense event knowledge. CaTeRS BIBREF8 is created by annotating 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. RED BIBREF9 contains annotations of rich relations between event pairs for storyline understanding, including co-reference and partial co-reference relations, temporal; causal, and sub-event relations. Despite multiple productive research threads on temporal and causal relation modeling among events BIBREF10 , BIBREF11 , BIBREF12 and event relation annotation for story understanding BIBREF8 , the intersection of these two threads seems flimsy. To the best of our knowledge, no event relation extraction results have been reported on CaTeRS and RED. We apply neural network models that leverage recent advances in contextualized embeddings (BERT BIBREF13 ) to event-event relation extraction tasks for CaTeRS and RED. Our goal in this paper is to increase understanding of how well the state-of-the-art event relation models work for story/narrative comprehension. In this paper, we report the first results of event temporal relation extraction on two under-explored story comprehension datasets: CaTeRS and RED. We establish strong baselines with neural network models enhanced by recent breakthrough of contextualized embeddings, BERT BIBREF13 . We summarize the contributions of the paper as follows: Models We investigate both neural network-based models and traditional feature-based models. We briefly introduce them in this section. Data is created by annotating 1600 sentences of 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. CaTeRS contains both temporal and causal relations in an effort to understand and predict commonsense relations between events. As demonstrated in Table TABREF16 , we split all stories into 220 training and 80 test. We do not construct the development set because the dataset is small. Note that some relations have compounded labels such as “CAUSE_BEFORE”, “ENABLE_BEFORE”, etc. We only take the temporal portion of the annotations. annotates a wide range of relations of event pairs including their coreference and partial coreference relations, and temporal, causal and subevent relationships. We split data according to the standard train, development, test sets, and only focus on the temporal relations. 
The common issue of these two datasets is that they are not densely annotated – not every pair of events is annotated with a relation. We provide one way to handle negative (unannotated) pairs in this paper. When constructing negative examples, we take all event pairs that occur within the same or neighboring sentences with no annotations, labeling them as “NONE”. The negative to positive samples ratio is 1.00 and 11.5 for CaTeRS and RED respectively. Note that RED data has much higher negative ratio (as shown in Table TABREF16 ) because it contains longer articles, more complicated sentence structures, and richer entity types than CaTeRS where all stories consist of 5 (mostly short) sentences. In both the development and test sets, we add all negative pairs as candidates for the relation prediction. During training, the number of negative pairs we add is based on a hyper-parameter that we tune to control the negative-to-positive sample ratio. To justify our decision of selecting negative pairs within the same or neighboring sentences, we show the distribution of distances across positive sentence pairs in Table TABREF18 . Although CaTeRS data has pair distance more evenly distributed than RED, we observe that the vast majority (85.87% and 93.99% respectively) of positive pairs have sentence distance less than or equal to one. To handle negative pairs that are more than two sentences away, we automatically predict all out-of-window pairs as “NONE”. This means that some positive pairs will be automatically labeled as negative pairs. Since the percentage of out-of-window positive pairs is small, we believe the impact on performance is small. We can investigate expanding the prediction window in future research, but the trade-off is that we will get more negative pairs that are hard to predict. Implementation Details CAEVO consists of both linguistic-rule-based sieves and feature-based trainable sieves. We train CAEVO sieves with our train set and evaluate them on both dev and test sets. CAEVO is an end-to-end system that automatically annotates both events and relations. In order to resolve label annotation mismatch between CAEVO and our gold data, we create our own final input files to CAEVO system. Default parameter settings are used when running the CAEVO system. In an effort of building a general model and reducing the number of hand-crafted features, we leverage pre-trained (GloVe 300) embeddings in place of linguistic features. The only linguistic feature we use in our experiment is token distance. We notice in our experiments that hidden layer size, dropout ratio and negative sample ratio impact model performance significantly. We conduct grid search to find the best hyper-parameter combination according to the performance of the development set. Note that since the CaTeRS data is small and there is no standard train, development, and test splits, we conduct cross-validation on training data to choose the best hyper-parameters and predict on test. For RED data, the standard train, development, test splits are used. As we mentioned briefly in the introduction, using BERT output as word embeddings could provide an additional performance boost in our NN architecture. We pre-process our raw data by feeding original sentences into a pre-trained BERT model and output the last layer of BERT as token representations. In this experiment, we fix the negative sample ratio according to the result obtained from the previous step and only search for the best hidden layer size and dropout ratio. 
Result and Analysis Table TABREF25 contains the best hyper-parameters and Table TABREF26 contains micro-average F1 scores for both datasets on dev and test sets. We only consider positive pairs, i.e. correct predictions on NONE pairs are excluded for evaluation. In general, the baseline model CAEVO is outperformed by both NN models, and NN model with BERT embedding achieves the greatest performance. We now provide more detailed analysis and discussion for each dataset. Temporal Relation Data Collecting dense TempRel corpora with event pairs fully annotated has been reported challenging since annotators could easily overlook some pairs BIBREF18 , BIBREF19 , BIBREF10 . TimeBank BIBREF20 is an example with events and their relations annotated sparsely. TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences. However, densely annotated datasets are relatively small both in terms of number of documents and event pairs, which restricts the complexity of machine learning models used in previous research. Feature-based Models The series of TempEval competitions BIBREF21 , BIBREF22 , BIBREF23 have attracted many research interests in predicting event temporal relations. Early attempts by BIBREF24 , BIBREF21 , BIBREF25 , BIBREF26 only use pair-wise classification models. State-of-the-art local methods, such as ClearTK BIBREF27 , UTTime BIBREF28 , and NavyTime BIBREF29 improve on earlier work by feature engineering with linguistic and syntactic rules. As we mention in the Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It's widely used as the baseline for evaluating TB-Dense data. We adopt it as our baseline for evaluating CaTeRS and RED datasets. Additionally, several models BramsenDLB2006, ChambersJ2008, DoLuRo12, NingWuRo18, P18-1212 have successfully incorporated global inference to impose global prediction consistency such as temporal transitivity. Neural Network Model Neural network-based methods have been employed for event temporal relation extraction BIBREF14 , BIBREF15 , BIBREF16 , BIBREF12 which achieved impressive results. However, the dataset they focus on is TB-Dense. We have explored neural network models on CaTeRS and RED, which are more related to story narrative understanding and generation. In our NN model, we also leverage Bidrectional Encoder Representations from Transformers (BERT) BIBREF30 which has shown significant improvement in many NLP tasks by allowing fine-tuning of pre-trained language representations. Unlike the Generative Pre-trained Transformer (OpenAI GPT) BIBREF31 , BERT uses a biderctional Transformer BIBREF32 instead of a unidirectional (left-to-right) Transformer to incorporate context from both directions. As mentioned earlier, we do not fine-tune BERT in our experiments and simply leverage the last layer as our contextualized word representations. Conclusion We established strong baselines for two story narrative understanding datasets: CaTeRS and RED. We have shown that neural network-based models can outperform feature-based models with wide margins, and we conducted an ablation study to show that contextualized representation learning can boost performance of NN models. Further research can focus on more systematic study or build stronger NN models over the same datasets used in this work. 
Exploring possibilities to directly apply temporal relation extraction to enhance performance of story generation systems is another promising research direction. Acknowledgement We thank the anonymous reviewers for their constructive comments, as well as the members of the USC PLUS lab for their early feedback. This work is supported by Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).
CAEVO
601e58a3d2c03a0b4cd627c81c6228a714e43903
601e58a3d2c03a0b4cd627c81c6228a714e43903_0
Q: What type of baseline are established for the two datasets? Text: Introduction Event temporal relation understanding is a major component of story/narrative comprehension. It is an important natural language understanding (NLU) task with broad applications to downstream tasks such as story understanding BIBREF0 , BIBREF1 , BIBREF2 , question answering BIBREF3 , BIBREF4 , and text summarization BIBREF5 , BIBREF6 . The goal of event temporal relation extraction is to build a directed graph where nodes correspond to events, and edges reflect temporal relations between the events. Figure FIGREF1 illustrates an example of such a graph for the text shown above. Different types of edges specify different temporal relations: the event assassination is before slaughtered, slaughtered is included in rampage, and the relation between rampage and war is vague. Modeling event temporal relations is crucial for story/narrative understanding and storytelling, because a story is typically composed of a sequence of events BIBREF7 . Several story corpora are thus annotated with various event-event relations to understand commonsense event knowledge. CaTeRS BIBREF8 is created by annotating 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. RED BIBREF9 contains annotations of rich relations between event pairs for storyline understanding, including co-reference and partial co-reference relations, temporal; causal, and sub-event relations. Despite multiple productive research threads on temporal and causal relation modeling among events BIBREF10 , BIBREF11 , BIBREF12 and event relation annotation for story understanding BIBREF8 , the intersection of these two threads seems flimsy. To the best of our knowledge, no event relation extraction results have been reported on CaTeRS and RED. We apply neural network models that leverage recent advances in contextualized embeddings (BERT BIBREF13 ) to event-event relation extraction tasks for CaTeRS and RED. Our goal in this paper is to increase understanding of how well the state-of-the-art event relation models work for story/narrative comprehension. In this paper, we report the first results of event temporal relation extraction on two under-explored story comprehension datasets: CaTeRS and RED. We establish strong baselines with neural network models enhanced by recent breakthrough of contextualized embeddings, BERT BIBREF13 . We summarize the contributions of the paper as follows: Models We investigate both neural network-based models and traditional feature-based models. We briefly introduce them in this section. Data is created by annotating 1600 sentences of 320 five-sentence stories sampled from ROCStories BIBREF7 dataset. CaTeRS contains both temporal and causal relations in an effort to understand and predict commonsense relations between events. As demonstrated in Table TABREF16 , we split all stories into 220 training and 80 test. We do not construct the development set because the dataset is small. Note that some relations have compounded labels such as “CAUSE_BEFORE”, “ENABLE_BEFORE”, etc. We only take the temporal portion of the annotations. annotates a wide range of relations of event pairs including their coreference and partial coreference relations, and temporal, causal and subevent relationships. We split data according to the standard train, development, test sets, and only focus on the temporal relations. 
The common issue of these two datasets is that they are not densely annotated – not every pair of events is annotated with a relation. We provide one way to handle negative (unannotated) pairs in this paper. When constructing negative examples, we take all event pairs that occur within the same or neighboring sentences with no annotations, labeling them as “NONE”. The negative to positive samples ratio is 1.00 and 11.5 for CaTeRS and RED respectively. Note that RED data has much higher negative ratio (as shown in Table TABREF16 ) because it contains longer articles, more complicated sentence structures, and richer entity types than CaTeRS where all stories consist of 5 (mostly short) sentences. In both the development and test sets, we add all negative pairs as candidates for the relation prediction. During training, the number of negative pairs we add is based on a hyper-parameter that we tune to control the negative-to-positive sample ratio. To justify our decision of selecting negative pairs within the same or neighboring sentences, we show the distribution of distances across positive sentence pairs in Table TABREF18 . Although CaTeRS data has pair distance more evenly distributed than RED, we observe that the vast majority (85.87% and 93.99% respectively) of positive pairs have sentence distance less than or equal to one. To handle negative pairs that are more than two sentences away, we automatically predict all out-of-window pairs as “NONE”. This means that some positive pairs will be automatically labeled as negative pairs. Since the percentage of out-of-window positive pairs is small, we believe the impact on performance is small. We can investigate expanding the prediction window in future research, but the trade-off is that we will get more negative pairs that are hard to predict. Implementation Details CAEVO consists of both linguistic-rule-based sieves and feature-based trainable sieves. We train CAEVO sieves with our train set and evaluate them on both dev and test sets. CAEVO is an end-to-end system that automatically annotates both events and relations. In order to resolve label annotation mismatch between CAEVO and our gold data, we create our own final input files to CAEVO system. Default parameter settings are used when running the CAEVO system. In an effort of building a general model and reducing the number of hand-crafted features, we leverage pre-trained (GloVe 300) embeddings in place of linguistic features. The only linguistic feature we use in our experiment is token distance. We notice in our experiments that hidden layer size, dropout ratio and negative sample ratio impact model performance significantly. We conduct grid search to find the best hyper-parameter combination according to the performance of the development set. Note that since the CaTeRS data is small and there is no standard train, development, and test splits, we conduct cross-validation on training data to choose the best hyper-parameters and predict on test. For RED data, the standard train, development, test splits are used. As we mentioned briefly in the introduction, using BERT output as word embeddings could provide an additional performance boost in our NN architecture. We pre-process our raw data by feeding original sentences into a pre-trained BERT model and output the last layer of BERT as token representations. In this experiment, we fix the negative sample ratio according to the result obtained from the previous step and only search for the best hidden layer size and dropout ratio. 
Results and Analysis Table TABREF25 contains the best hyper-parameters and Table TABREF26 contains micro-average F1 scores for both datasets on the dev and test sets. We only consider positive pairs, i.e. correct predictions on NONE pairs are excluded for evaluation. In general, the baseline model CAEVO is outperformed by both NN models, and the NN model with BERT embeddings achieves the best performance. We now provide more detailed analysis and discussion for each dataset. Temporal Relation Data Collecting dense TempRel corpora with event pairs fully annotated has been reported to be challenging since annotators could easily overlook some pairs BIBREF18 , BIBREF19 , BIBREF10 . TimeBank BIBREF20 is an example with events and their relations annotated sparsely. The TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences. However, densely annotated datasets are relatively small both in terms of number of documents and event pairs, which restricts the complexity of machine learning models used in previous research. Feature-based Models The series of TempEval competitions BIBREF21 , BIBREF22 , BIBREF23 has attracted much research interest in predicting event temporal relations. Early attempts by BIBREF24 , BIBREF21 , BIBREF25 , BIBREF26 only use pair-wise classification models. State-of-the-art local methods, such as ClearTK BIBREF27 , UTTime BIBREF28 , and NavyTime BIBREF29 , improve on earlier work by feature engineering with linguistic and syntactic rules. As we mention in Section 2, CAEVO is the current state-of-the-art system for feature-based temporal event relation extraction BIBREF10 . It is widely used as the baseline for evaluating TB-Dense data, and we adopt it as our baseline for evaluating the CaTeRS and RED datasets. Additionally, several models BramsenDLB2006, ChambersJ2008, DoLuRo12, NingWuRo18, P18-1212 have successfully incorporated global inference to impose global prediction consistency such as temporal transitivity. Neural Network Model Neural network-based methods have been employed for event temporal relation extraction BIBREF14 , BIBREF15 , BIBREF16 , BIBREF12 and have achieved impressive results. However, the dataset they focus on is TB-Dense. We have explored neural network models on CaTeRS and RED, which are more closely related to story narrative understanding and generation. In our NN model, we also leverage Bidirectional Encoder Representations from Transformers (BERT) BIBREF30 , which has shown significant improvement in many NLP tasks by allowing fine-tuning of pre-trained language representations. Unlike the Generative Pre-trained Transformer (OpenAI GPT) BIBREF31 , BERT uses a bidirectional Transformer BIBREF32 instead of a unidirectional (left-to-right) Transformer to incorporate context from both directions. As mentioned earlier, we do not fine-tune BERT in our experiments and simply use the last layer as our contextualized word representations. Conclusion We established strong baselines for two story narrative understanding datasets: CaTeRS and RED. We have shown that neural network-based models can outperform feature-based models by wide margins, and we conducted an ablation study to show that contextualized representation learning can boost the performance of NN models. Future research can focus on more systematic studies or on building stronger NN models over the same datasets used in this work. 
Exploring possibilities to directly apply temporal relation extraction to enhance performance of story generation systems is another promising research direction. Acknowledgement We thank the anonymous reviewers for their constructive comments, as well as the members of the USC PLUS lab for their early feedback. This work is supported by Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA).
CAEVO
a0fbf90ceb520626b80ff0f9160b3cd5029585a5
a0fbf90ceb520626b80ff0f9160b3cd5029585a5_0
Q: What model achieves state of the art performance on this task? Text: Introduction Machine Learning, in general, and affective computing, in particular, rely on good data representations or features that have a good discriminatory faculty in classification and regression experiments, such as emotion recognition from speech. To derive efficient representations of data, researchers have adopted two main strategies: (1) carefully crafted and tailored feature extractors designed for a particular task BIBREF0 and (2) algorithms that learn representations automatically from the data itself BIBREF1 . The latter approach is called Representation Learning (RL), and has received growing attention in the past few years and is highly reliant on large quantities of data. Most approaches for emotion recognition from speech still rely on the extraction of standard acoustic features such as pitch, shimmer, jitter and MFCCs (Mel-Frequency Cepstral Coefficients), with a few notable exceptions BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In this work we leverage RL strategies and automatically learn representations of emotional speech from the spectrogram directly using a deep convolutional neural network (CNN) architecture. To learn strong representations of speech we seek to leverage as much data as possible. However, emotion annotations are difficult to obtain and scarce BIBREF6 . We leverage the USC-IEMOCAP dataset, which comprises of around 12 hours of highly emotional and partly acted data from 10 speakers BIBREF7 . However, we aim to improve the learned representations of emotional speech with unlabeled speech data from an unrelated meeting corpus, which consists of about 100 hours of data BIBREF8 . While the meeting corpus is qualitatively quite different from the highly emotional USC-IEMOCAP data, we believe that the learned representations will improve through the use of these additional data. This combination of two separate data sources leads to a semi-supervised machine learning task and we extend the CNN architecture to a deep convolutional generative neural network (DCGAN) that can be trained in an unsupervised fashion BIBREF9 . Within this work, we particularly target emotional valence as the primary task, as it has been shown to be the most challenging emotional dimension for acoustic analyses in a number of studies BIBREF10 , BIBREF11 . Apart from solely targeting valence classification, we further investigate the principle of multitask learning. In multitask learning, a set of related tasks are learned (e.g., emotional activation), along with a primary task (e.g., emotional valence); both tasks share parts of the network topology and are hence jointly trained, as depicted in Figure FIGREF4 . It is expected that data for the secondary task models information, which would also be discriminative in learning the primary task. In fact, this approach has been shown to improve generalizability across corpora BIBREF12 . The remainder of this paper is organized as follows: First we introduce the DCGAN model and discuss prior work, in Section SECREF2 . Then we describe our specific multitask DCGAN model in Section SECREF3 , introduce the datasets in Section SECREF4 , and describe our experimental design in Section SECREF5 . Finally, we report our results in Section SECREF6 and discuss our findings in Section SECREF7 . Related Work The proposed model builds upon previous results in the field of emotion recognition, and leverages prior work in representation learning. 
Multitask learning has been effective in some prior experiments on emotion detection. In particular, Xia and Liu proposed a multitask model for emotion recognition which, like the investigated model, has activation and valence as targets BIBREF13 . Their work uses a Deep Belief Network (DBN) architecture to classify the emotion of audio input, with valence and activation as secondary tasks. Their experiments indicate that the use of multitask learning produces improved unweighted accuracy on the emotion classification task. Like Xia and Liu, the proposed model uses multitask learning with valence and activation as targets. Unlike them, however, we are primarily interested not in emotion classification, but in valence classification as a primary task. Thus, our multitask model has valence as a primary target and activation as a secondary target. Also, while our experiments use the IEMOCAP database like Xia and Liu do, our method of speaker split differs from theirs. Xia and Liu use a leave-one-speaker-out cross validation scheme with separate train and test sets that have no speaker overlap. This method lacks a distinct validation set; instead, they validate and test on the same set. Our experimental setup, on the other hand, splits the data into distinct train, validation, and test sets, still with no speaker overlap. This is described in greater detail in Section SECREF5 . The unsupervised learning part of the investigated model builds upon an architecture known as the deep convolutional generative adversarial network, or DCGAN. DCGAN consists of two components, known as the generator and discriminator, which are trained against each other in a minimax setup. The generator learns to map samples from a random distribution to output matrices of some pre-specified form. The discriminator takes an input which is either a generator output or a “real” sample from a dataset. The discriminator learns to classify the input as either generated or real BIBREF9 . For training, the discriminator uses a cross entropy loss function based on how many inputs were correctly classified as real and how many were correctly classified as generated. The cross entropy loss between true labels INLINEFORM0 and predictions INLINEFORM1 is defined as: DISPLAYFORM0 Where INLINEFORM0 is the learned vector of weights, and INLINEFORM1 is the number of samples. For purposes of this computation, labels are represented as numerical values of 1 for real and 0 for generated. Then, letting INLINEFORM2 represent the discriminator's predictions for all real inputs, the cross entropy for correct predictions of “real” simplifies to: INLINEFORM3 Because in this case the correct predictions are all ones. Similarly, letting INLINEFORM0 represent the discriminator's predictions for all generated inputs, the cross entropy for correct predictions of “generated” simplifies to: INLINEFORM1 Because here the correct predictions are all zeroes. The total loss for the discriminator is given by the sum of the previous two terms INLINEFORM0 . The generator also uses a cross entropy loss, but its loss is defined in terms of how many generated outputs got incorrectly classified as real: INLINEFORM0 Thus, the generator's loss gets lower the better it is able to produce outputs that the discriminator thinks are real. This leads the generator to eventually produce outputs that look like real samples of speech given sufficient training iterations. 
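The adversarial losses just described can be written compactly with standard TensorFlow ops. The sketch below is only an illustration of those two loss terms, not the authors' code; the logit tensor names are assumptions.

```python
# A sketch of the adversarial losses as described above, written with TensorFlow ops.
import tensorflow as tf

def discriminator_loss(d_logits_real, d_logits_fake):
    # Cross entropy for real inputs correctly classified as real (label 1) ...
    loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(d_logits_real), logits=d_logits_real))
    # ... plus cross entropy for generated inputs correctly classified as generated (label 0).
    loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.zeros_like(d_logits_fake), logits=d_logits_fake))
    return loss_real + loss_fake

def generator_loss(d_logits_fake):
    # The generator is rewarded when its outputs are classified as real by the discriminator.
    return tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(d_logits_fake), logits=d_logits_fake))
```

Minimizing the generator loss while the discriminator minimizes its own loss reproduces the minimax setup described above.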
Multitask Deep Convolutional Generative Adversarial Network The investigated multitask model is based upon the DCGAN architecture described in Section SECREF2 and is implemented in TensorFlow. For emotion classification a fully connected layer is attached to the final convolutional layer of the DCGAN's discriminator. The output of this layer is then fed to two separate fully connected layers, one of which outputs a valence label and the other of which outputs an activation label. This setup is shown visually in Figure FIGREF4 . Through this setup, the model is able to take advantage of unlabeled data during training by feeding it through the DCGAN layers in the model, and is also able to take advantage of multitask learning and train the valence and activation outputs simultaneously. In particular, the model is trained by iteratively running the generator, discriminator, valence classifier, and activation classifier, and back-propagating the error for each component through the network. The loss functions for the generator and discriminator are unaltered, and remain as shown in Section SECREF2 . Both the valence classifier and activation classifier use cross entropy loss as in Equation EQREF2 . Since the valence and activation classifiers share layers with the discriminator the model learns features and convolutional filters that are effective for the tasks of valence classification, activation classification, and discriminating between real and generated samples. Data Corpus Due to the semi-supervised nature of the proposed Multitask DCGAN model, we utilize both labeled and unlabeled data. For the unlabeled data, we use audio from the AMI BIBREF8 and IEMOCAP BIBREF7 datasets. For the labeled data, we use audio from the IEMOCAP dataset, which comes with labels for activation and valence, both measured on a 5-point Likert scale from three distinct annotators. Although IEMOCAP provides per-word activation and valence labels, in practice these labels do not generally change over time in a given audio file, and so for simplicity we label each audio clip with the average valence and activation. Since valence and activation are both measured on a 5-point scale, the labels are encoded in 5-element one-hot vectors. For instance, a valence of 5 is represented with the vector INLINEFORM0 . The one-hot encoding can be thought of as a probability distribution representing the likelihood of the correct label being some particular value. Thus, in cases where the annotators disagree on the valence or activation label, this can be represented by assigning probabilities to multiple positions in the label vector. For instance, a label of 4.5 conceptually means that the “correct” valence is either 4 or 5 with equal probability, so the corresponding vector would be INLINEFORM1 . These “fuzzy labels” have been shown to improve classification performance in a number of applications BIBREF14 , BIBREF15 . It should be noted here that we had generally greater success with this fuzzy label method than training the neural network model on the valence label directly, i.e. classification task vs. regression. Pre-processing. Audio data is fed to the network models in the form of spectrograms. The spectrograms are computed using a short time Fourier transform with window size of 1024 samples, which at the 16 kHz sampling rate is equivalent to 64 ms. Each spectrogram is 128 pixels high, representing the frequency range 0-11 kHz. 
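A small sketch of the “fuzzy label” encoding described above, which turns an averaged 5-point rating into a 5-element probability vector; the function name and the use of NumPy are incidental choices rather than the authors' code.

```python
# Sketch of the fuzzy label encoding: an averaged Likert rating is split between
# the two neighbouring integer labels when annotators disagree.
import numpy as np

def fuzzy_label(rating: float, n_classes: int = 5) -> np.ndarray:
    """e.g. fuzzy_label(5.0) -> [0,0,0,0,1]; fuzzy_label(4.5) -> [0,0,0,0.5,0.5]."""
    vec = np.zeros(n_classes)
    lo, hi = int(np.floor(rating)), int(np.ceil(rating))
    if lo == hi:
        vec[lo - 1] = 1.0              # labels are 1-indexed on the 5-point scale
    else:
        vec[hi - 1] = rating - lo      # mass assigned to the upper neighbour
        vec[lo - 1] = hi - rating      # remaining mass to the lower neighbour
    return vec
```

For example, fuzzy_label(4.5) yields [0, 0, 0, 0.5, 0.5], matching the example in the text.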
Due to the varying lengths of the IEMOCAP audio files, the spectrograms vary in width, which poses a problem for the batching process of the neural network training. To compensate for this, the model randomly crops a region of each input spectrogram. The crop width is determined in advance. To ensure that the selected crop region contains at least some data (i.e. is not entirely silence), cropping occurs using the following procedure: a random word in the transcript of the audio file is selected, and the corresponding time range is looked up. A random point within this time range is selected, which is then treated as the center line of the crop. The crop is then made using the region defined by the center line and crop width. Early on, we found that there is a noticeable imbalance in the valence labels for the IEMOCAP data, in that the labels skew heavily towards the neutral (2-3) range. In order to prevent the model from overfitting to this distribution during training, we normalize the training data by oversampling underrepresented valence data, such that the overall distribution of valence labels is more even. Experimental Design Investigated Models. We investigate the impact of both unlabeled data for improved emotional speech representations and multitask learning on emotional valence classification performance. To this end, we compared four different neural network models: The BasicCNN represents a “bare minimum” valence classifier and thus sets a lower bound for expected performance. Comparison with MultitaskCNN indicates the effect of the inclusion of a secondary task, i.e. emotional activation recognition. Comparison with BasicDCGAN indicates the effect of the incorporation of unlabeled data during training. For fairness, the architectures of all three baselines are based upon the full MultitaskDCGAN model. BasicDCGAN for example is simply the MultitaskDCGAN model with the activation layer removed, while the two fully supervised baselines were built by taking the convolutional layers from the discriminator component of the MultitaskDCGAN, and adding fully connected layers for valence and activation output. Specifically, the discriminator contains four convolutional layers; there is no explicit pooling but the kernel stride size is 2 so image size gets halved at each step. Thus, by design, all four models have this same convolutional structure. This is to ensure that potential performance gains do not stem from a larger complexity or higher number of trainable weights within the DCGAN models, but rather stem from improved representations of speech. Experimental Procedure. The parameters for each model, including batch size, filter size (for convolution), and learning rate, were determined by randomly sampling different parameter combinations, training the model with those parameters, and computing accuracy on a held-out validation set. For each model, we kept the parameters that yield the best accuracy on the held-out set. This procedure ensures that each model is fairly represented during evaluation. Our hyper-parameters included crop width of the input signal INLINEFORM0 , convolutional layer filter sizes INLINEFORM1 (where INLINEFORM2 is the selected crop width and gets divided by 8 to account for each halving of image size from the 3 convolutional layers leading up to the last one), number of convolutional filters INLINEFORM3 (step size 4), batch size INLINEFORM4 , and learning rates INLINEFORM5 . Identified parameters per model are shown in Table TABREF9 . 
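The random word-centred cropping described above might look roughly as follows; the availability of word timings as (word, start, end) tuples and a fixed frames-per-second rate are assumptions made for this sketch.

```python
# Sketch of the cropping procedure: pick a random transcribed word, pick a random
# time point inside it, and crop a fixed-width window of spectrogram frames around it.
import random

def crop_spectrogram(spec, word_times, crop_width, frames_per_sec, seed=None):
    """spec: array of shape [freq_bins, n_frames]; word_times: list of (word, start_sec, end_sec)."""
    rng = random.Random(seed)
    _, start, end = rng.choice(word_times)                 # random word from the transcript
    center_frame = int(rng.uniform(start, end) * frames_per_sec)
    n_frames = spec.shape[1]
    # Clamp so the crop stays inside the spectrogram.
    left = max(0, min(center_frame - crop_width // 2, n_frames - crop_width))
    return spec[:, left:left + crop_width]
```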
For evaluation, we utilized a 5-fold leave-one-session-out validation. Each fold leaves one of the five sessions in the labeled IEMOCAP data out of the training set entirely. From this left-out conversation, one speaker's audio is used as a validation set, while the other speaker's audio is used as a test set. For each fold, the evaluation procedure is as follows: the model being evaluated is trained on the training set, and after each full pass through the training set, accuracy is computed on the validation set. This process continues until the accuracy on the validation set is found to no longer increase; in other words, we locate a local maximum in validation accuracy. To increase the certainty that this local maximum is truly representative of the model's best performance, we continue to run more iterations after a local maximum is found, and look for 5 consecutive iterations with lower accuracy values. If, in the course of these 5 iterations, a higher accuracy value is found, that is treated as the new local maximum and the search restarts from there. Once a best accuracy value is found in this manner, we restore the model's weights to those of the iteration corresponding to the best accuracy, and evaluate the accuracy on the test set. We evaluated each model on all 5 folds using the methodology described above, recording test accuracies for each fold. Evaluation Strategy. We collected several statistics about our models' performances. We were primarily interested in the unweighted per-class accuracy. In addition, we converted the network's output from probability distributions back into numerical labels by taking the expected value; that is: INLINEFORM0 where INLINEFORM1 is the model's prediction, in its original form as a 5 element vector probability distribution. We then used this to compute the Pearson correlation ( INLINEFORM2 measure) between predicted and actual labels. Some pre-processing was needed to obtain accurate measures. In particular, in cases where human annotators were perfectly split on what the correct label for a particular sample should be, both possibilities should be accepted as correct predictions. For instance, if the correct label is 4.5 (vector form INLINEFORM0 ), a correct prediction could be either 4 or 5 (i.e. the maximum index in the output vector should either be 4 or 5). The above measures are for a 5-point labeling scale, which is how the IEMOCAP data is originally labeled. However, prior experiments have evaluated performance on valence classification on a 3-point scale BIBREF16 . The authors provide an example of this, with valence levels 1 and 2 being pooled into a single “negative” category, valence level 3 remaining untouched as the “neutral” category, and valence levels 4 and 5 being pooled into a single “positive” category. Thus, to allow for comparison with our models, we also report results on such a 3-point scale. We construct these results by taking the results for 5 class comparison and pooling them as just described. Results Table TABREF10 shows the unweighted per-class accuracies and Pearson correlation coeffecients ( INLINEFORM0 values) between actual and predicted labels for each model. All values shown are average values across the test sets for all 5 folds. Results indicate that the use of unsupervised learning yields a clear improvement in performance. Both BasicDCGAN and MultitaskDCGAN have considerably better accuracies and linear correlations compared to the fully supervised CNN models. 
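For reference, the label post-processing used in the evaluation above (expected-value decoding of the 5-way output distribution and pooling of the 5-point scale into three classes) can be sketched as follows; function names are illustrative.

```python
# Sketch of converting the model's output distribution back to a numeric label and
# pooling 5-point valence labels into negative / neutral / positive.
import numpy as np

def expected_label(probs) -> float:
    """E[label] for a distribution over the 5-point scale (labels 1..5)."""
    return float(np.dot(probs, np.arange(1, 6)))

def pool_to_3_classes(label_5: int) -> str:
    if label_5 <= 2:
        return "negative"
    if label_5 == 3:
        return "neutral"
    return "positive"
```

For instance, expected_label([0, 0, 0, 0.5, 0.5]) returns 4.5; for the 3-class comparison one would round such a value before pooling.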
This is a strong indication that the use of large quantities of task-unrelated speech data improved the filter learning in the CNN layers of the DCGAN discriminator. Multitask learning, on the other hand, does not appear to have any positive impact on performance. Comparing the two CNN models, the addition of multitask learning actually appears to impair performance, with MultitaskCNN doing worse than BasicCNN in all three metrics. The difference is smaller when comparing BasicDCGAN and MultitaskDCGAN, and may not be enough to decidedly conclude that the use of multitask learning has a net negative impact there, but certainly there is no indication of a net positive impact. The observed performance of both the BasicDCGAN and MultitaskDCGAN using 3-classes is comparable to the state-of-the-art, with 49.80% compared to 49.99% reported in BIBREF16 . It needs to be noted that in BIBREF16 data from the test speaker's session partner was utilized in the training of the model. Our models in contrast are trained on only four of the five sessions as discussed in SECREF5 . Further, the here presented models are trained on the raw spectrograms of the audio and no feature extraction was employed whatsoever. This representation learning approach is employed in order to allow the DCGAN component of the model to train on vast amounts of unsupervised speech data. We further report the confusion matrix of the best performing model BasicDCGAN in Table TABREF11 . It is noted that the “negative” class (i.e., the second row) is classified the best. However, it appears that this class is picked more frequently by the model resulting in high recall = 0.7701 and low precision = 0.3502. The class with the highest F1 score is “very positive” (i.e., the last row) with INLINEFORM0 . The confusion of “very negative” valence with “very positive” valence in the top right corner is interesting and has been previously observed BIBREF4 . Conclusions We investigated the use of unsupervised and multitask learning to improve the performance of an emotional valence classifier. Overall, we found that unsupervised learning yields considerable improvements in classification accuracy for the emotional valence recognition task. The best performing model achieves 43.88% in the 5-class case and 49.80% in the 3-class case with a significant Pearson correlation between continuous target label and prediction of INLINEFORM0 ( INLINEFORM1 ). There is no indication that multitask learning provides any advantage. The results for multitask learning are somewhat surprising. It may be that the valence and activation classification tasks are not sufficiently related for multitask learning to yield improvements in accuracy. Alternatively, a different neural network architecture may be needed for multitask learning to work. Further, the alternating update strategy employed in the present work might not have been the optimal strategy for training. The iterative swapping of target tasks valence/activation might have created instabilities in the weight updates of the backpropagation algorithm. There may yet be other explanations; further investigation may be warranted. Lastly, it is important to note that this model's performance is only approaching state-of-the-art, which employs potentially better suited sequential classifiers such as Long Short-term Memory (LSTM) networks BIBREF17 . However, basic LSTM are not suited to learn from entirely unsupervised data, which we leveraged for the proposed DCGAN models. 
For future work, we hope to adapt the technique of using unlabeled data to sequential models, including LSTM. We expect that combining our work here with the advantages of sequential models may result in further performance gains which may be more competitive with today's leading models and potentially outperform them. For the purposes of this investigation, the key takeaway is that the use of unsupervised learning yields clear performance gains on the emotional valence classification task, and that this represents a technique that may be adapted to other models to achieve even higher classification accuracies.
BIBREF16
e8ca81d5b36952259ef3e0dbeac7b3a622eabe8e
e8ca81d5b36952259ef3e0dbeac7b3a622eabe8e_0
Q: Which multitask annotated corpus is used? Text: Introduction Machine Learning, in general, and affective computing, in particular, rely on good data representations or features that have a good discriminatory faculty in classification and regression experiments, such as emotion recognition from speech. To derive efficient representations of data, researchers have adopted two main strategies: (1) carefully crafted and tailored feature extractors designed for a particular task BIBREF0 and (2) algorithms that learn representations automatically from the data itself BIBREF1 . The latter approach is called Representation Learning (RL), and has received growing attention in the past few years and is highly reliant on large quantities of data. Most approaches for emotion recognition from speech still rely on the extraction of standard acoustic features such as pitch, shimmer, jitter and MFCCs (Mel-Frequency Cepstral Coefficients), with a few notable exceptions BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In this work we leverage RL strategies and automatically learn representations of emotional speech from the spectrogram directly using a deep convolutional neural network (CNN) architecture. To learn strong representations of speech we seek to leverage as much data as possible. However, emotion annotations are difficult to obtain and scarce BIBREF6 . We leverage the USC-IEMOCAP dataset, which comprises of around 12 hours of highly emotional and partly acted data from 10 speakers BIBREF7 . However, we aim to improve the learned representations of emotional speech with unlabeled speech data from an unrelated meeting corpus, which consists of about 100 hours of data BIBREF8 . While the meeting corpus is qualitatively quite different from the highly emotional USC-IEMOCAP data, we believe that the learned representations will improve through the use of these additional data. This combination of two separate data sources leads to a semi-supervised machine learning task and we extend the CNN architecture to a deep convolutional generative neural network (DCGAN) that can be trained in an unsupervised fashion BIBREF9 . Within this work, we particularly target emotional valence as the primary task, as it has been shown to be the most challenging emotional dimension for acoustic analyses in a number of studies BIBREF10 , BIBREF11 . Apart from solely targeting valence classification, we further investigate the principle of multitask learning. In multitask learning, a set of related tasks are learned (e.g., emotional activation), along with a primary task (e.g., emotional valence); both tasks share parts of the network topology and are hence jointly trained, as depicted in Figure FIGREF4 . It is expected that data for the secondary task models information, which would also be discriminative in learning the primary task. In fact, this approach has been shown to improve generalizability across corpora BIBREF12 . The remainder of this paper is organized as follows: First we introduce the DCGAN model and discuss prior work, in Section SECREF2 . Then we describe our specific multitask DCGAN model in Section SECREF3 , introduce the datasets in Section SECREF4 , and describe our experimental design in Section SECREF5 . Finally, we report our results in Section SECREF6 and discuss our findings in Section SECREF7 . Related Work The proposed model builds upon previous results in the field of emotion recognition, and leverages prior work in representation learning. 
Multitask learning has been effective in some prior experiments on emotion detection. In particular, Xia and Liu proposed a multitask model for emotion recognition which, like the investigated model, has activation and valence as targets BIBREF13 . Their work uses a Deep Belief Network (DBN) architecture to classify the emotion of audio input, with valence and activation as secondary tasks. Their experiments indicate that the use of multitask learning produces improved unweighted accuracy on the emotion classification task. Like Xia and Liu, the proposed model uses multitask learning with valence and activation as targets. Unlike them, however, we are primarily interested not in emotion classification, but in valence classification as a primary task. Thus, our multitask model has valence as a primary target and activation as a secondary target. Also, while our experiments use the IEMOCAP database like Xia and Liu do, our method of speaker split differs from theirs. Xia and Liu use a leave-one-speaker-out cross validation scheme with separate train and test sets that have no speaker overlap. This method lacks a distinct validation set; instead, they validate and test on the same set. Our experimental setup, on the other hand, splits the data into distinct train, validation, and test sets, still with no speaker overlap. This is described in greater detail in Section SECREF5 . The unsupervised learning part of the investigated model builds upon an architecture known as the deep convolutional generative adversarial network, or DCGAN. DCGAN consists of two components, known as the generator and discriminator, which are trained against each other in a minimax setup. The generator learns to map samples from a random distribution to output matrices of some pre-specified form. The discriminator takes an input which is either a generator output or a “real” sample from a dataset. The discriminator learns to classify the input as either generated or real BIBREF9 . For training, the discriminator uses a cross entropy loss function based on how many inputs were correctly classified as real and how many were correctly classified as generated. The cross entropy loss between true labels INLINEFORM0 and predictions INLINEFORM1 is defined as: DISPLAYFORM0 Where INLINEFORM0 is the learned vector of weights, and INLINEFORM1 is the number of samples. For purposes of this computation, labels are represented as numerical values of 1 for real and 0 for generated. Then, letting INLINEFORM2 represent the discriminator's predictions for all real inputs, the cross entropy for correct predictions of “real” simplifies to: INLINEFORM3 Because in this case the correct predictions are all ones. Similarly, letting INLINEFORM0 represent the discriminator's predictions for all generated inputs, the cross entropy for correct predictions of “generated” simplifies to: INLINEFORM1 Because here the correct predictions are all zeroes. The total loss for the discriminator is given by the sum of the previous two terms INLINEFORM0 . The generator also uses a cross entropy loss, but its loss is defined in terms of how many generated outputs got incorrectly classified as real: INLINEFORM0 Thus, the generator's loss gets lower the better it is able to produce outputs that the discriminator thinks are real. This leads the generator to eventually produce outputs that look like real samples of speech given sufficient training iterations. 
Multitask Deep Convolutional Generative Adversarial Network The investigated multitask model is based upon the DCGAN architecture described in Section SECREF2 and is implemented in TensorFlow. For emotion classification a fully connected layer is attached to the final convolutional layer of the DCGAN's discriminator. The output of this layer is then fed to two separate fully connected layers, one of which outputs a valence label and the other of which outputs an activation label. This setup is shown visually in Figure FIGREF4 . Through this setup, the model is able to take advantage of unlabeled data during training by feeding it through the DCGAN layers in the model, and is also able to take advantage of multitask learning and train the valence and activation outputs simultaneously. In particular, the model is trained by iteratively running the generator, discriminator, valence classifier, and activation classifier, and back-propagating the error for each component through the network. The loss functions for the generator and discriminator are unaltered, and remain as shown in Section SECREF2 . Both the valence classifier and activation classifier use cross entropy loss as in Equation EQREF2 . Since the valence and activation classifiers share layers with the discriminator the model learns features and convolutional filters that are effective for the tasks of valence classification, activation classification, and discriminating between real and generated samples. Data Corpus Due to the semi-supervised nature of the proposed Multitask DCGAN model, we utilize both labeled and unlabeled data. For the unlabeled data, we use audio from the AMI BIBREF8 and IEMOCAP BIBREF7 datasets. For the labeled data, we use audio from the IEMOCAP dataset, which comes with labels for activation and valence, both measured on a 5-point Likert scale from three distinct annotators. Although IEMOCAP provides per-word activation and valence labels, in practice these labels do not generally change over time in a given audio file, and so for simplicity we label each audio clip with the average valence and activation. Since valence and activation are both measured on a 5-point scale, the labels are encoded in 5-element one-hot vectors. For instance, a valence of 5 is represented with the vector INLINEFORM0 . The one-hot encoding can be thought of as a probability distribution representing the likelihood of the correct label being some particular value. Thus, in cases where the annotators disagree on the valence or activation label, this can be represented by assigning probabilities to multiple positions in the label vector. For instance, a label of 4.5 conceptually means that the “correct” valence is either 4 or 5 with equal probability, so the corresponding vector would be INLINEFORM1 . These “fuzzy labels” have been shown to improve classification performance in a number of applications BIBREF14 , BIBREF15 . It should be noted here that we had generally greater success with this fuzzy label method than training the neural network model on the valence label directly, i.e. classification task vs. regression. Pre-processing. Audio data is fed to the network models in the form of spectrograms. The spectrograms are computed using a short time Fourier transform with window size of 1024 samples, which at the 16 kHz sampling rate is equivalent to 64 ms. Each spectrogram is 128 pixels high, representing the frequency range 0-11 kHz. 
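A rough sketch of the spectrogram front end described above (1024-sample, 64 ms STFT windows at 16 kHz). The text does not say how the result is reduced to 128 frequency rows, so the bin-averaging step below, along with the log scaling, is purely an assumption.

```python
# Sketch of the spectrogram computation; the reduction to 128 rows is an assumption.
import numpy as np
from scipy import signal

def spectrogram(waveform: np.ndarray, sr: int = 16000, n_fft: int = 1024,
                n_rows: int = 128) -> np.ndarray:
    _, _, Zxx = signal.stft(waveform, fs=sr, nperseg=n_fft)
    mag = np.log1p(np.abs(Zxx))                      # log magnitude for numerical stability
    # Collapse the 513 STFT bins into 128 rows by simple averaging (assumed).
    bins = np.array_split(mag, n_rows, axis=0)
    return np.stack([b.mean(axis=0) for b in bins])  # shape [128, n_time_frames]
```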
Due to the varying lengths of the IEMOCAP audio files, the spectrograms vary in width, which poses a problem for the batching process of the neural network training. To compensate for this, the model randomly crops a region of each input spectrogram. The crop width is determined in advance. To ensure that the selected crop region contains at least some data (i.e. is not entirely silence), cropping occurs using the following procedure: a random word in the transcript of the audio file is selected, and the corresponding time range is looked up. A random point within this time range is selected, which is then treated as the center line of the crop. The crop is then made using the region defined by the center line and crop width. Early on, we found that there is a noticeable imbalance in the valence labels for the IEMOCAP data, in that the labels skew heavily towards the neutral (2-3) range. In order to prevent the model from overfitting to this distribution during training, we normalize the training data by oversampling underrepresented valence data, such that the overall distribution of valence labels is more even. Experimental Design Investigated Models. We investigate the impact of both unlabeled data for improved emotional speech representations and multitask learning on emotional valence classification performance. To this end, we compared four different neural network models: The BasicCNN represents a “bare minimum” valence classifier and thus sets a lower bound for expected performance. Comparison with MultitaskCNN indicates the effect of the inclusion of a secondary task, i.e. emotional activation recognition. Comparison with BasicDCGAN indicates the effect of the incorporation of unlabeled data during training. For fairness, the architectures of all three baselines are based upon the full MultitaskDCGAN model. BasicDCGAN for example is simply the MultitaskDCGAN model with the activation layer removed, while the two fully supervised baselines were built by taking the convolutional layers from the discriminator component of the MultitaskDCGAN, and adding fully connected layers for valence and activation output. Specifically, the discriminator contains four convolutional layers; there is no explicit pooling but the kernel stride size is 2 so image size gets halved at each step. Thus, by design, all four models have this same convolutional structure. This is to ensure that potential performance gains do not stem from a larger complexity or higher number of trainable weights within the DCGAN models, but rather stem from improved representations of speech. Experimental Procedure. The parameters for each model, including batch size, filter size (for convolution), and learning rate, were determined by randomly sampling different parameter combinations, training the model with those parameters, and computing accuracy on a held-out validation set. For each model, we kept the parameters that yield the best accuracy on the held-out set. This procedure ensures that each model is fairly represented during evaluation. Our hyper-parameters included crop width of the input signal INLINEFORM0 , convolutional layer filter sizes INLINEFORM1 (where INLINEFORM2 is the selected crop width and gets divided by 8 to account for each halving of image size from the 3 convolutional layers leading up to the last one), number of convolutional filters INLINEFORM3 (step size 4), batch size INLINEFORM4 , and learning rates INLINEFORM5 . Identified parameters per model are shown in Table TABREF9 . 
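The oversampling of under-represented valence labels described above can be sketched as follows; the grouping key and data structures are assumptions, and the target of a roughly uniform label distribution follows the text.

```python
# Sketch of balancing the training set by repeating examples from rare valence labels.
import random
from collections import defaultdict

def oversample(examples, label_of, seed=0):
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for ex in examples:
        by_label[label_of(ex)].append(ex)
    target = max(len(v) for v in by_label.values())   # size of the most frequent label
    balanced = []
    for items in by_label.values():
        balanced.extend(items)
        balanced.extend(rng.choices(items, k=target - len(items)))  # repeat rare labels
    rng.shuffle(balanced)
    return balanced
```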
For evaluation, we utilized a 5-fold leave-one-session-out validation. Each fold leaves one of the five sessions in the labeled IEMOCAP data out of the training set entirely. From this left-out conversation, one speaker's audio is used as a validation set, while the other speaker's audio is used as a test set. For each fold, the evaluation procedure is as follows: the model being evaluated is trained on the training set, and after each full pass through the training set, accuracy is computed on the validation set. This process continues until the accuracy on the validation set is found to no longer increase; in other words, we locate a local maximum in validation accuracy. To increase the certainty that this local maximum is truly representative of the model's best performance, we continue to run more iterations after a local maximum is found, and look for 5 consecutive iterations with lower accuracy values. If, in the course of these 5 iterations, a higher accuracy value is found, that is treated as the new local maximum and the search restarts from there. Once a best accuracy value is found in this manner, we restore the model's weights to those of the iteration corresponding to the best accuracy, and evaluate the accuracy on the test set. We evaluated each model on all 5 folds using the methodology described above, recording test accuracies for each fold. Evaluation Strategy. We collected several statistics about our models' performances. We were primarily interested in the unweighted per-class accuracy. In addition, we converted the network's output from probability distributions back into numerical labels by taking the expected value; that is: INLINEFORM0 where INLINEFORM1 is the model's prediction, in its original form as a 5 element vector probability distribution. We then used this to compute the Pearson correlation ( INLINEFORM2 measure) between predicted and actual labels. Some pre-processing was needed to obtain accurate measures. In particular, in cases where human annotators were perfectly split on what the correct label for a particular sample should be, both possibilities should be accepted as correct predictions. For instance, if the correct label is 4.5 (vector form INLINEFORM0 ), a correct prediction could be either 4 or 5 (i.e. the maximum index in the output vector should either be 4 or 5). The above measures are for a 5-point labeling scale, which is how the IEMOCAP data is originally labeled. However, prior experiments have evaluated performance on valence classification on a 3-point scale BIBREF16 . The authors provide an example of this, with valence levels 1 and 2 being pooled into a single “negative” category, valence level 3 remaining untouched as the “neutral” category, and valence levels 4 and 5 being pooled into a single “positive” category. Thus, to allow for comparison with our models, we also report results on such a 3-point scale. We construct these results by taking the results for 5 class comparison and pooling them as just described. Results Table TABREF10 shows the unweighted per-class accuracies and Pearson correlation coeffecients ( INLINEFORM0 values) between actual and predicted labels for each model. All values shown are average values across the test sets for all 5 folds. Results indicate that the use of unsupervised learning yields a clear improvement in performance. Both BasicDCGAN and MultitaskDCGAN have considerably better accuracies and linear correlations compared to the fully supervised CNN models. 
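A minimal sketch of the early-stopping rule in the evaluation procedure above: training continues until five consecutive epochs fail to improve validation accuracy, after which the best weights are restored and the test set is evaluated. The train/evaluate/checkpoint callables are assumptions standing in for the actual training code.

```python
# Sketch of the patience-based early stopping described above.
def train_with_early_stopping(train_one_epoch, evaluate, save_weights,
                              restore_weights, patience=5):
    best_acc, since_best = -1.0, 0
    while since_best < patience:
        train_one_epoch()
        acc = evaluate("validation")
        if acc > best_acc:
            best_acc, since_best = acc, 0
            save_weights()               # remember the best-performing iteration
        else:
            since_best += 1
    restore_weights()                    # roll back to the best iteration
    return evaluate("test"), best_acc
```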
This is a strong indication that the use of large quantities of task-unrelated speech data improved the filter learning in the CNN layers of the DCGAN discriminator. Multitask learning, on the other hand, does not appear to have any positive impact on performance. Comparing the two CNN models, the addition of multitask learning actually appears to impair performance, with MultitaskCNN doing worse than BasicCNN in all three metrics. The difference is smaller when comparing BasicDCGAN and MultitaskDCGAN, and may not be enough to decidedly conclude that the use of multitask learning has a net negative impact there, but certainly there is no indication of a net positive impact. The observed performance of both the BasicDCGAN and MultitaskDCGAN using 3-classes is comparable to the state-of-the-art, with 49.80% compared to 49.99% reported in BIBREF16 . It needs to be noted that in BIBREF16 data from the test speaker's session partner was utilized in the training of the model. Our models in contrast are trained on only four of the five sessions as discussed in SECREF5 . Further, the here presented models are trained on the raw spectrograms of the audio and no feature extraction was employed whatsoever. This representation learning approach is employed in order to allow the DCGAN component of the model to train on vast amounts of unsupervised speech data. We further report the confusion matrix of the best performing model BasicDCGAN in Table TABREF11 . It is noted that the “negative” class (i.e., the second row) is classified the best. However, it appears that this class is picked more frequently by the model resulting in high recall = 0.7701 and low precision = 0.3502. The class with the highest F1 score is “very positive” (i.e., the last row) with INLINEFORM0 . The confusion of “very negative” valence with “very positive” valence in the top right corner is interesting and has been previously observed BIBREF4 . Conclusions We investigated the use of unsupervised and multitask learning to improve the performance of an emotional valence classifier. Overall, we found that unsupervised learning yields considerable improvements in classification accuracy for the emotional valence recognition task. The best performing model achieves 43.88% in the 5-class case and 49.80% in the 3-class case with a significant Pearson correlation between continuous target label and prediction of INLINEFORM0 ( INLINEFORM1 ). There is no indication that multitask learning provides any advantage. The results for multitask learning are somewhat surprising. It may be that the valence and activation classification tasks are not sufficiently related for multitask learning to yield improvements in accuracy. Alternatively, a different neural network architecture may be needed for multitask learning to work. Further, the alternating update strategy employed in the present work might not have been the optimal strategy for training. The iterative swapping of target tasks valence/activation might have created instabilities in the weight updates of the backpropagation algorithm. There may yet be other explanations; further investigation may be warranted. Lastly, it is important to note that this model's performance is only approaching state-of-the-art, which employs potentially better suited sequential classifiers such as Long Short-term Memory (LSTM) networks BIBREF17 . However, basic LSTM are not suited to learn from entirely unsupervised data, which we leveraged for the proposed DCGAN models. 
For future work, we hope to adapt the technique of using unlabeled data to sequential models, including LSTM. We expect that combining our work here with the advantages of sequential models may result in further performance gains which may be more competitive with today's leading models and potentially outperform them. For the purposes of this investigation, the key takeaway is that the use of unsupervised learning yields clear performance gains on the emotional valence classification task, and that this represents a technique that may be adapted to other models to achieve even higher classification accuracies.
IEMOCAP
e75685ef5f58027be44f42f30cb3988b509b2768
e75685ef5f58027be44f42f30cb3988b509b2768_0
Q: What are the tasks in the multitask learning setup? Text: Introduction Machine Learning, in general, and affective computing, in particular, rely on good data representations or features that have a good discriminatory faculty in classification and regression experiments, such as emotion recognition from speech. To derive efficient representations of data, researchers have adopted two main strategies: (1) carefully crafted and tailored feature extractors designed for a particular task BIBREF0 and (2) algorithms that learn representations automatically from the data itself BIBREF1 . The latter approach is called Representation Learning (RL), and has received growing attention in the past few years and is highly reliant on large quantities of data. Most approaches for emotion recognition from speech still rely on the extraction of standard acoustic features such as pitch, shimmer, jitter and MFCCs (Mel-Frequency Cepstral Coefficients), with a few notable exceptions BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In this work we leverage RL strategies and automatically learn representations of emotional speech from the spectrogram directly using a deep convolutional neural network (CNN) architecture. To learn strong representations of speech we seek to leverage as much data as possible. However, emotion annotations are difficult to obtain and scarce BIBREF6 . We leverage the USC-IEMOCAP dataset, which comprises of around 12 hours of highly emotional and partly acted data from 10 speakers BIBREF7 . However, we aim to improve the learned representations of emotional speech with unlabeled speech data from an unrelated meeting corpus, which consists of about 100 hours of data BIBREF8 . While the meeting corpus is qualitatively quite different from the highly emotional USC-IEMOCAP data, we believe that the learned representations will improve through the use of these additional data. This combination of two separate data sources leads to a semi-supervised machine learning task and we extend the CNN architecture to a deep convolutional generative neural network (DCGAN) that can be trained in an unsupervised fashion BIBREF9 . Within this work, we particularly target emotional valence as the primary task, as it has been shown to be the most challenging emotional dimension for acoustic analyses in a number of studies BIBREF10 , BIBREF11 . Apart from solely targeting valence classification, we further investigate the principle of multitask learning. In multitask learning, a set of related tasks are learned (e.g., emotional activation), along with a primary task (e.g., emotional valence); both tasks share parts of the network topology and are hence jointly trained, as depicted in Figure FIGREF4 . It is expected that data for the secondary task models information, which would also be discriminative in learning the primary task. In fact, this approach has been shown to improve generalizability across corpora BIBREF12 . The remainder of this paper is organized as follows: First we introduce the DCGAN model and discuss prior work, in Section SECREF2 . Then we describe our specific multitask DCGAN model in Section SECREF3 , introduce the datasets in Section SECREF4 , and describe our experimental design in Section SECREF5 . Finally, we report our results in Section SECREF6 and discuss our findings in Section SECREF7 . Related Work The proposed model builds upon previous results in the field of emotion recognition, and leverages prior work in representation learning. 
Multitask learning has been effective in some prior experiments on emotion detection. In particular, Xia and Liu proposed a multitask model for emotion recognition which, like the investigated model, has activation and valence as targets BIBREF13 . Their work uses a Deep Belief Network (DBN) architecture to classify the emotion of audio input, with valence and activation as secondary tasks. Their experiments indicate that the use of multitask learning produces improved unweighted accuracy on the emotion classification task. Like Xia and Liu, the proposed model uses multitask learning with valence and activation as targets. Unlike them, however, we are primarily interested not in emotion classification, but in valence classification as a primary task. Thus, our multitask model has valence as a primary target and activation as a secondary target. Also, while our experiments use the IEMOCAP database like Xia and Liu do, our method of speaker split differs from theirs. Xia and Liu use a leave-one-speaker-out cross validation scheme with separate train and test sets that have no speaker overlap. This method lacks a distinct validation set; instead, they validate and test on the same set. Our experimental setup, on the other hand, splits the data into distinct train, validation, and test sets, still with no speaker overlap. This is described in greater detail in Section SECREF5 . The unsupervised learning part of the investigated model builds upon an architecture known as the deep convolutional generative adversarial network, or DCGAN. DCGAN consists of two components, known as the generator and discriminator, which are trained against each other in a minimax setup. The generator learns to map samples from a random distribution to output matrices of some pre-specified form. The discriminator takes an input which is either a generator output or a “real” sample from a dataset. The discriminator learns to classify the input as either generated or real BIBREF9 . For training, the discriminator uses a cross entropy loss function based on how many inputs were correctly classified as real and how many were correctly classified as generated. The cross entropy loss between true labels INLINEFORM0 and predictions INLINEFORM1 is defined as: DISPLAYFORM0 Where INLINEFORM0 is the learned vector of weights, and INLINEFORM1 is the number of samples. For purposes of this computation, labels are represented as numerical values of 1 for real and 0 for generated. Then, letting INLINEFORM2 represent the discriminator's predictions for all real inputs, the cross entropy for correct predictions of “real” simplifies to: INLINEFORM3 Because in this case the correct predictions are all ones. Similarly, letting INLINEFORM0 represent the discriminator's predictions for all generated inputs, the cross entropy for correct predictions of “generated” simplifies to: INLINEFORM1 Because here the correct predictions are all zeroes. The total loss for the discriminator is given by the sum of the previous two terms INLINEFORM0 . The generator also uses a cross entropy loss, but its loss is defined in terms of how many generated outputs got incorrectly classified as real: INLINEFORM0 Thus, the generator's loss gets lower the better it is able to produce outputs that the discriminator thinks are real. This leads the generator to eventually produce outputs that look like real samples of speech given sufficient training iterations. 
Multitask Deep Convolutional Generative Adversarial Network The investigated multitask model is based upon the DCGAN architecture described in Section SECREF2 and is implemented in TensorFlow. For emotion classification a fully connected layer is attached to the final convolutional layer of the DCGAN's discriminator. The output of this layer is then fed to two separate fully connected layers, one of which outputs a valence label and the other of which outputs an activation label. This setup is shown visually in Figure FIGREF4 . Through this setup, the model is able to take advantage of unlabeled data during training by feeding it through the DCGAN layers in the model, and is also able to take advantage of multitask learning and train the valence and activation outputs simultaneously. In particular, the model is trained by iteratively running the generator, discriminator, valence classifier, and activation classifier, and back-propagating the error for each component through the network. The loss functions for the generator and discriminator are unaltered, and remain as shown in Section SECREF2 . Both the valence classifier and activation classifier use cross entropy loss as in Equation EQREF2 . Since the valence and activation classifiers share layers with the discriminator the model learns features and convolutional filters that are effective for the tasks of valence classification, activation classification, and discriminating between real and generated samples. Data Corpus Due to the semi-supervised nature of the proposed Multitask DCGAN model, we utilize both labeled and unlabeled data. For the unlabeled data, we use audio from the AMI BIBREF8 and IEMOCAP BIBREF7 datasets. For the labeled data, we use audio from the IEMOCAP dataset, which comes with labels for activation and valence, both measured on a 5-point Likert scale from three distinct annotators. Although IEMOCAP provides per-word activation and valence labels, in practice these labels do not generally change over time in a given audio file, and so for simplicity we label each audio clip with the average valence and activation. Since valence and activation are both measured on a 5-point scale, the labels are encoded in 5-element one-hot vectors. For instance, a valence of 5 is represented with the vector INLINEFORM0 . The one-hot encoding can be thought of as a probability distribution representing the likelihood of the correct label being some particular value. Thus, in cases where the annotators disagree on the valence or activation label, this can be represented by assigning probabilities to multiple positions in the label vector. For instance, a label of 4.5 conceptually means that the “correct” valence is either 4 or 5 with equal probability, so the corresponding vector would be INLINEFORM1 . These “fuzzy labels” have been shown to improve classification performance in a number of applications BIBREF14 , BIBREF15 . It should be noted here that we had generally greater success with this fuzzy label method than training the neural network model on the valence label directly, i.e. classification task vs. regression. Pre-processing. Audio data is fed to the network models in the form of spectrograms. The spectrograms are computed using a short time Fourier transform with window size of 1024 samples, which at the 16 kHz sampling rate is equivalent to 64 ms. Each spectrogram is 128 pixels high, representing the frequency range 0-11 kHz. 
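As one way to picture the two classification heads described above sitting on top of the discriminator's final convolutional layer, the sketch below uses Keras layers; it is not the authors' architecture, and the hidden size and layer choices are placeholders.

```python
# Sketch of the shared feature + two-head layout for valence and activation.
import tensorflow as tf

def build_multitask_heads(conv_features: tf.Tensor, hidden: int = 256):
    """conv_features: output of the discriminator's last convolutional layer."""
    shared = tf.keras.layers.Flatten()(conv_features)
    shared = tf.keras.layers.Dense(hidden, activation="relu")(shared)
    valence = tf.keras.layers.Dense(5, activation="softmax", name="valence")(shared)
    activation = tf.keras.layers.Dense(5, activation="softmax", name="activation")(shared)
    return valence, activation

# Both heads are trained with a cross-entropy loss against the fuzzy label vectors,
# e.g. tf.keras.losses.CategoricalCrossentropy()(fuzzy_targets, predictions).
```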
Due to the varying lengths of the IEMOCAP audio files, the spectrograms vary in width, which poses a problem for the batching process of the neural network training. To compensate for this, the model randomly crops a region of each input spectrogram. The crop width is determined in advance. To ensure that the selected crop region contains at least some data (i.e. is not entirely silence), cropping occurs using the following procedure: a random word in the transcript of the audio file is selected, and the corresponding time range is looked up. A random point within this time range is selected, which is then treated as the center line of the crop. The crop is then made using the region defined by the center line and crop width. Early on, we found that there is a noticeable imbalance in the valence labels for the IEMOCAP data, in that the labels skew heavily towards the neutral (2-3) range. In order to prevent the model from overfitting to this distribution during training, we normalize the training data by oversampling underrepresented valence data, such that the overall distribution of valence labels is more even. Experimental Design Investigated Models. We investigate the impact of both unlabeled data for improved emotional speech representations and multitask learning on emotional valence classification performance. To this end, we compared four different neural network models: The BasicCNN represents a “bare minimum” valence classifier and thus sets a lower bound for expected performance. Comparison with MultitaskCNN indicates the effect of the inclusion of a secondary task, i.e. emotional activation recognition. Comparison with BasicDCGAN indicates the effect of the incorporation of unlabeled data during training. For fairness, the architectures of all three baselines are based upon the full MultitaskDCGAN model. BasicDCGAN for example is simply the MultitaskDCGAN model with the activation layer removed, while the two fully supervised baselines were built by taking the convolutional layers from the discriminator component of the MultitaskDCGAN, and adding fully connected layers for valence and activation output. Specifically, the discriminator contains four convolutional layers; there is no explicit pooling but the kernel stride size is 2 so image size gets halved at each step. Thus, by design, all four models have this same convolutional structure. This is to ensure that potential performance gains do not stem from a larger complexity or higher number of trainable weights within the DCGAN models, but rather stem from improved representations of speech. Experimental Procedure. The parameters for each model, including batch size, filter size (for convolution), and learning rate, were determined by randomly sampling different parameter combinations, training the model with those parameters, and computing accuracy on a held-out validation set. For each model, we kept the parameters that yield the best accuracy on the held-out set. This procedure ensures that each model is fairly represented during evaluation. Our hyper-parameters included crop width of the input signal INLINEFORM0 , convolutional layer filter sizes INLINEFORM1 (where INLINEFORM2 is the selected crop width and gets divided by 8 to account for each halving of image size from the 3 convolutional layers leading up to the last one), number of convolutional filters INLINEFORM3 (step size 4), batch size INLINEFORM4 , and learning rates INLINEFORM5 . Identified parameters per model are shown in Table TABREF9 . 
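The word-centred cropping procedure described at the start of this section can be sketched as below. The transcript representation (a list of per-word frame ranges) and the clamping of the crop to the spectrogram boundaries are assumptions, since the paper does not spell out these implementation details.

import random

def random_word_centered_crop(spectrogram, word_frames, crop_width):
    # spectrogram: 2-D array (frequency x time); word_frames: list of (start, end) frame indices per word.
    start, end = random.choice(word_frames)          # pick a random transcript word so the crop is not silence
    center = random.randint(start, end)              # random point inside that word's time range
    n_frames = spectrogram.shape[1]
    left = center - crop_width // 2
    left = max(0, min(left, n_frames - crop_width))  # clamp; assumes n_frames >= crop_width
    return spectrogram[:, left:left + crop_width]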
For evaluation, we utilized a 5-fold leave-one-session-out validation. Each fold leaves one of the five sessions in the labeled IEMOCAP data out of the training set entirely. From this left-out conversation, one speaker's audio is used as a validation set, while the other speaker's audio is used as a test set. For each fold, the evaluation procedure is as follows: the model being evaluated is trained on the training set, and after each full pass through the training set, accuracy is computed on the validation set. This process continues until the accuracy on the validation set is found to no longer increase; in other words, we locate a local maximum in validation accuracy. To increase the certainty that this local maximum is truly representative of the model's best performance, we continue to run more iterations after a local maximum is found, and look for 5 consecutive iterations with lower accuracy values. If, in the course of these 5 iterations, a higher accuracy value is found, that is treated as the new local maximum and the search restarts from there. Once a best accuracy value is found in this manner, we restore the model's weights to those of the iteration corresponding to the best accuracy, and evaluate the accuracy on the test set. We evaluated each model on all 5 folds using the methodology described above, recording test accuracies for each fold. Evaluation Strategy. We collected several statistics about our models' performances. We were primarily interested in the unweighted per-class accuracy. In addition, we converted the network's output from probability distributions back into numerical labels by taking the expected value; that is: INLINEFORM0 where INLINEFORM1 is the model's prediction, in its original form as a 5 element vector probability distribution. We then used this to compute the Pearson correlation ( INLINEFORM2 measure) between predicted and actual labels. Some pre-processing was needed to obtain accurate measures. In particular, in cases where human annotators were perfectly split on what the correct label for a particular sample should be, both possibilities should be accepted as correct predictions. For instance, if the correct label is 4.5 (vector form INLINEFORM0 ), a correct prediction could be either 4 or 5 (i.e. the maximum index in the output vector should either be 4 or 5). The above measures are for a 5-point labeling scale, which is how the IEMOCAP data is originally labeled. However, prior experiments have evaluated performance on valence classification on a 3-point scale BIBREF16 . The authors provide an example of this, with valence levels 1 and 2 being pooled into a single “negative” category, valence level 3 remaining untouched as the “neutral” category, and valence levels 4 and 5 being pooled into a single “positive” category. Thus, to allow for comparison with our models, we also report results on such a 3-point scale. We construct these results by taking the results for 5 class comparison and pooling them as just described. Results Table TABREF10 shows the unweighted per-class accuracies and Pearson correlation coeffecients ( INLINEFORM0 values) between actual and predicted labels for each model. All values shown are average values across the test sets for all 5 folds. Results indicate that the use of unsupervised learning yields a clear improvement in performance. Both BasicDCGAN and MultitaskDCGAN have considerably better accuracies and linear correlations compared to the fully supervised CNN models. 
This is a strong indication that the use of large quantities of task-unrelated speech data improved the filter learning in the CNN layers of the DCGAN discriminator. Multitask learning, on the other hand, does not appear to have any positive impact on performance. Comparing the two CNN models, the addition of multitask learning actually appears to impair performance, with MultitaskCNN doing worse than BasicCNN in all three metrics. The difference is smaller when comparing BasicDCGAN and MultitaskDCGAN, and may not be enough to decidedly conclude that the use of multitask learning has a net negative impact there, but certainly there is no indication of a net positive impact. The observed performance of both the BasicDCGAN and MultitaskDCGAN using 3-classes is comparable to the state-of-the-art, with 49.80% compared to 49.99% reported in BIBREF16 . It needs to be noted that in BIBREF16 data from the test speaker's session partner was utilized in the training of the model. Our models in contrast are trained on only four of the five sessions as discussed in SECREF5 . Further, the here presented models are trained on the raw spectrograms of the audio and no feature extraction was employed whatsoever. This representation learning approach is employed in order to allow the DCGAN component of the model to train on vast amounts of unsupervised speech data. We further report the confusion matrix of the best performing model BasicDCGAN in Table TABREF11 . It is noted that the “negative” class (i.e., the second row) is classified the best. However, it appears that this class is picked more frequently by the model resulting in high recall = 0.7701 and low precision = 0.3502. The class with the highest F1 score is “very positive” (i.e., the last row) with INLINEFORM0 . The confusion of “very negative” valence with “very positive” valence in the top right corner is interesting and has been previously observed BIBREF4 . Conclusions We investigated the use of unsupervised and multitask learning to improve the performance of an emotional valence classifier. Overall, we found that unsupervised learning yields considerable improvements in classification accuracy for the emotional valence recognition task. The best performing model achieves 43.88% in the 5-class case and 49.80% in the 3-class case with a significant Pearson correlation between continuous target label and prediction of INLINEFORM0 ( INLINEFORM1 ). There is no indication that multitask learning provides any advantage. The results for multitask learning are somewhat surprising. It may be that the valence and activation classification tasks are not sufficiently related for multitask learning to yield improvements in accuracy. Alternatively, a different neural network architecture may be needed for multitask learning to work. Further, the alternating update strategy employed in the present work might not have been the optimal strategy for training. The iterative swapping of target tasks valence/activation might have created instabilities in the weight updates of the backpropagation algorithm. There may yet be other explanations; further investigation may be warranted. Lastly, it is important to note that this model's performance is only approaching state-of-the-art, which employs potentially better suited sequential classifiers such as Long Short-term Memory (LSTM) networks BIBREF17 . However, basic LSTM are not suited to learn from entirely unsupervised data, which we leveraged for the proposed DCGAN models. 
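As a small worked check, the F1 score of the "negative" class follows directly from the precision and recall reported above; the helper below is generic, and the printed value (about 0.48) is derived here rather than quoted from the paper.

def f1_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.3502, 0.7701))   # approximately 0.481 for the "negative" class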
For future work, we hope to adapt the technique of using unlabeled data to sequential models, including LSTM. We expect that combining our work here with the advantages of sequential models may result in further performance gains which may be more competitive with today's leading models and potentially outperform them. For the purposes of this investigation, the key takeaway is that the use of unsupervised learning yields clear performance gains on the emotional valence classification task, and that this represents a technique that may be adapted to other models to achieve even higher classification accuracies.
set of related tasks are learned (e.g., emotional activation), primary task (e.g., emotional valence)
1df24849e50fcf22f0855e0c0937c1288450ed5c
1df24849e50fcf22f0855e0c0937c1288450ed5c_0
Q: What are the subtle changes in voice which have been previously overshadowed? Text: Introduction Machine Learning, in general, and affective computing, in particular, rely on good data representations or features that have a good discriminatory faculty in classification and regression experiments, such as emotion recognition from speech. To derive efficient representations of data, researchers have adopted two main strategies: (1) carefully crafted and tailored feature extractors designed for a particular task BIBREF0 and (2) algorithms that learn representations automatically from the data itself BIBREF1 . The latter approach is called Representation Learning (RL), and has received growing attention in the past few years and is highly reliant on large quantities of data. Most approaches for emotion recognition from speech still rely on the extraction of standard acoustic features such as pitch, shimmer, jitter and MFCCs (Mel-Frequency Cepstral Coefficients), with a few notable exceptions BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In this work we leverage RL strategies and automatically learn representations of emotional speech from the spectrogram directly using a deep convolutional neural network (CNN) architecture. To learn strong representations of speech we seek to leverage as much data as possible. However, emotion annotations are difficult to obtain and scarce BIBREF6 . We leverage the USC-IEMOCAP dataset, which comprises of around 12 hours of highly emotional and partly acted data from 10 speakers BIBREF7 . However, we aim to improve the learned representations of emotional speech with unlabeled speech data from an unrelated meeting corpus, which consists of about 100 hours of data BIBREF8 . While the meeting corpus is qualitatively quite different from the highly emotional USC-IEMOCAP data, we believe that the learned representations will improve through the use of these additional data. This combination of two separate data sources leads to a semi-supervised machine learning task and we extend the CNN architecture to a deep convolutional generative neural network (DCGAN) that can be trained in an unsupervised fashion BIBREF9 . Within this work, we particularly target emotional valence as the primary task, as it has been shown to be the most challenging emotional dimension for acoustic analyses in a number of studies BIBREF10 , BIBREF11 . Apart from solely targeting valence classification, we further investigate the principle of multitask learning. In multitask learning, a set of related tasks are learned (e.g., emotional activation), along with a primary task (e.g., emotional valence); both tasks share parts of the network topology and are hence jointly trained, as depicted in Figure FIGREF4 . It is expected that data for the secondary task models information, which would also be discriminative in learning the primary task. In fact, this approach has been shown to improve generalizability across corpora BIBREF12 . The remainder of this paper is organized as follows: First we introduce the DCGAN model and discuss prior work, in Section SECREF2 . Then we describe our specific multitask DCGAN model in Section SECREF3 , introduce the datasets in Section SECREF4 , and describe our experimental design in Section SECREF5 . Finally, we report our results in Section SECREF6 and discuss our findings in Section SECREF7 . Related Work The proposed model builds upon previous results in the field of emotion recognition, and leverages prior work in representation learning. 
Unanswerable
859e0bed084f47796417656d7a68849eb9cb324f
859e0bed084f47796417656d7a68849eb9cb324f_0
Q: how are rare words defined? Text: Introduction This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/ A recent hot challenge is to train machines to read and comprehend human languages. Towards this end, various machine reading comprehension datasets have been released, including cloze-style BIBREF0 , BIBREF1 , BIBREF2 and user-query types BIBREF3 , BIBREF4 . Meanwhile, a number of deep learning models are designed to take up the challenges, most of which focus on attention mechanism BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . However, how to represent word in an effective way remains an open problem for diverse natural language processing tasks, including machine reading comprehension for different languages. Particularly, for a language like Chinese with a large set of characters (typically, thousands of), lots of which are semantically ambiguous, using either word-level or character-level embedding alone to build the word representations would not be accurate enough. This work especially focuses on a cloze-style reading comprehension task over fairy stories, which is highly challenging due to diverse semantic patterns with personified expressions and reference. In real practice, a reading comprehension model or system which is often called reader in literatures easily suffers from out-of-vocabulary (OOV) word issues, especially for the cloze-style reading comprehension tasks when the ground-truth answers tend to include rare words or named entities (NE), which are hardly fully recorded in the vocabulary. This is more challenging in Chinese. There are over 13,000 characters in Chinese while there are only 26 letters in English without regard to punctuation marks. If a reading comprehension system cannot effectively manage the OOV issues, the performance will not be semantically accurate for the task. Commonly, words are represented as vectors using either word embedding or character embedding. For the former, each word is mapped into low dimensional dense vectors from a lookup table. Character representations are usually obtained by applying neural networks on the character sequence of the word, and their hidden states are obtained to form the representation. Intuitively, word-level representation is good at catching global context and dependency relationships between words, while character embedding helps for dealing with rare word representation. However, the minimal meaningful unit below word usually is not character, which motivates researchers to explore the potential unit (subword) between character and word to model sub-word morphologies or lexical semantics. In fact, morphological compounding (e.g. sunshine or playground) is one of the most common and productive methods of word formation across human languages, which inspires us to represent word by meaningful sub-word units. Recently, researchers have started to work on morphologically informed word embeddings BIBREF11 , BIBREF12 , aiming at better capturing syntactic, lexical and morphological information. With ready subwords, we do not have to work with characters, and segmentation could be stopped at the subword-level to reach a meaningful representation. In this paper, we present various simple yet accurate subword-augmented embedding (SAW) strategies and propose SAW Reader as an instance. 
Specifically, we adopt subword information to enrich word embedding and survey different SAW operations to integrate word-level and subword-level embedding for a fine-grained representation. To ensure adequate training of OOV and low-frequency words, we employ a short list mechanism. Our evaluation will be performed on three public Chinese reading comprehension datasets and one English benchmark dataset for showing our method is also effective in multi-lingual case. The Subword-augmented Word Embedding The concerned reading comprehension task can be roughly categorized as user-query type and cloze-style according to the answer form. Answers in the former are usually a span of texts while in the cloze-style task, the answers are words or phrases which lets the latter be the harder-hit area of OOV issues, inspiring us to select the cloze-style as our testbed for SAW strategies. Our preliminary study shows even for the advanced word-character based GA reader, OOV answers still account for nearly 1/5 in the error results. This also motivates us to explore better representations to further performance improvement. The cloze-style task in this work can be described as a triple INLINEFORM0 , where INLINEFORM1 is a document (context), INLINEFORM2 is a query over the contents of INLINEFORM3 , in which a word or phrase is the right answer INLINEFORM4 . This section will introduce the proposed SAW Reader in the context of cloze-style reading comprehension. Given the triple INLINEFORM5 , the SAW Reader will be built in the following steps. BPE Subword Segmentation Word in most languages usually can be split into meaningful subword units despite of the writing form. For example, “indispensable" could be split into the following subwords: INLINEFORM0 . In our implementation, we adopt Byte Pair Encoding (BPE) BIBREF13 which is a simple data compression technique that iteratively replaces the most frequent pair of bytes in a sequence by a single, unused byte. BPE allows for the representation of an open vocabulary through a fixed-size vocabulary of variable-length character sequences, making it a very suitable word segmentation strategy for neural network models. The generalized framework can be described as follows. Firstly, all the input sequences (strings) are tokenized into a sequence of single-character subwords, then we repeat, Count all bigrams under the current segmentation status of all sequences. Find the bigram with the highest frequency and merge them in all the sequences. Note the segmentation status is updating now. If the merging times do not reach the specified number, go back to 1, otherwise the algorithm ends. In BIBREF14 , BPE is adopted to segment infrequent words into sub-word units for machine translation. However, there is a key difference between the motivations for subword segmentation. We aim to refine the word representations by using subwords, for both frequent and infrequent words, which is more generally motivated. To this end, we adaptively tokenize words in multi-granularity by controlling the merging times. Subword-augmented Word Embedding Our subwords are also formed as character n-grams, do not cross word boundaries. After using unsupervised segmentation methods to split each word into a subword sequence, an augmented embedding (AE) is to straightforwardly integrate word embedding INLINEFORM0 and subword embedding INLINEFORM1 for a given word INLINEFORM2 . INLINEFORM3 where INLINEFORM0 denotes the detailed integration operation. 
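The BPE merge loop outlined in the numbered steps above can be sketched as follows. This is a minimal re-implementation of the generic algorithm for illustration, not the authors' code, and it ignores practical details such as weighting pairs by word frequency or handling word boundaries.

from collections import Counter

def bpe_segment(words, num_merges):
    # Start from single-character subwords and repeatedly merge the most frequent adjacent pair.
    sequences = [list(w) for w in words]
    for _ in range(num_merges):
        pair_counts = Counter()
        for seq in sequences:                              # step 1: count all bigrams under the current segmentation
            pair_counts.update(zip(seq, seq[1:]))
        if not pair_counts:
            break
        best = max(pair_counts, key=pair_counts.get)       # step 2: the most frequent bigram
        new_sequences = []
        for seq in sequences:                              # step 3: merge it in every sequence
            merged, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                    merged.append(seq[i] + seq[i + 1])
                    i += 2
                else:
                    merged.append(seq[i])
                    i += 1
            new_sequences.append(merged)
        sequences = new_sequences
    return sequences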
In this work, we investigate concatenation (concat), element-wise summation (sum) and element-wise multiplication (mul). Thus, each document INLINEFORM1 and query INLINEFORM2 is represented as INLINEFORM3 matrix where INLINEFORM4 denotes the dimension of word embedding and INLINEFORM5 is the number of words in the input. Subword embedding could be useful to refine the word embedding in a finer-grained way, we also consider improving word representation from itself. For quite a lot of words, especially those rare ones, their word embedding is extremely hard to learn due to the data sparse issue. Actually, if all the words in the dataset are used to build the vocabulary, the OOV words from the test set will not obtain adequate training. If they are initiated inappropriately, either with relatively high or low weights, they will harm the answer prediction. To alleviate the OOV issues, we keep a short list INLINEFORM0 for specific words. INLINEFORM1 If INLINEFORM0 is in INLINEFORM1 , the immediate word embedding INLINEFORM2 is indexed from word lookup table INLINEFORM3 where INLINEFORM4 denotes the size (recorded words) of lookup table. Otherwise, it will be represented as the randomly initialized default word (denoted by a specific mark INLINEFORM5 ). Note that, this is intuitively like “guessing” the possible unknown words (which will appear during test) from the vocabulary during training and only the word embedding of the OOV words will be replaced by INLINEFORM6 while their subword embedding INLINEFORM7 will still be processed using the original word. In this way, the OOV words could be tuned sufficiently with expressive meaning after training. During test, the word embedding of unknown words would not severely bias its final representation. Thus, INLINEFORM8 ( INLINEFORM9 ) can be rewritten as INLINEFORM10 In our experiments, the short list is determined according to the word frequency. Concretely, we sort the vocabulary according to the word frequency from high to low. A frequency filter ratio INLINEFORM0 is set to filter out the low-frequency words (rare words) from the lookup table. For example, INLINEFORM1 =0.9 means the least frequent 10% words are replaced with the default UNK notation. The subword embedding INLINEFORM0 is generated by taking the final outputs of a bidirectional gated recurrent unit (GRU) BIBREF15 applied to the embeddings from a lookup table of subwords. The structure of GRU used in this paper are described as follows. INLINEFORM1 where INLINEFORM0 denotes the element-wise multiplication. INLINEFORM1 and INLINEFORM2 are the reset and update gates respectively, and INLINEFORM3 are the hidden states. A bi-directional GRU (BiGRU) processes the sequence in both forward and backward directions. Subwords of each word are successively fed to forward GRU and backward GRU to obtain the internal features of two directions. The output for each input is the concatenation of the two vectors from both directions: INLINEFORM4 . Then, the output of BiGRUs is passed to a fully connected layer to obtain the final subword embedding INLINEFORM5 . INLINEFORM6 Attention Module Our attention module is based on the Gated attention Reader (GA Reader) proposed by BIBREF9 . We choose this model due to its simplicity with comparable performance so that we can focus on the effectiveness of SAW strategies. This module can be described in the following two steps. 
After augmented embedding, we use two BiGRUs to get contextual representations of the document and query respectively, where the representation of each word is formed by concatenating the forward and backward hidden states. INLINEFORM0 For each word INLINEFORM0 in INLINEFORM1 , we form a word-specific representation of the query INLINEFORM2 using soft attention, and then adopt element-wise product to multiply the query representation with the document word representation. INLINEFORM3 where INLINEFORM0 denotes the multiplication operator to model the interactions between INLINEFORM1 and INLINEFORM2 . Then, the document contextual representation INLINEFORM3 is gated by query representation. Suppose the network has INLINEFORM0 layers. At each layer, the document representation INLINEFORM1 is updated through above attention learning. After going through all the layers, our model comes to answer prediction phase. We use all the words in the document to form the candidate set INLINEFORM2 . Let INLINEFORM3 denote the INLINEFORM4 -th intermediate output of query representation INLINEFORM5 and INLINEFORM6 represent the full output of document representation INLINEFORM7 . The probability of each candidate word INLINEFORM8 as being the answer is predicted using a softmax layer over the inner-product between INLINEFORM9 and INLINEFORM10 . INLINEFORM11 where vector INLINEFORM0 denotes the probability distribution over all the words in the document. Note that each word may occur several times in the document. Thus, the probabilities of each candidate word occurring in different positions of the document are summed up for final prediction. INLINEFORM1 where INLINEFORM0 denotes the set of positions that a particular word INLINEFORM1 occurs in the document INLINEFORM2 . The training objective is to maximize INLINEFORM3 where INLINEFORM4 is the correct answer. Finally, the candidate word with the highest probability will be chosen as the predicted answer. INLINEFORM0 Different from recent work employing complex attention mechanisms BIBREF5 , BIBREF7 , BIBREF16 , our attention mechanism is much more simple with comparable performance so that we can focus on the effectiveness of SAW strategies. Dataset and Settings To verify the effectiveness of our proposed model, we conduct multiple experiments on three Chinese Machine Reading Comprehension datasets, namely CMRC-2017 BIBREF17 , People's Daily (PD) and Children Fairy Tales (CFT) BIBREF2 . In these datasets, a story containing consecutive sentences is formed as the Document and one of the sentences is either automatically or manually selected as the Query where one token is replaced by a placeholder to indicate the answer to fill in. Table TABREF8 gives data statistics. Different from the current cloze-style datasets for English reading comprehension, such as CBT, Daily Mail and CNN BIBREF0 , the three Chinese datasets do not provide candidate answers. Thus, the model has to find the correct answer from the entire document. Besides, we also use the Children's Book Test (CBT) dataset BIBREF1 to test the generalization ability in multi-lingual case. We only focus on subsets where the answer is either a common noun (CN) or NE which is more challenging since the answer is likely to be rare words. We evaluate all the models in terms of accuracy, which is the standard evaluation metric for this task. Throughout this paper, we use the same model setting to make fair comparisons. 
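The answer-prediction step described above (a softmax over document positions scored against the query representation, followed by summing the probabilities of repeated occurrences of each candidate word) can be sketched as below. This simplified sketch omits the intermediate gated-attention layers and assumes the final document and query representations have already been computed; the names doc_states and query_state are illustrative.

import numpy as np

def predict_answer(doc_states, query_state, doc_words):
    # doc_states: (T, d) document token representations; query_state: (d,) query representation.
    scores = doc_states @ query_state
    probs = np.exp(scores - scores.max())
    probs = probs / probs.sum()                        # softmax over document positions
    totals = {}
    for word, p in zip(doc_words, probs):              # sum probabilities of repeated occurrences
        totals[word] = totals.get(word, 0.0) + p
    return max(totals, key=totals.get)                 # candidate with the highest aggregated probability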
According to our preliminary experiments, we report the results based on the following settings. The default integration strategy is element-wise product. Word embeddings were 200 INLINEFORM0 and pre-trained by word2vec BIBREF18 toolkit on Wikipedia corpus. Subword embedding were 100 INLINEFORM1 and randomly initialized with the uniformed distribution in the interval [-0:05; 0:05]. Our model was implemented using the Theano and Lasagne Python libraries. We used stochastic gradient descent with ADAM updates for optimization BIBREF19 . The batch size was 64 and the initial learning rate was 0.001 which was halved every epoch after the second epoch. We also used gradient clipping with a threshold of 10 to stabilize GRU training BIBREF20 . We use three attention layers for all experiments. The GRU hidden units for both the word and subword representation were 128. The default frequency filter proportion was 0.9 and the default merging times of BPE was 1,000. We also apply dropout between layers with a dropout rate of 0.5 . Main Results [7]http://www.hfl-tek.com/cmrc2017/leaderboard.html Table TABREF17 shows our results on CMRC-2017 dataset, which shows that our SAW Reader (mul) outperforms all other single models on the test set, with 7.57% improvements compared with Attention Sum Reader (AS Reader) baseline. Although WHU's model achieves the best besides our model on the valid set with only 0.75% below ours, their result on the test set is lower than ours by 2.27%, indicating our model has a satisfactory generalization ability. We also list different integration operations for word and subword embeddings. Table TABREF19 shows the comparisons. From the results, we can see that Word + BPE outperforms Word + Char which indicates subword embedding works essentially. We also observe that mul outperforms the other two operations, concat and sum. This reveals that mul might be more informative than concat and sum operations. The superiority might be due to element-wise product being capable of modeling the interactions and eliminating distribution differences between word and subword embedding. Intuitively, this is also similar to endow subword-aware “attention” over the word embedding. In contrast, concatenation operation may cause too high dimension, which leads to serious over-fitting issues, and sum operation is too simple to prevent from detailed information losing. Since there is no training set for CFT dataset, our model is trained on PD training set. Note that the CFT dataset is harder for the machine to answer because the test set is further processed by human evaluation, and may not be accordance with the pattern of PD dataset. The results on PD and CFT datasets are listed in Table TABREF20 . As we see that, our SAW Reader significantly outperforms the CAS Reader in all types of testing, with improvements of 7.0% on PD and 8.8% on CFT test sets, respectively. Although the domain and topic of PD and CFT datasets are quite different, the results indicate that our model also works effectively for out-of-domain learning. To verify if our method can only work for Chinese, we also evaluate the effectiveness of the proposed method on benchmark English dataset. We use CBT dataset as our testbed to evaluate the performance. For a fair comparison, we simply set the same parameters as before. Table TABREF22 shows the results. 
We observe that our model outperforms most of the previously public works, with 2.4 % gains on the CBT-NE test set compared with GA Reader which adopts word and character embedding concatenation. Our SAW Reader also achieves comparable performance with FG Reader who adopts neural gates to combine word-level and character-level representations with assistance of extra features including NE, POS and word frequency while our model is much simpler and faster. This result shows our SAW Reader is not restricted to Chinese reading comprehension, but also for other languages. Merging Times of BPE The vocabulary size could seriously involve the segmentation granularity. For BPE segmentation, the resulted subword vocabulary size is equal to the merging times plus the number of single-character types. To have an insight of the influence, we adopt merge times from 0 to 20 INLINEFORM0 , and conduct quantitative study on CMRC-2017 for BPE segmentation. Figure FIGREF25 shows the results. We observe that when the vocabulary size is 1 INLINEFORM1 , the models could obtain the best performance. The results indicate that for a task like reading comprehension the subwords, being a highly flexible grained representation between character and word, tends to be more like characters instead of words. However, when the subwords completely fall into characters, the model performs the worst. This indicates that the balance between word and character is quite critical and an appropriate grain of character-word segmentation could essentially improve the word representation. Filter Mechanism To investigate the impact of the short list to the model performance, we conduct quantitative study on the filter ratio from [0.1, 0.2, ..., 1]. The results on the CMRC-2017 dataset are depicted in Figure FIGREF25 . As we can see that when INLINEFORM0 our SAW reader can obtain the best performance, showing that building the vocabulary among all the training set is not optimal and properly reducing the frequency filter ratio can boost the accuracy. This is partially attributed to training the model from the full vocabulary would cause serious over-fitting as the rare words representations can not obtain sufficient tuning. If the rare words are not initialized properly, they would also bias the whole word representations. Thus a model without OOV mechanism will fail to precisely represent those inevitable OOV words from test sets. Subword-Augmented Representations In text understanding tasks, if the ground-truth answer is OOV word or contains OOV word(s), the performance of deep neural networks would severely drop due to the incomplete representation, especially for cloze-style reading comprehension task where the answer is only one word or phrase. In CMRC-2017, we observe questions with OOV answers (denoted as “OOV questions") account for 17.22% in the error results of the best Word + Char embedding based model. With BPE subword embedding, 12.17% of these “OOV questions" could be correctly answered. This shows the subword representations could be essentially useful for modeling rare and unseen words. To analyze the reading process of SAW Reader, we draw the attention distributions at intermediate layers as shown in Figure FIGREF28 . We observe the salient candidates in the document can be focused after the pair-wise matching of document and query and the right answer (“The mole") could obtain a high weight at the very beginning. 
After attention learning, the key evidence of the answer would be collected and irrelevant parts would be ignored. This shows our SAW Reader is effective at selecting the vital points at the fundamental embedding layer, guiding the attention layers to collect more relevant pieces. Machine Reading Comprehension Recently, many deep learning models have been proposed for reading comprehension BIBREF16 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF9 , BIBREF26 , BIBREF27 . Notably, Chen2016A conducted an in-depth and thoughtful examination on the comprehension task based on an attentive neural network and an entity-centric classifier with a careful analysis based on handful features. kadlec2016text proposed the Attention Sum Reader (AS Reader) that uses attention to directly pick the answer from the context, which is motivated by the Pointer Network BIBREF28 . Instead of summing the attention of query-to-document, GA Reader BIBREF9 defined an element-wise product to endowing attention on each word of the document using the entire query representation to build query-specific representations of words in the document for accurate answer selection. Wang2017Gated employed gated self-matching networks (R-net) on passage against passage itself to refine passage representation with information from the whole passage. Cui2016Attention introduced an “attended attention" mechanism (AoA) where query-to-document and document-to-query are mutually attentive and interactive to each other. Augmented Word Embedding Distributed word representation plays a fundamental role in neural models BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . Recently, character embeddings are widely used to enrich word representations BIBREF37 , BIBREF21 , BIBREF38 , BIBREF39 . Yang2016Words explored a fine-grained gating mechanism (FG Reader) to dynamically combine word-level and character-level representations based on properties of the words. However, this method is computationally complex and it is not end-to-end, requiring extra labels such as NE and POS tags. Seo2016Bidirectional concatenated the character and word embedding to feed a two-layer Highway Network. Not only for machine reading comprehension tasks, character embedding has also benefit other natural language process tasks, such as word segmentation BIBREF40 , machine translation BIBREF38 , tagging BIBREF41 , BIBREF42 and language modeling BIBREF43 , BIBREF44 . However, character embedding only shows marginal improvement due to a lack internal semantics. Lexical, syntactic and morphological information are also considered to improve word representation BIBREF12 , BIBREF45 . Bojanowski2016Enriching proposed to learn representations for character INLINEFORM0 -gram vectors and represent words as the sum of the INLINEFORM1 -gram vectors. Avraham2017The built a model inspired by BIBREF46 , who used morphological tags instead of INLINEFORM2 -grams. They jointly trained their morphological and semantic embeddings, implicitly assuming that morphological and semantic information should live in the same space. However, the linguistic knowledge resulting subwords, typically, morphological suffix, prefix or stem, may not be suitable for different kinds of languages and tasks. Sennrich2015Neural introduced the byte pair encoding (BPE) compression algorithm into neural machine translation for being capable of open-vocabulary translation by encoding rare and unknown words as subword units. 
Instead, we consider refining the word representations of both frequent and infrequent words from a computational perspective. Our proposed subword-augmented embedding approach is more general and can be adopted to enhance the representation of each word by adaptively altering the segmentation granularity in multiple NLP tasks. Conclusion This paper presents an effective neural architecture, called subword-augmented word embedding, to enhance model performance on the cloze-style reading comprehension task. The proposed SAW Reader uses subword embedding to enhance the word representation and limits the word frequency spectrum to train rare words efficiently. With the help of the short list, the model size is also reduced and training is sped up. Unlike most existing works, which introduce either complex attentive architectures or many manual features, our model is much simpler yet effective. Given its state-of-the-art performance on multiple benchmarks, the proposed reader has proved effective for learning joint representations at both the word and subword level and for alleviating OOV difficulties.
low-frequency words
04e90c93d046cd89acef5a7c58952f54de689103
04e90c93d046cd89acef5a7c58952f54de689103_0
Q: which public datasets were used? Text: Introduction This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/ A recent hot challenge is to train machines to read and comprehend human languages. Towards this end, various machine reading comprehension datasets have been released, including cloze-style BIBREF0 , BIBREF1 , BIBREF2 and user-query types BIBREF3 , BIBREF4 . Meanwhile, a number of deep learning models are designed to take up the challenges, most of which focus on attention mechanism BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . However, how to represent word in an effective way remains an open problem for diverse natural language processing tasks, including machine reading comprehension for different languages. Particularly, for a language like Chinese with a large set of characters (typically, thousands of), lots of which are semantically ambiguous, using either word-level or character-level embedding alone to build the word representations would not be accurate enough. This work especially focuses on a cloze-style reading comprehension task over fairy stories, which is highly challenging due to diverse semantic patterns with personified expressions and reference. In real practice, a reading comprehension model or system which is often called reader in literatures easily suffers from out-of-vocabulary (OOV) word issues, especially for the cloze-style reading comprehension tasks when the ground-truth answers tend to include rare words or named entities (NE), which are hardly fully recorded in the vocabulary. This is more challenging in Chinese. There are over 13,000 characters in Chinese while there are only 26 letters in English without regard to punctuation marks. If a reading comprehension system cannot effectively manage the OOV issues, the performance will not be semantically accurate for the task. Commonly, words are represented as vectors using either word embedding or character embedding. For the former, each word is mapped into low dimensional dense vectors from a lookup table. Character representations are usually obtained by applying neural networks on the character sequence of the word, and their hidden states are obtained to form the representation. Intuitively, word-level representation is good at catching global context and dependency relationships between words, while character embedding helps for dealing with rare word representation. However, the minimal meaningful unit below word usually is not character, which motivates researchers to explore the potential unit (subword) between character and word to model sub-word morphologies or lexical semantics. In fact, morphological compounding (e.g. sunshine or playground) is one of the most common and productive methods of word formation across human languages, which inspires us to represent word by meaningful sub-word units. Recently, researchers have started to work on morphologically informed word embeddings BIBREF11 , BIBREF12 , aiming at better capturing syntactic, lexical and morphological information. With ready subwords, we do not have to work with characters, and segmentation could be stopped at the subword-level to reach a meaningful representation. In this paper, we present various simple yet accurate subword-augmented embedding (SAW) strategies and propose SAW Reader as an instance. 
Specifically, we adopt subword information to enrich word embedding and survey different SAW operations for integrating word-level and subword-level embeddings into a fine-grained representation. To ensure adequate training of OOV and low-frequency words, we employ a short list mechanism. Our evaluation is performed on three public Chinese reading comprehension datasets and one English benchmark dataset, showing that our method is also effective in the multi-lingual case. The Subword-augmented Word Embedding The reading comprehension task under consideration can be roughly categorized as user-query type or cloze-style according to the answer form. Answers in the former are usually a span of text, while in the cloze-style task the answers are words or phrases, which makes the latter the harder-hit area of OOV issues and inspires us to select the cloze-style task as our testbed for SAW strategies. Our preliminary study shows that even for the advanced word-character based GA Reader, OOV answers still account for nearly 1/5 of the errors. This further motivates us to explore better representations for performance improvement. The cloze-style task in this work can be described as a triple INLINEFORM0 , where INLINEFORM1 is a document (context) and INLINEFORM2 is a query over the contents of INLINEFORM3 , in which a word or phrase is the right answer INLINEFORM4 . This section introduces the proposed SAW Reader in the context of cloze-style reading comprehension. Given the triple INLINEFORM5 , the SAW Reader is built in the following steps. BPE Subword Segmentation Words in most languages can usually be split into meaningful subword units regardless of the writing form. For example, “indispensable" could be split into the following subwords: INLINEFORM0 . In our implementation, we adopt Byte Pair Encoding (BPE) BIBREF13 , a simple data compression technique that iteratively replaces the most frequent pair of bytes in a sequence with a single, unused byte. BPE allows for the representation of an open vocabulary through a fixed-size vocabulary of variable-length character sequences, making it a very suitable word segmentation strategy for neural network models. The generalized framework can be described as follows. Firstly, all the input sequences (strings) are tokenized into sequences of single-character subwords; then we repeat: 1. Count all bigrams under the current segmentation status of all sequences. 2. Find the bigram with the highest frequency and merge it in all the sequences; note that the segmentation status is updated at this point. 3. If the number of merges has not reached the specified number, go back to 1; otherwise the algorithm ends. In BIBREF14 , BPE is adopted to segment infrequent words into sub-word units for machine translation. However, there is a key difference in the motivation for subword segmentation. We aim to refine the word representations by using subwords for both frequent and infrequent words, which is more generally motivated. To this end, we adaptively tokenize words at multiple granularities by controlling the number of merges.
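To make the merge loop above concrete, the following is a minimal sketch of BPE merge learning. It is an illustration under simplifying assumptions (it operates directly on raw strings rather than on word types with frequencies, and the function and variable names are ours), not the exact implementation used in the experiments.

```python
from collections import Counter

def learn_bpe_merges(sequences, num_merges):
    """Learn BPE merges: start from single-character subwords and
    repeatedly merge the most frequent adjacent pair, as described above."""
    segmented = [list(seq) for seq in sequences]
    merges = []
    for _ in range(num_merges):
        # 1. Count all bigrams under the current segmentation status.
        pair_counts = Counter()
        for seq in segmented:
            pair_counts.update(zip(seq, seq[1:]))
        if not pair_counts:
            break
        # 2. Merge the most frequent bigram in all sequences.
        best = max(pair_counts, key=pair_counts.get)
        merges.append(best)
        merged_sym = "".join(best)
        new_segmented = []
        for seq in segmented:
            out, i = [], 0
            while i < len(seq):
                if i < len(seq) - 1 and (seq[i], seq[i + 1]) == best:
                    out.append(merged_sym)
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            new_segmented.append(out)
        segmented = new_segmented
    return merges, segmented

# Controlling num_merges controls the segmentation granularity.
merges, segs = learn_bpe_merges(["indispensable", "dispense", "sensible"], num_merges=10)
print(segs)
```

Increasing `num_merges` moves the resulting units from character-like toward word-like pieces, which is exactly the knob tuned in the granularity study later in the paper.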
In this work, we investigate concatenation (concat), element-wise summation (sum) and element-wise multiplication (mul). Thus, each document INLINEFORM1 and query INLINEFORM2 is represented as an INLINEFORM3 matrix, where INLINEFORM4 denotes the dimension of the word embedding and INLINEFORM5 is the number of words in the input. While subword embedding refines the word embedding in a finer-grained way, we also consider improving the word representation itself. For quite a lot of words, especially rare ones, the word embedding is extremely hard to learn due to data sparsity. In fact, if all the words in the dataset are used to build the vocabulary, the OOV words from the test set will not receive adequate training. If they are initialized inappropriately, with either relatively high or low weights, they will harm the answer prediction. To alleviate the OOV issues, we keep a short list INLINEFORM0 of specific words. INLINEFORM1 If INLINEFORM0 is in INLINEFORM1 , the immediate word embedding INLINEFORM2 is indexed from the word lookup table INLINEFORM3 , where INLINEFORM4 denotes the size (number of recorded words) of the lookup table. Otherwise, it is represented by the randomly initialized default word (denoted by a specific mark INLINEFORM5 ). Note that this is intuitively like “guessing” the possible unknown words (which will appear during test) from the vocabulary during training: only the word embedding of the OOV words is replaced by INLINEFORM6 , while their subword embedding INLINEFORM7 is still computed from the original word. In this way, the OOV words can be tuned sufficiently, with expressive meaning, after training. During test, the word embedding of unknown words does not severely bias their final representation. Thus, INLINEFORM8 ( INLINEFORM9 ) can be rewritten as INLINEFORM10 In our experiments, the short list is determined according to word frequency. Concretely, we sort the vocabulary by word frequency from high to low. A frequency filter ratio INLINEFORM0 is set to filter out the low-frequency words (rare words) from the lookup table. For example, INLINEFORM1 =0.9 means the least frequent 10% of words are replaced with the default UNK notation. The subword embedding INLINEFORM0 is generated by taking the final outputs of a bidirectional gated recurrent unit (GRU) BIBREF15 applied to the embeddings from a lookup table of subwords. The structure of the GRU used in this paper is described as follows. INLINEFORM1 where INLINEFORM0 denotes element-wise multiplication, INLINEFORM1 and INLINEFORM2 are the reset and update gates respectively, and INLINEFORM3 are the hidden states. A bi-directional GRU (BiGRU) processes the sequence in both the forward and backward directions. The subwords of each word are successively fed to the forward GRU and the backward GRU to obtain the internal features of the two directions. The output for each input is the concatenation of the two vectors from both directions: INLINEFORM4 . Then, the output of the BiGRUs is passed to a fully connected layer to obtain the final subword embedding INLINEFORM5 . INLINEFORM6 Attention Module Our attention module is based on the Gated-attention Reader (GA Reader) proposed by BIBREF9 . We choose this model because of its simplicity and comparable performance, which allows us to focus on the effectiveness of the SAW strategies. This module can be described in the following two steps.
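The integration operations and the frequency-based short list described above can be sketched as follows. This is our own illustration, not the authors' code: it assumes the word vector and the BiGRU-derived subword vector are already computed and, for sum and mul, share the same dimension (e.g., after the fully connected layer on top of the BiGRU).

```python
import numpy as np

def augmented_embedding(word_vec, subword_vec, op="mul"):
    """Integrate word-level and subword-level embeddings (concat/sum/mul)."""
    if op == "concat":
        return np.concatenate([word_vec, subword_vec])
    if op == "sum":
        return word_vec + subword_vec
    if op == "mul":
        return word_vec * subword_vec  # element-wise product (default in the paper)
    raise ValueError(f"unknown integration op: {op}")

def build_short_list(word_freq, filter_ratio=0.9):
    """Keep the most frequent `filter_ratio` share of the vocabulary.

    Words outside the returned set use the default UNK word embedding,
    while their subword embedding is still computed from the full word."""
    ranked = sorted(word_freq, key=word_freq.get, reverse=True)
    return set(ranked[: int(len(ranked) * filter_ratio)])

# Toy usage with made-up frequencies and random vectors.
freq = {"the": 100, "story": 20, "mole": 3, "indispensable": 1}
short_list = build_short_list(freq, filter_ratio=0.5)
w, s = np.random.rand(100), np.random.rand(100)
print(short_list, augmented_embedding(w, s, op="mul").shape)
```

The element-wise product acts like a subword-aware gate over the word embedding, which is the intuition offered later for why mul outperforms concat and sum.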
After augmented embedding, we use two BiGRUs to get contextual representations of the document and query respectively, where the representation of each word is formed by concatenating the forward and backward hidden states. INLINEFORM0 For each word INLINEFORM0 in INLINEFORM1 , we form a word-specific representation of the query INLINEFORM2 using soft attention, and then adopt element-wise product to multiply the query representation with the document word representation. INLINEFORM3 where INLINEFORM0 denotes the multiplication operator to model the interactions between INLINEFORM1 and INLINEFORM2 . Then, the document contextual representation INLINEFORM3 is gated by query representation. Suppose the network has INLINEFORM0 layers. At each layer, the document representation INLINEFORM1 is updated through above attention learning. After going through all the layers, our model comes to answer prediction phase. We use all the words in the document to form the candidate set INLINEFORM2 . Let INLINEFORM3 denote the INLINEFORM4 -th intermediate output of query representation INLINEFORM5 and INLINEFORM6 represent the full output of document representation INLINEFORM7 . The probability of each candidate word INLINEFORM8 as being the answer is predicted using a softmax layer over the inner-product between INLINEFORM9 and INLINEFORM10 . INLINEFORM11 where vector INLINEFORM0 denotes the probability distribution over all the words in the document. Note that each word may occur several times in the document. Thus, the probabilities of each candidate word occurring in different positions of the document are summed up for final prediction. INLINEFORM1 where INLINEFORM0 denotes the set of positions that a particular word INLINEFORM1 occurs in the document INLINEFORM2 . The training objective is to maximize INLINEFORM3 where INLINEFORM4 is the correct answer. Finally, the candidate word with the highest probability will be chosen as the predicted answer. INLINEFORM0 Different from recent work employing complex attention mechanisms BIBREF5 , BIBREF7 , BIBREF16 , our attention mechanism is much more simple with comparable performance so that we can focus on the effectiveness of SAW strategies. Dataset and Settings To verify the effectiveness of our proposed model, we conduct multiple experiments on three Chinese Machine Reading Comprehension datasets, namely CMRC-2017 BIBREF17 , People's Daily (PD) and Children Fairy Tales (CFT) BIBREF2 . In these datasets, a story containing consecutive sentences is formed as the Document and one of the sentences is either automatically or manually selected as the Query where one token is replaced by a placeholder to indicate the answer to fill in. Table TABREF8 gives data statistics. Different from the current cloze-style datasets for English reading comprehension, such as CBT, Daily Mail and CNN BIBREF0 , the three Chinese datasets do not provide candidate answers. Thus, the model has to find the correct answer from the entire document. Besides, we also use the Children's Book Test (CBT) dataset BIBREF1 to test the generalization ability in multi-lingual case. We only focus on subsets where the answer is either a common noun (CN) or NE which is more challenging since the answer is likely to be rare words. We evaluate all the models in terms of accuracy, which is the standard evaluation metric for this task. Throughout this paper, we use the same model setting to make fair comparisons. 
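As an illustration of the answer-prediction step described above (before the dataset discussion): the softmax over inner products followed by summing probabilities over every position where a candidate occurs can be sketched as below. This is an illustrative NumPy version with made-up shapes, not the Theano implementation used in the paper.

```python
import numpy as np

def predict_answer(doc_repr, query_repr, doc_words):
    """Attention-sum style prediction: softmax over document positions,
    then sum the probability mass of all positions of each candidate word."""
    scores = doc_repr @ query_repr            # inner product per position
    scores = scores - scores.max()            # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    candidate_probs = {}
    for p, w in zip(probs, doc_words):
        candidate_probs[w] = candidate_probs.get(w, 0.0) + p
    best = max(candidate_probs, key=candidate_probs.get)
    return best, candidate_probs

# Toy example: a 4-word document with 8-dimensional representations.
rng = np.random.default_rng(0)
doc = ["the", "mole", "saw", "mole"]
best, dist = predict_answer(rng.standard_normal((4, 8)), rng.standard_normal(8), doc)
print(best, dist)
```

Summing over repeated positions is what makes frequently mentioned candidates accumulate evidence, matching the training objective of maximizing the probability of the correct answer.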
According to our preliminary experiments, we report results based on the following settings. The default integration strategy is the element-wise product. Word embeddings were 200 INLINEFORM0 and pre-trained with the word2vec BIBREF18 toolkit on a Wikipedia corpus. Subword embeddings were 100 INLINEFORM1 and randomly initialized from a uniform distribution over the interval [-0.05, 0.05]. Our model was implemented using the Theano and Lasagne Python libraries. We used stochastic gradient descent with ADAM updates for optimization BIBREF19 . The batch size was 64 and the initial learning rate was 0.001, halved every epoch after the second epoch. We also used gradient clipping with a threshold of 10 to stabilize GRU training BIBREF20 . We use three attention layers in all experiments. The GRU hidden units for both the word and subword representations were 128. The default frequency filter proportion was 0.9 and the default number of BPE merges was 1,000. We also apply dropout between layers with a dropout rate of 0.5. Main Results [7]http://www.hfl-tek.com/cmrc2017/leaderboard.html Table TABREF17 shows our results on the CMRC-2017 dataset: our SAW Reader (mul) outperforms all other single models on the test set, with a 7.57% improvement over the Attention Sum Reader (AS Reader) baseline. Although WHU's model achieves the best result apart from ours on the validation set, only 0.75% below ours, their result on the test set is 2.27% lower than ours, indicating that our model has satisfactory generalization ability. We also compare different integration operations for word and subword embeddings. Table TABREF19 shows the comparison. From the results, we can see that Word + BPE outperforms Word + Char, which indicates that subword embedding is essential. We also observe that mul outperforms the other two operations, concat and sum. This suggests that mul might be more informative than the concat and sum operations. Its superiority might be due to the element-wise product being capable of modeling the interactions and eliminating distribution differences between the word and subword embeddings. Intuitively, this is also similar to endowing subword-aware “attention” over the word embedding. In contrast, the concatenation operation may result in too high a dimension, which leads to serious over-fitting, while the sum operation is too simple to prevent the loss of detailed information. Since there is no training set for the CFT dataset, our model is trained on the PD training set. Note that the CFT dataset is harder for the machine to answer because its test set is further processed by human evaluation and may not accord with the pattern of the PD dataset. The results on the PD and CFT datasets are listed in Table TABREF20 . Our SAW Reader significantly outperforms the CAS Reader in all types of testing, with improvements of 7.0% on the PD and 8.8% on the CFT test sets, respectively. Although the domains and topics of the PD and CFT datasets are quite different, the results indicate that our model also works effectively for out-of-domain learning. To verify whether our method works only for Chinese, we also evaluate the effectiveness of the proposed method on a benchmark English dataset. We use the CBT dataset as our testbed to evaluate the performance. For a fair comparison, we simply use the same parameters as before. Table TABREF22 shows the results.
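For convenience, the hyper-parameter settings listed at the start of this subsection can be collected into a single configuration. The key names below are our own illustrative labels; the values are those stated above.

```python
# Key names are illustrative; values are the settings reported in the paper.
SAW_READER_CONFIG = {
    "integration_op": "mul",          # default word/subword integration
    "word_emb_dim": 200,              # pre-trained with word2vec on Wikipedia
    "subword_emb_dim": 100,           # random uniform init in [-0.05, 0.05]
    "optimizer": "adam",
    "batch_size": 64,
    "initial_lr": 0.001,              # halved every epoch after the second
    "grad_clip": 10,
    "attention_layers": 3,
    "gru_hidden_units": 128,
    "frequency_filter_ratio": 0.9,
    "bpe_merge_times": 1000,
    "dropout": 0.5,
}
```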
We observe that our model outperforms most of the previously public works, with 2.4 % gains on the CBT-NE test set compared with GA Reader which adopts word and character embedding concatenation. Our SAW Reader also achieves comparable performance with FG Reader who adopts neural gates to combine word-level and character-level representations with assistance of extra features including NE, POS and word frequency while our model is much simpler and faster. This result shows our SAW Reader is not restricted to Chinese reading comprehension, but also for other languages. Merging Times of BPE The vocabulary size could seriously involve the segmentation granularity. For BPE segmentation, the resulted subword vocabulary size is equal to the merging times plus the number of single-character types. To have an insight of the influence, we adopt merge times from 0 to 20 INLINEFORM0 , and conduct quantitative study on CMRC-2017 for BPE segmentation. Figure FIGREF25 shows the results. We observe that when the vocabulary size is 1 INLINEFORM1 , the models could obtain the best performance. The results indicate that for a task like reading comprehension the subwords, being a highly flexible grained representation between character and word, tends to be more like characters instead of words. However, when the subwords completely fall into characters, the model performs the worst. This indicates that the balance between word and character is quite critical and an appropriate grain of character-word segmentation could essentially improve the word representation. Filter Mechanism To investigate the impact of the short list to the model performance, we conduct quantitative study on the filter ratio from [0.1, 0.2, ..., 1]. The results on the CMRC-2017 dataset are depicted in Figure FIGREF25 . As we can see that when INLINEFORM0 our SAW reader can obtain the best performance, showing that building the vocabulary among all the training set is not optimal and properly reducing the frequency filter ratio can boost the accuracy. This is partially attributed to training the model from the full vocabulary would cause serious over-fitting as the rare words representations can not obtain sufficient tuning. If the rare words are not initialized properly, they would also bias the whole word representations. Thus a model without OOV mechanism will fail to precisely represent those inevitable OOV words from test sets. Subword-Augmented Representations In text understanding tasks, if the ground-truth answer is OOV word or contains OOV word(s), the performance of deep neural networks would severely drop due to the incomplete representation, especially for cloze-style reading comprehension task where the answer is only one word or phrase. In CMRC-2017, we observe questions with OOV answers (denoted as “OOV questions") account for 17.22% in the error results of the best Word + Char embedding based model. With BPE subword embedding, 12.17% of these “OOV questions" could be correctly answered. This shows the subword representations could be essentially useful for modeling rare and unseen words. To analyze the reading process of SAW Reader, we draw the attention distributions at intermediate layers as shown in Figure FIGREF28 . We observe the salient candidates in the document can be focused after the pair-wise matching of document and query and the right answer (“The mole") could obtain a high weight at the very beginning. 
After attention learning, the key evidence of the answer would be collected and irrelevant parts would be ignored. This shows our SAW Reader is effective at selecting the vital points at the fundamental embedding layer, guiding the attention layers to collect more relevant pieces. Machine Reading Comprehension Recently, many deep learning models have been proposed for reading comprehension BIBREF16 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF9 , BIBREF26 , BIBREF27 . Notably, Chen2016A conducted an in-depth and thoughtful examination on the comprehension task based on an attentive neural network and an entity-centric classifier with a careful analysis based on handful features. kadlec2016text proposed the Attention Sum Reader (AS Reader) that uses attention to directly pick the answer from the context, which is motivated by the Pointer Network BIBREF28 . Instead of summing the attention of query-to-document, GA Reader BIBREF9 defined an element-wise product to endowing attention on each word of the document using the entire query representation to build query-specific representations of words in the document for accurate answer selection. Wang2017Gated employed gated self-matching networks (R-net) on passage against passage itself to refine passage representation with information from the whole passage. Cui2016Attention introduced an “attended attention" mechanism (AoA) where query-to-document and document-to-query are mutually attentive and interactive to each other. Augmented Word Embedding Distributed word representation plays a fundamental role in neural models BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . Recently, character embeddings are widely used to enrich word representations BIBREF37 , BIBREF21 , BIBREF38 , BIBREF39 . Yang2016Words explored a fine-grained gating mechanism (FG Reader) to dynamically combine word-level and character-level representations based on properties of the words. However, this method is computationally complex and it is not end-to-end, requiring extra labels such as NE and POS tags. Seo2016Bidirectional concatenated the character and word embedding to feed a two-layer Highway Network. Not only for machine reading comprehension tasks, character embedding has also benefit other natural language process tasks, such as word segmentation BIBREF40 , machine translation BIBREF38 , tagging BIBREF41 , BIBREF42 and language modeling BIBREF43 , BIBREF44 . However, character embedding only shows marginal improvement due to a lack internal semantics. Lexical, syntactic and morphological information are also considered to improve word representation BIBREF12 , BIBREF45 . Bojanowski2016Enriching proposed to learn representations for character INLINEFORM0 -gram vectors and represent words as the sum of the INLINEFORM1 -gram vectors. Avraham2017The built a model inspired by BIBREF46 , who used morphological tags instead of INLINEFORM2 -grams. They jointly trained their morphological and semantic embeddings, implicitly assuming that morphological and semantic information should live in the same space. However, the linguistic knowledge resulting subwords, typically, morphological suffix, prefix or stem, may not be suitable for different kinds of languages and tasks. Sennrich2015Neural introduced the byte pair encoding (BPE) compression algorithm into neural machine translation for being capable of open-vocabulary translation by encoding rare and unknown words as subword units. 
Instead, we consider refining the word representations for both frequent and infrequent words from a computational perspective. Our proposed subword-augmented embedding approach is more general, which can be adopted to enhance the representation for each word by adaptively altering the segmentation granularity in multiple NLP tasks. Conclusion This paper presents an effective neural architecture, called subword-augmented word embedding to enhance the model performance for the cloze-style reading comprehension task. The proposed SAW Reader uses subword embedding to enhance the word representation and limit the word frequency spectrum to train rare words efficiently. With the help of the short list, the model size will also be reduced together with training speedup. Unlike most existing works, which introduce either complex attentive architectures or many manual features, our model is much more simple yet effective. Giving state-of-the-art performance on multiple benchmarks, the proposed reader has been proved effective for learning joint representation at both word and subword level and alleviating OOV difficulties.
CMRC-2017, People's Daily (PD), Children Fairy Tales (CFT) , Children's Book Test (CBT)
f513e27db363c28d19a29e01f758437d7477eb24
f513e27db363c28d19a29e01f758437d7477eb24_0
Q: what are the baselines? Text: Introduction This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/ A recent hot challenge is to train machines to read and comprehend human languages. Towards this end, various machine reading comprehension datasets have been released, including cloze-style BIBREF0 , BIBREF1 , BIBREF2 and user-query types BIBREF3 , BIBREF4 . Meanwhile, a number of deep learning models are designed to take up the challenges, most of which focus on attention mechanism BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . However, how to represent word in an effective way remains an open problem for diverse natural language processing tasks, including machine reading comprehension for different languages. Particularly, for a language like Chinese with a large set of characters (typically, thousands of), lots of which are semantically ambiguous, using either word-level or character-level embedding alone to build the word representations would not be accurate enough. This work especially focuses on a cloze-style reading comprehension task over fairy stories, which is highly challenging due to diverse semantic patterns with personified expressions and reference. In real practice, a reading comprehension model or system which is often called reader in literatures easily suffers from out-of-vocabulary (OOV) word issues, especially for the cloze-style reading comprehension tasks when the ground-truth answers tend to include rare words or named entities (NE), which are hardly fully recorded in the vocabulary. This is more challenging in Chinese. There are over 13,000 characters in Chinese while there are only 26 letters in English without regard to punctuation marks. If a reading comprehension system cannot effectively manage the OOV issues, the performance will not be semantically accurate for the task. Commonly, words are represented as vectors using either word embedding or character embedding. For the former, each word is mapped into low dimensional dense vectors from a lookup table. Character representations are usually obtained by applying neural networks on the character sequence of the word, and their hidden states are obtained to form the representation. Intuitively, word-level representation is good at catching global context and dependency relationships between words, while character embedding helps for dealing with rare word representation. However, the minimal meaningful unit below word usually is not character, which motivates researchers to explore the potential unit (subword) between character and word to model sub-word morphologies or lexical semantics. In fact, morphological compounding (e.g. sunshine or playground) is one of the most common and productive methods of word formation across human languages, which inspires us to represent word by meaningful sub-word units. Recently, researchers have started to work on morphologically informed word embeddings BIBREF11 , BIBREF12 , aiming at better capturing syntactic, lexical and morphological information. With ready subwords, we do not have to work with characters, and segmentation could be stopped at the subword-level to reach a meaningful representation. In this paper, we present various simple yet accurate subword-augmented embedding (SAW) strategies and propose SAW Reader as an instance. 
Specifically, we adopt subword information to enrich word embedding and survey different SAW operations to integrate word-level and subword-level embedding for a fine-grained representation. To ensure adequate training of OOV and low-frequency words, we employ a short list mechanism. Our evaluation will be performed on three public Chinese reading comprehension datasets and one English benchmark dataset for showing our method is also effective in multi-lingual case. The Subword-augmented Word Embedding The concerned reading comprehension task can be roughly categorized as user-query type and cloze-style according to the answer form. Answers in the former are usually a span of texts while in the cloze-style task, the answers are words or phrases which lets the latter be the harder-hit area of OOV issues, inspiring us to select the cloze-style as our testbed for SAW strategies. Our preliminary study shows even for the advanced word-character based GA reader, OOV answers still account for nearly 1/5 in the error results. This also motivates us to explore better representations to further performance improvement. The cloze-style task in this work can be described as a triple INLINEFORM0 , where INLINEFORM1 is a document (context), INLINEFORM2 is a query over the contents of INLINEFORM3 , in which a word or phrase is the right answer INLINEFORM4 . This section will introduce the proposed SAW Reader in the context of cloze-style reading comprehension. Given the triple INLINEFORM5 , the SAW Reader will be built in the following steps. BPE Subword Segmentation Word in most languages usually can be split into meaningful subword units despite of the writing form. For example, “indispensable" could be split into the following subwords: INLINEFORM0 . In our implementation, we adopt Byte Pair Encoding (BPE) BIBREF13 which is a simple data compression technique that iteratively replaces the most frequent pair of bytes in a sequence by a single, unused byte. BPE allows for the representation of an open vocabulary through a fixed-size vocabulary of variable-length character sequences, making it a very suitable word segmentation strategy for neural network models. The generalized framework can be described as follows. Firstly, all the input sequences (strings) are tokenized into a sequence of single-character subwords, then we repeat, Count all bigrams under the current segmentation status of all sequences. Find the bigram with the highest frequency and merge them in all the sequences. Note the segmentation status is updating now. If the merging times do not reach the specified number, go back to 1, otherwise the algorithm ends. In BIBREF14 , BPE is adopted to segment infrequent words into sub-word units for machine translation. However, there is a key difference between the motivations for subword segmentation. We aim to refine the word representations by using subwords, for both frequent and infrequent words, which is more generally motivated. To this end, we adaptively tokenize words in multi-granularity by controlling the merging times. Subword-augmented Word Embedding Our subwords are also formed as character n-grams, do not cross word boundaries. After using unsupervised segmentation methods to split each word into a subword sequence, an augmented embedding (AE) is to straightforwardly integrate word embedding INLINEFORM0 and subword embedding INLINEFORM1 for a given word INLINEFORM2 . INLINEFORM3 where INLINEFORM0 denotes the detailed integration operation. 
In this work, we investigate concatenation (concat), element-wise summation (sum) and element-wise multiplication (mul). Thus, each document INLINEFORM1 and query INLINEFORM2 is represented as INLINEFORM3 matrix where INLINEFORM4 denotes the dimension of word embedding and INLINEFORM5 is the number of words in the input. Subword embedding could be useful to refine the word embedding in a finer-grained way, we also consider improving word representation from itself. For quite a lot of words, especially those rare ones, their word embedding is extremely hard to learn due to the data sparse issue. Actually, if all the words in the dataset are used to build the vocabulary, the OOV words from the test set will not obtain adequate training. If they are initiated inappropriately, either with relatively high or low weights, they will harm the answer prediction. To alleviate the OOV issues, we keep a short list INLINEFORM0 for specific words. INLINEFORM1 If INLINEFORM0 is in INLINEFORM1 , the immediate word embedding INLINEFORM2 is indexed from word lookup table INLINEFORM3 where INLINEFORM4 denotes the size (recorded words) of lookup table. Otherwise, it will be represented as the randomly initialized default word (denoted by a specific mark INLINEFORM5 ). Note that, this is intuitively like “guessing” the possible unknown words (which will appear during test) from the vocabulary during training and only the word embedding of the OOV words will be replaced by INLINEFORM6 while their subword embedding INLINEFORM7 will still be processed using the original word. In this way, the OOV words could be tuned sufficiently with expressive meaning after training. During test, the word embedding of unknown words would not severely bias its final representation. Thus, INLINEFORM8 ( INLINEFORM9 ) can be rewritten as INLINEFORM10 In our experiments, the short list is determined according to the word frequency. Concretely, we sort the vocabulary according to the word frequency from high to low. A frequency filter ratio INLINEFORM0 is set to filter out the low-frequency words (rare words) from the lookup table. For example, INLINEFORM1 =0.9 means the least frequent 10% words are replaced with the default UNK notation. The subword embedding INLINEFORM0 is generated by taking the final outputs of a bidirectional gated recurrent unit (GRU) BIBREF15 applied to the embeddings from a lookup table of subwords. The structure of GRU used in this paper are described as follows. INLINEFORM1 where INLINEFORM0 denotes the element-wise multiplication. INLINEFORM1 and INLINEFORM2 are the reset and update gates respectively, and INLINEFORM3 are the hidden states. A bi-directional GRU (BiGRU) processes the sequence in both forward and backward directions. Subwords of each word are successively fed to forward GRU and backward GRU to obtain the internal features of two directions. The output for each input is the concatenation of the two vectors from both directions: INLINEFORM4 . Then, the output of BiGRUs is passed to a fully connected layer to obtain the final subword embedding INLINEFORM5 . INLINEFORM6 Attention Module Our attention module is based on the Gated attention Reader (GA Reader) proposed by BIBREF9 . We choose this model due to its simplicity with comparable performance so that we can focus on the effectiveness of SAW strategies. This module can be described in the following two steps. 
After augmented embedding, we use two BiGRUs to get contextual representations of the document and query respectively, where the representation of each word is formed by concatenating the forward and backward hidden states. INLINEFORM0 For each word INLINEFORM0 in INLINEFORM1 , we form a word-specific representation of the query INLINEFORM2 using soft attention, and then adopt element-wise product to multiply the query representation with the document word representation. INLINEFORM3 where INLINEFORM0 denotes the multiplication operator to model the interactions between INLINEFORM1 and INLINEFORM2 . Then, the document contextual representation INLINEFORM3 is gated by query representation. Suppose the network has INLINEFORM0 layers. At each layer, the document representation INLINEFORM1 is updated through above attention learning. After going through all the layers, our model comes to answer prediction phase. We use all the words in the document to form the candidate set INLINEFORM2 . Let INLINEFORM3 denote the INLINEFORM4 -th intermediate output of query representation INLINEFORM5 and INLINEFORM6 represent the full output of document representation INLINEFORM7 . The probability of each candidate word INLINEFORM8 as being the answer is predicted using a softmax layer over the inner-product between INLINEFORM9 and INLINEFORM10 . INLINEFORM11 where vector INLINEFORM0 denotes the probability distribution over all the words in the document. Note that each word may occur several times in the document. Thus, the probabilities of each candidate word occurring in different positions of the document are summed up for final prediction. INLINEFORM1 where INLINEFORM0 denotes the set of positions that a particular word INLINEFORM1 occurs in the document INLINEFORM2 . The training objective is to maximize INLINEFORM3 where INLINEFORM4 is the correct answer. Finally, the candidate word with the highest probability will be chosen as the predicted answer. INLINEFORM0 Different from recent work employing complex attention mechanisms BIBREF5 , BIBREF7 , BIBREF16 , our attention mechanism is much more simple with comparable performance so that we can focus on the effectiveness of SAW strategies. Dataset and Settings To verify the effectiveness of our proposed model, we conduct multiple experiments on three Chinese Machine Reading Comprehension datasets, namely CMRC-2017 BIBREF17 , People's Daily (PD) and Children Fairy Tales (CFT) BIBREF2 . In these datasets, a story containing consecutive sentences is formed as the Document and one of the sentences is either automatically or manually selected as the Query where one token is replaced by a placeholder to indicate the answer to fill in. Table TABREF8 gives data statistics. Different from the current cloze-style datasets for English reading comprehension, such as CBT, Daily Mail and CNN BIBREF0 , the three Chinese datasets do not provide candidate answers. Thus, the model has to find the correct answer from the entire document. Besides, we also use the Children's Book Test (CBT) dataset BIBREF1 to test the generalization ability in multi-lingual case. We only focus on subsets where the answer is either a common noun (CN) or NE which is more challenging since the answer is likely to be rare words. We evaluate all the models in terms of accuracy, which is the standard evaluation metric for this task. Throughout this paper, we use the same model setting to make fair comparisons. 
According to our preliminary experiments, we report the results based on the following settings. The default integration strategy is element-wise product. Word embeddings were 200 INLINEFORM0 and pre-trained by word2vec BIBREF18 toolkit on Wikipedia corpus. Subword embedding were 100 INLINEFORM1 and randomly initialized with the uniformed distribution in the interval [-0:05; 0:05]. Our model was implemented using the Theano and Lasagne Python libraries. We used stochastic gradient descent with ADAM updates for optimization BIBREF19 . The batch size was 64 and the initial learning rate was 0.001 which was halved every epoch after the second epoch. We also used gradient clipping with a threshold of 10 to stabilize GRU training BIBREF20 . We use three attention layers for all experiments. The GRU hidden units for both the word and subword representation were 128. The default frequency filter proportion was 0.9 and the default merging times of BPE was 1,000. We also apply dropout between layers with a dropout rate of 0.5 . Main Results [7]http://www.hfl-tek.com/cmrc2017/leaderboard.html Table TABREF17 shows our results on CMRC-2017 dataset, which shows that our SAW Reader (mul) outperforms all other single models on the test set, with 7.57% improvements compared with Attention Sum Reader (AS Reader) baseline. Although WHU's model achieves the best besides our model on the valid set with only 0.75% below ours, their result on the test set is lower than ours by 2.27%, indicating our model has a satisfactory generalization ability. We also list different integration operations for word and subword embeddings. Table TABREF19 shows the comparisons. From the results, we can see that Word + BPE outperforms Word + Char which indicates subword embedding works essentially. We also observe that mul outperforms the other two operations, concat and sum. This reveals that mul might be more informative than concat and sum operations. The superiority might be due to element-wise product being capable of modeling the interactions and eliminating distribution differences between word and subword embedding. Intuitively, this is also similar to endow subword-aware “attention” over the word embedding. In contrast, concatenation operation may cause too high dimension, which leads to serious over-fitting issues, and sum operation is too simple to prevent from detailed information losing. Since there is no training set for CFT dataset, our model is trained on PD training set. Note that the CFT dataset is harder for the machine to answer because the test set is further processed by human evaluation, and may not be accordance with the pattern of PD dataset. The results on PD and CFT datasets are listed in Table TABREF20 . As we see that, our SAW Reader significantly outperforms the CAS Reader in all types of testing, with improvements of 7.0% on PD and 8.8% on CFT test sets, respectively. Although the domain and topic of PD and CFT datasets are quite different, the results indicate that our model also works effectively for out-of-domain learning. To verify if our method can only work for Chinese, we also evaluate the effectiveness of the proposed method on benchmark English dataset. We use CBT dataset as our testbed to evaluate the performance. For a fair comparison, we simply set the same parameters as before. Table TABREF22 shows the results. 
We observe that our model outperforms most of the previously public works, with 2.4 % gains on the CBT-NE test set compared with GA Reader which adopts word and character embedding concatenation. Our SAW Reader also achieves comparable performance with FG Reader who adopts neural gates to combine word-level and character-level representations with assistance of extra features including NE, POS and word frequency while our model is much simpler and faster. This result shows our SAW Reader is not restricted to Chinese reading comprehension, but also for other languages. Merging Times of BPE The vocabulary size could seriously involve the segmentation granularity. For BPE segmentation, the resulted subword vocabulary size is equal to the merging times plus the number of single-character types. To have an insight of the influence, we adopt merge times from 0 to 20 INLINEFORM0 , and conduct quantitative study on CMRC-2017 for BPE segmentation. Figure FIGREF25 shows the results. We observe that when the vocabulary size is 1 INLINEFORM1 , the models could obtain the best performance. The results indicate that for a task like reading comprehension the subwords, being a highly flexible grained representation between character and word, tends to be more like characters instead of words. However, when the subwords completely fall into characters, the model performs the worst. This indicates that the balance between word and character is quite critical and an appropriate grain of character-word segmentation could essentially improve the word representation. Filter Mechanism To investigate the impact of the short list to the model performance, we conduct quantitative study on the filter ratio from [0.1, 0.2, ..., 1]. The results on the CMRC-2017 dataset are depicted in Figure FIGREF25 . As we can see that when INLINEFORM0 our SAW reader can obtain the best performance, showing that building the vocabulary among all the training set is not optimal and properly reducing the frequency filter ratio can boost the accuracy. This is partially attributed to training the model from the full vocabulary would cause serious over-fitting as the rare words representations can not obtain sufficient tuning. If the rare words are not initialized properly, they would also bias the whole word representations. Thus a model without OOV mechanism will fail to precisely represent those inevitable OOV words from test sets. Subword-Augmented Representations In text understanding tasks, if the ground-truth answer is OOV word or contains OOV word(s), the performance of deep neural networks would severely drop due to the incomplete representation, especially for cloze-style reading comprehension task where the answer is only one word or phrase. In CMRC-2017, we observe questions with OOV answers (denoted as “OOV questions") account for 17.22% in the error results of the best Word + Char embedding based model. With BPE subword embedding, 12.17% of these “OOV questions" could be correctly answered. This shows the subword representations could be essentially useful for modeling rare and unseen words. To analyze the reading process of SAW Reader, we draw the attention distributions at intermediate layers as shown in Figure FIGREF28 . We observe the salient candidates in the document can be focused after the pair-wise matching of document and query and the right answer (“The mole") could obtain a high weight at the very beginning. 
After attention learning, the key evidence of the answer would be collected and irrelevant parts would be ignored. This shows our SAW Reader is effective at selecting the vital points at the fundamental embedding layer, guiding the attention layers to collect more relevant pieces. Machine Reading Comprehension Recently, many deep learning models have been proposed for reading comprehension BIBREF16 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 , BIBREF9 , BIBREF26 , BIBREF27 . Notably, Chen2016A conducted an in-depth and thoughtful examination on the comprehension task based on an attentive neural network and an entity-centric classifier with a careful analysis based on handful features. kadlec2016text proposed the Attention Sum Reader (AS Reader) that uses attention to directly pick the answer from the context, which is motivated by the Pointer Network BIBREF28 . Instead of summing the attention of query-to-document, GA Reader BIBREF9 defined an element-wise product to endowing attention on each word of the document using the entire query representation to build query-specific representations of words in the document for accurate answer selection. Wang2017Gated employed gated self-matching networks (R-net) on passage against passage itself to refine passage representation with information from the whole passage. Cui2016Attention introduced an “attended attention" mechanism (AoA) where query-to-document and document-to-query are mutually attentive and interactive to each other. Augmented Word Embedding Distributed word representation plays a fundamental role in neural models BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . Recently, character embeddings are widely used to enrich word representations BIBREF37 , BIBREF21 , BIBREF38 , BIBREF39 . Yang2016Words explored a fine-grained gating mechanism (FG Reader) to dynamically combine word-level and character-level representations based on properties of the words. However, this method is computationally complex and it is not end-to-end, requiring extra labels such as NE and POS tags. Seo2016Bidirectional concatenated the character and word embedding to feed a two-layer Highway Network. Not only for machine reading comprehension tasks, character embedding has also benefit other natural language process tasks, such as word segmentation BIBREF40 , machine translation BIBREF38 , tagging BIBREF41 , BIBREF42 and language modeling BIBREF43 , BIBREF44 . However, character embedding only shows marginal improvement due to a lack internal semantics. Lexical, syntactic and morphological information are also considered to improve word representation BIBREF12 , BIBREF45 . Bojanowski2016Enriching proposed to learn representations for character INLINEFORM0 -gram vectors and represent words as the sum of the INLINEFORM1 -gram vectors. Avraham2017The built a model inspired by BIBREF46 , who used morphological tags instead of INLINEFORM2 -grams. They jointly trained their morphological and semantic embeddings, implicitly assuming that morphological and semantic information should live in the same space. However, the linguistic knowledge resulting subwords, typically, morphological suffix, prefix or stem, may not be suitable for different kinds of languages and tasks. Sennrich2015Neural introduced the byte pair encoding (BPE) compression algorithm into neural machine translation for being capable of open-vocabulary translation by encoding rare and unknown words as subword units. 
Instead, we consider refining the word representations for both frequent and infrequent words from a computational perspective. Our proposed subword-augmented embedding approach is more general, which can be adopted to enhance the representation for each word by adaptively altering the segmentation granularity in multiple NLP tasks. Conclusion This paper presents an effective neural architecture, called subword-augmented word embedding to enhance the model performance for the cloze-style reading comprehension task. The proposed SAW Reader uses subword embedding to enhance the word representation and limit the word frequency spectrum to train rare words efficiently. With the help of the short list, the model size will also be reduced together with training speedup. Unlike most existing works, which introduce either complex attentive architectures or many manual features, our model is much more simple yet effective. Giving state-of-the-art performance on multiple benchmarks, the proposed reader has been proved effective for learning joint representation at both word and subword level and alleviating OOV difficulties.
AS Reader, GA Reader, CAS Reader
938688871913862c9f8a28b42165237b7324e0de
938688871913862c9f8a28b42165237b7324e0de_0
Q: Do the authors mention any possible confounds in their study? Text: Abstract We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns. Introduction Social-media activities often reflect phenomena that occur in other complex systems. By observing social networks and the content propagated through these networks, we can describe or even predict the interplay between the observed social-media activities and another complex system that is more difficult, if not impossible, to monitor. There are numerous studies reported in the literature that successfully correlate social-media activities to phenomena like election outcomes BIBREF0 , BIBREF1 or stock-price movements BIBREF2 , BIBREF3 . In this paper we study the cohesion and coalitions exhibited by political groups in the Eighth European Parliament (2014–2019). We analyze two entirely different aspects of how the Members of the European Parliament (MEPs) behave in policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting (i.e., endorsing) behavior. We use two diverse datasets in the analysis: the roll-call votes and the Twitter data. A roll-call vote (RCV) is a vote in the parliament in which the names of the MEPs are recorded along with their votes. The RCV data is available as part of the minutes of the parliament's plenary sessions. From this perspective, cohesion is seen as the tendency to co-vote (i.e., cast the same vote) within a group, and a coalition is formed when members of two or more groups exhibit a high degree of co-voting on a subject. The second dataset comes from Twitter. It captures the retweeting behavior of MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between the groups) from a completely different perspective. With over 300 million monthly active users and 500 million tweets posted daily, Twitter is one of the most popular social networks. Twitter allows its users to post short messages (tweets) and to follow other users. 
A user who follows another user is able to read his/her public tweets. Twitter also supports other types of interaction, such as user mentions, replies, and retweets. Of these, retweeting is the most important activity as it is used to share and endorse content created by other users. When a user retweets a tweet, the information about the original author as well as the tweet's content are preserved, and the tweet is shared with the user's followers. Typically, users retweet content that they agree with and thus endorse the views expressed by the original tweeter. We apply two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 which measures the agreement among observers, or voters in our case. The second one is based on Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach and is often used in social-network analyses. Even though these two methodologies come with two different sets of techniques and are based on different assumptions, they provide consistent results. The main contributions of this paper are as follows: (i) We give general insights into the cohesion of political groups in the Eighth European Parliament, both overall and across different policy areas. (ii) We explore whether coalitions are formed in the same way for different policy areas. (iii) We explore to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. (iv) We employ two statistically sound methodologies and examine the extent to which the results are sensitive to the choice of methodology. While the results are mostly consistent, we show that the difference are due to the different treatment of non-attending and abstaining MEPs by INLINEFORM0 and ERGM. The most novel and interesting aspect of our work is the relationship between the co-voting and the retweeting patterns. The increased use of Twitter by MEPs on days with a roll-call vote session (see Fig FIGREF1 ) is an indicator that these two processes are related. In addition, the force-based layouts of the co-voting network and the retweet network reveal a very similar structure on the left-to-center side of the political spectrum (see Fig FIGREF2 ). They also show a discrepancy on the far-right side of the spectrum, which calls for a more detailed analysis. Related work In this paper we study and relate two very different aspects of how MEPs behave in policy-making processes. First, we look at their co-voting behavior, and second, we examine their retweeting patterns. Thus, we draw related work from two different fields of science. On one hand, we look at how co-voting behavior is analyzed in the political-science literature and, on the other, we explore how Twitter is used to better understand political and policy-making processes. The latter has been more thoroughly explored in the field of data mining (specifically, text mining and network analysis). To the best of our knowledge, this is the first paper that studies legislative behavior in the Eighth European Parliament. The legislative behavior of the previous parliaments was thoroughly studied by Hix, Attina, and others BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . These studies found that voting behavior is determined to a large extent—and when viewed over time, increasingly so—by affiliation to a political group, as an organizational reflection of the ideological position. 
The authors found that the cohesion of political groups in the parliament has increased, while nationality has been less and less of a decisive factor BIBREF12 . The literature also reports that a split into political camps on the left and right of the political spectrum has recently replaced the `grand coalition' between the two big blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. The authors conclude that coalitions are to a large extent formed along the left-to-right axis BIBREF12 . In this paper we analyze the roll-call vote data published in the minutes of the parliament's plenary sessions. For a given subject, the data contains the vote of each MEP present at the respective sitting. Roll-call vote data from the European Parliament has already been extensively studied by other authors, most notably by Hix et al. BIBREF10 , BIBREF13 , BIBREF11 . To be able to study the cohesion and coalitions, authors like Hix, Attina, and Rice BIBREF6 , BIBREF13 , BIBREF14 defined and employed a variety of agreement measures. The most prominent measure is the Agreement Index proposed by Hix et al. BIBREF13 . This measure computes the agreement score from the size of the majority class for a particular vote. The Agreement Index, however, exhibits two drawbacks: (i) it does not account for co-voting by chance, and (ii) without a proper adaptation, it does not accommodate the scenario in which the agreement is to be measured between two different political groups. We employ two statistically sound methodologies developed in two different fields of science. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 . INLINEFORM1 is a measure of the agreement among observers, coders, or measuring instruments that assign values to items or phenomena. It compares the observed agreement to the agreement expected by chance. INLINEFORM2 is used to measure the inter- and self-annotator agreement of human experts when labeling data, and the performance of classification models in machine learning scenarios BIBREF15 . In addition to INLINEFORM3 , we employ Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach, often used in social-network analyses. ERGM can be employed to investigate how different network statistics (e.g., number of edges and triangles) or external factors (e.g., political group membership) govern the network-formation process. The second important aspect of our study is related to analyzing the behavior of participants in social networks, specifically Twitter. Twitter is studied by researchers to better understand different political processes, and in some cases to predict their outcomes. Eom et al. BIBREF1 consider the number of tweets by a party as a proxy for the collective attention to the party, explore the dynamics of the volume, and show that this quantity contains information about an election's outcome. Other studies BIBREF16 reach similar conclusions. Conover et al. BIBREF17 predicted the political alignment of Twitter users in the run-up to the 2010 US elections based on content and network structure. They analyzed the polarization of the retweet and mention networks for the same elections BIBREF18 . Borondo et al. BIBREF19 analyzed user activity during the Spanish presidential elections. They additionally analyzed the 2012 Catalan elections, focusing on the interplay between the language and the community structure of the network BIBREF20 . 
Most existing research, as Larsson points out BIBREF21 , focuses on the online behavior of leading political figures during election campaigns. This paper continues our research on communities that MEPs (and their followers) form on Twitter BIBREF22 . The goal of our research was to evaluate the role of Twitter in identifying communities of influence when the actual communities are known. We represent the influence on Twitter by the number of retweets that MEPs “receive”. We construct two networks of influence: (i) core, which consists only of MEPs, and (ii) extended, which also involves their followers. We compare the detected communities in both networks to the groups formed by the political, country, and language membership of MEPs. The results show that the detected communities in the core network closely match the political groups, while the communities in the extended network correspond to the countries of residence. This provides empirical evidence that analyzing retweet networks can reveal real-world relationships and can be used to uncover hidden properties of the networks. Lazer BIBREF23 highlights the importance of network-based approaches in political science in general by arguing that politics is a relational phenomenon at its core. Some researchers have adopted the network-based approach to investigate the structure of legislative work in the US Congress, including committee and sub-committee membership BIBREF24 , bill co-sponsoring BIBREF25 , and roll-call votes BIBREF26 . More recently, Dal Maso et al. BIBREF27 examined the community structure with respect to political coalitions and government structure in the Italian Parliament. Scherpereel et al. BIBREF28 examined the constituency, personal, and strategic characteristics of MEPs that influence their tweeting behavior. They suggested that Twitter's characteristics, like immediacy, interactivity, spontaneity, personality, and informality, are likely to resonate with political parties across Europe. By fitting regression models, the authors find that MEPs from incohesive groups have a greater tendency to retweet. In contrast to most of these studies, we focus on the Eighth European Parliament, and more importantly, we study and relate two entirely different behavioral aspects, co-voting and retweeting. The goal of this research is to better understand the cohesion and coalition formation processes in the European Parliament by quantifying and comparing the co-voting patterns and social behavior. Methods In this section we present the methods to quantify cohesion and coalitions from the roll-call votes and Twitter activities. Co-voting measured by agreement We first show how the co-voting behaviour of MEPs can be quantified by a measure of the agreement between them. We treat individual RCVs as observations, and MEPs as independent observers or raters. When they cast the same vote, there is a high level of agreement, and when they vote differently, there is a high level of disagreement. We define cohesion as the level of agreement within a political group, a coalition as a voting agreement between political groups, and opposition as a disagreement between different groups. There are many well-known measures of agreement in the literature. We selected Krippendorff's Alpha-reliability ( INLINEFORM0 ) BIBREF4 , which is a generalization of several specialized measures. It works for any number of observers, and is applicable to different variable types and metrics (e.g., nominal, ordered, interval, etc.). 
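For the yes/no ballots considered here, the agreement computation (defined formally below) can be sketched in a few lines of Python. This is only an illustration of the standard nominal-data formulation of Krippendorff's Alpha; it is not the implementation used to produce the results reported in this paper, and the example votes at the bottom are hypothetical.

```python
from collections import Counter

def krippendorff_alpha_nominal(votes):
    """Nominal Krippendorff's Alpha for a list of roll-call votes.

    `votes` is a list of RCVs; each RCV is a list of 'yes'/'no' ballots, one
    per MEP who voted (non-attending and abstaining MEPs are simply omitted).
    Returns a value in [-1, 1]: 1 = perfect agreement, 0 = chance level.
    """
    categories = ("yes", "no")
    # Coincidence matrix o[c][k]: c-k ballot pairs, each vote weighted by 1/(m_i - 1).
    o = {c: {k: 0.0 for k in categories} for c in categories}
    for rcv in votes:
        m = len(rcv)
        if m < 2:                      # fewer than two ballots: nothing to pair
            continue
        counts = Counter(rcv)
        for c in categories:
            for k in categories:
                pairs = counts[c] * (counts[k] - (1 if c == k else 0))
                o[c][k] += pairs / (m - 1)
    n_c = {c: sum(o[c].values()) for c in categories}       # row totals
    n = sum(n_c.values())                                    # grand total
    # Nominal difference function: 0 for equal values, 1 otherwise.
    d_obs = sum(o[c][k] for c in categories for k in categories if c != k) / n
    d_exp = sum(n_c[c] * n_c[k] for c in categories for k in categories if c != k) / (n * (n - 1))
    if d_exp == 0.0:                   # all ballots identical; treated here as full agreement
        return 1.0
    return 1.0 - d_obs / d_exp

# Hypothetical example: three RCVs cast by the MEPs of one political group.
group_votes = [
    ["yes", "yes", "yes", "no"],
    ["no", "no", "no"],
    ["yes", "yes", "no", "no"],
]
print(round(krippendorff_alpha_nominal(group_votes), 3))
```

The same coincidence-matrix machinery extends to the between-group variant introduced below, where only pairs of ballots coming from two different political groups are counted.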
In general, INLINEFORM1 is defined as follows: INLINEFORM2 where INLINEFORM0 is the actual disagreement between observers (MEPs), and INLINEFORM1 is disagreement expected by chance. When observers agree perfectly, INLINEFORM2 INLINEFORM3 , when the agreement equals the agreement by chance, INLINEFORM4 INLINEFORM5 , and when the observers disagree systematically, INLINEFORM6 INLINEFORM7 . The two disagreement measures are defined as follows: INLINEFORM0 INLINEFORM0 The arguments INLINEFORM0 , and INLINEFORM1 are defined below and refer to the values in the coincidence matrix that is constructed from the RCVs data. In roll-call votes, INLINEFORM2 (and INLINEFORM3 ) is a nominal variable with two possible values: yes and no. INLINEFORM4 is a difference function between the values of INLINEFORM5 and INLINEFORM6 , defined as: INLINEFORM7 The RCVs data has the form of a reliability data matrix: INLINEFORM0 where INLINEFORM0 is the number of RCVs, INLINEFORM1 is the number of MEPs, INLINEFORM2 is the number of votes cast in the voting INLINEFORM3 , and INLINEFORM4 is the actual vote of an MEP INLINEFORM5 in voting INLINEFORM6 (yes or no). A coincidence matrix is constructed from the reliability data matrix, and is in general a INLINEFORM0 -by- INLINEFORM1 square matrix, where INLINEFORM2 is the number of possible values of INLINEFORM3 . In our case, where only yes/no votes are relevant, the coincidence matrix is a 2-by-2 matrix of the following form: INLINEFORM4 A cell INLINEFORM0 accounts for all coincidences from all pairs of MEPs in all RCVs where one MEP has voted INLINEFORM1 and the other INLINEFORM2 . INLINEFORM3 and INLINEFORM4 are the totals for each vote outcome, and INLINEFORM5 is the grand total. The coincidences INLINEFORM6 are computed as: INLINEFORM7 where INLINEFORM0 is the number of INLINEFORM1 pairs in vote INLINEFORM2 , and INLINEFORM3 is the number of MEPs that voted in INLINEFORM4 . When computing INLINEFORM5 , each pair of votes is considered twice, once as a INLINEFORM6 pair, and once as a INLINEFORM7 pair. The coincidence matrix is therefore symmetrical around the diagonal, and the diagonal contains all the equal votes. The INLINEFORM0 agreement is used to measure the agreement between two MEPs or within a group of MEPs. When applied to a political group, INLINEFORM1 corresponds to the cohesion of the group. The closer INLINEFORM2 is to 1, the higher the agreement of the MEPs in the group, and hence the higher the cohesion of the group. We propose a modified version of INLINEFORM0 to measure the agreement between two different groups, INLINEFORM1 and INLINEFORM2 . In the case of a voting agreement between political groups, high INLINEFORM3 is interpreted as a coalition between the groups, whereas negative INLINEFORM4 indicates political opposition. Suppose INLINEFORM0 and INLINEFORM1 are disjoint subsets of all the MEPs, INLINEFORM2 , INLINEFORM3 . The respective number of votes cast by both group members in vote INLINEFORM4 is INLINEFORM5 and INLINEFORM6 . The coincidences are then computed as: INLINEFORM7 where the INLINEFORM0 pairs come from different groups, INLINEFORM1 and INLINEFORM2 . The total number of such pairs in vote INLINEFORM3 is INLINEFORM4 . The actual number INLINEFORM5 of the pairs is multiplied by INLINEFORM6 so that the total contribution of vote INLINEFORM7 to the coincidence matrix is INLINEFORM8 . A network-based measure of co-voting In this section we describe a network-based approach to analyzing the co-voting behavior of MEPs. 
For each roll-call vote we form a network, where the nodes in the network are MEPs, and an undirected edge between two MEPs is formed when they cast the same vote. We are interested in the factors that determine the cohesion within political groups and coalition formation between political groups. Furthermore, we investigate to what extent communication in a different social context, i.e., the retweeting behavior of MEPs, can explain the co-voting of MEPs. For this purpose we apply an Exponential Random Graph Model BIBREF5 to individual roll-call vote networks, and aggregate the results by means of the meta-analysis. ERGMs allow us to investigate the factors relevant for the network-formation process. Network metrics, as described in the abundant literature, serve to gain information about the structural properties of the observed network. A model investigating the processes driving the network formation, however, has to take into account that there can be a multitude of alternative networks. If we are interested in the parameters influencing the network formation we have to consider all possible networks and measure their similarity to the originally observed network. The family of ERGMs builds upon this idea. Assume a random graph INLINEFORM0 , in the form of a binary adjacency matrix, made up of a set of INLINEFORM1 nodes and INLINEFORM2 edges INLINEFORM3 where, similar to a binary choice model, INLINEFORM4 if the nodes INLINEFORM5 are connected and INLINEFORM6 if not. Since network data is by definition relational and thus violates assumptions of independence, classical binary choice models, like logistic regression, cannot be applied in this context. Within an ERGM, the probability for a given network is modelled by DISPLAYFORM0 where INLINEFORM0 is the vector of parameters and INLINEFORM1 is the vector of network statistics (counts of network substructures), which are a function of the adjacency matrix INLINEFORM2 . INLINEFORM3 is a normalization constant corresponding to the sample of all possible networks, which ensures a proper probability distribution. Evaluating the above expression allows us to make assertions if and how specific nodal attributes influence the network formation process. These nodal attributes can be endogenous (dyad-dependent parameters) to the network, like the in- and out-degrees of a node, or exogenous (dyad-independent parameters), as the party affiliation, or the country of origin in our case. An alternative formulation of the ERGM provides the interpretation of the coefficients. We introduce the change statistic, which is defined as the change in the network statistics when an edge between nodes INLINEFORM0 and INLINEFORM1 is added or not. If INLINEFORM2 and INLINEFORM3 denote the vectors of counts of network substructures when the edge is added or not, the change statistics is defined as follows: INLINEFORM4 With this at hand it can be shown that the distribution of the variable INLINEFORM0 , conditional on the rest of the graph INLINEFORM1 , corresponds to: INLINEFORM2 This implies on the one hand that the probability depends on INLINEFORM0 via the change statistic INLINEFORM1 , and on the other hand, that each coefficient within the vector INLINEFORM2 represents an increase in the conditional log-odds ( INLINEFORM3 ) of the graph when the corresponding element in the vector INLINEFORM4 increases by one. The need to condition the probability on the rest of the network can be illustrated by a simple example. 
The addition (removal) of a single edge alters the network statistics. If a network has only edges INLINEFORM5 and INLINEFORM6 , the creation of an edge INLINEFORM7 would not only add an additional edge but would also alter the count for other network substructures included in the model. In this example, the creation of the edge INLINEFORM8 also increases the number of triangles by one. The coefficients are transformed into probabilities with the logistic function: INLINEFORM9 For example, in the context of roll-call votes, the probability that an additional co-voting edge is formed between two nodes (MEPs) of the same political group is computed with that equation. In this context, the nodematch (nodemix) coefficients of the ERGM (described in detail bellow) therefore refer to the degree of homophilous (heterophilous) matching of MEPs with regard to their political affiliation, or, expressed differently, the propensity of MEPs to co-vote with other MEPs of their respective political group or another group. A positive coefficient reflects an increased chance that an edge between two nodes with respective properties, like group affiliation, given all other parameters unchanged, is formed. Or, put differently, a positive coefficient implies that the probability of observing a network with a higher number of corresponding pairs relative to the hypothetical baseline network, is higher than to observe the baseline network itself BIBREF31 . For an intuitive interpretation, log-odds value of 0 corresponds to the even chance probability of INLINEFORM0 . Log-odds of INLINEFORM1 correspond to an increase of probability by INLINEFORM2 , whereas log-odds of INLINEFORM3 correspond to a decrease of probability by INLINEFORM4 . The computational challenges of estimating ERGMs is to a large degree due to the estimation of the normalizing constant. The number of possible networks is already extremely large for very small networks and the computation is simply not feasible. Therefore, an appropriate sample has to be found, ideally covering the most probable areas of the probability distribution. For this we make use of a method from the Markov Chain Monte Carlo (MCMC) family, namely the Metropolis-Hastings algorithm. The idea behind this algorithm is to generate and sample highly weighted random networks departing from the observed network. The Metropolis-Hastings algorithm is an iterative algorithm which samples from the space of possible networks by randomly adding or removing edges from the starting network conditional on its density. If the likelihood, in the ERGM context also denoted as weights, of the newly generated network is higher than that of the departure network it is retained, otherwise it is discarded. In the former case, the algorithm starts anew from the newly generated network. Otherwise departure network is used again. Repeating this procedure sufficiently often and summing the weights associated to the stored (sampled) networks allows to compute an approximation of the denominator in equation EQREF18 (normalizing constant). The algorithm starts sampling from the originally observed network INLINEFORM0 . The optimization of the coefficients is done simultaneously, equivalently with the Metropolis-Hastings algorithm. At the beginning starting values have to be supplied. For the study at hand we used the “ergm” library from the statistical R software package BIBREF5 implementing the Gibbs-Sampling algorithm BIBREF32 which is a special case of the Metropolis-Hastings algorithm outlined. 
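The sampling machinery can be made concrete with a small, self-contained simulation. The sketch below is illustrative only: it simulates from a toy ERGM with two statistics (an edge count and a same-group match count) by toggling random dyads and accepting moves via the change statistic, and it converts log-odds to probabilities with the logistic function. The coefficient values and group labels are hypothetical, and the estimation reported in this paper relies on the R “ergm” package rather than on code like this.

```python
import math
import random

def ergm_toggle_sampler(n_nodes, groups, theta_edges, theta_nodematch,
                        n_steps=20000, seed=0):
    """Toy Metropolis-Hastings sampler for an ERGM with two statistics:
    the number of edges and the number of same-group ('nodematch') edges.

    At each step one dyad is toggled; the move is accepted with probability
    min(1, exp(theta . delta_s)), where delta_s is the change statistic.
    Returns the edge set of the final sampled network.
    """
    rng = random.Random(seed)
    edges = set()                                   # start from the empty graph
    dyads = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
    for _ in range(n_steps):
        i, j = rng.choice(dyads)
        present = (i, j) in edges
        sign = -1.0 if present else 1.0             # toggling the edge off vs. on
        delta_edges = sign
        delta_match = sign * (1.0 if groups[i] == groups[j] else 0.0)
        log_ratio = theta_edges * delta_edges + theta_nodematch * delta_match
        if rng.random() < math.exp(min(0.0, log_ratio)):   # symmetric proposal
            edges.symmetric_difference_update({(i, j)})    # apply the toggle
    return edges

def logodds_to_probability(logodds):
    """Logistic transform used to read ERGM coefficients as probabilities."""
    return 1.0 / (1.0 + math.exp(-logodds))

# Hypothetical example: 20 MEPs in two groups; a positive nodematch coefficient
# makes within-group co-voting edges more likely than between-group edges.
groups = ["A"] * 10 + ["B"] * 10
net = ergm_toggle_sampler(20, groups, theta_edges=-1.0, theta_nodematch=2.0)
within = sum(1 for i, j in net if groups[i] == groups[j])
print(len(net), within, round(logodds_to_probability(-1.0 + 2.0), 2))
```

With these settings the sampled networks contain a visibly higher share of within-group edges, which is the homophilous matching that the nodematch coefficients of the specification below are meant to capture.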
In order to answer our question of the importance of the factors which drive the network formation process in the roll-call co-voting network, the ERGM is specified with the following parameters: nodematch country: This parameter adds one network statistic to the model, i.e., the number of edges INLINEFORM0 where INLINEFORM1 . The coefficient indicates the homophilious mixing behavior of MEPs with respect to their country of origin. In other words, this coefficient indicates how relevant nationality is in the formation of edges in the co-voting network. nodematch national party: This parameter adds one network statistic to the model: the number of edges INLINEFORM0 with INLINEFORM1 . The coefficient indicates the homophilious mixing behavior of the MEPs with regard to their party affiliation at the national level. In the context of this study, this coefficient can be interpreted as an indicator for within-party cohesion at the national level. nodemix EP group: This parameter adds one network statistic for each pair of European political groups. These coefficients shed light on the degree of coalitions between different groups as well as the within group cohesion . Given that there are nine groups in the European Parliament, this coefficient adds in total 81 statistics to the model. edge covariate Twitter: This parameter corresponds to a square matrix with the dimension of the adjacency matrix of the network, which corresponds to the number of mutual retweets between the MEPs. It provides an insight about the extent to which communication in one social context (Twitter), can explain cooperation in another social context (co-voting in RCVs). An ERGM as specified above is estimated for each of the 2535 roll-call votes. Each roll-call vote is thereby interpreted as a binary network and as an independent study. It is assumed that a priori each MEP could possibly form an edge with each other MEP in the context of a roll-call vote. Assumptions over the presence or absence of individual MEPs in a voting session are not made. In other words the dimensions of the adjacency matrix (the node set), and therefore the distribution from which new networks are drawn, is kept constant over all RCVs and therefore for every ERGM. The ERGM results therefore implicitly generalize to the case where potentially all MEPs are present and could be voting. Not voting is incorporated implicitly by the disconnectedness of a node. The coefficients of the 2535 roll-call vote studies are aggregated by means of a meta-analysis approach proposed by Lubbers BIBREF33 and Snijders et al. BIBREF34 . We are interested in average effect sizes of different matching patterns over different topics and overall. Considering the number of RCVs, it seems straightforward to interpret the different RCV networks as multiplex networks and collapse them into one weighted network, which could then be analysed by means of a valued ERGM BIBREF35 . There are, however, two reasons why we chose the meta-analysis approach instead. First, aggregating the RCV data results into an extremely dense network, leading to severe convergence (degeneracy) problems for the ERGM. Second, the RCV data contains information about the different policy areas the individual votes were about. Since we are interested in how the coalition formation in the European Parliament differs over different areas, a method is needed that allows for an ex-post analysis of the corresponding results. We therefore opted for the meta-analysis approach by Lubbers and Snijders et al. 
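The idea behind this pooling step can be illustrated with a deliberately simplified, fixed-effects sketch in Python: per-vote coefficients are averaged with inverse-variance weights, so that more precisely estimated coefficients count more. The approach of Lubbers and Snijders et al. used in this paper additionally models between-vote variance and class-specific deviations, as described next; the numbers below are hypothetical.

```python
def inverse_variance_average(coefs, std_errs):
    """Fixed-effects pooling of per-RCV ERGM coefficients.

    A simplified stand-in for the meta-analysis used in the paper: each
    coefficient is weighted by the inverse of its estimation variance, and
    the pooled standard error follows from the summed weights.
    """
    weights = [1.0 / se ** 2 for se in std_errs]
    total = sum(weights)
    pooled = sum(w * b for w, b in zip(weights, coefs)) / total
    pooled_se = (1.0 / total) ** 0.5
    return pooled, pooled_se

# Hypothetical nodematch coefficients for one group, estimated on four RCVs.
coefs = [0.9, 1.4, 1.1, 0.7]
std_errs = [0.3, 0.5, 0.2, 0.4]
print(inverse_variance_average(coefs, std_errs))
```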
This approach allows us to summarize the results by decomposing the coefficients into average effects and (class) subject-specific deviations. The different ERGM runs for each RCV are thereby regarded as different studies with identical samples that are combined to obtain a general overview of effect sizes. The meta-regression model is defined as: INLINEFORM0 Here INLINEFORM0 is a parameter estimate for class INLINEFORM1 , and INLINEFORM2 is the average coefficient. INLINEFORM3 denotes the normally distributed deviation of the class INLINEFORM4 with a mean of 0 and a variance of INLINEFORM5 . INLINEFORM6 is the estimation error of the parameter value INLINEFORM7 from the ERGM. The meta-analysis model is fitted by an iterated, weighted, least-squares model in which the observations are weighted by the inverse of their variances. For the overall nodematch between political groups, we weighted the coefficients by group sizes. The results from the meta analysis can be interpreted as if they stemmed from an individual ERGM run. In our study, the meta-analysis was performed using the RSiena library BIBREF36 , which implements the method proposed by Lubbers and Snijders et al. BIBREF33 , BIBREF34 . Measuring cohesion and coalitions on Twitter The retweeting behavior of MEPs is captured by their retweet network. Each MEP active on Twitter is a node in this network. An edge in the network between two MEPs exists when one MEP retweeted the other. The weight of the edge is the number of retweets between the two MEPs. The resulting retweet network is an undirected, weighted network. We measure the cohesion of a political group INLINEFORM0 as the average retweets, i.e., the ratio of the number of retweets between the MEPs in the group INLINEFORM1 to the number of MEPs in the group INLINEFORM2 . The higher the ratio, the more each MEP (on average) retweets the MEPs from the same political group, hence, the higher the cohesion of the political group. The definition of the average retweeets ( INLINEFORM3 ) of a group INLINEFORM4 is: INLINEFORM5 This measure of cohesion captures the aggregate retweeting behavior of the group. If we consider retweets as endorsements, a larger number of retweets within the group is an indicator of agreement between the MEPs in the group. It does not take into account the patterns of retweeting within the group, thus ignoring the social sub-structure of the group. This is a potentially interesting direction and we leave it for future work. We employ an analogous measure for the strength of coalitions in the retweet network. The coalition strength between two groups INLINEFORM0 and INLINEFORM1 is the ratio of the number of retweets from one group to the other (but not within groups) INLINEFORM2 to the total number of MEPs in both groups, INLINEFORM3 . The definition of the average retweeets ( INLINEFORM4 ) between groups INLINEFORM5 and INLINEFORM6 is: INLINEFORM7 Cohesion of political groups In this section we first report on the level of cohesion of the European Parliament's groups by analyzing the co-voting through the agreement and ERGM measures. Next, we explore two important policy areas, namely Economic and monetary system and State and evolution of the Union. Finally, we analyze the cohesion of the European Parliament's groups on Twitter. Existing research by Hix et al. 
BIBREF10 , BIBREF13 , BIBREF11 shows that the cohesion of the European political groups has been rising since the 1990s, and the level of cohesion remained high even after the EU's enlargement in 2004, when the number of MEPs increased from 626 to 732. We measure the co-voting cohesion of the political groups in the Eighth European Parliament using Krippendorff's Alpha—the results are shown in Fig FIGREF30 (panel Overall). The Greens-EFA have the highest cohesion of all the groups. This finding is in line with an analysis of previous compositions of the Fifth and Sixth European Parliaments by Hix and Noury BIBREF11 , and the Seventh by VoteWatch BIBREF37 . They are closely followed by the S&D and EPP. Hix and Noury reported on the high cohesion of S&D in the Fifth and Sixth European Parliaments, and we also observe this in the current composition. They also reported a slightly less cohesive EPP-ED. This group split in 2009 into EPP and ECR. VoteWatch reports EPP to have cohesion on a par with Greens-EFA and S&D in the Seventh European Parliament. The cohesion level we observe in the current European Parliament is also similar to the level of Greens-EFA and S&D. The catch-all group of the non-aligned (NI) comes out as the group with the lowest cohesion. In addition, among the least cohesive groups in the European Parliament are the Eurosceptics EFDD, which include the British UKIP led by Nigel Farage, and the ENL whose largest party are the French National Front, led by Marine Le Pen. Similarly, Hix and Noury found that the least cohesive groups in the Seventh European Parliament are the nationalists and Eurosceptics. The Eurosceptic IND/DEM, which participated in the Sixth European Parliament, transformed into the current EFDD, while the nationalistic UEN was dissolved in 2009. We also measure the voting cohesion of the European Parliament groups using an ERGM, a network-based method—the results are shown in Fig FIGREF31 (panel Overall). The cohesion results obtained with ERGM are comparable to the results based on agreement. In this context, the parameters estimated by the ERGM refer to the matching of MEPs who belong to the same political group (one parameter per group). The parameters measure the homophilous matching between MEPs who have the same political affiliation. A positive value for the estimated parameter indicates that the co-voting of MEPs from that group is greater than what is expected by chance, where the expected number of co-voting links by chance in a group is taken to be uniformly random. A negative value indicates that there are fewer co-voting links within a group than expected by chance. Even though INLINEFORM0 and ERGM compute scores relative to what is expected by chance, they refer to different interpretations of chance. INLINEFORM1 's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group, knowing the votes of these MEPs on all RCVs. ERGM's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group on a given RCV, knowing the network-related properties of the co-voting network on that particular RCV. The main difference between INLINEFORM2 and ERGM, though, is the treatment of non-voting and abstained MEPs. INLINEFORM3 considers only the yes/no votes, and consequently, agreements by the voting MEPs of the same groups are considerably higher than co-voting by chance. 
ERGM, on the other hand, always considers all MEPs, and non-voting and abstained MEPs are treated as disconnected nodes. The level of co-voting by chance is therefore considerably lower, since there is often a large fraction of MEPs that do not attend or abstain. As with Alpha, Greens-EFA, S&D, and EPP exhibit the highest cohesion, even though their ranking is permuted when compared to the ranking obtained with Alpha. At the other end of the scale, we observe the same situation as with Alpha. The non-aligned members NI have the lowest cohesion, followed by EFDD and ENL. The only place where the two methods disagree is the level of cohesion of GUE-NGL. Alpha attributes GUE-NGL a rather high level of cohesion, on a par with ALDE, whereas the ERGM attributes it a much lower cohesion. The reason for this difference is the relatively high abstention rate of GUE-NGL. Whereas the overall fraction of non-attending and abstaining MEPs across all RCVs and all political groups is 25%, the GUE-NGL abstention rate is 34%. This is reflected in an above-average cohesion by Alpha, where only yes/no votes are considered, and in a relatively lower, below-average cohesion by ERGM. In the latter case, the non-attendance is interpreted as non-cohesive voting of the political group as a whole. In addition to the overall cohesion, we also focus on two selected policy areas. The cohesion of the political groups related to these two policy areas is shown in the first two panels in Fig FIGREF30 (Alpha) and Fig FIGREF31 (ERGM). The most important observation is that the level of cohesion of the political groups is very stable across different policy areas. These results are corroborated by both methodologies. Similar to the overall cohesion, the most cohesive political groups are the S&D, Greens-EFA, and EPP. The least cohesive group is the NI, followed by the ENL and EFDD. The two methodologies agree on the level of cohesion for all the political groups, except for GUE-NGL, due to its lower attendance rate. We determine the cohesion of political groups on Twitter by using the average number of retweets between MEPs within the same group. The results are shown in Fig FIGREF33 . The right-wing ENL and EFDD come out as the most cohesive groups, while all the other groups have a far lower average number of retweets. MEPs from ENL and EFDD post by far the largest number of retweets (over 240), and at the same time over 94% of their retweets are directed to MEPs from the same group. Moreover, these two groups stand out in the way the retweets are distributed within the group. A large portion of the retweets of EFDD (1755) goes to Nigel Farage, the leader of the group. Likewise, a very large portion of the retweets of ENL (2324) goes to Marine Le Pen, the leader of the group. Farage and Le Pen are by far the two most retweeted MEPs, with the third one having only 666 retweets. Coalitions in the European Parliament Coalition formation in the European Parliament is largely determined by ideological positions, reflected in the degree of cooperation of parties at the national and European levels. The observation of ideological inclinations in the coalition formation within the European Parliament was already made by other authors BIBREF11 and is confirmed in this study. The basic patterns of coalition formation in the European Parliament can already be seen in the co-voting network in Fig FIGREF2 A.
It is remarkable that the degree of attachment between the political groups, which indicates the degree of cooperation in the European Parliament, nearly exactly corresponds to the left-to-right seating order. The liberal ALDE seems to have an intermediator role between the left and right parts of the spectrum in the parliament. Between the extreme (GUE-NGL) and center left (S&D) groups, this function seems to be occupied by Greens-EFA. The non-aligned members NI, as well as the Eurosceptic EFFD and ENL, seem to alternately tip the balance on both poles of the political spectrum. Being ideologically more inclined to vote with other conservative and right-wing groups (EPP, ECR), they sometimes also cooperate with the extreme left-wing group (GUE-NGL) with which they share their Euroscepticism as a common denominator. Figs FIGREF36 and FIGREF37 give a more detailed understanding of the coalition formation in the European Parliament. Fig FIGREF36 displays the degree of agreement or cooperation between political groups measured by Krippendorff's INLINEFORM0 , whereas Fig FIGREF37 is based on the result from the ERGM. We first focus on the overall results displayed in the right-hand plots of Figs FIGREF36 and FIGREF37 . The strongest degrees of cooperation are observed, with both methods, between the two major parties (EPP and S&D) on the one hand, and the liberal ALDE on the other. Furthermore, we see a strong propensity for Greens-EFA to vote with the Social Democrats (5th strongest coalition by INLINEFORM0 , and 3rd by ERGM) and the GUE-NGL (3rd strongest coalition by INLINEFORM1 , and 5th by ERGM). These results underline the role of ALDE and Greens-EFA as intermediaries for the larger groups to achieve a majority. Although the two largest groups together have 405 seats and thus significantly more than the 376 votes needed for a simple majority, the degree of cooperation between the two major groups is ranked only as the fourth strongest by both methods. This suggests that these two political groups find it easier to negotiate deals with smaller counterparts than with the other large group. This observation was also made by Hix et al. BIBREF12 , who noted that alignments on the left and right of the political spectrum have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. Next, we focus on the coalition formation within the two selected policy areas. The area State and Evolution of the Union is dominated by cooperation between the two major groups, S&D and EPP, as well as ALDE. We also observe a high degree of cooperation between groups that are generally regarded as integration friendly, like Greens-EFA and GUE-NGL. We see, particularly in Fig FIGREF36 , a relatively high degree of cooperation between groups considered as Eurosceptic, like ECR, EFFD, ENL, and the group of non-aligned members. The dichotomy between supporters and opponents of European integration is even more pronounced within the policy area Economic and Monetary System. In fact, we are taking a closer look specifically at these two areas as they are, at the same time, both contentious and important. Both methods rank the cooperation between S&D and EPP on the one hand, and ALDE on the other, as the strongest. We also observe a certain degree of unanimity among the Eurosceptic and right-wing groups (EFDD, ENL, and NI) in this policy area. 
This seems plausible, as these groups were (especially in the aftermath of the global financial crisis and the subsequent European debt crisis) in fierce opposition to further payments to financially troubled member states. However, we also observe a number of strong coalitions that might, at first glance, seem unusual, specifically involving the left-wing group GUE-NGL on the one hand, and the right-wing EFDD, ENL, and NI on the other. These links also show up in the network plot in Fig FIGREF2 A. This might be attributable to a certain degree of Euroscepticism on both sides: rooted in criticism of capitalism on the left, and at least partly a raison d'être on the right. Hix et al. BIBREF11 discovered this pattern as well, and proposed an additional explanation—these coalitions also relate to a form of government-opposition dynamic that is rooted at the national level, but is reflected in voting patterns at the European level. In general, we observe two main differences between the INLINEFORM0 and ERGM results: the baseline cooperation as estimated by INLINEFORM1 is higher, and the ordering of coalitions from the strongest to the weakest is not exactly the same. The reason is the same as for the cohesion, namely different treatment of non-voting and abstaining MEPs. When they are ignored, as by INLINEFORM2 , the baseline level of inter-group co-voting is higher. When non-attending and abstaining is treated as voting differently, as by ERGM, it is considerably more difficult to achieve co-voting coalitions, specially when there are on average 25% MEPs that do not attend or abstain. Groups with higher non-attendance rates, such as GUE-NGL (34%) and NI (40%) are less likely to form coalitions, and therefore have relatively lower ERGM coefficients (Fig FIGREF37 ) than INLINEFORM3 scores (Fig FIGREF36 ). The first insight into coalition formation on Twitter can be observed in the retweet network in Fig FIGREF2 B. The ideological left to right alignment of the political groups is reflected in the retweet network. Fig FIGREF40 shows the strength of the coalitions on Twitter, as estimated by the number of retweets between MEPs from different groups. The strongest coalitions are formed between the right-wing groups EFDD and ECR, as well as ENL and NI. At first, this might come as a surprise, since these groups do not form strong coalitions in the European Parliament, as can be seen in Figs FIGREF36 and FIGREF37 . On the other hand, the MEPs from these groups are very active Twitter users. As previously stated, MEPs from ENL and EFDD post the largest number of retweets. Moreover, 63% of the retweets outside of ENL are retweets of NI. This effect is even more pronounced with MEPs from EFDD, whose retweets of ECR account for 74% of their retweets from other groups. In addition to these strong coalitions on the right wing, we find coalition patterns to be very similar to the voting coalitions observed in the European Parliament, seen in Figs FIGREF36 and FIGREF37 . The strongest coalitions, which come immediately after the right-wing coalitions, are between Greens-EFA on the one hand, and GUE-NGL and S&D on the other, as well as ALDE on the one hand, and EPP and S&D on the other. These results corroborate the role of ALDE and Greens-EFA as intermediaries in the European Parliament, not only in the legislative process, but also in the debate on social media. 
To better understand the formation of coalitions in the European Parliament and on Twitter, we examine the strongest cooperation between political groups at three different thresholds. For co-voting coalitions in the European Parliament we choose a high threshold of INLINEFORM0 , a medium threshold of INLINEFORM1 , and a negative threshold of INLINEFORM2 (which corresponds to strong oppositions). In this way we observe the overall patterns of coalition and opposition formation in the European Parliament and in the two specific policy areas. For cooperation on Twitter, we choose a high threshold of INLINEFORM3 , a medium threshold of INLINEFORM4 , and a very low threshold of INLINEFORM5 . The strongest cooperations in the European Parliament over all policy areas are shown in Fig FIGREF42 G. It comes as no surprise that the strongest cooperations are within the groups (in the diagonal). Moreover, we again observe GUE-NGL, S&D, Greens-EFA, ALDE, and EPP as the most cohesive groups. In Fig FIGREF42 H, we observe coalitions forming along the diagonal, which represents the seating order in the European Parliament. Within this pattern, we observe four blocks of coalitions: on the left, between GUE-NGL, S&D, and Greens-EFA; in the center, between S&D, Greens-EFA, ALDE, and EPP; on the right-center between ALDE, EPP, and ECR; and finally, on the far-right between ECR, EFDD, ENL, and NI. Fig FIGREF42 I shows the strongest opposition between groups that systematically disagree in voting. The strongest disagreements are between left- and right-aligned groups, but not between the left-most and right-most groups, in particular, between GUE-NGL and ECR, but also between S&D and Greens-EFA on one side, and ENL and NI on the other. In the area of Economic and monetary system we see a strong cooperation between EPP and S&D (Fig FIGREF42 A), which is on a par with the cohesion of the most cohesive groups (GUE-NGL, S&D, Greens-EFA, ALDE, and EPP), and is above the cohesion of the other groups. As pointed out in the section “sec:coalitionpolicy”, there is a strong separation in two blocks between supporters and opponents of European integration, which is even more clearly observed in Fig FIGREF42 B. On one hand, we observe cooperation between S&D, ALDE, EPP, and ECR, and on the other, cooperation between GUE-NGL, Greens-EFA, EFDD, ENL, and NI. This division in blocks is seen again in Fig FIGREF42 C, which shows the strongest disagreements. Here, we observe two blocks composed of S&D, EPP, and ALDE on one hand, and GUE-NGL, EFDD, ENL, and NI on the other, which are in strong opposition to each other. In the area of State and Evolution of the Union we again observe a strong division in two blocks (see Fig FIGREF42 E). This is different to the Economic and monetary system, however, where we observe a far-left and far-right cooperation, where the division is along the traditional left-right axis. The patterns of coalitions forming on Twitter closely resemble those in the European Parliament. In Fig FIGREF42 J we see that the strongest degrees of cooperation on Twitter are within the groups. The only group with low cohesion is the NI, whose members have only seven retweets between them. The coalitions on Twitter follow the seating order in the European Parliament remarkably well (see Fig FIGREF42 K). What is striking is that the same blocks form on the left, center, and on the center-right, both in the European Parliament and on Twitter. 
The largest difference between the coalitions in the European Parliament and on Twitter is on the far-right, where we observe ENL and NI as isolated blocks. The results shown in Fig FIGREF44 quantify the extent to which communication in one social context (Twitter) can explain cooperation in another social context (co-voting in the European Parliament). A positive value indicates that the matching behavior in the retweet network is similar to the one in the co-voting network, specific for an individual policy area. On the other hand, a negative value implies a negative “correlation” between the retweeting and co-voting of MEPs in the two different contexts. The bars in Fig FIGREF44 correspond to the coefficients from the edge covariate terms of the ERGM, describing the relationship between the retweeting and co-voting behavior of MEPs. The coefficients are aggregated for individual policy areas by means of a meta-analysis. Overall, we observe a positive correlation between retweeting and co-voting, which is significantly different from zero. The strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets. Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union. The only exception, with a significantly negative coefficient, is the area Economic and monetary system. This implies that in the area Economic and monetary system we observe a significant deviation from the usual co-voting patterns. Results from section “sec:coalitionpolicy”, confirm that this is indeed the case. Especially noteworthy are the coalitions between GUE-NGL and Greens-EFA on the left wing, and EFDD and ENL on the right wing. In the section “sec:coalitionpolicy” we interpret these results as a combination of Euroscepticism on both sides, motivated on the left by a skeptical attitude towards the market orientation of the EU, and on the right by a reluctance to give up national sovereignty. Discussion We study cohesion and coalitions in the Eighth European Parliament by analyzing, on one hand, MEPs' co-voting tendencies and, on the other, their retweeting behavior. We reveal that the most cohesive political group in the European Parliament, when it comes to co-voting, is Greens-EFA, closely followed by S&D and EPP. This is consistent with what VoteWatch BIBREF37 reported for the Seventh European Parliament. The non-aligned (NI) come out as the least cohesive group, followed by the Eurosceptic EFDD. Hix and Noury BIBREF11 also report that nationalists and Eurosceptics form the least cohesive groups in the Sixth European Parliament. We reaffirm most of these results with both of the two employed methodologies. The only point where the two methodologies disagree is in the level of cohesion for the left-wing GUE-NGL, which is portrayed by ERGM as a much less cohesive group, due to their relatively lower attendance rate. The level of cohesion of the political groups is quite stable across different policy areas and similar conclusions apply. On Twitter we can see results that are consistent with the RCV results for the left-to-center political spectrum. The exception, which clearly stands out, is the right-wing groups ENL and EFDD that seem to be the most cohesive ones. This is the direct opposite of what was observed in the RCV data. 
We speculate that this phenomenon can be attributed to the fact that European right-wing groups, on a European but also on a national level, rely to a large degree on social media to spread their narratives critical of European integration. We observed the same phenomenon recently during the Brexit campaign BIBREF38 . Along our interpretation the Brexit was “won” to some extent due to these social media activities, which are practically non-existent among the pro-EU political groups. The fact that ENL and EFDD are the least cohesive groups in the European Parliament can be attributed to their political focus. It seems more important for the group to agree on its anti-EU stance and to call for independence and sovereignty, and much less important to agree on other issues put forward in the parliament. The basic pattern of coalition formation, with respect to co-voting, can already be seen in Fig FIGREF2 A: the force-based layout almost completely corresponds to the seating order in the European Parliament (from the left- to the right-wing groups). A more thorough examination shows that the strongest cooperation can be observed, for both methodologies, between EPP, S&D, and ALDE, where EPP and S&D are the two largest groups, while the liberal ALDE plays the role of an intermediary in this context. On the other hand, the role of an intermediary between the far-left GUE-NGL and its center-left neighbor, S&D, is played by the Greens-EFA. These three parties also form a strong coalition in the European Parliament. On the far right of the spectrum, the non-aligned, EFDD, and ENL form another coalition. This behavior was also observed by Hix et al. BIBREF12 , stating that alignments on the left and right have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the European Parliament. When looking at the policy area Economic and monetary system, we see the same coalitions. However, interestingly, EFDD, ENL, and NI often co-vote with the far-left GUE-NGL. This can be attributed to a certain degree of Euroscepticism on both sides: as a criticism of capitalism, on one hand, and as the main political agenda, on the other. This pattern was also discovered by Hix et al. BIBREF12 , who argued that these coalitions emerge from a form of government-opposition dynamics, rooted at the national level, but also reflected at the European level. When studying coalitions on Twitter, the strongest coalitions can be observed on the right of the spectrum (between EFDD, ECR, ENL, and NI). This is, yet again, in contrast to what was observed in the RCV data. The reason lies in the anti-EU messages they tend to collectively spread (retweet) across the network. This behavior forms strong retweet ties, not only within, but also between, these groups. For example, MEPs of EFDD mainly retweet MEPs from ECR (with the exception of MEPs from their own group). In contrast to these right-wing coalitions, we find the other coalitions to be consistent with what is observed in the RCV data. The strongest coalitions on the left-to-center part of the axis are those between GUE-NGL, Greens-EFA, and S&D, and between S&D, ALDE, and EPP. These results reaffirm the role of Greens-EFA and ALDE as intermediaries, not only in the European Parliament but also in the debates on social media. 
Last, but not least, with the ERGM methodology we measure the extent to which the retweet network can explain the co-voting activities in the European Parliament. We compute this for each policy area separately and also over all RCVs. We conclude that the retweet network indeed matches the co-voting behavior, with the exception of one specific policy area. In the area Economic and monetary system, the links in the (overall) retweet network do not match the links in the co-voting network. Moreover, the negative coefficients imply a radically different formation of coalitions in the European Parliament. This is consistent with the results in Figs FIGREF36 and FIGREF37 (the left-hand panels), and is also observed in Fig FIGREF42 (the top charts). From these figures we see that in this particular case, the coalitions are also formed between the right-wing groups and the far-left GUE-NGL. As already explained, we attribute this to the degree of Euroscepticism that these groups share on this particular policy issue. Conclusions In this paper we analyze (co-)voting patterns and social behavior of members of the European Parliament, as well as the interaction between these two systems. More precisely, we analyze a set of 2535 roll-call votes as well as the tweets and retweets of members of the MEPs in the period from October 2014 to February 2016. The results indicate a considerable level of correlation between these two complex systems. This is consistent with previous findings of Cherepnalkoski et al. BIBREF22 , who reconstructed the adherence of MEPs to their respective political or national group solely from their retweeting behavior. We employ two different methodologies to quantify the co-voting patterns: Krippendorff's INLINEFORM0 and ERGM. They were developed in different fields of research, use different techniques, and are based on different assumptions, but in general they yield consistent results. However, there are some differences which have consequences for the interpretation of the results. INLINEFORM0 is a measure of agreement, designed as a generalization of several specialized measures, that can compare different numbers of observations, in our case roll-call votes. It only considers yes/no votes. Absence and abstention by MEPs is ignored. Its baseline ( INLINEFORM1 ), i.e., co-voting by chance, is computed from the yes/no votes of all MEPs on all RCVs. ERGMs are used in social-network analyses to determine factors influencing the edge formation process. In our case an edge between two MEPs is formed when they cast the same yes/no vote within a RCV. It is assumed that a priori each MEP can form a link with any other MEP. No assumptions about the presence or absence of individual MEPs in a voting session are made. Each RCV is analyzed as a separate binary network. The node set is thereby kept constant for each RCV network. While the ERGM departs from the originally observed network, where MEPs who didn't vote or abstained appear as isolated nodes, links between these nodes are possible within the network sampling process which is part of the ERGM optimization process. The results of several RCVs are aggregated by means of the meta-analysis approach. The baseline (ERGM coefficients INLINEFORM0 ), i.e., co-voting by chance, is computed from a large sample of randomly generated networks. These two different baselines have to be taken into account when interpreting the results of INLINEFORM0 and ERGM. In a typical voting session, 25% of the MEPs are missing or abstaining. 
When assessing the cohesion of political groups, all Alpha values are well above the baseline, and so is their average. The average ERGM cohesion coefficients, on the other hand, are around the baseline. The difference is even more pronounced for groups with higher non-attendance/abstention rates like GUE-NGL (34%) and NI (40%). When assessing the strength of coalitions between pairs of groups, Alpha values are balanced around the baseline, while the ERGM coefficients are mostly negative. The ordering of coalitions from the strongest to the weakest is therefore different when groups with high non-attendance/abstention rates are involved. The choice of the methodology to assess cohesion and coalitions is not obvious. Roll-call voting is used for decisions which demand a simple majority only. One might, however, argue that non-attendance/abstention corresponds to a no vote, or that absence is used strategically. Also, the importance of individual votes, i.e., how high the subject is on the agenda of a political group, affects their attendance, and consequently the perception of their cohesion and their potential to act as a reliable coalition partner. Acknowledgments This work was supported in part by the EC projects SIMPOL (no. 610704) and DOLFINS (no. 640772), and by the Slovenian ARRS programme Knowledge Technologies (no. P2-103).
Yes
4170ed011b02663f5b1b1a3c1f0415b7abfaa85d
4170ed011b02663f5b1b1a3c1f0415b7abfaa85d_0
Q: What is the relationship between the co-voting and retweeting patterns? Text: Abstract We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns. Introduction Social-media activities often reflect phenomena that occur in other complex systems. By observing social networks and the content propagated through these networks, we can describe or even predict the interplay between the observed social-media activities and another complex system that is more difficult, if not impossible, to monitor. There are numerous studies reported in the literature that successfully correlate social-media activities to phenomena like election outcomes BIBREF0 , BIBREF1 or stock-price movements BIBREF2 , BIBREF3 . In this paper we study the cohesion and coalitions exhibited by political groups in the Eighth European Parliament (2014–2019). We analyze two entirely different aspects of how the Members of the European Parliament (MEPs) behave in policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting (i.e., endorsing) behavior. We use two diverse datasets in the analysis: the roll-call votes and the Twitter data. A roll-call vote (RCV) is a vote in the parliament in which the names of the MEPs are recorded along with their votes. The RCV data is available as part of the minutes of the parliament's plenary sessions. From this perspective, cohesion is seen as the tendency to co-vote (i.e., cast the same vote) within a group, and a coalition is formed when members of two or more groups exhibit a high degree of co-voting on a subject. The second dataset comes from Twitter. It captures the retweeting behavior of MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between the groups) from a completely different perspective. With over 300 million monthly active users and 500 million tweets posted daily, Twitter is one of the most popular social networks. Twitter allows its users to post short messages (tweets) and to follow other users. 
A user who follows another user is able to read his/her public tweets. Twitter also supports other types of interaction, such as user mentions, replies, and retweets. Of these, retweeting is the most important activity as it is used to share and endorse content created by other users. When a user retweets a tweet, the information about the original author as well as the tweet's content are preserved, and the tweet is shared with the user's followers. Typically, users retweet content that they agree with and thus endorse the views expressed by the original tweeter. We apply two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 which measures the agreement among observers, or voters in our case. The second one is based on Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach and is often used in social-network analyses. Even though these two methodologies come with two different sets of techniques and are based on different assumptions, they provide consistent results. The main contributions of this paper are as follows: (i) We give general insights into the cohesion of political groups in the Eighth European Parliament, both overall and across different policy areas. (ii) We explore whether coalitions are formed in the same way for different policy areas. (iii) We explore to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. (iv) We employ two statistically sound methodologies and examine the extent to which the results are sensitive to the choice of methodology. While the results are mostly consistent, we show that the difference are due to the different treatment of non-attending and abstaining MEPs by INLINEFORM0 and ERGM. The most novel and interesting aspect of our work is the relationship between the co-voting and the retweeting patterns. The increased use of Twitter by MEPs on days with a roll-call vote session (see Fig FIGREF1 ) is an indicator that these two processes are related. In addition, the force-based layouts of the co-voting network and the retweet network reveal a very similar structure on the left-to-center side of the political spectrum (see Fig FIGREF2 ). They also show a discrepancy on the far-right side of the spectrum, which calls for a more detailed analysis. Related work In this paper we study and relate two very different aspects of how MEPs behave in policy-making processes. First, we look at their co-voting behavior, and second, we examine their retweeting patterns. Thus, we draw related work from two different fields of science. On one hand, we look at how co-voting behavior is analyzed in the political-science literature and, on the other, we explore how Twitter is used to better understand political and policy-making processes. The latter has been more thoroughly explored in the field of data mining (specifically, text mining and network analysis). To the best of our knowledge, this is the first paper that studies legislative behavior in the Eighth European Parliament. The legislative behavior of the previous parliaments was thoroughly studied by Hix, Attina, and others BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . These studies found that voting behavior is determined to a large extent—and when viewed over time, increasingly so—by affiliation to a political group, as an organizational reflection of the ideological position. 
The authors found that the cohesion of political groups in the parliament has increased, while nationality has been less and less of a decisive factor BIBREF12 . The literature also reports that a split into political camps on the left and right of the political spectrum has recently replaced the `grand coalition' between the two big blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. The authors conclude that coalitions are to a large extent formed along the left-to-right axis BIBREF12 . In this paper we analyze the roll-call vote data published in the minutes of the parliament's plenary sessions. For a given subject, the data contains the vote of each MEP present at the respective sitting. Roll-call vote data from the European Parliament has already been extensively studied by other authors, most notably by Hix et al. BIBREF10 , BIBREF13 , BIBREF11 . To be able to study the cohesion and coalitions, authors like Hix, Attina, and Rice BIBREF6 , BIBREF13 , BIBREF14 defined and employed a variety of agreement measures. The most prominent measure is the Agreement Index proposed by Hix et al. BIBREF13 . This measure computes the agreement score from the size of the majority class for a particular vote. The Agreement Index, however, exhibits two drawbacks: (i) it does not account for co-voting by chance, and (ii) without a proper adaptation, it does not accommodate the scenario in which the agreement is to be measured between two different political groups. We employ two statistically sound methodologies developed in two different fields of science. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 . INLINEFORM1 is a measure of the agreement among observers, coders, or measuring instruments that assign values to items or phenomena. It compares the observed agreement to the agreement expected by chance. INLINEFORM2 is used to measure the inter- and self-annotator agreement of human experts when labeling data, and the performance of classification models in machine learning scenarios BIBREF15 . In addition to INLINEFORM3 , we employ Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach, often used in social-network analyses. ERGM can be employed to investigate how different network statistics (e.g., number of edges and triangles) or external factors (e.g., political group membership) govern the network-formation process. The second important aspect of our study is related to analyzing the behavior of participants in social networks, specifically Twitter. Twitter is studied by researchers to better understand different political processes, and in some cases to predict their outcomes. Eom et al. BIBREF1 consider the number of tweets by a party as a proxy for the collective attention to the party, explore the dynamics of the volume, and show that this quantity contains information about an election's outcome. Other studies BIBREF16 reach similar conclusions. Conover et al. BIBREF17 predicted the political alignment of Twitter users in the run-up to the 2010 US elections based on content and network structure. They analyzed the polarization of the retweet and mention networks for the same elections BIBREF18 . Borondo et al. BIBREF19 analyzed user activity during the Spanish presidential elections. They additionally analyzed the 2012 Catalan elections, focusing on the interplay between the language and the community structure of the network BIBREF20 . 
Most existing research, as Larsson points out BIBREF21 , focuses on the online behavior of leading political figures during election campaigns. This paper continues our research on communities that MEPs (and their followers) form on Twitter BIBREF22 . The goal of our research was to evaluate the role of Twitter in identifying communities of influence when the actual communities are known. We represent the influence on Twitter by the number of retweets that MEPs “receive”. We construct two networks of influence: (i) core, which consists only of MEPs, and (ii) extended, which also involves their followers. We compare the detected communities in both networks to the groups formed by the political, country, and language membership of MEPs. The results show that the detected communities in the core network closely match the political groups, while the communities in the extended network correspond to the countries of residence. This provides empirical evidence that analyzing retweet networks can reveal real-world relationships and can be used to uncover hidden properties of the networks. Lazer BIBREF23 highlights the importance of network-based approaches in political science in general by arguing that politics is a relational phenomenon at its core. Some researchers have adopted the network-based approach to investigate the structure of legislative work in the US Congress, including committee and sub-committee membership BIBREF24 , bill co-sponsoring BIBREF25 , and roll-call votes BIBREF26 . More recently, Dal Maso et al. BIBREF27 examined the community structure with respect to political coalitions and government structure in the Italian Parliament. Scherpereel et al. BIBREF28 examined the constituency, personal, and strategic characteristics of MEPs that influence their tweeting behavior. They suggested that Twitter's characteristics, like immediacy, interactivity, spontaneity, personality, and informality, are likely to resonate with political parties across Europe. By fitting regression models, the authors find that MEPs from incohesive groups have a greater tendency to retweet. In contrast to most of these studies, we focus on the Eighth European Parliament, and more importantly, we study and relate two entirely different behavioral aspects, co-voting and retweeting. The goal of this research is to better understand the cohesion and coalition formation processes in the European Parliament by quantifying and comparing the co-voting patterns and social behavior. Methods In this section we present the methods to quantify cohesion and coalitions from the roll-call votes and Twitter activities. Co-voting measured by agreement We first show how the co-voting behaviour of MEPs can be quantified by a measure of the agreement between them. We treat individual RCVs as observations, and MEPs as independent observers or raters. When they cast the same vote, there is a high level of agreement, and when they vote differently, there is a high level of disagreement. We define cohesion as the level of agreement within a political group, a coalition as a voting agreement between political groups, and opposition as a disagreement between different groups. There are many well-known measures of agreement in the literature. We selected Krippendorff's Alpha-reliability ( INLINEFORM0 ) BIBREF4 , which is a generalization of several specialized measures. It works for any number of observers, and is applicable to different variable types and metrics (e.g., nominal, ordered, interval, etc.). 
In general, Alpha is defined as follows: α = 1 - D_o/D_e, where D_o is the actual disagreement between observers (MEPs), and D_e is the disagreement expected by chance. When observers agree perfectly, α = 1; when the agreement equals the agreement by chance, α = 0; and when the observers disagree systematically, α < 0. The two disagreement measures are defined as follows: D_o = (1/n) Σ_{c,k} o_{ck} δ(c,k) and D_e = (1/(n(n-1))) Σ_{c,k} n_c n_k δ(c,k). The arguments o_{ck}, n_c, n_k, and n are defined below and refer to the values in the coincidence matrix that is constructed from the RCVs data. In roll-call votes, c (and k) is a nominal variable with two possible values: yes and no. δ(c,k) is a difference function between the values of c and k, defined as: δ(c,k) = 0 if c = k, and δ(c,k) = 1 if c ≠ k. The RCVs data has the form of a reliability data matrix with entries v_{ij}, where m is the number of RCVs, N is the number of MEPs, m_i is the number of votes cast in the voting i, and v_{ij} is the actual vote of an MEP j in voting i (yes or no). A coincidence matrix is constructed from the reliability data matrix, and is in general a v-by-v square matrix, where v is the number of possible values of the variable. In our case, where only yes/no votes are relevant, the coincidence matrix is a 2-by-2 matrix with cells o_{yes,yes}, o_{yes,no}, o_{no,yes}, and o_{no,no}, marginal totals n_yes and n_no, and grand total n. A cell o_{ck} accounts for all coincidences from all pairs of MEPs in all RCVs where one MEP has voted c and the other k. n_c and n_k are the totals for each vote outcome, and n is the grand total. The coincidences o_{ck} are computed as: o_{ck} = Σ_i n^i_{ck} / (m_i - 1), where n^i_{ck} is the number of (c,k) pairs in vote i, and m_i is the number of MEPs that voted in i. When computing n^i_{ck}, each pair of votes is considered twice, once as a (c,k) pair, and once as a (k,c) pair. The coincidence matrix is therefore symmetrical around the diagonal, and the diagonal contains all the equal votes. The Alpha agreement is used to measure the agreement between two MEPs or within a group of MEPs. When applied to a political group, α corresponds to the cohesion of the group. The closer α is to 1, the higher the agreement of the MEPs in the group, and hence the higher the cohesion of the group. We propose a modified version of Alpha to measure the agreement between two different groups, A and B. In the case of a voting agreement between political groups, high α is interpreted as a coalition between the groups, whereas negative α indicates political opposition. Suppose A and B are disjoint subsets of all the MEPs, A, B ⊆ M, A ∩ B = ∅. The respective number of votes cast by both group members in vote i is m_i^A and m_i^B. The coincidences are then computed as: o_{ck} = Σ_i n^i_{ck} (m_i^A + m_i^B) / (2 m_i^A m_i^B), where the (c,k) pairs come from different groups, one MEP from A and the other from B. The total number of such pairs in vote i is 2 m_i^A m_i^B. The actual number n^i_{ck} of the pairs is multiplied by (m_i^A + m_i^B) / (2 m_i^A m_i^B) so that the total contribution of vote i to the coincidence matrix is m_i^A + m_i^B. A network-based measure of co-voting In this section we describe a network-based approach to analyzing the co-voting behavior of MEPs.
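Before turning to the network-based approach, here is a minimal computational sketch of the Alpha measure described above, restricted to yes/no votes. It is not the authors' implementation; the data layout (one dict per RCV mapping MEP ids to votes) and the function name are assumptions made for illustration.

```python
from collections import Counter
from itertools import combinations

def krippendorff_alpha_yes_no(rcvs):
    """Krippendorff's Alpha for nominal yes/no roll-call data.

    `rcvs` is a list of roll-call votes; each vote is a dict mapping an MEP id
    to 'yes' or 'no' (absent or abstaining MEPs are simply left out, since
    only yes/no votes enter the measure)."""
    o = Counter()                                  # 2x2 coincidence matrix o[(c, k)]
    for vote in rcvs:
        cast = list(vote.values())
        m_i = len(cast)                            # number of MEPs voting in this RCV
        if m_i < 2:
            continue                               # a single vote yields no pairs
        for a, b in combinations(cast, 2):         # each unordered pair ...
            o[(a, b)] += 1.0 / (m_i - 1)           # ... is counted in both orders
            o[(b, a)] += 1.0 / (m_i - 1)
    n_c = Counter()                                # marginal totals per vote value
    for (c, _), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())                          # grand total
    # Nominal difference function: delta(c, k) = 0 if c == k, 1 otherwise.
    d_o = sum(v for (c, k), v in o.items() if c != k) / n
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e    # alpha = 1 - D_o / D_e

# Three hypothetical MEPs of one group over two roll-call votes.
rcvs = [{"mep1": "yes", "mep2": "yes", "mep3": "no"},
        {"mep1": "no",  "mep2": "no",  "mep3": "no"}]
print(krippendorff_alpha_yes_no(rcvs))             # 0.375
```

The same routine can be restricted to the MEPs of a single political group to obtain a cohesion score, or adapted with the modified cross-group coincidence counts above to score a coalition between two groups.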
For each roll-call vote we form a network, where the nodes in the network are MEPs, and an undirected edge between two MEPs is formed when they cast the same vote. We are interested in the factors that determine the cohesion within political groups and coalition formation between political groups. Furthermore, we investigate to what extent communication in a different social context, i.e., the retweeting behavior of MEPs, can explain the co-voting of MEPs. For this purpose we apply an Exponential Random Graph Model BIBREF5 to individual roll-call vote networks, and aggregate the results by means of a meta-analysis. ERGMs allow us to investigate the factors relevant for the network-formation process. Network metrics, as described in the abundant literature, serve to gain information about the structural properties of the observed network. A model investigating the processes driving the network formation, however, has to take into account that there can be a multitude of alternative networks. If we are interested in the parameters influencing the network formation, we have to consider all possible networks and measure their similarity to the originally observed network. The family of ERGMs builds upon this idea. Assume a random graph Y, in the form of a binary adjacency matrix, made up of a set of N nodes and edges {y_ij : i = 1, ..., N; j = 1, ..., N}, where, similar to a binary choice model, y_ij = 1 if the nodes i and j are connected and y_ij = 0 if not. Since network data is by definition relational and thus violates assumptions of independence, classical binary choice models, like logistic regression, cannot be applied in this context. Within an ERGM, the probability for a given network y is modelled by P(Y = y) = exp(θᵀ g(y)) / κ(θ), where θ is the vector of parameters and g(y) is the vector of network statistics (counts of network substructures), which are a function of the adjacency matrix y. κ(θ) is a normalization constant corresponding to the sample of all possible networks, which ensures a proper probability distribution. Evaluating the above expression allows us to make assertions about whether and how specific nodal attributes influence the network-formation process. These nodal attributes can be endogenous (dyad-dependent parameters) to the network, like the in- and out-degrees of a node, or exogenous (dyad-independent parameters), such as the party affiliation or the country of origin in our case. An alternative formulation of the ERGM provides the interpretation of the coefficients. We introduce the change statistic, which is defined as the change in the network statistics when an edge between nodes i and j is added or not. If g(y_ij^+) and g(y_ij^-) denote the vectors of counts of network substructures when the edge is added or not, the change statistic is defined as follows: δg_ij(y) = g(y_ij^+) - g(y_ij^-). With this at hand it can be shown that the distribution of the variable y_ij, conditional on the rest of the graph y_ij^c, corresponds to: logit(P(y_ij = 1 | y_ij^c)) = θᵀ δg_ij(y). This implies on the one hand that the probability depends on y_ij^c via the change statistic δg_ij(y), and on the other hand, that each coefficient within the vector θ represents an increase in the conditional log-odds (logit) of the graph when the corresponding element in the vector g(y) increases by one. The need to condition the probability on the rest of the network can be illustrated by a simple example.
The addition (removal) of a single edge alters the network statistics. If a network has only the edges (i, j) and (j, k), the creation of an edge (i, k) would not only add an additional edge but would also alter the count for other network substructures included in the model. In this example, the creation of the edge (i, k) also increases the number of triangles by one. The coefficients are transformed into probabilities with the logistic function: p = exp(θ) / (1 + exp(θ)). For example, in the context of roll-call votes, the probability that an additional co-voting edge is formed between two nodes (MEPs) of the same political group is computed with that equation. In this context, the nodematch (nodemix) coefficients of the ERGM (described in detail below) therefore refer to the degree of homophilous (heterophilous) matching of MEPs with regard to their political affiliation, or, expressed differently, the propensity of MEPs to co-vote with other MEPs of their respective political group or another group. A positive coefficient reflects an increased chance, all other parameters unchanged, that an edge is formed between two nodes with the respective properties, like group affiliation. Or, put differently, a positive coefficient implies that the probability of observing a network with a higher number of corresponding pairs, relative to the hypothetical baseline network, is higher than the probability of observing the baseline network itself BIBREF31. For an intuitive interpretation, a log-odds value of 0 corresponds to the even-chance probability of 0.5; positive log-odds correspond to an increase of the probability above 0.5, whereas negative log-odds correspond to a decrease below 0.5. The computational challenge of estimating ERGMs is to a large degree due to the estimation of the normalizing constant. The number of possible networks is already extremely large for very small networks and the computation is simply not feasible. Therefore, an appropriate sample has to be found, ideally covering the most probable areas of the probability distribution. For this we make use of a method from the Markov Chain Monte Carlo (MCMC) family, namely the Metropolis-Hastings algorithm. The idea behind this algorithm is to generate and sample highly weighted random networks departing from the observed network. The Metropolis-Hastings algorithm is an iterative algorithm which samples from the space of possible networks by randomly adding or removing edges from the starting network conditional on its density. If the likelihood, in the ERGM context also denoted as the weight, of the newly generated network is higher than that of the departure network, it is retained, otherwise it is discarded. In the former case, the algorithm starts anew from the newly generated network. Otherwise, the departure network is used again. Repeating this procedure sufficiently often and summing the weights associated with the stored (sampled) networks makes it possible to compute an approximation of the normalizing constant κ(θ) in the ERGM probability above. The algorithm starts sampling from the originally observed network y. The optimization of the coefficients is done simultaneously, likewise with the Metropolis-Hastings algorithm. At the beginning, starting values have to be supplied. For the study at hand we used the “ergm” library from the statistical R software package BIBREF5, implementing the Gibbs-sampling algorithm BIBREF32, which is a special case of the Metropolis-Hastings algorithm outlined above.
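As a rough illustration of the change statistic and the logistic transformation just described, the sketch below computes the conditional log-odds of one co-voting edge under two toy statistics (total edge count and same-group matches). The network, the group labels, and the coefficient values are hypothetical assumptions, and this is not the estimation code of the “ergm” package.

```python
import numpy as np

def network_stats(y, group):
    """Toy statistics vector g(y): total number of edges and number of
    edges whose endpoints belong to the same political group."""
    n = y.shape[0]
    edges, same_group = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if y[i, j]:
                edges += 1
                same_group += int(group[i] == group[j])
    return np.array([edges, same_group], dtype=float)

def conditional_log_odds(y, i, j, theta, group):
    """theta . (g(y with edge ij) - g(y without edge ij)): the conditional
    log-odds that the edge (i, j) is present, given the rest of the graph."""
    y_plus, y_minus = y.copy(), y.copy()
    y_plus[i, j] = y_plus[j, i] = 1
    y_minus[i, j] = y_minus[j, i] = 0
    delta_g = network_stats(y_plus, group) - network_stats(y_minus, group)
    return float(theta @ delta_g)

# Four hypothetical MEPs, two per political group, in one co-voting network.
y = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]])
group = ["EPP", "EPP", "S&D", "S&D"]
theta = np.array([-1.0, 0.8])                    # assumed coefficients: edges, same-group match
log_odds = conditional_log_odds(y, 0, 1, theta, group)
probability = 1.0 / (1.0 + np.exp(-log_odds))    # logistic transformation
print(log_odds, probability)                     # -0.2 and roughly 0.45
```

In an actual fit, θ would be estimated by the MCMC procedure described above rather than assumed.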
In order to answer our question of the importance of the factors which drive the network-formation process in the roll-call co-voting network, the ERGM is specified with the following parameters: nodematch country: This parameter adds one network statistic to the model, i.e., the number of edges (i, j) where country_i = country_j. The coefficient indicates the homophilous mixing behavior of MEPs with respect to their country of origin. In other words, this coefficient indicates how relevant nationality is in the formation of edges in the co-voting network. nodematch national party: This parameter adds one network statistic to the model: the number of edges (i, j) with party_i = party_j. The coefficient indicates the homophilous mixing behavior of the MEPs with regard to their party affiliation at the national level. In the context of this study, this coefficient can be interpreted as an indicator for within-party cohesion at the national level. nodemix EP group: This parameter adds one network statistic for each pair of European political groups. These coefficients shed light on the degree of coalitions between different groups as well as the within-group cohesion. Given that there are nine groups in the European Parliament, this parameter adds in total 81 statistics to the model. edge covariate Twitter: This parameter corresponds to a square matrix with the dimension of the adjacency matrix of the network, which corresponds to the number of mutual retweets between the MEPs. It provides insight into the extent to which communication in one social context (Twitter) can explain cooperation in another social context (co-voting in RCVs); a counting sketch of these four kinds of statistics is given below. An ERGM as specified above is estimated for each of the 2535 roll-call votes. Each roll-call vote is thereby interpreted as a binary network and as an independent study. It is assumed that a priori each MEP could possibly form an edge with each other MEP in the context of a roll-call vote. Assumptions about the presence or absence of individual MEPs in a voting session are not made. In other words, the dimensions of the adjacency matrix (the node set), and therefore the distribution from which new networks are drawn, are kept constant over all RCVs and therefore for every ERGM. The ERGM results therefore implicitly generalize to the case where potentially all MEPs are present and could be voting. Not voting is incorporated implicitly by the disconnectedness of a node. The coefficients of the 2535 roll-call vote studies are aggregated by means of a meta-analysis approach proposed by Lubbers BIBREF33 and Snijders et al. BIBREF34. We are interested in average effect sizes of different matching patterns over different topics and overall. Considering the number of RCVs, it seems straightforward to interpret the different RCV networks as multiplex networks and collapse them into one weighted network, which could then be analysed by means of a valued ERGM BIBREF35. There are, however, two reasons why we chose the meta-analysis approach instead. First, aggregating the RCV data results in an extremely dense network, leading to severe convergence (degeneracy) problems for the ERGM. Second, the RCV data contains information about the different policy areas the individual votes were about. Since we are interested in how the coalition formation in the European Parliament differs over different areas, a method is needed that allows for an ex-post analysis of the corresponding results. We therefore opted for the meta-analysis approach by Lubbers and Snijders et al.
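The following sketch counts, for a single co-voting network, the four kinds of dyad-level statistics listed above (country match, national-party match, EP-group mix, and the Twitter edge covariate). It only illustrates what enters the model specification: the data values are hypothetical, and the actual estimation in the study is done with the “ergm” R package, not with this code.

```python
import numpy as np
from collections import Counter

def rcv_model_statistics(y, country, party, ep_group, retweets):
    """Counts of the dyad-level statistics in the RCV model specification:
    nodematch(country), nodematch(national party), nodemix(EP group), and the
    Twitter edge covariate (sum of mutual retweets over co-voting edges)."""
    n = y.shape[0]
    stats = Counter()
    for i in range(n):
        for j in range(i + 1, n):
            if not y[i, j]:
                continue                                  # only co-voting edges count
            stats["nodematch.country"] += int(country[i] == country[j])
            stats["nodematch.party"] += int(party[i] == party[j])
            pair = tuple(sorted((ep_group[i], ep_group[j])))
            stats["nodemix.group.%s-%s" % pair] += 1
            stats["edgecov.twitter"] += retweets[i, j]
    return stats

# Hypothetical data for three MEPs: MEPs 0 and 1 co-voted and share an EP group.
y = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])
country = ["SI", "DE", "FR"]
party = ["P1", "P2", "P3"]
ep_group = ["EPP", "EPP", "S&D"]
retweets = np.array([[0, 3, 0], [3, 0, 1], [0, 1, 0]])    # mutual retweet counts
print(rcv_model_statistics(y, country, party, ep_group, retweets))
```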
The meta-analysis approach allows us to summarize the results by decomposing the coefficients into average effects and (class) subject-specific deviations. The different ERGM runs for each RCV are thereby regarded as different studies with identical samples that are combined to obtain a general overview of effect sizes. The meta-regression model is defined as: θ_c = μ_θ + u_c + ε_c. Here θ_c is a parameter estimate for class c, and μ_θ is the average coefficient. u_c denotes the normally distributed deviation of the class c with a mean of 0 and a variance of σ². ε_c is the estimation error of the parameter value θ_c from the ERGM. The meta-analysis model is fitted by an iterated, weighted, least-squares model in which the observations are weighted by the inverse of their variances. For the overall nodematch between political groups, we weighted the coefficients by group sizes. The results from the meta-analysis can be interpreted as if they stemmed from an individual ERGM run. In our study, the meta-analysis was performed using the RSiena library BIBREF36, which implements the method proposed by Lubbers and Snijders et al. BIBREF33, BIBREF34. Measuring cohesion and coalitions on Twitter The retweeting behavior of MEPs is captured by their retweet network. Each MEP active on Twitter is a node in this network. An edge in the network between two MEPs exists when one MEP retweeted the other. The weight of the edge is the number of retweets between the two MEPs. The resulting retweet network is an undirected, weighted network. We measure the cohesion of a political group g as the average retweets, i.e., the ratio of the number of retweets between the MEPs in the group, RT_g, to the number of MEPs in the group, N_g. The higher the ratio, the more each MEP (on average) retweets the MEPs from the same political group, hence, the higher the cohesion of the political group. The definition of the average retweets (RT) of a group g is: RT(g) = RT_g / N_g. This measure of cohesion captures the aggregate retweeting behavior of the group. If we consider retweets as endorsements, a larger number of retweets within the group is an indicator of agreement between the MEPs in the group. It does not take into account the patterns of retweeting within the group, thus ignoring the social sub-structure of the group. This is a potentially interesting direction and we leave it for future work. We employ an analogous measure for the strength of coalitions in the retweet network. The coalition strength between two groups g1 and g2 is the ratio of the number of retweets from one group to the other (but not within groups), RT_{g1,g2}, to the total number of MEPs in both groups, N_{g1} + N_{g2}. The definition of the average retweets (RT) between groups g1 and g2 is: RT(g1, g2) = RT_{g1,g2} / (N_{g1} + N_{g2}). Cohesion of political groups In this section we first report on the level of cohesion of the European Parliament's groups by analyzing the co-voting through the agreement and ERGM measures. Next, we explore two important policy areas, namely Economic and monetary system and State and evolution of the Union. Finally, we analyze the cohesion of the European Parliament's groups on Twitter.
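A minimal sketch of the two average-retweet measures defined above, before moving on to the cohesion results; the retweet counts and MEP identifiers in it are hypothetical.

```python
def rt_cohesion(retweets, members):
    """Average retweets within a group: retweets among its members divided by
    the number of MEPs in the group."""
    members = set(members)
    within = sum(w for (u, v), w in retweets.items() if u in members and v in members)
    return within / len(members)

def rt_coalition(retweets, members_g1, members_g2):
    """Average retweets between two groups: cross-group retweets divided by
    the total number of MEPs in both groups."""
    g1, g2 = set(members_g1), set(members_g2)
    between = sum(w for (u, v), w in retweets.items()
                  if (u in g1 and v in g2) or (u in g2 and v in g1))
    return between / (len(g1) + len(g2))

# Hypothetical undirected retweet edges with weights (numbers of retweets).
retweets = {("mep1", "mep2"): 5, ("mep1", "mep3"): 2, ("mep3", "mep4"): 7}
print(rt_cohesion(retweets, ["mep1", "mep2"]))                      # 2.5
print(rt_coalition(retweets, ["mep1", "mep2"], ["mep3", "mep4"]))   # 0.5
```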
Existing research by Hix et al. BIBREF10, BIBREF13, BIBREF11 shows that the cohesion of the European political groups has been rising since the 1990s, and the level of cohesion remained high even after the EU's enlargement in 2004, when the number of MEPs increased from 626 to 732. We measure the co-voting cohesion of the political groups in the Eighth European Parliament using Krippendorff's Alpha—the results are shown in Fig FIGREF30 (panel Overall). The Greens-EFA have the highest cohesion of all the groups. This finding is in line with an analysis of previous compositions of the Fifth and Sixth European Parliaments by Hix and Noury BIBREF11, and the Seventh by VoteWatch BIBREF37. They are closely followed by the S&D and EPP. Hix and Noury reported on the high cohesion of S&D in the Fifth and Sixth European Parliaments, and we also observe this in the current composition. They also reported a slightly less cohesive EPP-ED. This group split in 2009 into EPP and ECR. VoteWatch reports EPP to have cohesion on a par with Greens-EFA and S&D in the Seventh European Parliament. The cohesion level we observe in the current European Parliament is also similar to the level of Greens-EFA and S&D. The catch-all group of the non-aligned (NI) comes out as the group with the lowest cohesion. In addition, among the least cohesive groups in the European Parliament are the Eurosceptic EFDD, which includes the British UKIP led by Nigel Farage, and the ENL, whose largest party is the French National Front, led by Marine Le Pen. Similarly, Hix and Noury found that the least cohesive groups in the Seventh European Parliament are the nationalists and Eurosceptics. The Eurosceptic IND/DEM, which participated in the Sixth European Parliament, transformed into the current EFDD, while the nationalistic UEN was dissolved in 2009. We also measure the voting cohesion of the European Parliament groups using an ERGM, a network-based method—the results are shown in Fig FIGREF31 (panel Overall). The cohesion results obtained with ERGM are comparable to the results based on agreement. In this context, the parameters estimated by the ERGM refer to the matching of MEPs who belong to the same political group (one parameter per group). The parameters measure the homophilous matching between MEPs who have the same political affiliation. A positive value for the estimated parameter indicates that the co-voting of MEPs from that group is greater than what is expected by chance, where the expected number of co-voting links by chance in a group is taken to be uniformly random. A negative value indicates that there are fewer co-voting links within a group than expected by chance. Even though Alpha and ERGM compute scores relative to what is expected by chance, they refer to different interpretations of chance. Alpha's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group, knowing the votes of these MEPs on all RCVs. ERGM's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group on a given RCV, knowing the network-related properties of the co-voting network on that particular RCV. The main difference between Alpha and ERGM, though, is the treatment of non-voting and abstained MEPs. Alpha considers only the yes/no votes, and consequently, agreements by the voting MEPs of the same groups are considerably higher than co-voting by chance.
ERGM, on the other hand, always considers all MEPs, and non-voting and abstained MEPs are treated as disconnected nodes. The level of co-voting by chance is therefore considerably lower, since there is often a large fraction of MEPs that do not attend or abstain. As with Alpha, Greens-EFA, S&D, and EPP exhibit the highest cohesion, even though their ranking is permuted when compared to the ranking obtained with Alpha. At the other end of the scale, we observe the same situation as with Alpha. The non-aligned members NI have the lowest cohesion, followed by EFDD and ENL. The only place where the two methods disagree is the level of cohesion of GUE-NGL. The Alpha attributes a rather high level of cohesion to GUE-NGL, on a par with ALDE, whereas the ERGM attributes a much lower cohesion to them. The reason for this difference is the relatively high abstention rate of GUE-NGL. Whereas the overall fraction of non-attending and abstaining MEPs across all RCVs and all political groups is 25%, the GUE-NGL abstention rate is 34%. This is reflected in an above-average cohesion by Alpha, where only yes/no votes are considered, and in a relatively lower, below-average cohesion by ERGM. In the latter case, non-attendance is interpreted as non-cohesive voting of a political group as a whole. In addition to the overall cohesion, we also focus on two selected policy areas. The cohesion of the political groups related to these two policy areas is shown in the first two panels in Fig FIGREF30 (Alpha) and Fig FIGREF31 (ERGM). The most important observation is that the level of cohesion of the political groups is very stable across different policy areas. These results are corroborated by both methodologies. Similar to the overall cohesion, the most cohesive political groups are the S&D, Greens-EFA, and EPP. The least cohesive group is the NI, followed by the ENL and EFDD. The two methodologies agree on the level of cohesion for all the political groups, except for GUE-NGL, due to a lower attendance rate. We determine the cohesion of political groups on Twitter by using the average number of retweets between MEPs within the same group. The results are shown in Fig FIGREF33. The right-wing ENL and EFDD come out as the most cohesive groups, while all the other groups have a far lower average number of retweets. MEPs from ENL and EFDD post by far the largest number of retweets (over 240), and at the same time over 94% of their retweets are directed to MEPs from the same group. Moreover, these two groups stand out in the way the retweets are distributed within the group. A large portion of the retweets of EFDD (1755) go to Nigel Farage, the leader of the group. Likewise, a very large portion of retweets of ENL (2324) go to Marine Le Pen, the leader of the group. Farage and Le Pen are by far the two most retweeted MEPs, with the third one having only 666 retweets. Coalitions in the European Parliament Coalition formation in the European Parliament is largely determined by ideological positions, reflected in the degree of cooperation of parties at the national and European levels. The observation of ideological inclinations in the coalition formation within the European Parliament was already made by other authors BIBREF11 and is confirmed in this study. The basic patterns of coalition formation in the European Parliament can already be seen in the co-voting network in Fig FIGREF2 A.
It is remarkable that the degree of attachment between the political groups, which indicates the degree of cooperation in the European Parliament, nearly exactly corresponds to the left-to-right seating order. The liberal ALDE seems to have an intermediary role between the left and right parts of the spectrum in the parliament. Between the extreme-left (GUE-NGL) and center-left (S&D) groups, this function seems to be occupied by Greens-EFA. The non-aligned members NI, as well as the Eurosceptic EFDD and ENL, seem to alternately tip the balance on both poles of the political spectrum. Being ideologically more inclined to vote with other conservative and right-wing groups (EPP, ECR), they sometimes also cooperate with the extreme left-wing group (GUE-NGL), with which they share their Euroscepticism as a common denominator. Figs FIGREF36 and FIGREF37 give a more detailed understanding of the coalition formation in the European Parliament. Fig FIGREF36 displays the degree of agreement or cooperation between political groups measured by Krippendorff's Alpha, whereas Fig FIGREF37 is based on the result from the ERGM. We first focus on the overall results displayed in the right-hand plots of Figs FIGREF36 and FIGREF37. The strongest degrees of cooperation are observed, with both methods, between the two major parties (EPP and S&D) on the one hand, and the liberal ALDE on the other. Furthermore, we see a strong propensity for Greens-EFA to vote with the Social Democrats (5th strongest coalition by Alpha, and 3rd by ERGM) and the GUE-NGL (3rd strongest coalition by Alpha, and 5th by ERGM). These results underline the role of ALDE and Greens-EFA as intermediaries for the larger groups to achieve a majority. Although the two largest groups together have 405 seats, and thus significantly more than the 376 votes needed for a simple majority, the degree of cooperation between the two major groups is ranked only as the fourth strongest by both methods. This suggests that these two political groups find it easier to negotiate deals with smaller counterparts than with the other large group. This observation was also made by Hix et al. BIBREF12, who noted that alignments on the left and right of the political spectrum have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. Next, we focus on the coalition formation within the two selected policy areas. The area State and Evolution of the Union is dominated by cooperation between the two major groups, S&D and EPP, as well as ALDE. We also observe a high degree of cooperation between groups that are generally regarded as integration-friendly, like Greens-EFA and GUE-NGL. We see, particularly in Fig FIGREF36, a relatively high degree of cooperation between groups considered as Eurosceptic, like ECR, EFDD, ENL, and the group of non-aligned members. The dichotomy between supporters and opponents of European integration is even more pronounced within the policy area Economic and Monetary System. In fact, we are taking a closer look specifically at these two areas as they are, at the same time, both contentious and important. Both methods rank the cooperation between S&D and EPP on the one hand, and ALDE on the other, as the strongest. We also observe a certain degree of unanimity among the Eurosceptic and right-wing groups (EFDD, ENL, and NI) in this policy area.
This seems plausible, as these groups were (especially in the aftermath of the global financial crisis and the subsequent European debt crisis) in fierce opposition to further payments to financially troubled member states. However, we also observe a number of strong coalitions that might, at first glance, seem unusual, specifically involving the left-wing group GUE-NGL on the one hand, and the right-wing EFDD, ENL, and NI on the other. These links also show up in the network plot in Fig FIGREF2 A. This might be attributable to a certain degree of Euroscepticism on both sides: rooted in criticism of capitalism on the left, and at least partly a raison d'être on the right. Hix et al. BIBREF11 discovered this pattern as well, and proposed an additional explanation—these coalitions also relate to a form of government-opposition dynamic that is rooted at the national level, but is reflected in voting patterns at the European level. In general, we observe two main differences between the Alpha and ERGM results: the baseline cooperation as estimated by Alpha is higher, and the ordering of coalitions from the strongest to the weakest is not exactly the same. The reason is the same as for the cohesion, namely the different treatment of non-voting and abstaining MEPs. When they are ignored, as by Alpha, the baseline level of inter-group co-voting is higher. When non-attendance and abstention are treated as voting differently, as by ERGM, it is considerably more difficult to achieve co-voting coalitions, especially when there are on average 25% of MEPs that do not attend or abstain. Groups with higher non-attendance rates, such as GUE-NGL (34%) and NI (40%), are less likely to form coalitions, and therefore have relatively lower ERGM coefficients (Fig FIGREF37) than Alpha scores (Fig FIGREF36). The first insight into coalition formation on Twitter can be observed in the retweet network in Fig FIGREF2 B. The ideological left-to-right alignment of the political groups is reflected in the retweet network. Fig FIGREF40 shows the strength of the coalitions on Twitter, as estimated by the number of retweets between MEPs from different groups. The strongest coalitions are formed between the right-wing groups EFDD and ECR, as well as ENL and NI. At first, this might come as a surprise, since these groups do not form strong coalitions in the European Parliament, as can be seen in Figs FIGREF36 and FIGREF37. On the other hand, the MEPs from these groups are very active Twitter users. As previously stated, MEPs from ENL and EFDD post the largest number of retweets. Moreover, 63% of the retweets outside of ENL are retweets of NI. This effect is even more pronounced with MEPs from EFDD, whose retweets of ECR account for 74% of their retweets from other groups. In addition to these strong coalitions on the right wing, we find the coalition patterns to be very similar to the voting coalitions observed in the European Parliament, seen in Figs FIGREF36 and FIGREF37. The strongest coalitions, which come immediately after the right-wing coalitions, are between Greens-EFA on the one hand, and GUE-NGL and S&D on the other, as well as ALDE on the one hand, and EPP and S&D on the other. These results corroborate the role of ALDE and Greens-EFA as intermediaries in the European Parliament, not only in the legislative process, but also in the debate on social media.
To better understand the formation of coalitions in the European Parliament and on Twitter, we examine the strongest cooperation between political groups at three different thresholds. For co-voting coalitions in the European Parliament we choose a high threshold of INLINEFORM0 , a medium threshold of INLINEFORM1 , and a negative threshold of INLINEFORM2 (which corresponds to strong oppositions). In this way we observe the overall patterns of coalition and opposition formation in the European Parliament and in the two specific policy areas. For cooperation on Twitter, we choose a high threshold of INLINEFORM3 , a medium threshold of INLINEFORM4 , and a very low threshold of INLINEFORM5 . The strongest cooperations in the European Parliament over all policy areas are shown in Fig FIGREF42 G. It comes as no surprise that the strongest cooperations are within the groups (in the diagonal). Moreover, we again observe GUE-NGL, S&D, Greens-EFA, ALDE, and EPP as the most cohesive groups. In Fig FIGREF42 H, we observe coalitions forming along the diagonal, which represents the seating order in the European Parliament. Within this pattern, we observe four blocks of coalitions: on the left, between GUE-NGL, S&D, and Greens-EFA; in the center, between S&D, Greens-EFA, ALDE, and EPP; on the right-center between ALDE, EPP, and ECR; and finally, on the far-right between ECR, EFDD, ENL, and NI. Fig FIGREF42 I shows the strongest opposition between groups that systematically disagree in voting. The strongest disagreements are between left- and right-aligned groups, but not between the left-most and right-most groups, in particular, between GUE-NGL and ECR, but also between S&D and Greens-EFA on one side, and ENL and NI on the other. In the area of Economic and monetary system we see a strong cooperation between EPP and S&D (Fig FIGREF42 A), which is on a par with the cohesion of the most cohesive groups (GUE-NGL, S&D, Greens-EFA, ALDE, and EPP), and is above the cohesion of the other groups. As pointed out in the section “sec:coalitionpolicy”, there is a strong separation in two blocks between supporters and opponents of European integration, which is even more clearly observed in Fig FIGREF42 B. On one hand, we observe cooperation between S&D, ALDE, EPP, and ECR, and on the other, cooperation between GUE-NGL, Greens-EFA, EFDD, ENL, and NI. This division in blocks is seen again in Fig FIGREF42 C, which shows the strongest disagreements. Here, we observe two blocks composed of S&D, EPP, and ALDE on one hand, and GUE-NGL, EFDD, ENL, and NI on the other, which are in strong opposition to each other. In the area of State and Evolution of the Union we again observe a strong division in two blocks (see Fig FIGREF42 E). This is different to the Economic and monetary system, however, where we observe a far-left and far-right cooperation, where the division is along the traditional left-right axis. The patterns of coalitions forming on Twitter closely resemble those in the European Parliament. In Fig FIGREF42 J we see that the strongest degrees of cooperation on Twitter are within the groups. The only group with low cohesion is the NI, whose members have only seven retweets between them. The coalitions on Twitter follow the seating order in the European Parliament remarkably well (see Fig FIGREF42 K). What is striking is that the same blocks form on the left, center, and on the center-right, both in the European Parliament and on Twitter. 
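As an aside, the kind of threshold-based reading of the figures described above can be sketched as a simple filter over pairwise Alpha scores; the threshold values and the scores in this snippet are illustrative assumptions, not the ones used in the study.

```python
def strongest_relations(alpha_between, high, negative):
    """Split pairwise Alpha scores into strong coalitions (>= high) and strong
    oppositions (<= negative), given two illustrative thresholds."""
    coalitions = {pair: a for pair, a in alpha_between.items() if a >= high}
    oppositions = {pair: a for pair, a in alpha_between.items() if a <= negative}
    return coalitions, oppositions

# Hypothetical between-group Alpha scores.
alpha_between = {("EPP", "S&D"): 0.28, ("EPP", "ALDE"): 0.41,
                 ("GUE-NGL", "ECR"): -0.32, ("S&D", "ENL"): -0.21}
strong, opposed = strongest_relations(alpha_between, high=0.3, negative=-0.3)
print(strong)    # {('EPP', 'ALDE'): 0.41}
print(opposed)   # {('GUE-NGL', 'ECR'): -0.32}
```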
The largest difference between the coalitions in the European Parliament and on Twitter is on the far right, where we observe ENL and NI as isolated blocks. The results shown in Fig FIGREF44 quantify the extent to which communication in one social context (Twitter) can explain cooperation in another social context (co-voting in the European Parliament). A positive value indicates that the matching behavior in the retweet network is similar to the one in the co-voting network, specific for an individual policy area. On the other hand, a negative value implies a negative “correlation” between the retweeting and co-voting of MEPs in the two different contexts. The bars in Fig FIGREF44 correspond to the coefficients from the edge covariate terms of the ERGM, describing the relationship between the retweeting and co-voting behavior of MEPs. The coefficients are aggregated for individual policy areas by means of a meta-analysis. Overall, we observe a positive correlation between retweeting and co-voting, which is significantly different from zero. The strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets. Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union. The only exception, with a significantly negative coefficient, is the area Economic and monetary system. This implies that in the area Economic and monetary system we observe a significant deviation from the usual co-voting patterns. The results from the earlier analysis of coalitions across policy areas confirm that this is indeed the case. Especially noteworthy are the coalitions between GUE-NGL and Greens-EFA on the left wing, and EFDD and ENL on the right wing. In that analysis we interpret these results as a combination of Euroscepticism on both sides, motivated on the left by a skeptical attitude towards the market orientation of the EU, and on the right by a reluctance to give up national sovereignty. Discussion We study cohesion and coalitions in the Eighth European Parliament by analyzing, on one hand, MEPs' co-voting tendencies and, on the other, their retweeting behavior. We reveal that the most cohesive political group in the European Parliament, when it comes to co-voting, is Greens-EFA, closely followed by S&D and EPP. This is consistent with what VoteWatch BIBREF37 reported for the Seventh European Parliament. The non-aligned (NI) come out as the least cohesive group, followed by the Eurosceptic EFDD. Hix and Noury BIBREF11 also report that nationalists and Eurosceptics form the least cohesive groups in the Sixth European Parliament. We reaffirm most of these results with both employed methodologies. The only point where the two methodologies disagree is the level of cohesion of the left-wing GUE-NGL, which is portrayed by ERGM as a much less cohesive group, due to its relatively lower attendance rate. The level of cohesion of the political groups is quite stable across different policy areas, and similar conclusions apply. On Twitter we can see results that are consistent with the RCV results for the left-to-center political spectrum. The exceptions, which clearly stand out, are the right-wing groups ENL and EFDD, which seem to be the most cohesive ones. This is the direct opposite of what was observed in the RCV data.
We speculate that this phenomenon can be attributed to the fact that European right-wing groups, on a European but also on a national level, rely to a large degree on social media to spread their narratives critical of European integration. We observed the same phenomenon recently during the Brexit campaign BIBREF38 . Along our interpretation the Brexit was “won” to some extent due to these social media activities, which are practically non-existent among the pro-EU political groups. The fact that ENL and EFDD are the least cohesive groups in the European Parliament can be attributed to their political focus. It seems more important for the group to agree on its anti-EU stance and to call for independence and sovereignty, and much less important to agree on other issues put forward in the parliament. The basic pattern of coalition formation, with respect to co-voting, can already be seen in Fig FIGREF2 A: the force-based layout almost completely corresponds to the seating order in the European Parliament (from the left- to the right-wing groups). A more thorough examination shows that the strongest cooperation can be observed, for both methodologies, between EPP, S&D, and ALDE, where EPP and S&D are the two largest groups, while the liberal ALDE plays the role of an intermediary in this context. On the other hand, the role of an intermediary between the far-left GUE-NGL and its center-left neighbor, S&D, is played by the Greens-EFA. These three parties also form a strong coalition in the European Parliament. On the far right of the spectrum, the non-aligned, EFDD, and ENL form another coalition. This behavior was also observed by Hix et al. BIBREF12 , stating that alignments on the left and right have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the European Parliament. When looking at the policy area Economic and monetary system, we see the same coalitions. However, interestingly, EFDD, ENL, and NI often co-vote with the far-left GUE-NGL. This can be attributed to a certain degree of Euroscepticism on both sides: as a criticism of capitalism, on one hand, and as the main political agenda, on the other. This pattern was also discovered by Hix et al. BIBREF12 , who argued that these coalitions emerge from a form of government-opposition dynamics, rooted at the national level, but also reflected at the European level. When studying coalitions on Twitter, the strongest coalitions can be observed on the right of the spectrum (between EFDD, ECR, ENL, and NI). This is, yet again, in contrast to what was observed in the RCV data. The reason lies in the anti-EU messages they tend to collectively spread (retweet) across the network. This behavior forms strong retweet ties, not only within, but also between, these groups. For example, MEPs of EFDD mainly retweet MEPs from ECR (with the exception of MEPs from their own group). In contrast to these right-wing coalitions, we find the other coalitions to be consistent with what is observed in the RCV data. The strongest coalitions on the left-to-center part of the axis are those between GUE-NGL, Greens-EFA, and S&D, and between S&D, ALDE, and EPP. These results reaffirm the role of Greens-EFA and ALDE as intermediaries, not only in the European Parliament but also in the debates on social media. 
Last, but not least, with the ERGM methodology we measure the extent to which the retweet network can explain the co-voting activities in the European Parliament. We compute this for each policy area separately and also over all RCVs. We conclude that the retweet network indeed matches the co-voting behavior, with the exception of one specific policy area. In the area Economic and monetary system, the links in the (overall) retweet network do not match the links in the co-voting network. Moreover, the negative coefficients imply a radically different formation of coalitions in the European Parliament. This is consistent with the results in Figs FIGREF36 and FIGREF37 (the left-hand panels), and is also observed in Fig FIGREF42 (the top charts). From these figures we see that in this particular case, the coalitions are also formed between the right-wing groups and the far-left GUE-NGL. As already explained, we attribute this to the degree of Euroscepticism that these groups share on this particular policy issue. Conclusions In this paper we analyze (co-)voting patterns and social behavior of members of the European Parliament, as well as the interaction between these two systems. More precisely, we analyze a set of 2535 roll-call votes as well as the tweets and retweets of members of the MEPs in the period from October 2014 to February 2016. The results indicate a considerable level of correlation between these two complex systems. This is consistent with previous findings of Cherepnalkoski et al. BIBREF22 , who reconstructed the adherence of MEPs to their respective political or national group solely from their retweeting behavior. We employ two different methodologies to quantify the co-voting patterns: Krippendorff's INLINEFORM0 and ERGM. They were developed in different fields of research, use different techniques, and are based on different assumptions, but in general they yield consistent results. However, there are some differences which have consequences for the interpretation of the results. INLINEFORM0 is a measure of agreement, designed as a generalization of several specialized measures, that can compare different numbers of observations, in our case roll-call votes. It only considers yes/no votes. Absence and abstention by MEPs is ignored. Its baseline ( INLINEFORM1 ), i.e., co-voting by chance, is computed from the yes/no votes of all MEPs on all RCVs. ERGMs are used in social-network analyses to determine factors influencing the edge formation process. In our case an edge between two MEPs is formed when they cast the same yes/no vote within a RCV. It is assumed that a priori each MEP can form a link with any other MEP. No assumptions about the presence or absence of individual MEPs in a voting session are made. Each RCV is analyzed as a separate binary network. The node set is thereby kept constant for each RCV network. While the ERGM departs from the originally observed network, where MEPs who didn't vote or abstained appear as isolated nodes, links between these nodes are possible within the network sampling process which is part of the ERGM optimization process. The results of several RCVs are aggregated by means of the meta-analysis approach. The baseline (ERGM coefficients INLINEFORM0 ), i.e., co-voting by chance, is computed from a large sample of randomly generated networks. These two different baselines have to be taken into account when interpreting the results of INLINEFORM0 and ERGM. In a typical voting session, 25% of the MEPs are missing or abstaining. 
When assessing the cohesion of political groups, all Alpha values are well above the baseline, and so is their average. The average ERGM cohesion coefficients, on the other hand, are around the baseline. The difference is even more pronounced for groups with higher non-attendance/abstention rates like GUE-NGL (34%) and NI (40%). When assessing the strength of coalitions between pairs of groups, the Alpha values are balanced around the baseline, while the ERGM coefficients are mostly negative. The ordering of coalitions from the strongest to the weakest is therefore different when groups with high non-attendance/abstention rates are involved. The choice of the methodology to assess cohesion and coalitions is not obvious. Roll-call voting is used for decisions which demand a simple majority only. One might, however, argue that non-attendance/abstention corresponds to a no vote, or that absence is used strategically. Also, the importance of individual votes, i.e., how high the subject is on the agenda of a political group, affects their attendance, and consequently the perception of their cohesion and their potential to act as a reliable coalition partner. Acknowledgments This work was supported in part by the EC projects SIMPOL (no. 610704) and DOLFINS (no. 640772), and by the Slovenian ARRS programme Knowledge Technologies (no. P2-103).
We observe a positive correlation between retweeting and co-voting. The strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets. Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union. The only area with a significantly negative coefficient is Economic and monetary system.
fd08dc218effecbe5137a7e3b73d9e5e37ace9c1
fd08dc218effecbe5137a7e3b73d9e5e37ace9c1_0
Q: Does the analysis find that coalitions are formed in the same way for different policy areas? Text: Abstract We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns. Introduction Social-media activities often reflect phenomena that occur in other complex systems. By observing social networks and the content propagated through these networks, we can describe or even predict the interplay between the observed social-media activities and another complex system that is more difficult, if not impossible, to monitor. There are numerous studies reported in the literature that successfully correlate social-media activities to phenomena like election outcomes BIBREF0 , BIBREF1 or stock-price movements BIBREF2 , BIBREF3 . In this paper we study the cohesion and coalitions exhibited by political groups in the Eighth European Parliament (2014–2019). We analyze two entirely different aspects of how the Members of the European Parliament (MEPs) behave in policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting (i.e., endorsing) behavior. We use two diverse datasets in the analysis: the roll-call votes and the Twitter data. A roll-call vote (RCV) is a vote in the parliament in which the names of the MEPs are recorded along with their votes. The RCV data is available as part of the minutes of the parliament's plenary sessions. From this perspective, cohesion is seen as the tendency to co-vote (i.e., cast the same vote) within a group, and a coalition is formed when members of two or more groups exhibit a high degree of co-voting on a subject. The second dataset comes from Twitter. It captures the retweeting behavior of MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between the groups) from a completely different perspective. With over 300 million monthly active users and 500 million tweets posted daily, Twitter is one of the most popular social networks. 
Twitter allows its users to post short messages (tweets) and to follow other users. A user who follows another user is able to read his/her public tweets. Twitter also supports other types of interaction, such as user mentions, replies, and retweets. Of these, retweeting is the most important activity as it is used to share and endorse content created by other users. When a user retweets a tweet, the information about the original author as well as the tweet's content are preserved, and the tweet is shared with the user's followers. Typically, users retweet content that they agree with and thus endorse the views expressed by the original tweeter. We apply two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 which measures the agreement among observers, or voters in our case. The second one is based on Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach and is often used in social-network analyses. Even though these two methodologies come with two different sets of techniques and are based on different assumptions, they provide consistent results. The main contributions of this paper are as follows: (i) We give general insights into the cohesion of political groups in the Eighth European Parliament, both overall and across different policy areas. (ii) We explore whether coalitions are formed in the same way for different policy areas. (iii) We explore to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. (iv) We employ two statistically sound methodologies and examine the extent to which the results are sensitive to the choice of methodology. While the results are mostly consistent, we show that the difference are due to the different treatment of non-attending and abstaining MEPs by INLINEFORM0 and ERGM. The most novel and interesting aspect of our work is the relationship between the co-voting and the retweeting patterns. The increased use of Twitter by MEPs on days with a roll-call vote session (see Fig FIGREF1 ) is an indicator that these two processes are related. In addition, the force-based layouts of the co-voting network and the retweet network reveal a very similar structure on the left-to-center side of the political spectrum (see Fig FIGREF2 ). They also show a discrepancy on the far-right side of the spectrum, which calls for a more detailed analysis. Related work In this paper we study and relate two very different aspects of how MEPs behave in policy-making processes. First, we look at their co-voting behavior, and second, we examine their retweeting patterns. Thus, we draw related work from two different fields of science. On one hand, we look at how co-voting behavior is analyzed in the political-science literature and, on the other, we explore how Twitter is used to better understand political and policy-making processes. The latter has been more thoroughly explored in the field of data mining (specifically, text mining and network analysis). To the best of our knowledge, this is the first paper that studies legislative behavior in the Eighth European Parliament. The legislative behavior of the previous parliaments was thoroughly studied by Hix, Attina, and others BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . 
These studies found that voting behavior is determined to a large extent—and when viewed over time, increasingly so—by affiliation to a political group, as an organizational reflection of the ideological position. The authors found that the cohesion of political groups in the parliament has increased, while nationality has been less and less of a decisive factor BIBREF12 . The literature also reports that a split into political camps on the left and right of the political spectrum has recently replaced the `grand coalition' between the two big blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. The authors conclude that coalitions are to a large extent formed along the left-to-right axis BIBREF12 . In this paper we analyze the roll-call vote data published in the minutes of the parliament's plenary sessions. For a given subject, the data contains the vote of each MEP present at the respective sitting. Roll-call vote data from the European Parliament has already been extensively studied by other authors, most notably by Hix et al. BIBREF10 , BIBREF13 , BIBREF11 . To be able to study the cohesion and coalitions, authors like Hix, Attina, and Rice BIBREF6 , BIBREF13 , BIBREF14 defined and employed a variety of agreement measures. The most prominent measure is the Agreement Index proposed by Hix et al. BIBREF13 . This measure computes the agreement score from the size of the majority class for a particular vote. The Agreement Index, however, exhibits two drawbacks: (i) it does not account for co-voting by chance, and (ii) without a proper adaptation, it does not accommodate the scenario in which the agreement is to be measured between two different political groups. We employ two statistically sound methodologies developed in two different fields of science. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 . INLINEFORM1 is a measure of the agreement among observers, coders, or measuring instruments that assign values to items or phenomena. It compares the observed agreement to the agreement expected by chance. INLINEFORM2 is used to measure the inter- and self-annotator agreement of human experts when labeling data, and the performance of classification models in machine learning scenarios BIBREF15 . In addition to INLINEFORM3 , we employ Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach, often used in social-network analyses. ERGM can be employed to investigate how different network statistics (e.g., number of edges and triangles) or external factors (e.g., political group membership) govern the network-formation process. The second important aspect of our study is related to analyzing the behavior of participants in social networks, specifically Twitter. Twitter is studied by researchers to better understand different political processes, and in some cases to predict their outcomes. Eom et al. BIBREF1 consider the number of tweets by a party as a proxy for the collective attention to the party, explore the dynamics of the volume, and show that this quantity contains information about an election's outcome. Other studies BIBREF16 reach similar conclusions. Conover et al. BIBREF17 predicted the political alignment of Twitter users in the run-up to the 2010 US elections based on content and network structure. They analyzed the polarization of the retweet and mention networks for the same elections BIBREF18 . Borondo et al. 
BIBREF19 analyzed user activity during the Spanish presidential elections. They additionally analyzed the 2012 Catalan elections, focusing on the interplay between the language and the community structure of the network BIBREF20 . Most existing research, as Larsson points out BIBREF21 , focuses on the online behavior of leading political figures during election campaigns. This paper continues our research on communities that MEPs (and their followers) form on Twitter BIBREF22 . The goal of our research was to evaluate the role of Twitter in identifying communities of influence when the actual communities are known. We represent the influence on Twitter by the number of retweets that MEPs “receive”. We construct two networks of influence: (i) core, which consists only of MEPs, and (ii) extended, which also involves their followers. We compare the detected communities in both networks to the groups formed by the political, country, and language membership of MEPs. The results show that the detected communities in the core network closely match the political groups, while the communities in the extended network correspond to the countries of residence. This provides empirical evidence that analyzing retweet networks can reveal real-world relationships and can be used to uncover hidden properties of the networks. Lazer BIBREF23 highlights the importance of network-based approaches in political science in general by arguing that politics is a relational phenomenon at its core. Some researchers have adopted the network-based approach to investigate the structure of legislative work in the US Congress, including committee and sub-committee membership BIBREF24 , bill co-sponsoring BIBREF25 , and roll-call votes BIBREF26 . More recently, Dal Maso et al. BIBREF27 examined the community structure with respect to political coalitions and government structure in the Italian Parliament. Scherpereel et al. BIBREF28 examined the constituency, personal, and strategic characteristics of MEPs that influence their tweeting behavior. They suggested that Twitter's characteristics, like immediacy, interactivity, spontaneity, personality, and informality, are likely to resonate with political parties across Europe. By fitting regression models, the authors find that MEPs from incohesive groups have a greater tendency to retweet. In contrast to most of these studies, we focus on the Eighth European Parliament, and more importantly, we study and relate two entirely different behavioral aspects, co-voting and retweeting. The goal of this research is to better understand the cohesion and coalition formation processes in the European Parliament by quantifying and comparing the co-voting patterns and social behavior. Methods In this section we present the methods to quantify cohesion and coalitions from the roll-call votes and Twitter activities. Co-voting measured by agreement We first show how the co-voting behaviour of MEPs can be quantified by a measure of the agreement between them. We treat individual RCVs as observations, and MEPs as independent observers or raters. When they cast the same vote, there is a high level of agreement, and when they vote differently, there is a high level of disagreement. We define cohesion as the level of agreement within a political group, a coalition as a voting agreement between political groups, and opposition as a disagreement between different groups. There are many well-known measures of agreement in the literature. 
We selected Krippendorff's Alpha-reliability ( INLINEFORM0 ) BIBREF4 , which is a generalization of several specialized measures. It works for any number of observers, and is applicable to different variable types and metrics (e.g., nominal, ordered, interval, etc.). In general, INLINEFORM1 is defined as follows: INLINEFORM2 where INLINEFORM0 is the actual disagreement between observers (MEPs), and INLINEFORM1 is disagreement expected by chance. When observers agree perfectly, INLINEFORM2 INLINEFORM3 , when the agreement equals the agreement by chance, INLINEFORM4 INLINEFORM5 , and when the observers disagree systematically, INLINEFORM6 INLINEFORM7 . The two disagreement measures are defined as follows: INLINEFORM0 INLINEFORM0 The arguments INLINEFORM0 , and INLINEFORM1 are defined below and refer to the values in the coincidence matrix that is constructed from the RCVs data. In roll-call votes, INLINEFORM2 (and INLINEFORM3 ) is a nominal variable with two possible values: yes and no. INLINEFORM4 is a difference function between the values of INLINEFORM5 and INLINEFORM6 , defined as: INLINEFORM7 The RCVs data has the form of a reliability data matrix: INLINEFORM0 where INLINEFORM0 is the number of RCVs, INLINEFORM1 is the number of MEPs, INLINEFORM2 is the number of votes cast in the voting INLINEFORM3 , and INLINEFORM4 is the actual vote of an MEP INLINEFORM5 in voting INLINEFORM6 (yes or no). A coincidence matrix is constructed from the reliability data matrix, and is in general a INLINEFORM0 -by- INLINEFORM1 square matrix, where INLINEFORM2 is the number of possible values of INLINEFORM3 . In our case, where only yes/no votes are relevant, the coincidence matrix is a 2-by-2 matrix of the following form: INLINEFORM4 A cell INLINEFORM0 accounts for all coincidences from all pairs of MEPs in all RCVs where one MEP has voted INLINEFORM1 and the other INLINEFORM2 . INLINEFORM3 and INLINEFORM4 are the totals for each vote outcome, and INLINEFORM5 is the grand total. The coincidences INLINEFORM6 are computed as: INLINEFORM7 where INLINEFORM0 is the number of INLINEFORM1 pairs in vote INLINEFORM2 , and INLINEFORM3 is the number of MEPs that voted in INLINEFORM4 . When computing INLINEFORM5 , each pair of votes is considered twice, once as a INLINEFORM6 pair, and once as a INLINEFORM7 pair. The coincidence matrix is therefore symmetrical around the diagonal, and the diagonal contains all the equal votes. The INLINEFORM0 agreement is used to measure the agreement between two MEPs or within a group of MEPs. When applied to a political group, INLINEFORM1 corresponds to the cohesion of the group. The closer INLINEFORM2 is to 1, the higher the agreement of the MEPs in the group, and hence the higher the cohesion of the group. We propose a modified version of INLINEFORM0 to measure the agreement between two different groups, INLINEFORM1 and INLINEFORM2 . In the case of a voting agreement between political groups, high INLINEFORM3 is interpreted as a coalition between the groups, whereas negative INLINEFORM4 indicates political opposition. Suppose INLINEFORM0 and INLINEFORM1 are disjoint subsets of all the MEPs, INLINEFORM2 , INLINEFORM3 . The respective number of votes cast by both group members in vote INLINEFORM4 is INLINEFORM5 and INLINEFORM6 . The coincidences are then computed as: INLINEFORM7 where the INLINEFORM0 pairs come from different groups, INLINEFORM1 and INLINEFORM2 . The total number of such pairs in vote INLINEFORM3 is INLINEFORM4 . 
The actual number INLINEFORM5 of the pairs is multiplied by INLINEFORM6 so that the total contribution of vote INLINEFORM7 to the coincidence matrix is INLINEFORM8 . A network-based measure of co-voting In this section we describe a network-based approach to analyzing the co-voting behavior of MEPs. For each roll-call vote we form a network, where the nodes in the network are MEPs, and an undirected edge between two MEPs is formed when they cast the same vote. We are interested in the factors that determine the cohesion within political groups and coalition formation between political groups. Furthermore, we investigate to what extent communication in a different social context, i.e., the retweeting behavior of MEPs, can explain the co-voting of MEPs. For this purpose we apply an Exponential Random Graph Model BIBREF5 to individual roll-call vote networks, and aggregate the results by means of the meta-analysis. ERGMs allow us to investigate the factors relevant for the network-formation process. Network metrics, as described in the abundant literature, serve to gain information about the structural properties of the observed network. A model investigating the processes driving the network formation, however, has to take into account that there can be a multitude of alternative networks. If we are interested in the parameters influencing the network formation we have to consider all possible networks and measure their similarity to the originally observed network. The family of ERGMs builds upon this idea. Assume a random graph INLINEFORM0 , in the form of a binary adjacency matrix, made up of a set of INLINEFORM1 nodes and INLINEFORM2 edges INLINEFORM3 where, similar to a binary choice model, INLINEFORM4 if the nodes INLINEFORM5 are connected and INLINEFORM6 if not. Since network data is by definition relational and thus violates assumptions of independence, classical binary choice models, like logistic regression, cannot be applied in this context. Within an ERGM, the probability for a given network is modelled by DISPLAYFORM0 where INLINEFORM0 is the vector of parameters and INLINEFORM1 is the vector of network statistics (counts of network substructures), which are a function of the adjacency matrix INLINEFORM2 . INLINEFORM3 is a normalization constant corresponding to the sample of all possible networks, which ensures a proper probability distribution. Evaluating the above expression allows us to make assertions if and how specific nodal attributes influence the network formation process. These nodal attributes can be endogenous (dyad-dependent parameters) to the network, like the in- and out-degrees of a node, or exogenous (dyad-independent parameters), as the party affiliation, or the country of origin in our case. An alternative formulation of the ERGM provides the interpretation of the coefficients. We introduce the change statistic, which is defined as the change in the network statistics when an edge between nodes INLINEFORM0 and INLINEFORM1 is added or not. 
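Before continuing with the estimation details of the ERGM, the agreement measure from the previous subsection can be made concrete with a short sketch. The following Python code is a minimal, illustrative implementation of the coincidence-matrix form of Krippendorff's α for binary (yes/no) roll-call data; the vote coding (1 = yes, 0 = no, None = absent or abstained), the `votes` layout (one row per RCV, one column per MEP), and the function names are assumptions made for the example, not the authors' actual code, and the modified between-group variant is omitted.

```python
# Illustrative sketch: Krippendorff's alpha for yes/no roll-call votes (nominal data).
# votes[u][m] is 1 (yes), 0 (no), or None (absent/abstained) for RCV u and MEP m.

def coincidence_matrix(votes, meps):
    """2x2 coincidence matrix o[c][k] over the given MEP indices."""
    o = [[0.0, 0.0], [0.0, 0.0]]
    for row in votes:
        vals = [row[m] for m in meps if row[m] is not None]
        m_u = len(vals)
        if m_u < 2:
            continue  # a single vote forms no pairs
        counts = [vals.count(0), vals.count(1)]
        for c in (0, 1):
            for k in (0, 1):
                pairs = counts[c] * counts[k] - (counts[c] if c == k else 0)
                o[c][k] += pairs / (m_u - 1)
    return o

def krippendorff_alpha(o):
    """alpha = 1 - D_o / D_e for a 2x2 nominal coincidence matrix."""
    n0 = o[0][0] + o[0][1]
    n1 = o[1][0] + o[1][1]
    n = n0 + n1
    if n0 == 0 or n1 == 0:
        return 1.0  # no variation in the votes: trivially perfect agreement
    d_o = (o[0][1] + o[1][0]) / n        # observed disagreement
    d_e = 2.0 * n0 * n1 / (n * (n - 1))  # disagreement expected by chance
    return 1.0 - d_o / d_e

# Example: agreement of a hypothetical three-member group over two RCVs.
votes = [
    [1, 1, 0],     # RCV 1
    [1, None, 1],  # RCV 2 (second MEP absent)
]
print(krippendorff_alpha(coincidence_matrix(votes, meps=[0, 1, 2])))
```

Running the same function over the MEP indices of a single political group gives its cohesion score; the between-group variant described above differs only in which pairs enter the coincidence counts and in how each vote's contribution is normalized.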
If INLINEFORM2 and INLINEFORM3 denote the vectors of counts of network substructures when the edge is added or not, the change statistics is defined as follows: INLINEFORM4 With this at hand it can be shown that the distribution of the variable INLINEFORM0 , conditional on the rest of the graph INLINEFORM1 , corresponds to: INLINEFORM2 This implies on the one hand that the probability depends on INLINEFORM0 via the change statistic INLINEFORM1 , and on the other hand, that each coefficient within the vector INLINEFORM2 represents an increase in the conditional log-odds ( INLINEFORM3 ) of the graph when the corresponding element in the vector INLINEFORM4 increases by one. The need to condition the probability on the rest of the network can be illustrated by a simple example. The addition (removal) of a single edge alters the network statistics. If a network has only edges INLINEFORM5 and INLINEFORM6 , the creation of an edge INLINEFORM7 would not only add an additional edge but would also alter the count for other network substructures included in the model. In this example, the creation of the edge INLINEFORM8 also increases the number of triangles by one. The coefficients are transformed into probabilities with the logistic function: INLINEFORM9 For example, in the context of roll-call votes, the probability that an additional co-voting edge is formed between two nodes (MEPs) of the same political group is computed with that equation. In this context, the nodematch (nodemix) coefficients of the ERGM (described in detail bellow) therefore refer to the degree of homophilous (heterophilous) matching of MEPs with regard to their political affiliation, or, expressed differently, the propensity of MEPs to co-vote with other MEPs of their respective political group or another group. A positive coefficient reflects an increased chance that an edge between two nodes with respective properties, like group affiliation, given all other parameters unchanged, is formed. Or, put differently, a positive coefficient implies that the probability of observing a network with a higher number of corresponding pairs relative to the hypothetical baseline network, is higher than to observe the baseline network itself BIBREF31 . For an intuitive interpretation, log-odds value of 0 corresponds to the even chance probability of INLINEFORM0 . Log-odds of INLINEFORM1 correspond to an increase of probability by INLINEFORM2 , whereas log-odds of INLINEFORM3 correspond to a decrease of probability by INLINEFORM4 . The computational challenges of estimating ERGMs is to a large degree due to the estimation of the normalizing constant. The number of possible networks is already extremely large for very small networks and the computation is simply not feasible. Therefore, an appropriate sample has to be found, ideally covering the most probable areas of the probability distribution. For this we make use of a method from the Markov Chain Monte Carlo (MCMC) family, namely the Metropolis-Hastings algorithm. The idea behind this algorithm is to generate and sample highly weighted random networks departing from the observed network. The Metropolis-Hastings algorithm is an iterative algorithm which samples from the space of possible networks by randomly adding or removing edges from the starting network conditional on its density. If the likelihood, in the ERGM context also denoted as weights, of the newly generated network is higher than that of the departure network it is retained, otherwise it is discarded. 
In the former case, the algorithm starts anew from the newly generated network. Otherwise, the departure network is used again. Repeating this procedure sufficiently often and summing the weights associated with the stored (sampled) networks allows us to compute an approximation of the denominator in equation EQREF18 (the normalizing constant). The algorithm starts sampling from the originally observed network INLINEFORM0 . The optimization of the coefficients is done simultaneously, likewise with the Metropolis-Hastings algorithm. At the beginning, starting values have to be supplied. For the study at hand we used the “ergm” library from the statistical R software package BIBREF5 , which implements the Gibbs-sampling algorithm BIBREF32 , a special case of the Metropolis-Hastings algorithm outlined above. To answer our question about the importance of the factors that drive the network-formation process in the roll-call co-voting network, the ERGM is specified with the following parameters: nodematch country: This parameter adds one network statistic to the model, i.e., the number of edges INLINEFORM0 where INLINEFORM1 . The coefficient indicates the homophilous mixing behavior of MEPs with respect to their country of origin. In other words, this coefficient indicates how relevant nationality is in the formation of edges in the co-voting network. nodematch national party: This parameter adds one network statistic to the model: the number of edges INLINEFORM0 with INLINEFORM1 . The coefficient indicates the homophilous mixing behavior of the MEPs with regard to their party affiliation at the national level. In the context of this study, this coefficient can be interpreted as an indicator of within-party cohesion at the national level. nodemix EP group: This parameter adds one network statistic for each pair of European political groups. These coefficients shed light on the strength of coalitions between different groups, as well as on within-group cohesion. Given that there are nine groups in the European Parliament, this parameter adds in total 81 statistics to the model. edge covariate Twitter: This parameter corresponds to a square matrix with the dimension of the adjacency matrix of the network, whose entries are the numbers of mutual retweets between the MEPs. It provides insight into the extent to which communication in one social context (Twitter) can explain cooperation in another social context (co-voting in RCVs). An ERGM as specified above is estimated for each of the 2535 roll-call votes. Each roll-call vote is thereby interpreted as a binary network and as an independent study. It is assumed that a priori each MEP could possibly form an edge with each other MEP in the context of a roll-call vote. Assumptions about the presence or absence of individual MEPs in a voting session are not made. In other words, the dimensions of the adjacency matrix (the node set), and therefore the distribution from which new networks are drawn, are kept constant over all RCVs and therefore for every ERGM. The ERGM results therefore implicitly generalize to the case where potentially all MEPs are present and could be voting. Not voting is incorporated implicitly through the disconnectedness of a node. The coefficients of the 2535 roll-call vote studies are aggregated by means of a meta-analysis approach proposed by Lubbers BIBREF33 and Snijders et al. BIBREF34 . We are interested in the average effect sizes of different matching patterns over different topics and overall. 
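To make this specification more tangible, the sketch below shows, in Python, the inputs one RCV contributes to such a model — the co-voting edges, the nodal attributes used by the nodematch/nodemix terms, and the mutual-retweet edge covariate — together with the logistic transformation used to read a fitted coefficient as a probability. The MEP records, field names, and numbers are invented for illustration, and the study itself fitted the models with the R "ergm" package, so this is only a sketch of the setup, not the authors' pipeline.

```python
import math
from itertools import combinations

# Hypothetical MEPs with the attributes used by the nodematch / nodemix terms.
meps = {
    "mep_a": {"country": "DE", "national_party": "CDU", "ep_group": "EPP"},
    "mep_b": {"country": "FR", "national_party": "PS",  "ep_group": "S&D"},
    "mep_c": {"country": "DE", "national_party": "SPD", "ep_group": "S&D"},
}
node_list = sorted(meps)  # constant node set across all RCVs

# One RCV: 1 = yes, 0 = no; absent/abstaining MEPs cast no vote here and
# remain isolated nodes in the binary co-voting network.
rcv_vote = {"mep_a": 1, "mep_b": 1, "mep_c": 0}

# Undirected co-voting edges: same yes/no vote within this RCV.
edges = [(i, j) for i, j in combinations(sorted(rcv_vote), 2)
         if rcv_vote[i] == rcv_vote[j]]

# Edge covariate: mutual retweet counts between MEP pairs (0 if never retweeted).
retweets = {("mep_a", "mep_b"): 3, ("mep_b", "mep_c"): 7}
covariate = [[retweets.get((i, j), retweets.get((j, i), 0)) for j in node_list]
             for i in node_list]

# A fitted ERGM coefficient is a conditional log-odds: the logistic function maps
# the weighted change statistics contributed by an edge to a probability.
def edge_probability(theta, change_stats):
    logit = sum(t * d for t, d in zip(theta, change_stats))
    return 1.0 / (1.0 + math.exp(-logit))

print(edges)          # [('mep_a', 'mep_b')]
print(covariate[0])   # retweet covariate row for the first MEP
# With hypothetical coefficients [same-EP-group, retweet covariate] = [0.8, 0.1],
# an edge between two same-group MEPs with 3 mutual retweets:
print(edge_probability([0.8, 0.1], [1, 3]))  # ~0.75
```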
Considering the number of RCVs, it seems straightforward to interpret the different RCV networks as multiplex networks and collapse them into one weighted network, which could then be analysed by means of a valued ERGM BIBREF35 . There are, however, two reasons why we chose the meta-analysis approach instead. First, aggregating the RCV data results into an extremely dense network, leading to severe convergence (degeneracy) problems for the ERGM. Second, the RCV data contains information about the different policy areas the individual votes were about. Since we are interested in how the coalition formation in the European Parliament differs over different areas, a method is needed that allows for an ex-post analysis of the corresponding results. We therefore opted for the meta-analysis approach by Lubbers and Snijders et al. This approach allows us to summarize the results by decomposing the coefficients into average effects and (class) subject-specific deviations. The different ERGM runs for each RCV are thereby regarded as different studies with identical samples that are combined to obtain a general overview of effect sizes. The meta-regression model is defined as: INLINEFORM0 Here INLINEFORM0 is a parameter estimate for class INLINEFORM1 , and INLINEFORM2 is the average coefficient. INLINEFORM3 denotes the normally distributed deviation of the class INLINEFORM4 with a mean of 0 and a variance of INLINEFORM5 . INLINEFORM6 is the estimation error of the parameter value INLINEFORM7 from the ERGM. The meta-analysis model is fitted by an iterated, weighted, least-squares model in which the observations are weighted by the inverse of their variances. For the overall nodematch between political groups, we weighted the coefficients by group sizes. The results from the meta analysis can be interpreted as if they stemmed from an individual ERGM run. In our study, the meta-analysis was performed using the RSiena library BIBREF36 , which implements the method proposed by Lubbers and Snijders et al. BIBREF33 , BIBREF34 . Measuring cohesion and coalitions on Twitter The retweeting behavior of MEPs is captured by their retweet network. Each MEP active on Twitter is a node in this network. An edge in the network between two MEPs exists when one MEP retweeted the other. The weight of the edge is the number of retweets between the two MEPs. The resulting retweet network is an undirected, weighted network. We measure the cohesion of a political group INLINEFORM0 as the average retweets, i.e., the ratio of the number of retweets between the MEPs in the group INLINEFORM1 to the number of MEPs in the group INLINEFORM2 . The higher the ratio, the more each MEP (on average) retweets the MEPs from the same political group, hence, the higher the cohesion of the political group. The definition of the average retweeets ( INLINEFORM3 ) of a group INLINEFORM4 is: INLINEFORM5 This measure of cohesion captures the aggregate retweeting behavior of the group. If we consider retweets as endorsements, a larger number of retweets within the group is an indicator of agreement between the MEPs in the group. It does not take into account the patterns of retweeting within the group, thus ignoring the social sub-structure of the group. This is a potentially interesting direction and we leave it for future work. We employ an analogous measure for the strength of coalitions in the retweet network. 
The coalition strength between two groups INLINEFORM0 and INLINEFORM1 is the ratio of the number of retweets from one group to the other (but not within groups) INLINEFORM2 to the total number of MEPs in both groups, INLINEFORM3 . The definition of the average retweeets ( INLINEFORM4 ) between groups INLINEFORM5 and INLINEFORM6 is: INLINEFORM7 Cohesion of political groups In this section we first report on the level of cohesion of the European Parliament's groups by analyzing the co-voting through the agreement and ERGM measures. Next, we explore two important policy areas, namely Economic and monetary system and State and evolution of the Union. Finally, we analyze the cohesion of the European Parliament's groups on Twitter. Existing research by Hix et al. BIBREF10 , BIBREF13 , BIBREF11 shows that the cohesion of the European political groups has been rising since the 1990s, and the level of cohesion remained high even after the EU's enlargement in 2004, when the number of MEPs increased from 626 to 732. We measure the co-voting cohesion of the political groups in the Eighth European Parliament using Krippendorff's Alpha—the results are shown in Fig FIGREF30 (panel Overall). The Greens-EFA have the highest cohesion of all the groups. This finding is in line with an analysis of previous compositions of the Fifth and Sixth European Parliaments by Hix and Noury BIBREF11 , and the Seventh by VoteWatch BIBREF37 . They are closely followed by the S&D and EPP. Hix and Noury reported on the high cohesion of S&D in the Fifth and Sixth European Parliaments, and we also observe this in the current composition. They also reported a slightly less cohesive EPP-ED. This group split in 2009 into EPP and ECR. VoteWatch reports EPP to have cohesion on a par with Greens-EFA and S&D in the Seventh European Parliament. The cohesion level we observe in the current European Parliament is also similar to the level of Greens-EFA and S&D. The catch-all group of the non-aligned (NI) comes out as the group with the lowest cohesion. In addition, among the least cohesive groups in the European Parliament are the Eurosceptics EFDD, which include the British UKIP led by Nigel Farage, and the ENL whose largest party are the French National Front, led by Marine Le Pen. Similarly, Hix and Noury found that the least cohesive groups in the Seventh European Parliament are the nationalists and Eurosceptics. The Eurosceptic IND/DEM, which participated in the Sixth European Parliament, transformed into the current EFDD, while the nationalistic UEN was dissolved in 2009. We also measure the voting cohesion of the European Parliament groups using an ERGM, a network-based method—the results are shown in Fig FIGREF31 (panel Overall). The cohesion results obtained with ERGM are comparable to the results based on agreement. In this context, the parameters estimated by the ERGM refer to the matching of MEPs who belong to the same political group (one parameter per group). The parameters measure the homophilous matching between MEPs who have the same political affiliation. A positive value for the estimated parameter indicates that the co-voting of MEPs from that group is greater than what is expected by chance, where the expected number of co-voting links by chance in a group is taken to be uniformly random. A negative value indicates that there are fewer co-voting links within a group than expected by chance. 
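For completeness, the two retweet-based measures defined earlier (average retweets within a group, and between two groups) can be written down directly. In the sketch below the retweet network is assumed to be a dict mapping unordered MEP pairs to retweet counts, and group membership a dict mapping group names to member lists; these data structures and names are illustrative, not taken from the paper.

```python
# retweet_counts: undirected weighted edges, e.g. {("mep_a", "mep_b"): 5, ...}
# members: political group -> list of MEP ids

def avg_retweets_within(group, retweet_counts, members):
    """RT(G): retweets between MEPs of the same group, per group member."""
    g = set(members[group])
    total = sum(w for (i, j), w in retweet_counts.items() if i in g and j in g)
    return total / len(g)

def avg_retweets_between(group_a, group_b, retweet_counts, members):
    """RT(G1, G2): cross-group retweets per MEP of the two groups combined."""
    a, b = set(members[group_a]), set(members[group_b])
    total = sum(w for (i, j), w in retweet_counts.items()
                if (i in a and j in b) or (i in b and j in a))
    return total / (len(a) + len(b))
```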
Even though INLINEFORM0 and ERGM compute scores relative to what is expected by chance, they refer to different interpretations of chance. INLINEFORM1 's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group, knowing the votes of these MEPs on all RCVs. ERGM's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group on a given RCV, knowing the network-related properties of the co-voting network on that particular RCV. The main difference between INLINEFORM2 and ERGM, though, is the treatment of non-voting and abstained MEPs. INLINEFORM3 considers only the yes/no votes, and consequently, agreements by the voting MEPs of the same groups are considerably higher than co-voting by chance. ERGM, on the other hand, always considers all MEPs, and non-voting and abstained MEPs are treated as disconnected nodes. The level of co-voting by chance is therefore considerably lower, since there is often a large fraction of MEPs that do not attand or abstain. As with INLINEFORM0 , Greens-EFA, S&D, and EPP exhibit the highest cohesion, even though their ranking is permuted when compared to the ranking obtained with INLINEFORM1 . At the other end of the scale, we observe the same situation as with INLINEFORM2 . The non-aligned members NI have the lowest cohesion, followed by EFDD and ENL. The only place where the two methods disagree is the level of cohesion of GUE-NGL. The Alpha attributes GUE-NGL a rather high level of cohesion, on a par with ALDE, whereas the ERGM attributes them a much lower cohesion. The reason for this difference is the relatively high abstention rate of GUE-NGL. Whereas the overall fraction of non-attending and abstaining MEPs across all RCVs and all political groups is 25%, the GUE-NGL abstention rate is 34%. This is reflected in an above average cohesion by INLINEFORM0 where only yes/no votes are considered, and in a relatively lower, below average cohesion by ERGM. In the later case, the non-attendance is interpreted as a non-cohesive voting of a political groups as a whole. In addition to the overall cohesion, we also focus on two selected policy areas. The cohesion of the political groups related to these two policy areas is shown in the first two panels in Fig FIGREF30 ( INLINEFORM0 ) and Fig FIGREF31 (ERGM). The most important observation is that the level of cohesion of the political groups is very stable across different policy areas. These results are corroborated by both methodologies. Similar to the overall cohesion, the most cohesive political groups are the S&D, Greens-EFA, and EPP. The least cohesive group is the NI, followed by the ENL and EFDD. The two methodologies agree on the level of cohesion for all the political groups, except for GUE-NGL, due to a lower attendance rate. We determine the cohesion of political groups on Twitter by using the average number of retweets between MEPs within the same group. The results are shown in Fig FIGREF33 . The right-wing ENL and EFDD come out as the most cohesive groups, while all the other groups have a far lower average number of retweets. MEPs from ENL and EFDD post by far the largest number of retweets (over 240), and at the same time over 94% of their retweets are directed to MEPs from the same group. Moreover, these two groups stand out in the way the retweets are distributed within the group. A large portion of the retweets of EFDD (1755) go to Nigel Farage, the leader of the group. 
Likewise, a very large portion of retweets of ENL (2324) go to Marine Le Pen, the leader of the group. Farage and Le Pen are by far the two most retweeted MEPs, with the third one having only 666 retweets. Coalitions in the European Parliament Coalition formation in the European Parliament is largely determined by ideological positions, reflected in the degree of cooperation of parties at the national and European levels. The observation of ideological inclinations in the coalition formation within the European Parliament was already made by other authors BIBREF11 and is confirmed in this study. The basic patterns of coalition formation in the European Parliament can already be seen in the co-voting network in Fig FIGREF2 A. It is remarkable that the degree of attachment between the political groups, which indicates the degree of cooperation in the European Parliament, nearly exactly corresponds to the left-to-right seating order. The liberal ALDE seems to have an intermediator role between the left and right parts of the spectrum in the parliament. Between the extreme (GUE-NGL) and center left (S&D) groups, this function seems to be occupied by Greens-EFA. The non-aligned members NI, as well as the Eurosceptic EFFD and ENL, seem to alternately tip the balance on both poles of the political spectrum. Being ideologically more inclined to vote with other conservative and right-wing groups (EPP, ECR), they sometimes also cooperate with the extreme left-wing group (GUE-NGL) with which they share their Euroscepticism as a common denominator. Figs FIGREF36 and FIGREF37 give a more detailed understanding of the coalition formation in the European Parliament. Fig FIGREF36 displays the degree of agreement or cooperation between political groups measured by Krippendorff's INLINEFORM0 , whereas Fig FIGREF37 is based on the result from the ERGM. We first focus on the overall results displayed in the right-hand plots of Figs FIGREF36 and FIGREF37 . The strongest degrees of cooperation are observed, with both methods, between the two major parties (EPP and S&D) on the one hand, and the liberal ALDE on the other. Furthermore, we see a strong propensity for Greens-EFA to vote with the Social Democrats (5th strongest coalition by INLINEFORM0 , and 3rd by ERGM) and the GUE-NGL (3rd strongest coalition by INLINEFORM1 , and 5th by ERGM). These results underline the role of ALDE and Greens-EFA as intermediaries for the larger groups to achieve a majority. Although the two largest groups together have 405 seats and thus significantly more than the 376 votes needed for a simple majority, the degree of cooperation between the two major groups is ranked only as the fourth strongest by both methods. This suggests that these two political groups find it easier to negotiate deals with smaller counterparts than with the other large group. This observation was also made by Hix et al. BIBREF12 , who noted that alignments on the left and right of the political spectrum have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. Next, we focus on the coalition formation within the two selected policy areas. The area State and Evolution of the Union is dominated by cooperation between the two major groups, S&D and EPP, as well as ALDE. We also observe a high degree of cooperation between groups that are generally regarded as integration friendly, like Greens-EFA and GUE-NGL. 
We see, particularly in Fig FIGREF36 , a relatively high degree of cooperation between groups considered as Eurosceptic, like ECR, EFFD, ENL, and the group of non-aligned members. The dichotomy between supporters and opponents of European integration is even more pronounced within the policy area Economic and Monetary System. In fact, we are taking a closer look specifically at these two areas as they are, at the same time, both contentious and important. Both methods rank the cooperation between S&D and EPP on the one hand, and ALDE on the other, as the strongest. We also observe a certain degree of unanimity among the Eurosceptic and right-wing groups (EFDD, ENL, and NI) in this policy area. This seems plausible, as these groups were (especially in the aftermath of the global financial crisis and the subsequent European debt crisis) in fierce opposition to further payments to financially troubled member states. However, we also observe a number of strong coalitions that might, at first glance, seem unusual, specifically involving the left-wing group GUE-NGL on the one hand, and the right-wing EFDD, ENL, and NI on the other. These links also show up in the network plot in Fig FIGREF2 A. This might be attributable to a certain degree of Euroscepticism on both sides: rooted in criticism of capitalism on the left, and at least partly a raison d'être on the right. Hix et al. BIBREF11 discovered this pattern as well, and proposed an additional explanation—these coalitions also relate to a form of government-opposition dynamic that is rooted at the national level, but is reflected in voting patterns at the European level. In general, we observe two main differences between the INLINEFORM0 and ERGM results: the baseline cooperation as estimated by INLINEFORM1 is higher, and the ordering of coalitions from the strongest to the weakest is not exactly the same. The reason is the same as for the cohesion, namely different treatment of non-voting and abstaining MEPs. When they are ignored, as by INLINEFORM2 , the baseline level of inter-group co-voting is higher. When non-attending and abstaining is treated as voting differently, as by ERGM, it is considerably more difficult to achieve co-voting coalitions, specially when there are on average 25% MEPs that do not attend or abstain. Groups with higher non-attendance rates, such as GUE-NGL (34%) and NI (40%) are less likely to form coalitions, and therefore have relatively lower ERGM coefficients (Fig FIGREF37 ) than INLINEFORM3 scores (Fig FIGREF36 ). The first insight into coalition formation on Twitter can be observed in the retweet network in Fig FIGREF2 B. The ideological left to right alignment of the political groups is reflected in the retweet network. Fig FIGREF40 shows the strength of the coalitions on Twitter, as estimated by the number of retweets between MEPs from different groups. The strongest coalitions are formed between the right-wing groups EFDD and ECR, as well as ENL and NI. At first, this might come as a surprise, since these groups do not form strong coalitions in the European Parliament, as can be seen in Figs FIGREF36 and FIGREF37 . On the other hand, the MEPs from these groups are very active Twitter users. As previously stated, MEPs from ENL and EFDD post the largest number of retweets. Moreover, 63% of the retweets outside of ENL are retweets of NI. This effect is even more pronounced with MEPs from EFDD, whose retweets of ECR account for 74% of their retweets from other groups. 
In addition to these strong coalitions on the right wing, we find coalition patterns to be very similar to the voting coalitions observed in the European Parliament, seen in Figs FIGREF36 and FIGREF37 . The strongest coalitions, which come immediately after the right-wing coalitions, are between Greens-EFA on the one hand, and GUE-NGL and S&D on the other, as well as ALDE on the one hand, and EPP and S&D on the other. These results corroborate the role of ALDE and Greens-EFA as intermediaries in the European Parliament, not only in the legislative process, but also in the debate on social media. To better understand the formation of coalitions in the European Parliament and on Twitter, we examine the strongest cooperation between political groups at three different thresholds. For co-voting coalitions in the European Parliament we choose a high threshold of INLINEFORM0 , a medium threshold of INLINEFORM1 , and a negative threshold of INLINEFORM2 (which corresponds to strong oppositions). In this way we observe the overall patterns of coalition and opposition formation in the European Parliament and in the two specific policy areas. For cooperation on Twitter, we choose a high threshold of INLINEFORM3 , a medium threshold of INLINEFORM4 , and a very low threshold of INLINEFORM5 . The strongest cooperations in the European Parliament over all policy areas are shown in Fig FIGREF42 G. It comes as no surprise that the strongest cooperations are within the groups (in the diagonal). Moreover, we again observe GUE-NGL, S&D, Greens-EFA, ALDE, and EPP as the most cohesive groups. In Fig FIGREF42 H, we observe coalitions forming along the diagonal, which represents the seating order in the European Parliament. Within this pattern, we observe four blocks of coalitions: on the left, between GUE-NGL, S&D, and Greens-EFA; in the center, between S&D, Greens-EFA, ALDE, and EPP; on the right-center between ALDE, EPP, and ECR; and finally, on the far-right between ECR, EFDD, ENL, and NI. Fig FIGREF42 I shows the strongest opposition between groups that systematically disagree in voting. The strongest disagreements are between left- and right-aligned groups, but not between the left-most and right-most groups, in particular, between GUE-NGL and ECR, but also between S&D and Greens-EFA on one side, and ENL and NI on the other. In the area of Economic and monetary system we see a strong cooperation between EPP and S&D (Fig FIGREF42 A), which is on a par with the cohesion of the most cohesive groups (GUE-NGL, S&D, Greens-EFA, ALDE, and EPP), and is above the cohesion of the other groups. As pointed out in the section “sec:coalitionpolicy”, there is a strong separation in two blocks between supporters and opponents of European integration, which is even more clearly observed in Fig FIGREF42 B. On one hand, we observe cooperation between S&D, ALDE, EPP, and ECR, and on the other, cooperation between GUE-NGL, Greens-EFA, EFDD, ENL, and NI. This division in blocks is seen again in Fig FIGREF42 C, which shows the strongest disagreements. Here, we observe two blocks composed of S&D, EPP, and ALDE on one hand, and GUE-NGL, EFDD, ENL, and NI on the other, which are in strong opposition to each other. In the area of State and Evolution of the Union we again observe a strong division in two blocks (see Fig FIGREF42 E). This is different to the Economic and monetary system, however, where we observe a far-left and far-right cooperation, where the division is along the traditional left-right axis. 
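The thresholding step used above is simple to express in code. The sketch below filters a dict of pairwise agreement values into strong coalitions and strong oppositions; the cut-offs are the ones quoted in the text, while the example scores and variable names are invented for illustration and do not reproduce the paper's results.

```python
# alpha_between: (group_1, group_2) -> co-voting alpha score (illustrative values).
alpha_between = {("EPP", "S&D"): 0.32, ("GUE-NGL", "ECR"): -0.15, ("ALDE", "EPP"): 0.44}

def strong_pairs(scores, threshold):
    """Pairs at or above a positive threshold (coalitions)."""
    return [pair for pair, a in scores.items() if a >= threshold]

def strong_oppositions(scores, threshold=-0.1):
    """Pairs at or below a negative threshold (systematic disagreement)."""
    return [pair for pair, a in scores.items() if a <= threshold]

print(strong_pairs(alpha_between, 0.4))   # high-threshold coalitions
print(strong_pairs(alpha_between, 0.1))   # medium-threshold coalitions
print(strong_oppositions(alpha_between))  # oppositions
```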
The patterns of coalitions forming on Twitter closely resemble those in the European Parliament. In Fig FIGREF42 J we see that the strongest degrees of cooperation on Twitter are within the groups. The only group with low cohesion is the NI, whose members have only seven retweets between them. The coalitions on Twitter follow the seating order in the European Parliament remarkably well (see Fig FIGREF42 K). What is striking is that the same blocks form on the left, center, and on the center-right, both in the European Parliament and on Twitter. The largest difference between the coalitions in the European Parliament and on Twitter is on the far-right, where we observe ENL and NI as isolated blocks. The results shown in Fig FIGREF44 quantify the extent to which communication in one social context (Twitter) can explain cooperation in another social context (co-voting in the European Parliament). A positive value indicates that the matching behavior in the retweet network is similar to the one in the co-voting network, specific for an individual policy area. On the other hand, a negative value implies a negative “correlation” between the retweeting and co-voting of MEPs in the two different contexts. The bars in Fig FIGREF44 correspond to the coefficients from the edge covariate terms of the ERGM, describing the relationship between the retweeting and co-voting behavior of MEPs. The coefficients are aggregated for individual policy areas by means of a meta-analysis. Overall, we observe a positive correlation between retweeting and co-voting, which is significantly different from zero. The strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets. Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union. The only exception, with a significantly negative coefficient, is the area Economic and monetary system. This implies that in the area Economic and monetary system we observe a significant deviation from the usual co-voting patterns. Results from section “sec:coalitionpolicy”, confirm that this is indeed the case. Especially noteworthy are the coalitions between GUE-NGL and Greens-EFA on the left wing, and EFDD and ENL on the right wing. In the section “sec:coalitionpolicy” we interpret these results as a combination of Euroscepticism on both sides, motivated on the left by a skeptical attitude towards the market orientation of the EU, and on the right by a reluctance to give up national sovereignty. Discussion We study cohesion and coalitions in the Eighth European Parliament by analyzing, on one hand, MEPs' co-voting tendencies and, on the other, their retweeting behavior. We reveal that the most cohesive political group in the European Parliament, when it comes to co-voting, is Greens-EFA, closely followed by S&D and EPP. This is consistent with what VoteWatch BIBREF37 reported for the Seventh European Parliament. The non-aligned (NI) come out as the least cohesive group, followed by the Eurosceptic EFDD. Hix and Noury BIBREF11 also report that nationalists and Eurosceptics form the least cohesive groups in the Sixth European Parliament. We reaffirm most of these results with both of the two employed methodologies. 
The only point where the two methodologies disagree is in the level of cohesion for the left-wing GUE-NGL, which is portrayed by ERGM as a much less cohesive group, due to their relatively lower attendance rate. The level of cohesion of the political groups is quite stable across different policy areas and similar conclusions apply. On Twitter we can see results that are consistent with the RCV results for the left-to-center political spectrum. The exception, which clearly stands out, is the right-wing groups ENL and EFDD that seem to be the most cohesive ones. This is the direct opposite of what was observed in the RCV data. We speculate that this phenomenon can be attributed to the fact that European right-wing groups, on a European but also on a national level, rely to a large degree on social media to spread their narratives critical of European integration. We observed the same phenomenon recently during the Brexit campaign BIBREF38 . Along our interpretation the Brexit was “won” to some extent due to these social media activities, which are practically non-existent among the pro-EU political groups. The fact that ENL and EFDD are the least cohesive groups in the European Parliament can be attributed to their political focus. It seems more important for the group to agree on its anti-EU stance and to call for independence and sovereignty, and much less important to agree on other issues put forward in the parliament. The basic pattern of coalition formation, with respect to co-voting, can already be seen in Fig FIGREF2 A: the force-based layout almost completely corresponds to the seating order in the European Parliament (from the left- to the right-wing groups). A more thorough examination shows that the strongest cooperation can be observed, for both methodologies, between EPP, S&D, and ALDE, where EPP and S&D are the two largest groups, while the liberal ALDE plays the role of an intermediary in this context. On the other hand, the role of an intermediary between the far-left GUE-NGL and its center-left neighbor, S&D, is played by the Greens-EFA. These three parties also form a strong coalition in the European Parliament. On the far right of the spectrum, the non-aligned, EFDD, and ENL form another coalition. This behavior was also observed by Hix et al. BIBREF12 , stating that alignments on the left and right have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the European Parliament. When looking at the policy area Economic and monetary system, we see the same coalitions. However, interestingly, EFDD, ENL, and NI often co-vote with the far-left GUE-NGL. This can be attributed to a certain degree of Euroscepticism on both sides: as a criticism of capitalism, on one hand, and as the main political agenda, on the other. This pattern was also discovered by Hix et al. BIBREF12 , who argued that these coalitions emerge from a form of government-opposition dynamics, rooted at the national level, but also reflected at the European level. When studying coalitions on Twitter, the strongest coalitions can be observed on the right of the spectrum (between EFDD, ECR, ENL, and NI). This is, yet again, in contrast to what was observed in the RCV data. The reason lies in the anti-EU messages they tend to collectively spread (retweet) across the network. This behavior forms strong retweet ties, not only within, but also between, these groups. 
For example, MEPs of EFDD mainly retweet MEPs from ECR (with the exception of MEPs from their own group). In contrast to these right-wing coalitions, we find the other coalitions to be consistent with what is observed in the RCV data. The strongest coalitions on the left-to-center part of the axis are those between GUE-NGL, Greens-EFA, and S&D, and between S&D, ALDE, and EPP. These results reaffirm the role of Greens-EFA and ALDE as intermediaries, not only in the European Parliament but also in the debates on social media. Last, but not least, with the ERGM methodology we measure the extent to which the retweet network can explain the co-voting activities in the European Parliament. We compute this for each policy area separately and also over all RCVs. We conclude that the retweet network indeed matches the co-voting behavior, with the exception of one specific policy area. In the area Economic and monetary system, the links in the (overall) retweet network do not match the links in the co-voting network. Moreover, the negative coefficients imply a radically different formation of coalitions in the European Parliament. This is consistent with the results in Figs FIGREF36 and FIGREF37 (the left-hand panels), and is also observed in Fig FIGREF42 (the top charts). From these figures we see that in this particular case, the coalitions are also formed between the right-wing groups and the far-left GUE-NGL. As already explained, we attribute this to the degree of Euroscepticism that these groups share on this particular policy issue. Conclusions In this paper we analyze (co-)voting patterns and social behavior of members of the European Parliament, as well as the interaction between these two systems. More precisely, we analyze a set of 2535 roll-call votes as well as the tweets and retweets of members of the MEPs in the period from October 2014 to February 2016. The results indicate a considerable level of correlation between these two complex systems. This is consistent with previous findings of Cherepnalkoski et al. BIBREF22 , who reconstructed the adherence of MEPs to their respective political or national group solely from their retweeting behavior. We employ two different methodologies to quantify the co-voting patterns: Krippendorff's INLINEFORM0 and ERGM. They were developed in different fields of research, use different techniques, and are based on different assumptions, but in general they yield consistent results. However, there are some differences which have consequences for the interpretation of the results. INLINEFORM0 is a measure of agreement, designed as a generalization of several specialized measures, that can compare different numbers of observations, in our case roll-call votes. It only considers yes/no votes. Absence and abstention by MEPs is ignored. Its baseline ( INLINEFORM1 ), i.e., co-voting by chance, is computed from the yes/no votes of all MEPs on all RCVs. ERGMs are used in social-network analyses to determine factors influencing the edge formation process. In our case an edge between two MEPs is formed when they cast the same yes/no vote within a RCV. It is assumed that a priori each MEP can form a link with any other MEP. No assumptions about the presence or absence of individual MEPs in a voting session are made. Each RCV is analyzed as a separate binary network. The node set is thereby kept constant for each RCV network. 
While the ERGM departs from the originally observed network, where MEPs who did not vote or abstained appear as isolated nodes, links between these nodes are possible within the network sampling process which is part of the ERGM optimization process. The results of several RCVs are aggregated by means of the meta-analysis approach. The baseline (ERGM coefficients INLINEFORM0 ), i.e., co-voting by chance, is computed from a large sample of randomly generated networks. These two different baselines have to be taken into account when interpreting the results of INLINEFORM0 and ERGM. In a typical voting session, 25% of the MEPs are missing or abstaining. When assessing the cohesion of political groups, all INLINEFORM1 values are well above the baseline, and the average INLINEFORM2 . The average ERGM cohesion coefficients, on the other hand, are around the baseline. The difference is even more pronounced for groups with higher non-attendance/abstention rates like GUE-NGL (34%) and NI (40%). When assessing the strength of coalitions between pairs of groups, INLINEFORM3 values are balanced around the baseline, while the ERGM coefficients are mostly negative. The ordering of coalitions from the strongest to the weakest is therefore different when groups with high non-attendance/abstention rates are involved. The choice of the methodology to assess cohesion and coalitions is not obvious. Roll-call voting is used for decisions which demand a simple majority only. One might, however, argue that non-attendance/abstention corresponds to a no vote, or that absence is used strategically. Also, the importance of individual votes, i.e., how high the subject is on the agenda of a political group, affects their attendance, and consequently the perception of their cohesion and the potential to act as a reliable coalition partner. Acknowledgments This work was supported in part by the EC projects SIMPOL (no. 610704) and DOLFINS (no. 640772), and by the Slovenian ARRS programme Knowledge Technologies (no. P2-103).
No
a85c2510f25c7152940b5ac4333a80e0f91ade6e
a85c2510f25c7152940b5ac4333a80e0f91ade6e_0
Q: What insights does the analysis give about the cohesion of political groups in the European parliament? Text: Abstract We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns. Introduction Social-media activities often reflect phenomena that occur in other complex systems. By observing social networks and the content propagated through these networks, we can describe or even predict the interplay between the observed social-media activities and another complex system that is more difficult, if not impossible, to monitor. There are numerous studies reported in the literature that successfully correlate social-media activities to phenomena like election outcomes BIBREF0 , BIBREF1 or stock-price movements BIBREF2 , BIBREF3 . In this paper we study the cohesion and coalitions exhibited by political groups in the Eighth European Parliament (2014–2019). We analyze two entirely different aspects of how the Members of the European Parliament (MEPs) behave in policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting (i.e., endorsing) behavior. We use two diverse datasets in the analysis: the roll-call votes and the Twitter data. A roll-call vote (RCV) is a vote in the parliament in which the names of the MEPs are recorded along with their votes. The RCV data is available as part of the minutes of the parliament's plenary sessions. From this perspective, cohesion is seen as the tendency to co-vote (i.e., cast the same vote) within a group, and a coalition is formed when members of two or more groups exhibit a high degree of co-voting on a subject. The second dataset comes from Twitter. It captures the retweeting behavior of MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between the groups) from a completely different perspective. With over 300 million monthly active users and 500 million tweets posted daily, Twitter is one of the most popular social networks. 
Twitter allows its users to post short messages (tweets) and to follow other users. A user who follows another user is able to read his/her public tweets. Twitter also supports other types of interaction, such as user mentions, replies, and retweets. Of these, retweeting is the most important activity as it is used to share and endorse content created by other users. When a user retweets a tweet, the information about the original author as well as the tweet's content are preserved, and the tweet is shared with the user's followers. Typically, users retweet content that they agree with and thus endorse the views expressed by the original tweeter. We apply two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 , which measures the agreement among observers, or voters in our case. The second one is based on Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach and is often used in social-network analyses. Even though these two methodologies come with two different sets of techniques and are based on different assumptions, they provide consistent results. The main contributions of this paper are as follows: (i) We give general insights into the cohesion of political groups in the Eighth European Parliament, both overall and across different policy areas. (ii) We explore whether coalitions are formed in the same way for different policy areas. (iii) We explore to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. (iv) We employ two statistically sound methodologies and examine the extent to which the results are sensitive to the choice of methodology. While the results are mostly consistent, we show that the differences are due to the different treatment of non-attending and abstaining MEPs by INLINEFORM0 and ERGM. The most novel and interesting aspect of our work is the relationship between the co-voting and the retweeting patterns. The increased use of Twitter by MEPs on days with a roll-call vote session (see Fig FIGREF1 ) is an indicator that these two processes are related. In addition, the force-based layouts of the co-voting network and the retweet network reveal a very similar structure on the left-to-center side of the political spectrum (see Fig FIGREF2 ). They also show a discrepancy on the far-right side of the spectrum, which calls for a more detailed analysis. Related work In this paper we study and relate two very different aspects of how MEPs behave in policy-making processes. First, we look at their co-voting behavior, and second, we examine their retweeting patterns. Thus, we draw related work from two different fields of science. On one hand, we look at how co-voting behavior is analyzed in the political-science literature and, on the other, we explore how Twitter is used to better understand political and policy-making processes. The latter has been more thoroughly explored in the field of data mining (specifically, text mining and network analysis). To the best of our knowledge, this is the first paper that studies legislative behavior in the Eighth European Parliament. The legislative behavior of the previous parliaments was thoroughly studied by Hix, Attina, and others BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . 
These studies found that voting behavior is determined to a large extent—and when viewed over time, increasingly so—by affiliation to a political group, as an organizational reflection of the ideological position. The authors found that the cohesion of political groups in the parliament has increased, while nationality has been less and less of a decisive factor BIBREF12 . The literature also reports that a split into political camps on the left and right of the political spectrum has recently replaced the `grand coalition' between the two big blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. The authors conclude that coalitions are to a large extent formed along the left-to-right axis BIBREF12 . In this paper we analyze the roll-call vote data published in the minutes of the parliament's plenary sessions. For a given subject, the data contains the vote of each MEP present at the respective sitting. Roll-call vote data from the European Parliament has already been extensively studied by other authors, most notably by Hix et al. BIBREF10 , BIBREF13 , BIBREF11 . To be able to study the cohesion and coalitions, authors like Hix, Attina, and Rice BIBREF6 , BIBREF13 , BIBREF14 defined and employed a variety of agreement measures. The most prominent measure is the Agreement Index proposed by Hix et al. BIBREF13 . This measure computes the agreement score from the size of the majority class for a particular vote. The Agreement Index, however, exhibits two drawbacks: (i) it does not account for co-voting by chance, and (ii) without a proper adaptation, it does not accommodate the scenario in which the agreement is to be measured between two different political groups. We employ two statistically sound methodologies developed in two different fields of science. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 . INLINEFORM1 is a measure of the agreement among observers, coders, or measuring instruments that assign values to items or phenomena. It compares the observed agreement to the agreement expected by chance. INLINEFORM2 is used to measure the inter- and self-annotator agreement of human experts when labeling data, and the performance of classification models in machine learning scenarios BIBREF15 . In addition to INLINEFORM3 , we employ Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach, often used in social-network analyses. ERGM can be employed to investigate how different network statistics (e.g., number of edges and triangles) or external factors (e.g., political group membership) govern the network-formation process. The second important aspect of our study is related to analyzing the behavior of participants in social networks, specifically Twitter. Twitter is studied by researchers to better understand different political processes, and in some cases to predict their outcomes. Eom et al. BIBREF1 consider the number of tweets by a party as a proxy for the collective attention to the party, explore the dynamics of the volume, and show that this quantity contains information about an election's outcome. Other studies BIBREF16 reach similar conclusions. Conover et al. BIBREF17 predicted the political alignment of Twitter users in the run-up to the 2010 US elections based on content and network structure. They analyzed the polarization of the retweet and mention networks for the same elections BIBREF18 . Borondo et al. 
BIBREF19 analyzed user activity during the Spanish presidential elections. They additionally analyzed the 2012 Catalan elections, focusing on the interplay between the language and the community structure of the network BIBREF20 . Most existing research, as Larsson points out BIBREF21 , focuses on the online behavior of leading political figures during election campaigns. This paper continues our research on communities that MEPs (and their followers) form on Twitter BIBREF22 . The goal of our research was to evaluate the role of Twitter in identifying communities of influence when the actual communities are known. We represent the influence on Twitter by the number of retweets that MEPs “receive”. We construct two networks of influence: (i) core, which consists only of MEPs, and (ii) extended, which also involves their followers. We compare the detected communities in both networks to the groups formed by the political, country, and language membership of MEPs. The results show that the detected communities in the core network closely match the political groups, while the communities in the extended network correspond to the countries of residence. This provides empirical evidence that analyzing retweet networks can reveal real-world relationships and can be used to uncover hidden properties of the networks. Lazer BIBREF23 highlights the importance of network-based approaches in political science in general by arguing that politics is a relational phenomenon at its core. Some researchers have adopted the network-based approach to investigate the structure of legislative work in the US Congress, including committee and sub-committee membership BIBREF24 , bill co-sponsoring BIBREF25 , and roll-call votes BIBREF26 . More recently, Dal Maso et al. BIBREF27 examined the community structure with respect to political coalitions and government structure in the Italian Parliament. Scherpereel et al. BIBREF28 examined the constituency, personal, and strategic characteristics of MEPs that influence their tweeting behavior. They suggested that Twitter's characteristics, like immediacy, interactivity, spontaneity, personality, and informality, are likely to resonate with political parties across Europe. By fitting regression models, the authors find that MEPs from incohesive groups have a greater tendency to retweet. In contrast to most of these studies, we focus on the Eighth European Parliament, and more importantly, we study and relate two entirely different behavioral aspects, co-voting and retweeting. The goal of this research is to better understand the cohesion and coalition formation processes in the European Parliament by quantifying and comparing the co-voting patterns and social behavior. Methods In this section we present the methods to quantify cohesion and coalitions from the roll-call votes and Twitter activities. Co-voting measured by agreement We first show how the co-voting behaviour of MEPs can be quantified by a measure of the agreement between them. We treat individual RCVs as observations, and MEPs as independent observers or raters. When they cast the same vote, there is a high level of agreement, and when they vote differently, there is a high level of disagreement. We define cohesion as the level of agreement within a political group, a coalition as a voting agreement between political groups, and opposition as a disagreement between different groups. There are many well-known measures of agreement in the literature. 
We selected Krippendorff's Alpha-reliability ( INLINEFORM0 ) BIBREF4 , which is a generalization of several specialized measures. It works for any number of observers, and is applicable to different variable types and metrics (e.g., nominal, ordered, interval, etc.). In general, INLINEFORM1 is defined as follows: INLINEFORM2 where INLINEFORM0 is the actual disagreement between observers (MEPs), and INLINEFORM1 is disagreement expected by chance. When observers agree perfectly, INLINEFORM2 INLINEFORM3 , when the agreement equals the agreement by chance, INLINEFORM4 INLINEFORM5 , and when the observers disagree systematically, INLINEFORM6 INLINEFORM7 . The two disagreement measures are defined as follows: INLINEFORM0 INLINEFORM0 The arguments INLINEFORM0 , and INLINEFORM1 are defined below and refer to the values in the coincidence matrix that is constructed from the RCVs data. In roll-call votes, INLINEFORM2 (and INLINEFORM3 ) is a nominal variable with two possible values: yes and no. INLINEFORM4 is a difference function between the values of INLINEFORM5 and INLINEFORM6 , defined as: INLINEFORM7 The RCVs data has the form of a reliability data matrix: INLINEFORM0 where INLINEFORM0 is the number of RCVs, INLINEFORM1 is the number of MEPs, INLINEFORM2 is the number of votes cast in the voting INLINEFORM3 , and INLINEFORM4 is the actual vote of an MEP INLINEFORM5 in voting INLINEFORM6 (yes or no). A coincidence matrix is constructed from the reliability data matrix, and is in general a INLINEFORM0 -by- INLINEFORM1 square matrix, where INLINEFORM2 is the number of possible values of INLINEFORM3 . In our case, where only yes/no votes are relevant, the coincidence matrix is a 2-by-2 matrix of the following form: INLINEFORM4 A cell INLINEFORM0 accounts for all coincidences from all pairs of MEPs in all RCVs where one MEP has voted INLINEFORM1 and the other INLINEFORM2 . INLINEFORM3 and INLINEFORM4 are the totals for each vote outcome, and INLINEFORM5 is the grand total. The coincidences INLINEFORM6 are computed as: INLINEFORM7 where INLINEFORM0 is the number of INLINEFORM1 pairs in vote INLINEFORM2 , and INLINEFORM3 is the number of MEPs that voted in INLINEFORM4 . When computing INLINEFORM5 , each pair of votes is considered twice, once as a INLINEFORM6 pair, and once as a INLINEFORM7 pair. The coincidence matrix is therefore symmetrical around the diagonal, and the diagonal contains all the equal votes. The INLINEFORM0 agreement is used to measure the agreement between two MEPs or within a group of MEPs. When applied to a political group, INLINEFORM1 corresponds to the cohesion of the group. The closer INLINEFORM2 is to 1, the higher the agreement of the MEPs in the group, and hence the higher the cohesion of the group. We propose a modified version of INLINEFORM0 to measure the agreement between two different groups, INLINEFORM1 and INLINEFORM2 . In the case of a voting agreement between political groups, high INLINEFORM3 is interpreted as a coalition between the groups, whereas negative INLINEFORM4 indicates political opposition. Suppose INLINEFORM0 and INLINEFORM1 are disjoint subsets of all the MEPs, INLINEFORM2 , INLINEFORM3 . The respective number of votes cast by both group members in vote INLINEFORM4 is INLINEFORM5 and INLINEFORM6 . The coincidences are then computed as: INLINEFORM7 where the INLINEFORM0 pairs come from different groups, INLINEFORM1 and INLINEFORM2 . The total number of such pairs in vote INLINEFORM3 is INLINEFORM4 . 
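To make the agreement computation described above concrete, the following is a minimal sketch in R of Krippendorff's Alpha for binary yes/no roll-call data, following the coincidence-matrix construction of this section. The data layout and the names votes and mep_idx are assumptions made for illustration only; the between-group variant, whose description continues just below, only changes which pairs of votes enter the coincidence counts.

```r
# Minimal sketch (assumed data layout): Krippendorff's Alpha for binary yes/no votes.
# 'votes' is an n_RCV x n_MEP character matrix with entries "yes", "no" or NA
# (absent/abstained); 'mep_idx' selects the columns of the MEPs whose agreement
# (e.g., within one political group) is measured.
krippendorff_alpha <- function(votes, mep_idx) {
  vals <- c("yes", "no")
  o <- matrix(0, 2, 2, dimnames = list(vals, vals))      # coincidence matrix
  for (i in seq_len(nrow(votes))) {
    v <- votes[i, mep_idx]
    v <- v[!is.na(v)]
    m <- length(v)                                       # MEPs voting in RCV i
    if (m < 2) next
    n_yes <- sum(v == "yes"); n_no <- m - n_yes
    # every ordered pair of votes from distinct MEPs contributes 1 / (m - 1)
    o["yes", "yes"] <- o["yes", "yes"] + n_yes * (n_yes - 1) / (m - 1)
    o["no",  "no"]  <- o["no",  "no"]  + n_no  * (n_no  - 1) / (m - 1)
    o["yes", "no"]  <- o["yes", "no"]  + n_yes * n_no       / (m - 1)
    o["no",  "yes"] <- o["no",  "yes"] + n_no  * n_yes      / (m - 1)
  }
  n_c <- rowSums(o); n_tot <- sum(o)
  D_o <- (o["yes", "no"] + o["no", "yes"]) / n_tot               # observed disagreement
  D_e <- 2 * n_c["yes"] * n_c["no"] / (n_tot * (n_tot - 1))      # disagreement by chance
  unname(1 - D_o / D_e)
}
```

Votes recorded as NA (absent or abstained) are simply dropped from each RCV in this sketch, which matches the treatment of missing votes that the comparison with ERGM discusses later in the paper.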
The actual number INLINEFORM5 of the pairs is multiplied by INLINEFORM6 so that the total contribution of vote INLINEFORM7 to the coincidence matrix is INLINEFORM8 . A network-based measure of co-voting In this section we describe a network-based approach to analyzing the co-voting behavior of MEPs. For each roll-call vote we form a network, where the nodes in the network are MEPs, and an undirected edge between two MEPs is formed when they cast the same vote. We are interested in the factors that determine the cohesion within political groups and coalition formation between political groups. Furthermore, we investigate to what extent communication in a different social context, i.e., the retweeting behavior of MEPs, can explain the co-voting of MEPs. For this purpose we apply an Exponential Random Graph Model BIBREF5 to individual roll-call vote networks, and aggregate the results by means of the meta-analysis. ERGMs allow us to investigate the factors relevant for the network-formation process. Network metrics, as described in the abundant literature, serve to gain information about the structural properties of the observed network. A model investigating the processes driving the network formation, however, has to take into account that there can be a multitude of alternative networks. If we are interested in the parameters influencing the network formation we have to consider all possible networks and measure their similarity to the originally observed network. The family of ERGMs builds upon this idea. Assume a random graph INLINEFORM0 , in the form of a binary adjacency matrix, made up of a set of INLINEFORM1 nodes and INLINEFORM2 edges INLINEFORM3 where, similar to a binary choice model, INLINEFORM4 if the nodes INLINEFORM5 are connected and INLINEFORM6 if not. Since network data is by definition relational and thus violates assumptions of independence, classical binary choice models, like logistic regression, cannot be applied in this context. Within an ERGM, the probability for a given network is modelled by DISPLAYFORM0 where INLINEFORM0 is the vector of parameters and INLINEFORM1 is the vector of network statistics (counts of network substructures), which are a function of the adjacency matrix INLINEFORM2 . INLINEFORM3 is a normalization constant corresponding to the sample of all possible networks, which ensures a proper probability distribution. Evaluating the above expression allows us to make assertions about whether and how specific nodal attributes influence the network formation process. These nodal attributes can be endogenous (dyad-dependent parameters) to the network, like the in- and out-degrees of a node, or exogenous (dyad-independent parameters), such as the party affiliation or the country of origin in our case. An alternative formulation of the ERGM provides the interpretation of the coefficients. We introduce the change statistic, which is defined as the change in the network statistics when an edge between nodes INLINEFORM0 and INLINEFORM1 is added or not. 
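As a toy illustration of the statistics vector mentioned above (the counts of network substructures that enter the ERGM probability), the following R sketch, with a hypothetical four-node graph and group labels, computes edge, same-group edge, and triangle counts; the change statistic just introduced, and defined formally next, is simply the difference between two such vectors when a single edge is toggled.

```r
# Toy illustration (hypothetical graph and group labels): the ERGM statistics
# vector as counts of network substructures -- edges, same-group edges, triangles.
adj <- matrix(0, 4, 4)
adj[1, 2] <- adj[2, 1] <- 1
adj[2, 3] <- adj[3, 2] <- 1
group <- c("A", "A", "B", "B")

g_stats <- function(adj, group) {
  up <- upper.tri(adj)
  edges     <- sum(adj[up])
  samegroup <- sum(adj[up] * outer(group, group, "==")[up])
  triangles <- sum(diag(adj %*% adj %*% adj)) / 6   # trace(A^3)/6 for a simple undirected graph
  c(edges = edges, samegroup = samegroup, triangles = triangles)
}

g_stats(adj, group)                                  # edges = 2, samegroup = 1, triangles = 0
adj2 <- adj; adj2[1, 3] <- adj2[3, 1] <- 1           # toggle the edge {1, 3} on
g_stats(adj2, group) - g_stats(adj, group)           # change: +1 edge, +1 triangle, same-group unchanged
```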
If INLINEFORM2 and INLINEFORM3 denote the vectors of counts of network substructures when the edge is added or not, the change statistic is defined as follows: INLINEFORM4 With this at hand it can be shown that the distribution of the variable INLINEFORM0 , conditional on the rest of the graph INLINEFORM1 , corresponds to: INLINEFORM2 This implies on the one hand that the probability depends on INLINEFORM0 via the change statistic INLINEFORM1 , and on the other hand, that each coefficient within the vector INLINEFORM2 represents an increase in the conditional log-odds ( INLINEFORM3 ) of the graph when the corresponding element in the vector INLINEFORM4 increases by one. The need to condition the probability on the rest of the network can be illustrated by a simple example. The addition (removal) of a single edge alters the network statistics. If a network has only edges INLINEFORM5 and INLINEFORM6 , the creation of an edge INLINEFORM7 would not only add an additional edge but would also alter the count for other network substructures included in the model. In this example, the creation of the edge INLINEFORM8 also increases the number of triangles by one. The coefficients are transformed into probabilities with the logistic function: INLINEFORM9 For example, in the context of roll-call votes, the probability that an additional co-voting edge is formed between two nodes (MEPs) of the same political group is computed with that equation. In this context, the nodematch (nodemix) coefficients of the ERGM (described in detail below) therefore refer to the degree of homophilous (heterophilous) matching of MEPs with regard to their political affiliation, or, expressed differently, the propensity of MEPs to co-vote with other MEPs of their respective political group or another group. A positive coefficient reflects an increased chance that an edge is formed between two nodes with the respective properties, like group affiliation, given all other parameters unchanged. Or, put differently, a positive coefficient implies that the probability of observing a network with a higher number of corresponding pairs relative to the hypothetical baseline network is higher than the probability of observing the baseline network itself BIBREF31 . For an intuitive interpretation, a log-odds value of 0 corresponds to the even chance probability of INLINEFORM0 . Log-odds of INLINEFORM1 correspond to an increase of probability by INLINEFORM2 , whereas log-odds of INLINEFORM3 correspond to a decrease of probability by INLINEFORM4 . The computational challenge of estimating ERGMs is to a large degree due to the estimation of the normalizing constant. The number of possible networks is already extremely large for very small networks and the computation is simply not feasible. Therefore, an appropriate sample has to be found, ideally covering the most probable areas of the probability distribution. For this we make use of a method from the Markov Chain Monte Carlo (MCMC) family, namely the Metropolis-Hastings algorithm. The idea behind this algorithm is to generate and sample highly weighted random networks departing from the observed network. The Metropolis-Hastings algorithm is an iterative algorithm which samples from the space of possible networks by randomly adding or removing edges from the starting network conditional on its density. If the likelihood (in the ERGM context also denoted as the weight) of the newly generated network is higher than that of the departure network, it is retained; otherwise it is discarded. 
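The interpretation above can be made concrete with a short R sketch: the logistic function maps conditional log-odds to probabilities, and the Metropolis-Hastings decision for a proposed edge toggle only needs the change statistic, because the normalizing constant cancels in the likelihood ratio. The names theta and delta are hypothetical, and the standard algorithm also accepts a less likely proposal with probability equal to that ratio, a probabilistic version of the retain-or-discard rule described above.

```r
# Log-odds to probabilities with the logistic function (plogis in base R).
plogis(0)      # 0.50 -- log-odds of 0: even chance
plogis(1)      # 0.73 -- log-odds of +1
plogis(-1)     # 0.27 -- log-odds of -1

# One Metropolis-Hastings decision for a proposed edge toggle (sketch): 'delta'
# is g(proposed network) - g(current network) and 'theta' the coefficient vector.
mh_accept <- function(theta, delta) {
  log_ratio <- sum(theta * delta)      # log of the likelihood ratio; the constant Z cancels
  log(runif(1)) < log_ratio            # TRUE: keep the proposed network
}
```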
In the former case, the algorithm starts anew from the newly generated network. Otherwise, the departure network is used again. Repeating this procedure sufficiently often and summing the weights associated with the stored (sampled) networks allows us to compute an approximation of the denominator in equation EQREF18 (normalizing constant). The algorithm starts sampling from the originally observed network INLINEFORM0 . The optimization of the coefficients is carried out simultaneously, likewise with the Metropolis-Hastings algorithm. At the beginning, starting values have to be supplied. For the study at hand we used the “ergm” library from the statistical R software package BIBREF5 , implementing the Gibbs-Sampling algorithm BIBREF32 , which is a special case of the Metropolis-Hastings algorithm outlined above. In order to answer our question about the importance of the factors which drive the network formation process in the roll-call co-voting network, the ERGM is specified with the following parameters: nodematch country: This parameter adds one network statistic to the model, i.e., the number of edges INLINEFORM0 where INLINEFORM1 . The coefficient indicates the homophilous mixing behavior of MEPs with respect to their country of origin. In other words, this coefficient indicates how relevant nationality is in the formation of edges in the co-voting network. nodematch national party: This parameter adds one network statistic to the model: the number of edges INLINEFORM0 with INLINEFORM1 . The coefficient indicates the homophilous mixing behavior of the MEPs with regard to their party affiliation at the national level. In the context of this study, this coefficient can be interpreted as an indicator for within-party cohesion at the national level. nodemix EP group: This parameter adds one network statistic for each pair of European political groups. These coefficients shed light on the degree of coalitions between different groups, as well as the within-group cohesion. Given that there are nine groups in the European Parliament, this parameter adds in total 81 statistics to the model. edge covariate Twitter: This parameter corresponds to a square matrix with the same dimensions as the adjacency matrix of the network, whose entries are the numbers of mutual retweets between the MEPs. It provides insight into the extent to which communication in one social context (Twitter) can explain cooperation in another social context (co-voting in RCVs). An ERGM as specified above is estimated for each of the 2535 roll-call votes. Each roll-call vote is thereby interpreted as a binary network and as an independent study. It is assumed that a priori each MEP could possibly form an edge with each other MEP in the context of a roll-call vote. Assumptions about the presence or absence of individual MEPs in a voting session are not made. In other words, the dimensions of the adjacency matrix (the node set), and therefore the distribution from which new networks are drawn, are kept constant over all RCVs and therefore for every ERGM. The ERGM results therefore implicitly generalize to the case where potentially all MEPs are present and could be voting. Not voting is incorporated implicitly by the disconnectedness of a node. The coefficients of the 2535 roll-call vote studies are aggregated by means of a meta-analysis approach proposed by Lubbers BIBREF33 and Snijders et al. BIBREF34 . We are interested in average effect sizes of different matching patterns over different topics and overall. 
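A hedged sketch of how one such per-RCV specification looks with the “ergm” library mentioned above is given below; the object names covote_adj, mep_country, mep_party, mep_group, and retweet_mat are hypothetical inputs. For brevity, the aggregation of the per-RCV coefficients is shown as a simple fixed-effect, inverse-variance weighted mean, whereas the study itself uses the random-effects meta-analysis of Lubbers and Snijders et al., described in more detail next.

```r
# Sketch of the per-RCV specification with the "ergm" library mentioned above.
# 'covote_adj' (binary co-voting adjacency for one RCV), 'mep_country',
# 'mep_party', 'mep_group' and 'retweet_mat' are hypothetical input objects.
library(network)
library(ergm)

covote_net <- network(covote_adj, directed = FALSE)
covote_net %v% "country"        <- mep_country          # exogenous nodal attributes
covote_net %v% "national_party" <- mep_party
covote_net %v% "ep_group"       <- mep_group

fit <- ergm(covote_net ~ nodematch("country") +         # homophily by country of origin
                         nodematch("national_party") +  # national-party cohesion
                         nodemix("ep_group") +          # within- and between-group co-voting
                         edgecov(retweet_mat))          # mutual retweets as an edge covariate
summary(fit)

# Simplified aggregation over RCVs: a fixed-effect, inverse-variance weighted mean
# of one coefficient across the per-RCV fits (estimates 'est' with standard errors 'se').
pool_coefficient <- function(est, se) {
  w <- 1 / se^2
  c(estimate = sum(w * est) / sum(w), se = sqrt(1 / sum(w)))
}
```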
Considering the number of RCVs, it seems straightforward to interpret the different RCV networks as multiplex networks and collapse them into one weighted network, which could then be analyzed by means of a valued ERGM BIBREF35 . There are, however, two reasons why we chose the meta-analysis approach instead. First, aggregating the RCV data results in an extremely dense network, leading to severe convergence (degeneracy) problems for the ERGM. Second, the RCV data contains information about the different policy areas the individual votes were about. Since we are interested in how the coalition formation in the European Parliament differs over different areas, a method is needed that allows for an ex-post analysis of the corresponding results. We therefore opted for the meta-analysis approach by Lubbers and Snijders et al. This approach allows us to summarize the results by decomposing the coefficients into average effects and (class) subject-specific deviations. The different ERGM runs for each RCV are thereby regarded as different studies with identical samples that are combined to obtain a general overview of effect sizes. The meta-regression model is defined as: INLINEFORM0 Here INLINEFORM0 is a parameter estimate for class INLINEFORM1 , and INLINEFORM2 is the average coefficient. INLINEFORM3 denotes the normally distributed deviation of the class INLINEFORM4 with a mean of 0 and a variance of INLINEFORM5 . INLINEFORM6 is the estimation error of the parameter value INLINEFORM7 from the ERGM. The meta-analysis model is fitted by an iterated, weighted, least-squares model in which the observations are weighted by the inverse of their variances. For the overall nodematch between political groups, we weighted the coefficients by group sizes. The results from the meta-analysis can be interpreted as if they stemmed from an individual ERGM run. In our study, the meta-analysis was performed using the RSiena library BIBREF36 , which implements the method proposed by Lubbers and Snijders et al. BIBREF33 , BIBREF34 . Measuring cohesion and coalitions on Twitter The retweeting behavior of MEPs is captured by their retweet network. Each MEP active on Twitter is a node in this network. An edge in the network between two MEPs exists when one MEP has retweeted the other. The weight of the edge is the number of retweets between the two MEPs. The resulting retweet network is an undirected, weighted network. We measure the cohesion of a political group INLINEFORM0 as the average retweets, i.e., the ratio of the number of retweets between the MEPs in the group INLINEFORM1 to the number of MEPs in the group INLINEFORM2 . The higher the ratio, the more each MEP (on average) retweets the MEPs from the same political group, hence, the higher the cohesion of the political group. The definition of the average retweets ( INLINEFORM3 ) of a group INLINEFORM4 is: INLINEFORM5 This measure of cohesion captures the aggregate retweeting behavior of the group. If we consider retweets as endorsements, a larger number of retweets within the group is an indicator of agreement between the MEPs in the group. It does not take into account the patterns of retweeting within the group, thus ignoring the social sub-structure of the group. This is a potentially interesting direction and we leave it for future work. We employ an analogous measure for the strength of coalitions in the retweet network. 
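The following is a minimal sketch of the two Twitter measures in R, under the assumption that rt is a symmetric matrix of retweet counts between MEPs (with a zero diagonal) and group is a vector of their political group labels; the between-group function implements the analogous coalition-strength measure defined in the text immediately below.

```r
# Average retweets within a group g: retweets among its MEPs per group member.
avg_retweets_within <- function(rt, group, g) {
  idx <- which(group == g)
  sum(rt[idx, idx]) / 2 / length(idx)          # symmetric matrix: each pair is counted twice
}

# Average retweets between groups g1 and g2: cross-group retweets per MEP of both groups.
avg_retweets_between <- function(rt, group, g1, g2) {
  i1 <- which(group == g1); i2 <- which(group == g2)
  sum(rt[i1, i2]) / (length(i1) + length(i2))
}
```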
The coalition strength between two groups INLINEFORM0 and INLINEFORM1 is the ratio of the number of retweets from one group to the other (but not within groups) INLINEFORM2 to the total number of MEPs in both groups, INLINEFORM3 . The definition of the average retweets ( INLINEFORM4 ) between groups INLINEFORM5 and INLINEFORM6 is: INLINEFORM7 Cohesion of political groups In this section we first report on the level of cohesion of the European Parliament's groups by analyzing the co-voting through the agreement and ERGM measures. Next, we explore two important policy areas, namely Economic and monetary system and State and evolution of the Union. Finally, we analyze the cohesion of the European Parliament's groups on Twitter. Existing research by Hix et al. BIBREF10 , BIBREF13 , BIBREF11 shows that the cohesion of the European political groups has been rising since the 1990s, and the level of cohesion remained high even after the EU's enlargement in 2004, when the number of MEPs increased from 626 to 732. We measure the co-voting cohesion of the political groups in the Eighth European Parliament using Krippendorff's Alpha—the results are shown in Fig FIGREF30 (panel Overall). The Greens-EFA have the highest cohesion of all the groups. This finding is in line with an analysis of previous compositions of the Fifth and Sixth European Parliaments by Hix and Noury BIBREF11 , and the Seventh by VoteWatch BIBREF37 . They are closely followed by the S&D and EPP. Hix and Noury reported on the high cohesion of S&D in the Fifth and Sixth European Parliaments, and we also observe this in the current composition. They also reported a slightly less cohesive EPP-ED. This group split in 2009 into EPP and ECR. VoteWatch reports EPP to have cohesion on a par with Greens-EFA and S&D in the Seventh European Parliament. The cohesion level we observe in the current European Parliament is also similar to the level of Greens-EFA and S&D. The catch-all group of the non-aligned (NI) comes out as the group with the lowest cohesion. In addition, among the least cohesive groups in the European Parliament are the Eurosceptic EFDD, which includes the British UKIP led by Nigel Farage, and the ENL, whose largest party is the French National Front, led by Marine Le Pen. Similarly, Hix and Noury found that the least cohesive groups in the Seventh European Parliament are the nationalists and Eurosceptics. The Eurosceptic IND/DEM, which participated in the Sixth European Parliament, transformed into the current EFDD, while the nationalistic UEN was dissolved in 2009. We also measure the voting cohesion of the European Parliament groups using an ERGM, a network-based method—the results are shown in Fig FIGREF31 (panel Overall). The cohesion results obtained with ERGM are comparable to the results based on agreement. In this context, the parameters estimated by the ERGM refer to the matching of MEPs who belong to the same political group (one parameter per group). The parameters measure the homophilous matching between MEPs who have the same political affiliation. A positive value for the estimated parameter indicates that the co-voting of MEPs from that group is greater than what is expected by chance, where the expected number of co-voting links by chance in a group is taken to be uniformly random. A negative value indicates that there are fewer co-voting links within a group than expected by chance. 
Even though INLINEFORM0 and ERGM compute scores relative to what is expected by chance, they refer to different interpretations of chance. INLINEFORM1 's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group, knowing the votes of these MEPs on all RCVs. ERGM's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group on a given RCV, knowing the network-related properties of the co-voting network on that particular RCV. The main difference between INLINEFORM2 and ERGM, though, is the treatment of non-voting and abstained MEPs. INLINEFORM3 considers only the yes/no votes, and consequently, agreements by the voting MEPs of the same groups are considerably higher than co-voting by chance. ERGM, on the other hand, always considers all MEPs, and non-voting and abstained MEPs are treated as disconnected nodes. The level of co-voting by chance is therefore considerably lower, since there is often a large fraction of MEPs that do not attend or abstain. As with INLINEFORM0 , Greens-EFA, S&D, and EPP exhibit the highest cohesion, even though their ranking is permuted when compared to the ranking obtained with INLINEFORM1 . At the other end of the scale, we observe the same situation as with INLINEFORM2 . The non-aligned members NI have the lowest cohesion, followed by EFDD and ENL. The only place where the two methods disagree is the level of cohesion of GUE-NGL. The Alpha attributes a rather high level of cohesion to GUE-NGL, on a par with ALDE, whereas the ERGM attributes a much lower cohesion to them. The reason for this difference is the relatively high abstention rate of GUE-NGL. Whereas the overall fraction of non-attending and abstaining MEPs across all RCVs and all political groups is 25%, the GUE-NGL abstention rate is 34%. This is reflected in an above-average cohesion by INLINEFORM0 , where only yes/no votes are considered, and in a relatively lower, below-average cohesion by ERGM. In the latter case, the non-attendance is interpreted as non-cohesive voting of the political group as a whole. In addition to the overall cohesion, we also focus on two selected policy areas. The cohesion of the political groups related to these two policy areas is shown in the first two panels in Fig FIGREF30 ( INLINEFORM0 ) and Fig FIGREF31 (ERGM). The most important observation is that the level of cohesion of the political groups is very stable across different policy areas. These results are corroborated by both methodologies. Similar to the overall cohesion, the most cohesive political groups are the S&D, Greens-EFA, and EPP. The least cohesive group is the NI, followed by the ENL and EFDD. The two methodologies agree on the level of cohesion for all the political groups, except for GUE-NGL, due to a lower attendance rate. We determine the cohesion of political groups on Twitter by using the average number of retweets between MEPs within the same group. The results are shown in Fig FIGREF33 . The right-wing ENL and EFDD come out as the most cohesive groups, while all the other groups have a far lower average number of retweets. MEPs from ENL and EFDD post by far the largest number of retweets (over 240), and at the same time over 94% of their retweets are directed to MEPs from the same group. Moreover, these two groups stand out in the way the retweets are distributed within the group. A large portion of the retweets of EFDD (1755) go to Nigel Farage, the leader of the group. 
Likewise, a very large portion of retweets of ENL (2324) go to Marine Le Pen, the leader of the group. Farage and Le Pen are by far the two most retweeted MEPs, with the third one having only 666 retweets. Coalitions in the European Parliament Coalition formation in the European Parliament is largely determined by ideological positions, reflected in the degree of cooperation of parties at the national and European levels. The observation of ideological inclinations in the coalition formation within the European Parliament was already made by other authors BIBREF11 and is confirmed in this study. The basic patterns of coalition formation in the European Parliament can already be seen in the co-voting network in Fig FIGREF2 A. It is remarkable that the degree of attachment between the political groups, which indicates the degree of cooperation in the European Parliament, nearly exactly corresponds to the left-to-right seating order. The liberal ALDE seems to have an intermediary role between the left and right parts of the spectrum in the parliament. Between the extreme-left (GUE-NGL) and center-left (S&D) groups, this function seems to be occupied by Greens-EFA. The non-aligned members NI, as well as the Eurosceptic EFDD and ENL, seem to alternately tip the balance on both poles of the political spectrum. Being ideologically more inclined to vote with other conservative and right-wing groups (EPP, ECR), they sometimes also cooperate with the extreme left-wing group (GUE-NGL) with which they share their Euroscepticism as a common denominator. Figs FIGREF36 and FIGREF37 give a more detailed understanding of the coalition formation in the European Parliament. Fig FIGREF36 displays the degree of agreement or cooperation between political groups measured by Krippendorff's INLINEFORM0 , whereas Fig FIGREF37 is based on the results from the ERGM. We first focus on the overall results displayed in the right-hand plots of Figs FIGREF36 and FIGREF37 . The strongest degrees of cooperation are observed, with both methods, between the two major parties (EPP and S&D) on the one hand, and the liberal ALDE on the other. Furthermore, we see a strong propensity for Greens-EFA to vote with the Social Democrats (5th strongest coalition by INLINEFORM0 , and 3rd by ERGM) and the GUE-NGL (3rd strongest coalition by INLINEFORM1 , and 5th by ERGM). These results underline the role of ALDE and Greens-EFA as intermediaries for the larger groups to achieve a majority. Although the two largest groups together have 405 seats and thus significantly more than the 376 votes needed for a simple majority, the degree of cooperation between the two major groups is ranked only as the fourth strongest by both methods. This suggests that these two political groups find it easier to negotiate deals with smaller counterparts than with the other large group. This observation was also made by Hix et al. BIBREF12 , who noted that alignments on the left and right of the political spectrum have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. Next, we focus on the coalition formation within the two selected policy areas. The area State and Evolution of the Union is dominated by cooperation between the two major groups, S&D and EPP, as well as ALDE. We also observe a high degree of cooperation between groups that are generally regarded as integration-friendly, like Greens-EFA and GUE-NGL. 
We see, particularly in Fig FIGREF36 , a relatively high degree of cooperation between groups considered as Eurosceptic, like ECR, EFDD, ENL, and the group of non-aligned members. The dichotomy between supporters and opponents of European integration is even more pronounced within the policy area Economic and Monetary System. In fact, we are taking a closer look specifically at these two areas as they are, at the same time, both contentious and important. Both methods rank the cooperation between S&D and EPP on the one hand, and ALDE on the other, as the strongest. We also observe a certain degree of unanimity among the Eurosceptic and right-wing groups (EFDD, ENL, and NI) in this policy area. This seems plausible, as these groups were (especially in the aftermath of the global financial crisis and the subsequent European debt crisis) in fierce opposition to further payments to financially troubled member states. However, we also observe a number of strong coalitions that might, at first glance, seem unusual, specifically involving the left-wing group GUE-NGL on the one hand, and the right-wing EFDD, ENL, and NI on the other. These links also show up in the network plot in Fig FIGREF2 A. This might be attributable to a certain degree of Euroscepticism on both sides: rooted in criticism of capitalism on the left, and at least partly a raison d'être on the right. Hix et al. BIBREF11 discovered this pattern as well, and proposed an additional explanation—these coalitions also relate to a form of government-opposition dynamic that is rooted at the national level, but is reflected in voting patterns at the European level. In general, we observe two main differences between the INLINEFORM0 and ERGM results: the baseline cooperation as estimated by INLINEFORM1 is higher, and the ordering of coalitions from the strongest to the weakest is not exactly the same. The reason is the same as for the cohesion, namely different treatment of non-voting and abstaining MEPs. When they are ignored, as by INLINEFORM2 , the baseline level of inter-group co-voting is higher. When non-attending and abstaining are treated as voting differently, as by ERGM, it is considerably more difficult to achieve co-voting coalitions, especially when there are on average 25% of MEPs that do not attend or abstain. Groups with higher non-attendance rates, such as GUE-NGL (34%) and NI (40%), are less likely to form coalitions, and therefore have relatively lower ERGM coefficients (Fig FIGREF37 ) than INLINEFORM3 scores (Fig FIGREF36 ). The first insight into coalition formation on Twitter can be observed in the retweet network in Fig FIGREF2 B. The ideological left-to-right alignment of the political groups is reflected in the retweet network. Fig FIGREF40 shows the strength of the coalitions on Twitter, as estimated by the number of retweets between MEPs from different groups. The strongest coalitions are formed between the right-wing groups EFDD and ECR, as well as ENL and NI. At first, this might come as a surprise, since these groups do not form strong coalitions in the European Parliament, as can be seen in Figs FIGREF36 and FIGREF37 . On the other hand, the MEPs from these groups are very active Twitter users. As previously stated, MEPs from ENL and EFDD post the largest number of retweets. Moreover, 63% of their retweets outside of ENL are retweets of NI. This effect is even more pronounced with MEPs from EFDD, whose retweets of ECR account for 74% of their retweets from other groups. 
In addition to these strong coalitions on the right wing, we find coalition patterns to be very similar to the voting coalitions observed in the European Parliament, seen in Figs FIGREF36 and FIGREF37 . The strongest coalitions, which come immediately after the right-wing coalitions, are between Greens-EFA on the one hand, and GUE-NGL and S&D on the other, as well as ALDE on the one hand, and EPP and S&D on the other. These results corroborate the role of ALDE and Greens-EFA as intermediaries in the European Parliament, not only in the legislative process, but also in the debate on social media. To better understand the formation of coalitions in the European Parliament and on Twitter, we examine the strongest cooperation between political groups at three different thresholds. For co-voting coalitions in the European Parliament we choose a high threshold of INLINEFORM0 , a medium threshold of INLINEFORM1 , and a negative threshold of INLINEFORM2 (which corresponds to strong oppositions). In this way we observe the overall patterns of coalition and opposition formation in the European Parliament and in the two specific policy areas. For cooperation on Twitter, we choose a high threshold of INLINEFORM3 , a medium threshold of INLINEFORM4 , and a very low threshold of INLINEFORM5 . The strongest cooperations in the European Parliament over all policy areas are shown in Fig FIGREF42 G. It comes as no surprise that the strongest cooperations are within the groups (on the diagonal). Moreover, we again observe GUE-NGL, S&D, Greens-EFA, ALDE, and EPP as the most cohesive groups. In Fig FIGREF42 H, we observe coalitions forming along the diagonal, which represents the seating order in the European Parliament. Within this pattern, we observe four blocks of coalitions: on the left, between GUE-NGL, S&D, and Greens-EFA; in the center, between S&D, Greens-EFA, ALDE, and EPP; on the right-center between ALDE, EPP, and ECR; and finally, on the far-right between ECR, EFDD, ENL, and NI. Fig FIGREF42 I shows the strongest opposition between groups that systematically disagree in voting. The strongest disagreements are between left- and right-aligned groups, but not between the left-most and right-most groups, in particular, between GUE-NGL and ECR, but also between S&D and Greens-EFA on one side, and ENL and NI on the other. In the area of Economic and monetary system we see a strong cooperation between EPP and S&D (Fig FIGREF42 A), which is on a par with the cohesion of the most cohesive groups (GUE-NGL, S&D, Greens-EFA, ALDE, and EPP), and is above the cohesion of the other groups. As pointed out in the section “sec:coalitionpolicy”, there is a strong separation into two blocks between supporters and opponents of European integration, which is even more clearly observed in Fig FIGREF42 B. On one hand, we observe cooperation between S&D, ALDE, EPP, and ECR, and on the other, cooperation between GUE-NGL, Greens-EFA, EFDD, ENL, and NI. This division into blocks is seen again in Fig FIGREF42 C, which shows the strongest disagreements. Here, we observe two blocks composed of S&D, EPP, and ALDE on one hand, and GUE-NGL, EFDD, ENL, and NI on the other, which are in strong opposition to each other. In the area of State and Evolution of the Union we again observe a strong division into two blocks (see Fig FIGREF42 E). This differs from the Economic and monetary system, however: there we observe far-left and far-right cooperation, whereas here the division runs along the traditional left-right axis. 
The patterns of coalitions forming on Twitter closely resemble those in the European Parliament. In Fig FIGREF42 J we see that the strongest degrees of cooperation on Twitter are within the groups. The only group with low cohesion is the NI, whose members have only seven retweets between them. The coalitions on Twitter follow the seating order in the European Parliament remarkably well (see Fig FIGREF42 K). What is striking is that the same blocks form on the left, center, and on the center-right, both in the European Parliament and on Twitter. The largest difference between the coalitions in the European Parliament and on Twitter is on the far-right, where we observe ENL and NI as isolated blocks. The results shown in Fig FIGREF44 quantify the extent to which communication in one social context (Twitter) can explain cooperation in another social context (co-voting in the European Parliament). A positive value indicates that the matching behavior in the retweet network is similar to the one in the co-voting network, specific for an individual policy area. On the other hand, a negative value implies a negative “correlation” between the retweeting and co-voting of MEPs in the two different contexts. The bars in Fig FIGREF44 correspond to the coefficients from the edge covariate terms of the ERGM, describing the relationship between the retweeting and co-voting behavior of MEPs. The coefficients are aggregated for individual policy areas by means of a meta-analysis. Overall, we observe a positive correlation between retweeting and co-voting, which is significantly different from zero. The strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets. Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union. The only exception, with a significantly negative coefficient, is the area Economic and monetary system. This implies that in the area Economic and monetary system we observe a significant deviation from the usual co-voting patterns. Results from section “sec:coalitionpolicy”, confirm that this is indeed the case. Especially noteworthy are the coalitions between GUE-NGL and Greens-EFA on the left wing, and EFDD and ENL on the right wing. In the section “sec:coalitionpolicy” we interpret these results as a combination of Euroscepticism on both sides, motivated on the left by a skeptical attitude towards the market orientation of the EU, and on the right by a reluctance to give up national sovereignty. Discussion We study cohesion and coalitions in the Eighth European Parliament by analyzing, on one hand, MEPs' co-voting tendencies and, on the other, their retweeting behavior. We reveal that the most cohesive political group in the European Parliament, when it comes to co-voting, is Greens-EFA, closely followed by S&D and EPP. This is consistent with what VoteWatch BIBREF37 reported for the Seventh European Parliament. The non-aligned (NI) come out as the least cohesive group, followed by the Eurosceptic EFDD. Hix and Noury BIBREF11 also report that nationalists and Eurosceptics form the least cohesive groups in the Sixth European Parliament. We reaffirm most of these results with both of the two employed methodologies. 
The only point where the two methodologies disagree is in the level of cohesion for the left-wing GUE-NGL, which is portrayed by ERGM as a much less cohesive group, due to their relatively lower attendance rate. The level of cohesion of the political groups is quite stable across different policy areas and similar conclusions apply. On Twitter we can see results that are consistent with the RCV results for the left-to-center political spectrum. The exception, which clearly stands out, is the right-wing groups ENL and EFDD that seem to be the most cohesive ones. This is the direct opposite of what was observed in the RCV data. We speculate that this phenomenon can be attributed to the fact that European right-wing groups, on a European but also on a national level, rely to a large degree on social media to spread their narratives critical of European integration. We observed the same phenomenon recently during the Brexit campaign BIBREF38 . According to our interpretation, Brexit was “won” to some extent due to these social media activities, which are practically non-existent among the pro-EU political groups. The fact that ENL and EFDD are the least cohesive groups in the European Parliament can be attributed to their political focus. It seems more important for the group to agree on its anti-EU stance and to call for independence and sovereignty, and much less important to agree on other issues put forward in the parliament. The basic pattern of coalition formation, with respect to co-voting, can already be seen in Fig FIGREF2 A: the force-based layout almost completely corresponds to the seating order in the European Parliament (from the left- to the right-wing groups). A more thorough examination shows that the strongest cooperation can be observed, for both methodologies, between EPP, S&D, and ALDE, where EPP and S&D are the two largest groups, while the liberal ALDE plays the role of an intermediary in this context. On the other hand, the role of an intermediary between the far-left GUE-NGL and its center-left neighbor, S&D, is played by the Greens-EFA. These three parties also form a strong coalition in the European Parliament. On the far right of the spectrum, the non-aligned, EFDD, and ENL form another coalition. This behavior was also observed by Hix et al. BIBREF12 , who state that alignments on the left and right have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the European Parliament. When looking at the policy area Economic and monetary system, we see the same coalitions. However, interestingly, EFDD, ENL, and NI often co-vote with the far-left GUE-NGL. This can be attributed to a certain degree of Euroscepticism on both sides: as a criticism of capitalism, on one hand, and as the main political agenda, on the other. This pattern was also discovered by Hix et al. BIBREF12 , who argued that these coalitions emerge from a form of government-opposition dynamics, rooted at the national level, but also reflected at the European level. When studying coalitions on Twitter, the strongest coalitions can be observed on the right of the spectrum (between EFDD, ECR, ENL, and NI). This is, yet again, in contrast to what was observed in the RCV data. The reason lies in the anti-EU messages they tend to collectively spread (retweet) across the network. This behavior forms strong retweet ties, not only within, but also between, these groups. 
For example, MEPs of EFDD mainly retweet MEPs from ECR (with the exception of MEPs from their own group). In contrast to these right-wing coalitions, we find the other coalitions to be consistent with what is observed in the RCV data. The strongest coalitions on the left-to-center part of the axis are those between GUE-NGL, Greens-EFA, and S&D, and between S&D, ALDE, and EPP. These results reaffirm the role of Greens-EFA and ALDE as intermediaries, not only in the European Parliament but also in the debates on social media. Last, but not least, with the ERGM methodology we measure the extent to which the retweet network can explain the co-voting activities in the European Parliament. We compute this for each policy area separately and also over all RCVs. We conclude that the retweet network indeed matches the co-voting behavior, with the exception of one specific policy area. In the area Economic and monetary system, the links in the (overall) retweet network do not match the links in the co-voting network. Moreover, the negative coefficients imply a radically different formation of coalitions in the European Parliament. This is consistent with the results in Figs FIGREF36 and FIGREF37 (the left-hand panels), and is also observed in Fig FIGREF42 (the top charts). From these figures we see that in this particular case, the coalitions are also formed between the right-wing groups and the far-left GUE-NGL. As already explained, we attribute this to the degree of Euroscepticism that these groups share on this particular policy issue. Conclusions In this paper we analyze (co-)voting patterns and social behavior of members of the European Parliament, as well as the interaction between these two systems. More precisely, we analyze a set of 2535 roll-call votes as well as the tweets and retweets of members of the MEPs in the period from October 2014 to February 2016. The results indicate a considerable level of correlation between these two complex systems. This is consistent with previous findings of Cherepnalkoski et al. BIBREF22 , who reconstructed the adherence of MEPs to their respective political or national group solely from their retweeting behavior. We employ two different methodologies to quantify the co-voting patterns: Krippendorff's INLINEFORM0 and ERGM. They were developed in different fields of research, use different techniques, and are based on different assumptions, but in general they yield consistent results. However, there are some differences which have consequences for the interpretation of the results. INLINEFORM0 is a measure of agreement, designed as a generalization of several specialized measures, that can compare different numbers of observations, in our case roll-call votes. It only considers yes/no votes. Absence and abstention by MEPs is ignored. Its baseline ( INLINEFORM1 ), i.e., co-voting by chance, is computed from the yes/no votes of all MEPs on all RCVs. ERGMs are used in social-network analyses to determine factors influencing the edge formation process. In our case an edge between two MEPs is formed when they cast the same yes/no vote within a RCV. It is assumed that a priori each MEP can form a link with any other MEP. No assumptions about the presence or absence of individual MEPs in a voting session are made. Each RCV is analyzed as a separate binary network. The node set is thereby kept constant for each RCV network. 
While the ERGM departs from the originally observed network, where MEPs who did not vote or abstained appear as isolated nodes, links between these nodes are possible within the network sampling process, which is part of the ERGM optimization procedure. The results of several RCVs are aggregated by means of the meta-analysis approach. The baseline (ERGM coefficients INLINEFORM0 ), i.e., co-voting by chance, is computed from a large sample of randomly generated networks. These two different baselines have to be taken into account when interpreting the results of Alpha and ERGM. In a typical voting session, 25% of the MEPs are missing or abstaining. When assessing the cohesion of political groups, all Alpha values are well above the baseline, and the average is INLINEFORM2 . The average ERGM cohesion coefficients, on the other hand, are around the baseline. The difference is even more pronounced for groups with higher non-attendance/abstention rates, like GUE-NGL (34%) and NI (40%). When assessing the strength of coalitions between pairs of groups, the Alpha values are balanced around the baseline, while the ERGM coefficients are mostly negative. The ordering of coalitions from the strongest to the weakest is therefore different when groups with high non-attendance/abstention rates are involved. The choice of the methodology to assess cohesion and coalitions is not obvious. Roll-call voting is used for decisions which demand a simple majority only. One might, however, argue that non-attendance/abstention corresponds to a no vote, or that absence is used strategically. Also, the importance of individual votes, i.e., how high the subject is on the agenda of a political group, affects their attendance, and consequently the perception of their cohesion and their potential to act as a reliable coalition partner. Acknowledgments This work was supported in part by the EC projects SIMPOL (no. 610704) and DOLFINS (no. 640772), and by the Slovenian ARRS programme Knowledge Technologies (no. P2-103).
Greens-EFA, S&D, and EPP exhibit the highest cohesion; the non-aligned members (NI) have the lowest cohesion, followed by EFDD and ENL; the only point where the two methods disagree is the level of cohesion of GUE-NGL.
fa572f1f3f3ce6e1f9f4c9530456329ffc2677ca
fa572f1f3f3ce6e1f9f4c9530456329ffc2677ca_0
Q: Do they authors account for differences in usage of Twitter amongst MPs into their model? Text: Abstract We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns. Introduction Social-media activities often reflect phenomena that occur in other complex systems. By observing social networks and the content propagated through these networks, we can describe or even predict the interplay between the observed social-media activities and another complex system that is more difficult, if not impossible, to monitor. There are numerous studies reported in the literature that successfully correlate social-media activities to phenomena like election outcomes BIBREF0 , BIBREF1 or stock-price movements BIBREF2 , BIBREF3 . In this paper we study the cohesion and coalitions exhibited by political groups in the Eighth European Parliament (2014–2019). We analyze two entirely different aspects of how the Members of the European Parliament (MEPs) behave in policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting (i.e., endorsing) behavior. We use two diverse datasets in the analysis: the roll-call votes and the Twitter data. A roll-call vote (RCV) is a vote in the parliament in which the names of the MEPs are recorded along with their votes. The RCV data is available as part of the minutes of the parliament's plenary sessions. From this perspective, cohesion is seen as the tendency to co-vote (i.e., cast the same vote) within a group, and a coalition is formed when members of two or more groups exhibit a high degree of co-voting on a subject. The second dataset comes from Twitter. It captures the retweeting behavior of MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between the groups) from a completely different perspective. With over 300 million monthly active users and 500 million tweets posted daily, Twitter is one of the most popular social networks. 
Twitter allows its users to post short messages (tweets) and to follow other users. A user who follows another user is able to read his/her public tweets. Twitter also supports other types of interaction, such as user mentions, replies, and retweets. Of these, retweeting is the most important activity as it is used to share and endorse content created by other users. When a user retweets a tweet, the information about the original author as well as the tweet's content are preserved, and the tweet is shared with the user's followers. Typically, users retweet content that they agree with and thus endorse the views expressed by the original tweeter. We apply two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 which measures the agreement among observers, or voters in our case. The second one is based on Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach and is often used in social-network analyses. Even though these two methodologies come with two different sets of techniques and are based on different assumptions, they provide consistent results. The main contributions of this paper are as follows: (i) We give general insights into the cohesion of political groups in the Eighth European Parliament, both overall and across different policy areas. (ii) We explore whether coalitions are formed in the same way for different policy areas. (iii) We explore to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. (iv) We employ two statistically sound methodologies and examine the extent to which the results are sensitive to the choice of methodology. While the results are mostly consistent, we show that the difference are due to the different treatment of non-attending and abstaining MEPs by INLINEFORM0 and ERGM. The most novel and interesting aspect of our work is the relationship between the co-voting and the retweeting patterns. The increased use of Twitter by MEPs on days with a roll-call vote session (see Fig FIGREF1 ) is an indicator that these two processes are related. In addition, the force-based layouts of the co-voting network and the retweet network reveal a very similar structure on the left-to-center side of the political spectrum (see Fig FIGREF2 ). They also show a discrepancy on the far-right side of the spectrum, which calls for a more detailed analysis. Related work In this paper we study and relate two very different aspects of how MEPs behave in policy-making processes. First, we look at their co-voting behavior, and second, we examine their retweeting patterns. Thus, we draw related work from two different fields of science. On one hand, we look at how co-voting behavior is analyzed in the political-science literature and, on the other, we explore how Twitter is used to better understand political and policy-making processes. The latter has been more thoroughly explored in the field of data mining (specifically, text mining and network analysis). To the best of our knowledge, this is the first paper that studies legislative behavior in the Eighth European Parliament. The legislative behavior of the previous parliaments was thoroughly studied by Hix, Attina, and others BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . 
These studies found that voting behavior is determined to a large extent—and when viewed over time, increasingly so—by affiliation to a political group, as an organizational reflection of the ideological position. The authors found that the cohesion of political groups in the parliament has increased, while nationality has been less and less of a decisive factor BIBREF12 . The literature also reports that a split into political camps on the left and right of the political spectrum has recently replaced the `grand coalition' between the two big blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. The authors conclude that coalitions are to a large extent formed along the left-to-right axis BIBREF12 . In this paper we analyze the roll-call vote data published in the minutes of the parliament's plenary sessions. For a given subject, the data contains the vote of each MEP present at the respective sitting. Roll-call vote data from the European Parliament has already been extensively studied by other authors, most notably by Hix et al. BIBREF10 , BIBREF13 , BIBREF11 . To be able to study the cohesion and coalitions, authors like Hix, Attina, and Rice BIBREF6 , BIBREF13 , BIBREF14 defined and employed a variety of agreement measures. The most prominent measure is the Agreement Index proposed by Hix et al. BIBREF13 . This measure computes the agreement score from the size of the majority class for a particular vote. The Agreement Index, however, exhibits two drawbacks: (i) it does not account for co-voting by chance, and (ii) without a proper adaptation, it does not accommodate the scenario in which the agreement is to be measured between two different political groups. We employ two statistically sound methodologies developed in two different fields of science. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 . INLINEFORM1 is a measure of the agreement among observers, coders, or measuring instruments that assign values to items or phenomena. It compares the observed agreement to the agreement expected by chance. INLINEFORM2 is used to measure the inter- and self-annotator agreement of human experts when labeling data, and the performance of classification models in machine learning scenarios BIBREF15 . In addition to INLINEFORM3 , we employ Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach, often used in social-network analyses. ERGM can be employed to investigate how different network statistics (e.g., number of edges and triangles) or external factors (e.g., political group membership) govern the network-formation process. The second important aspect of our study is related to analyzing the behavior of participants in social networks, specifically Twitter. Twitter is studied by researchers to better understand different political processes, and in some cases to predict their outcomes. Eom et al. BIBREF1 consider the number of tweets by a party as a proxy for the collective attention to the party, explore the dynamics of the volume, and show that this quantity contains information about an election's outcome. Other studies BIBREF16 reach similar conclusions. Conover et al. BIBREF17 predicted the political alignment of Twitter users in the run-up to the 2010 US elections based on content and network structure. They analyzed the polarization of the retweet and mention networks for the same elections BIBREF18 . Borondo et al. 
BIBREF19 analyzed user activity during the Spanish presidential elections. They additionally analyzed the 2012 Catalan elections, focusing on the interplay between the language and the community structure of the network BIBREF20 . Most existing research, as Larsson points out BIBREF21 , focuses on the online behavior of leading political figures during election campaigns. This paper continues our research on communities that MEPs (and their followers) form on Twitter BIBREF22 . The goal of our research was to evaluate the role of Twitter in identifying communities of influence when the actual communities are known. We represent the influence on Twitter by the number of retweets that MEPs “receive”. We construct two networks of influence: (i) core, which consists only of MEPs, and (ii) extended, which also involves their followers. We compare the detected communities in both networks to the groups formed by the political, country, and language membership of MEPs. The results show that the detected communities in the core network closely match the political groups, while the communities in the extended network correspond to the countries of residence. This provides empirical evidence that analyzing retweet networks can reveal real-world relationships and can be used to uncover hidden properties of the networks. Lazer BIBREF23 highlights the importance of network-based approaches in political science in general by arguing that politics is a relational phenomenon at its core. Some researchers have adopted the network-based approach to investigate the structure of legislative work in the US Congress, including committee and sub-committee membership BIBREF24 , bill co-sponsoring BIBREF25 , and roll-call votes BIBREF26 . More recently, Dal Maso et al. BIBREF27 examined the community structure with respect to political coalitions and government structure in the Italian Parliament. Scherpereel et al. BIBREF28 examined the constituency, personal, and strategic characteristics of MEPs that influence their tweeting behavior. They suggested that Twitter's characteristics, like immediacy, interactivity, spontaneity, personality, and informality, are likely to resonate with political parties across Europe. By fitting regression models, the authors find that MEPs from incohesive groups have a greater tendency to retweet. In contrast to most of these studies, we focus on the Eighth European Parliament, and more importantly, we study and relate two entirely different behavioral aspects, co-voting and retweeting. The goal of this research is to better understand the cohesion and coalition formation processes in the European Parliament by quantifying and comparing the co-voting patterns and social behavior. Methods In this section we present the methods to quantify cohesion and coalitions from the roll-call votes and Twitter activities. Co-voting measured by agreement We first show how the co-voting behaviour of MEPs can be quantified by a measure of the agreement between them. We treat individual RCVs as observations, and MEPs as independent observers or raters. When they cast the same vote, there is a high level of agreement, and when they vote differently, there is a high level of disagreement. We define cohesion as the level of agreement within a political group, a coalition as a voting agreement between political groups, and opposition as a disagreement between different groups. There are many well-known measures of agreement in the literature. 
We selected Krippendorff's Alpha-reliability ( INLINEFORM0 ) BIBREF4 , which is a generalization of several specialized measures. It works for any number of observers, and is applicable to different variable types and metrics (e.g., nominal, ordered, interval, etc.). In general, INLINEFORM1 is defined as follows: INLINEFORM2 where INLINEFORM0 is the actual disagreement between observers (MEPs), and INLINEFORM1 is disagreement expected by chance. When observers agree perfectly, INLINEFORM2 INLINEFORM3 , when the agreement equals the agreement by chance, INLINEFORM4 INLINEFORM5 , and when the observers disagree systematically, INLINEFORM6 INLINEFORM7 . The two disagreement measures are defined as follows: INLINEFORM0 INLINEFORM0 The arguments INLINEFORM0 , and INLINEFORM1 are defined below and refer to the values in the coincidence matrix that is constructed from the RCVs data. In roll-call votes, INLINEFORM2 (and INLINEFORM3 ) is a nominal variable with two possible values: yes and no. INLINEFORM4 is a difference function between the values of INLINEFORM5 and INLINEFORM6 , defined as: INLINEFORM7 The RCVs data has the form of a reliability data matrix: INLINEFORM0 where INLINEFORM0 is the number of RCVs, INLINEFORM1 is the number of MEPs, INLINEFORM2 is the number of votes cast in the voting INLINEFORM3 , and INLINEFORM4 is the actual vote of an MEP INLINEFORM5 in voting INLINEFORM6 (yes or no). A coincidence matrix is constructed from the reliability data matrix, and is in general a INLINEFORM0 -by- INLINEFORM1 square matrix, where INLINEFORM2 is the number of possible values of INLINEFORM3 . In our case, where only yes/no votes are relevant, the coincidence matrix is a 2-by-2 matrix of the following form: INLINEFORM4 A cell INLINEFORM0 accounts for all coincidences from all pairs of MEPs in all RCVs where one MEP has voted INLINEFORM1 and the other INLINEFORM2 . INLINEFORM3 and INLINEFORM4 are the totals for each vote outcome, and INLINEFORM5 is the grand total. The coincidences INLINEFORM6 are computed as: INLINEFORM7 where INLINEFORM0 is the number of INLINEFORM1 pairs in vote INLINEFORM2 , and INLINEFORM3 is the number of MEPs that voted in INLINEFORM4 . When computing INLINEFORM5 , each pair of votes is considered twice, once as a INLINEFORM6 pair, and once as a INLINEFORM7 pair. The coincidence matrix is therefore symmetrical around the diagonal, and the diagonal contains all the equal votes. The INLINEFORM0 agreement is used to measure the agreement between two MEPs or within a group of MEPs. When applied to a political group, INLINEFORM1 corresponds to the cohesion of the group. The closer INLINEFORM2 is to 1, the higher the agreement of the MEPs in the group, and hence the higher the cohesion of the group. We propose a modified version of INLINEFORM0 to measure the agreement between two different groups, INLINEFORM1 and INLINEFORM2 . In the case of a voting agreement between political groups, high INLINEFORM3 is interpreted as a coalition between the groups, whereas negative INLINEFORM4 indicates political opposition. Suppose INLINEFORM0 and INLINEFORM1 are disjoint subsets of all the MEPs, INLINEFORM2 , INLINEFORM3 . The respective number of votes cast by both group members in vote INLINEFORM4 is INLINEFORM5 and INLINEFORM6 . The coincidences are then computed as: INLINEFORM7 where the INLINEFORM0 pairs come from different groups, INLINEFORM1 and INLINEFORM2 . The total number of such pairs in vote INLINEFORM3 is INLINEFORM4 . 
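Because the inline formulas in this subsection were lost in extraction, the within-group computation can be summarized as follows: for the nominal yes/no case, $\alpha = 1 - D_o/D_e$, where both the observed and the expected disagreement are read off the 2-by-2 coincidence matrix described above. The sketch below is a minimal Python illustration of that construction, not the authors' code; it assumes votes encoded as 1 (yes), 0 (no), and NaN (absent or abstained, which Alpha ignores), and it uses the standard Krippendorff normalization, in which an RCV with $m$ cast votes contributes $m$ coincidences. The between-group variant, whose derivation continues below, differs only in which pairs of votes are counted.

```python
import numpy as np

def krippendorff_alpha_binary(votes):
    """
    votes: array of shape (n_meps, n_rcvs) with values 1 (yes), 0 (no),
           or np.nan (absent / abstained -- ignored by Alpha).
    To obtain the cohesion of a political group, pass only the rows of
    the MEPs belonging to that group.
    """
    o = np.zeros((2, 2))                      # coincidence matrix
    for v in range(votes.shape[1]):
        col = votes[:, v]
        col = col[~np.isnan(col)]
        m = len(col)
        if m < 2:
            continue                          # a single vote carries no pairwise information
        n_yes = int(col.sum())
        n_no = m - n_yes
        # Ordered pairs of votes from different MEPs, weighted by 1/(m - 1),
        # so each unordered pair is counted twice and the RCV contributes m coincidences.
        o[1, 1] += n_yes * (n_yes - 1) / (m - 1)
        o[0, 0] += n_no * (n_no - 1) / (m - 1)
        o[0, 1] += n_yes * n_no / (m - 1)
        o[1, 0] += n_yes * n_no / (m - 1)
    n_c = o.sum(axis=1)                       # marginal totals for no (index 0) and yes (index 1)
    n = n_c.sum()
    d_observed = o[0, 1] + o[1, 0]            # disagreeing coincidences
    d_expected = 2 * n_c[0] * n_c[1] / (n - 1)  # disagreement expected by chance
    return 1.0 - d_observed / d_expected

# Toy example: cohesion of a hypothetical three-member group over four RCVs
group_votes = np.array([[1, 1, 0, np.nan],
                        [1, 1, 0, 0],
                        [1, 0, 0, 0]], dtype=float)
print(round(krippendorff_alpha_binary(group_votes), 3))   # 0.667
```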
The actual number INLINEFORM5 of the pairs is multiplied by INLINEFORM6 so that the total contribution of vote INLINEFORM7 to the coincidence matrix is INLINEFORM8 . A network-based measure of co-voting In this section we describe a network-based approach to analyzing the co-voting behavior of MEPs. For each roll-call vote we form a network, where the nodes in the network are MEPs, and an undirected edge between two MEPs is formed when they cast the same vote. We are interested in the factors that determine the cohesion within political groups and coalition formation between political groups. Furthermore, we investigate to what extent communication in a different social context, i.e., the retweeting behavior of MEPs, can explain the co-voting of MEPs. For this purpose we apply an Exponential Random Graph Model BIBREF5 to individual roll-call vote networks, and aggregate the results by means of the meta-analysis. ERGMs allow us to investigate the factors relevant for the network-formation process. Network metrics, as described in the abundant literature, serve to gain information about the structural properties of the observed network. A model investigating the processes driving the network formation, however, has to take into account that there can be a multitude of alternative networks. If we are interested in the parameters influencing the network formation we have to consider all possible networks and measure their similarity to the originally observed network. The family of ERGMs builds upon this idea. Assume a random graph INLINEFORM0 , in the form of a binary adjacency matrix, made up of a set of INLINEFORM1 nodes and INLINEFORM2 edges INLINEFORM3 where, similar to a binary choice model, INLINEFORM4 if the nodes INLINEFORM5 are connected and INLINEFORM6 if not. Since network data is by definition relational and thus violates assumptions of independence, classical binary choice models, like logistic regression, cannot be applied in this context. Within an ERGM, the probability for a given network is modelled by DISPLAYFORM0 where INLINEFORM0 is the vector of parameters and INLINEFORM1 is the vector of network statistics (counts of network substructures), which are a function of the adjacency matrix INLINEFORM2 . INLINEFORM3 is a normalization constant corresponding to the sample of all possible networks, which ensures a proper probability distribution. Evaluating the above expression allows us to make assertions if and how specific nodal attributes influence the network formation process. These nodal attributes can be endogenous (dyad-dependent parameters) to the network, like the in- and out-degrees of a node, or exogenous (dyad-independent parameters), as the party affiliation, or the country of origin in our case. An alternative formulation of the ERGM provides the interpretation of the coefficients. We introduce the change statistic, which is defined as the change in the network statistics when an edge between nodes INLINEFORM0 and INLINEFORM1 is added or not. 
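The display equation and the surrounding inline formulas were likewise lost in extraction. Assuming the standard ERGM formulation that the text paraphrases, the quantities involved are, in conventional notation:

```latex
% Probability of observing a particular network y (the display equation above)
P(Y = y) = \frac{\exp\!\big(\theta^{\top} g(y)\big)}{\kappa(\theta)},
\qquad
\kappa(\theta) = \sum_{y^{\prime} \in \mathcal{Y}} \exp\!\big(\theta^{\top} g(y^{\prime})\big)

% Change statistic for the potential edge (i, j), defined in the next paragraph
\delta_{ij}(y) = g\big(y^{+}_{ij}\big) - g\big(y^{-}_{ij}\big)

% Conditional log-odds of that edge, given the rest of the graph
\operatorname{logit} P\big(Y_{ij} = 1 \mid Y^{c}_{ij} = y^{c}_{ij}\big) = \theta^{\top} \delta_{ij}(y)
```

Here $y^{+}_{ij}$ and $y^{-}_{ij}$ denote the network with the edge $(i,j)$ forced in or out, respectively, and $Y^{c}_{ij}$ is the rest of the graph.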
If INLINEFORM2 and INLINEFORM3 denote the vectors of counts of network substructures when the edge is added or not, the change statistics is defined as follows: INLINEFORM4 With this at hand it can be shown that the distribution of the variable INLINEFORM0 , conditional on the rest of the graph INLINEFORM1 , corresponds to: INLINEFORM2 This implies on the one hand that the probability depends on INLINEFORM0 via the change statistic INLINEFORM1 , and on the other hand, that each coefficient within the vector INLINEFORM2 represents an increase in the conditional log-odds ( INLINEFORM3 ) of the graph when the corresponding element in the vector INLINEFORM4 increases by one. The need to condition the probability on the rest of the network can be illustrated by a simple example. The addition (removal) of a single edge alters the network statistics. If a network has only edges INLINEFORM5 and INLINEFORM6 , the creation of an edge INLINEFORM7 would not only add an additional edge but would also alter the count for other network substructures included in the model. In this example, the creation of the edge INLINEFORM8 also increases the number of triangles by one. The coefficients are transformed into probabilities with the logistic function: INLINEFORM9 For example, in the context of roll-call votes, the probability that an additional co-voting edge is formed between two nodes (MEPs) of the same political group is computed with that equation. In this context, the nodematch (nodemix) coefficients of the ERGM (described in detail bellow) therefore refer to the degree of homophilous (heterophilous) matching of MEPs with regard to their political affiliation, or, expressed differently, the propensity of MEPs to co-vote with other MEPs of their respective political group or another group. A positive coefficient reflects an increased chance that an edge between two nodes with respective properties, like group affiliation, given all other parameters unchanged, is formed. Or, put differently, a positive coefficient implies that the probability of observing a network with a higher number of corresponding pairs relative to the hypothetical baseline network, is higher than to observe the baseline network itself BIBREF31 . For an intuitive interpretation, log-odds value of 0 corresponds to the even chance probability of INLINEFORM0 . Log-odds of INLINEFORM1 correspond to an increase of probability by INLINEFORM2 , whereas log-odds of INLINEFORM3 correspond to a decrease of probability by INLINEFORM4 . The computational challenges of estimating ERGMs is to a large degree due to the estimation of the normalizing constant. The number of possible networks is already extremely large for very small networks and the computation is simply not feasible. Therefore, an appropriate sample has to be found, ideally covering the most probable areas of the probability distribution. For this we make use of a method from the Markov Chain Monte Carlo (MCMC) family, namely the Metropolis-Hastings algorithm. The idea behind this algorithm is to generate and sample highly weighted random networks departing from the observed network. The Metropolis-Hastings algorithm is an iterative algorithm which samples from the space of possible networks by randomly adding or removing edges from the starting network conditional on its density. If the likelihood, in the ERGM context also denoted as weights, of the newly generated network is higher than that of the departure network it is retained, otherwise it is discarded. 
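Before continuing with the sampling procedure, a brief illustration of the coefficient interpretation given above: the sketch below converts a vector of ERGM coefficients and the change statistics of a candidate edge into the conditional probability of that edge via the logistic function. It is a minimal illustration, not part of the authors' estimation pipeline; the numeric checks show the even-chance point at log-odds 0 mentioned in the text, together with the values at log-odds of +1 and -1.

```python
import math

def edge_probability(theta, delta):
    """
    theta: estimated ERGM coefficients (log-odds scale)
    delta: change statistics of the candidate edge, i.e. by how much each
           network statistic would change if that edge were added
    Returns the conditional probability of the edge, given the rest of the graph.
    """
    log_odds = sum(t * d for t, d in zip(theta, delta))
    return 1.0 / (1.0 + math.exp(-log_odds))

print(edge_probability([0.0], [1.0]))    # 0.5  -- even chance
print(edge_probability([1.0], [1.0]))    # ~0.73
print(edge_probability([-1.0], [1.0]))   # ~0.27
```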
In the former case, the algorithm starts anew from the newly generated network. Otherwise departure network is used again. Repeating this procedure sufficiently often and summing the weights associated to the stored (sampled) networks allows to compute an approximation of the denominator in equation EQREF18 (normalizing constant). The algorithm starts sampling from the originally observed network INLINEFORM0 . The optimization of the coefficients is done simultaneously, equivalently with the Metropolis-Hastings algorithm. At the beginning starting values have to be supplied. For the study at hand we used the “ergm” library from the statistical R software package BIBREF5 implementing the Gibbs-Sampling algorithm BIBREF32 which is a special case of the Metropolis-Hastings algorithm outlined. In order to answer our question of the importance of the factors which drive the network formation process in the roll-call co-voting network, the ERGM is specified with the following parameters: nodematch country: This parameter adds one network statistic to the model, i.e., the number of edges INLINEFORM0 where INLINEFORM1 . The coefficient indicates the homophilious mixing behavior of MEPs with respect to their country of origin. In other words, this coefficient indicates how relevant nationality is in the formation of edges in the co-voting network. nodematch national party: This parameter adds one network statistic to the model: the number of edges INLINEFORM0 with INLINEFORM1 . The coefficient indicates the homophilious mixing behavior of the MEPs with regard to their party affiliation at the national level. In the context of this study, this coefficient can be interpreted as an indicator for within-party cohesion at the national level. nodemix EP group: This parameter adds one network statistic for each pair of European political groups. These coefficients shed light on the degree of coalitions between different groups as well as the within group cohesion . Given that there are nine groups in the European Parliament, this coefficient adds in total 81 statistics to the model. edge covariate Twitter: This parameter corresponds to a square matrix with the dimension of the adjacency matrix of the network, which corresponds to the number of mutual retweets between the MEPs. It provides an insight about the extent to which communication in one social context (Twitter), can explain cooperation in another social context (co-voting in RCVs). An ERGM as specified above is estimated for each of the 2535 roll-call votes. Each roll-call vote is thereby interpreted as a binary network and as an independent study. It is assumed that a priori each MEP could possibly form an edge with each other MEP in the context of a roll-call vote. Assumptions over the presence or absence of individual MEPs in a voting session are not made. In other words the dimensions of the adjacency matrix (the node set), and therefore the distribution from which new networks are drawn, is kept constant over all RCVs and therefore for every ERGM. The ERGM results therefore implicitly generalize to the case where potentially all MEPs are present and could be voting. Not voting is incorporated implicitly by the disconnectedness of a node. The coefficients of the 2535 roll-call vote studies are aggregated by means of a meta-analysis approach proposed by Lubbers BIBREF33 and Snijders et al. BIBREF34 . We are interested in average effect sizes of different matching patterns over different topics and overall. 
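As a concrete, deliberately simplified illustration of this specification, the sketch below builds the binary co-voting network for a single RCV over a constant node set, attaching the nodal covariates (country, national party, EP group) and the mutual-retweet counts used as the edge covariate. All data structures and names are illustrative; the actual estimation is carried out with the R "ergm" package as described above and is not reproduced here. Note also that in the real specification the retweet matrix enters as a dyadic covariate over all pairs of MEPs, whereas this sketch merely attaches the counts to the realized co-voting edges.

```python
import itertools
import networkx as nx

def rcv_covoting_network(meps, votes, retweet_counts):
    """
    meps:           list of dicts, one per MEP, e.g.
                    {"name": ..., "country": ..., "national_party": ..., "ep_group": ...}.
                    The node set is identical for every RCV, so MEPs who did not
                    attend or abstained simply remain isolated nodes.
    votes:          dict mapping MEP name -> "yes" / "no" for this particular RCV
                    (non-attending and abstaining MEPs are absent from the dict).
    retweet_counts: dict mapping frozenset({name_a, name_b}) -> number of mutual retweets.
    """
    g = nx.Graph()
    for mep in meps:
        g.add_node(mep["name"],
                   country=mep["country"],
                   national_party=mep["national_party"],
                   ep_group=mep["ep_group"])

    # An undirected co-voting edge is formed whenever two MEPs cast the same
    # yes/no vote in this RCV.
    for a, b in itertools.combinations(votes, 2):
        if votes[a] == votes[b]:
            g.add_edge(a, b, retweets=retweet_counts.get(frozenset({a, b}), 0))
    return g
```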
Considering the number of RCVs, it seems straightforward to interpret the different RCV networks as multiplex networks and collapse them into one weighted network, which could then be analysed by means of a valued ERGM BIBREF35 . There are, however, two reasons why we chose the meta-analysis approach instead. First, aggregating the RCV data results into an extremely dense network, leading to severe convergence (degeneracy) problems for the ERGM. Second, the RCV data contains information about the different policy areas the individual votes were about. Since we are interested in how the coalition formation in the European Parliament differs over different areas, a method is needed that allows for an ex-post analysis of the corresponding results. We therefore opted for the meta-analysis approach by Lubbers and Snijders et al. This approach allows us to summarize the results by decomposing the coefficients into average effects and (class) subject-specific deviations. The different ERGM runs for each RCV are thereby regarded as different studies with identical samples that are combined to obtain a general overview of effect sizes. The meta-regression model is defined as: INLINEFORM0 Here INLINEFORM0 is a parameter estimate for class INLINEFORM1 , and INLINEFORM2 is the average coefficient. INLINEFORM3 denotes the normally distributed deviation of the class INLINEFORM4 with a mean of 0 and a variance of INLINEFORM5 . INLINEFORM6 is the estimation error of the parameter value INLINEFORM7 from the ERGM. The meta-analysis model is fitted by an iterated, weighted, least-squares model in which the observations are weighted by the inverse of their variances. For the overall nodematch between political groups, we weighted the coefficients by group sizes. The results from the meta analysis can be interpreted as if they stemmed from an individual ERGM run. In our study, the meta-analysis was performed using the RSiena library BIBREF36 , which implements the method proposed by Lubbers and Snijders et al. BIBREF33 , BIBREF34 . Measuring cohesion and coalitions on Twitter The retweeting behavior of MEPs is captured by their retweet network. Each MEP active on Twitter is a node in this network. An edge in the network between two MEPs exists when one MEP retweeted the other. The weight of the edge is the number of retweets between the two MEPs. The resulting retweet network is an undirected, weighted network. We measure the cohesion of a political group INLINEFORM0 as the average retweets, i.e., the ratio of the number of retweets between the MEPs in the group INLINEFORM1 to the number of MEPs in the group INLINEFORM2 . The higher the ratio, the more each MEP (on average) retweets the MEPs from the same political group, hence, the higher the cohesion of the political group. The definition of the average retweeets ( INLINEFORM3 ) of a group INLINEFORM4 is: INLINEFORM5 This measure of cohesion captures the aggregate retweeting behavior of the group. If we consider retweets as endorsements, a larger number of retweets within the group is an indicator of agreement between the MEPs in the group. It does not take into account the patterns of retweeting within the group, thus ignoring the social sub-structure of the group. This is a potentially interesting direction and we leave it for future work. We employ an analogous measure for the strength of coalitions in the retweet network. 
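A minimal sketch of the two retweet-based measures follows, assuming the retweet network is given as an undirected weighted edge list keyed by pairs of MEPs: the average-retweets cohesion of a group defined just above, and the analogous between-group coalition strength defined in the next paragraph. Names and data structures are illustrative.

```python
def retweet_cohesion(rt_weights, group):
    """
    rt_weights: dict mapping frozenset({mep_a, mep_b}) -> number of retweets
                between the two MEPs (undirected, weighted edges).
    group:      set of MEP identifiers belonging to one political group.
    Cohesion = retweets exchanged within the group / number of MEPs in the group.
    """
    within = sum(w for pair, w in rt_weights.items() if pair <= group)
    return within / len(group)

def retweet_coalition(rt_weights, group_a, group_b):
    """
    Coalition strength = retweets between the two (disjoint) groups, excluding
    retweets within either group, / total number of MEPs in both groups.
    """
    between = sum(w for pair, w in rt_weights.items()
                  if len(pair & group_a) == 1 and len(pair & group_b) == 1)
    return between / (len(group_a) + len(group_b))
```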
The coalition strength between two groups INLINEFORM0 and INLINEFORM1 is the ratio of the number of retweets from one group to the other (but not within groups) INLINEFORM2 to the total number of MEPs in both groups, INLINEFORM3 . The definition of the average retweeets ( INLINEFORM4 ) between groups INLINEFORM5 and INLINEFORM6 is: INLINEFORM7 Cohesion of political groups In this section we first report on the level of cohesion of the European Parliament's groups by analyzing the co-voting through the agreement and ERGM measures. Next, we explore two important policy areas, namely Economic and monetary system and State and evolution of the Union. Finally, we analyze the cohesion of the European Parliament's groups on Twitter. Existing research by Hix et al. BIBREF10 , BIBREF13 , BIBREF11 shows that the cohesion of the European political groups has been rising since the 1990s, and the level of cohesion remained high even after the EU's enlargement in 2004, when the number of MEPs increased from 626 to 732. We measure the co-voting cohesion of the political groups in the Eighth European Parliament using Krippendorff's Alpha—the results are shown in Fig FIGREF30 (panel Overall). The Greens-EFA have the highest cohesion of all the groups. This finding is in line with an analysis of previous compositions of the Fifth and Sixth European Parliaments by Hix and Noury BIBREF11 , and the Seventh by VoteWatch BIBREF37 . They are closely followed by the S&D and EPP. Hix and Noury reported on the high cohesion of S&D in the Fifth and Sixth European Parliaments, and we also observe this in the current composition. They also reported a slightly less cohesive EPP-ED. This group split in 2009 into EPP and ECR. VoteWatch reports EPP to have cohesion on a par with Greens-EFA and S&D in the Seventh European Parliament. The cohesion level we observe in the current European Parliament is also similar to the level of Greens-EFA and S&D. The catch-all group of the non-aligned (NI) comes out as the group with the lowest cohesion. In addition, among the least cohesive groups in the European Parliament are the Eurosceptics EFDD, which include the British UKIP led by Nigel Farage, and the ENL whose largest party are the French National Front, led by Marine Le Pen. Similarly, Hix and Noury found that the least cohesive groups in the Seventh European Parliament are the nationalists and Eurosceptics. The Eurosceptic IND/DEM, which participated in the Sixth European Parliament, transformed into the current EFDD, while the nationalistic UEN was dissolved in 2009. We also measure the voting cohesion of the European Parliament groups using an ERGM, a network-based method—the results are shown in Fig FIGREF31 (panel Overall). The cohesion results obtained with ERGM are comparable to the results based on agreement. In this context, the parameters estimated by the ERGM refer to the matching of MEPs who belong to the same political group (one parameter per group). The parameters measure the homophilous matching between MEPs who have the same political affiliation. A positive value for the estimated parameter indicates that the co-voting of MEPs from that group is greater than what is expected by chance, where the expected number of co-voting links by chance in a group is taken to be uniformly random. A negative value indicates that there are fewer co-voting links within a group than expected by chance. 
Even though INLINEFORM0 and ERGM compute scores relative to what is expected by chance, they refer to different interpretations of chance. INLINEFORM1 's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group, knowing the votes of these MEPs on all RCVs. ERGM's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group on a given RCV, knowing the network-related properties of the co-voting network on that particular RCV. The main difference between INLINEFORM2 and ERGM, though, is the treatment of non-voting and abstained MEPs. INLINEFORM3 considers only the yes/no votes, and consequently, agreements by the voting MEPs of the same groups are considerably higher than co-voting by chance. ERGM, on the other hand, always considers all MEPs, and non-voting and abstained MEPs are treated as disconnected nodes. The level of co-voting by chance is therefore considerably lower, since there is often a large fraction of MEPs that do not attand or abstain. As with INLINEFORM0 , Greens-EFA, S&D, and EPP exhibit the highest cohesion, even though their ranking is permuted when compared to the ranking obtained with INLINEFORM1 . At the other end of the scale, we observe the same situation as with INLINEFORM2 . The non-aligned members NI have the lowest cohesion, followed by EFDD and ENL. The only place where the two methods disagree is the level of cohesion of GUE-NGL. The Alpha attributes GUE-NGL a rather high level of cohesion, on a par with ALDE, whereas the ERGM attributes them a much lower cohesion. The reason for this difference is the relatively high abstention rate of GUE-NGL. Whereas the overall fraction of non-attending and abstaining MEPs across all RCVs and all political groups is 25%, the GUE-NGL abstention rate is 34%. This is reflected in an above average cohesion by INLINEFORM0 where only yes/no votes are considered, and in a relatively lower, below average cohesion by ERGM. In the later case, the non-attendance is interpreted as a non-cohesive voting of a political groups as a whole. In addition to the overall cohesion, we also focus on two selected policy areas. The cohesion of the political groups related to these two policy areas is shown in the first two panels in Fig FIGREF30 ( INLINEFORM0 ) and Fig FIGREF31 (ERGM). The most important observation is that the level of cohesion of the political groups is very stable across different policy areas. These results are corroborated by both methodologies. Similar to the overall cohesion, the most cohesive political groups are the S&D, Greens-EFA, and EPP. The least cohesive group is the NI, followed by the ENL and EFDD. The two methodologies agree on the level of cohesion for all the political groups, except for GUE-NGL, due to a lower attendance rate. We determine the cohesion of political groups on Twitter by using the average number of retweets between MEPs within the same group. The results are shown in Fig FIGREF33 . The right-wing ENL and EFDD come out as the most cohesive groups, while all the other groups have a far lower average number of retweets. MEPs from ENL and EFDD post by far the largest number of retweets (over 240), and at the same time over 94% of their retweets are directed to MEPs from the same group. Moreover, these two groups stand out in the way the retweets are distributed within the group. A large portion of the retweets of EFDD (1755) go to Nigel Farage, the leader of the group. 
Likewise, a very large portion of retweets of ENL (2324) go to Marine Le Pen, the leader of the group. Farage and Le Pen are by far the two most retweeted MEPs, with the third one having only 666 retweets. Coalitions in the European Parliament Coalition formation in the European Parliament is largely determined by ideological positions, reflected in the degree of cooperation of parties at the national and European levels. The observation of ideological inclinations in the coalition formation within the European Parliament was already made by other authors BIBREF11 and is confirmed in this study. The basic patterns of coalition formation in the European Parliament can already be seen in the co-voting network in Fig FIGREF2 A. It is remarkable that the degree of attachment between the political groups, which indicates the degree of cooperation in the European Parliament, nearly exactly corresponds to the left-to-right seating order. The liberal ALDE seems to have an intermediator role between the left and right parts of the spectrum in the parliament. Between the extreme (GUE-NGL) and center left (S&D) groups, this function seems to be occupied by Greens-EFA. The non-aligned members NI, as well as the Eurosceptic EFFD and ENL, seem to alternately tip the balance on both poles of the political spectrum. Being ideologically more inclined to vote with other conservative and right-wing groups (EPP, ECR), they sometimes also cooperate with the extreme left-wing group (GUE-NGL) with which they share their Euroscepticism as a common denominator. Figs FIGREF36 and FIGREF37 give a more detailed understanding of the coalition formation in the European Parliament. Fig FIGREF36 displays the degree of agreement or cooperation between political groups measured by Krippendorff's INLINEFORM0 , whereas Fig FIGREF37 is based on the result from the ERGM. We first focus on the overall results displayed in the right-hand plots of Figs FIGREF36 and FIGREF37 . The strongest degrees of cooperation are observed, with both methods, between the two major parties (EPP and S&D) on the one hand, and the liberal ALDE on the other. Furthermore, we see a strong propensity for Greens-EFA to vote with the Social Democrats (5th strongest coalition by INLINEFORM0 , and 3rd by ERGM) and the GUE-NGL (3rd strongest coalition by INLINEFORM1 , and 5th by ERGM). These results underline the role of ALDE and Greens-EFA as intermediaries for the larger groups to achieve a majority. Although the two largest groups together have 405 seats and thus significantly more than the 376 votes needed for a simple majority, the degree of cooperation between the two major groups is ranked only as the fourth strongest by both methods. This suggests that these two political groups find it easier to negotiate deals with smaller counterparts than with the other large group. This observation was also made by Hix et al. BIBREF12 , who noted that alignments on the left and right of the political spectrum have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. Next, we focus on the coalition formation within the two selected policy areas. The area State and Evolution of the Union is dominated by cooperation between the two major groups, S&D and EPP, as well as ALDE. We also observe a high degree of cooperation between groups that are generally regarded as integration friendly, like Greens-EFA and GUE-NGL. 
We see, particularly in Fig FIGREF36 , a relatively high degree of cooperation between groups considered as Eurosceptic, like ECR, EFFD, ENL, and the group of non-aligned members. The dichotomy between supporters and opponents of European integration is even more pronounced within the policy area Economic and Monetary System. In fact, we are taking a closer look specifically at these two areas as they are, at the same time, both contentious and important. Both methods rank the cooperation between S&D and EPP on the one hand, and ALDE on the other, as the strongest. We also observe a certain degree of unanimity among the Eurosceptic and right-wing groups (EFDD, ENL, and NI) in this policy area. This seems plausible, as these groups were (especially in the aftermath of the global financial crisis and the subsequent European debt crisis) in fierce opposition to further payments to financially troubled member states. However, we also observe a number of strong coalitions that might, at first glance, seem unusual, specifically involving the left-wing group GUE-NGL on the one hand, and the right-wing EFDD, ENL, and NI on the other. These links also show up in the network plot in Fig FIGREF2 A. This might be attributable to a certain degree of Euroscepticism on both sides: rooted in criticism of capitalism on the left, and at least partly a raison d'être on the right. Hix et al. BIBREF11 discovered this pattern as well, and proposed an additional explanation—these coalitions also relate to a form of government-opposition dynamic that is rooted at the national level, but is reflected in voting patterns at the European level. In general, we observe two main differences between the INLINEFORM0 and ERGM results: the baseline cooperation as estimated by INLINEFORM1 is higher, and the ordering of coalitions from the strongest to the weakest is not exactly the same. The reason is the same as for the cohesion, namely different treatment of non-voting and abstaining MEPs. When they are ignored, as by INLINEFORM2 , the baseline level of inter-group co-voting is higher. When non-attending and abstaining is treated as voting differently, as by ERGM, it is considerably more difficult to achieve co-voting coalitions, specially when there are on average 25% MEPs that do not attend or abstain. Groups with higher non-attendance rates, such as GUE-NGL (34%) and NI (40%) are less likely to form coalitions, and therefore have relatively lower ERGM coefficients (Fig FIGREF37 ) than INLINEFORM3 scores (Fig FIGREF36 ). The first insight into coalition formation on Twitter can be observed in the retweet network in Fig FIGREF2 B. The ideological left to right alignment of the political groups is reflected in the retweet network. Fig FIGREF40 shows the strength of the coalitions on Twitter, as estimated by the number of retweets between MEPs from different groups. The strongest coalitions are formed between the right-wing groups EFDD and ECR, as well as ENL and NI. At first, this might come as a surprise, since these groups do not form strong coalitions in the European Parliament, as can be seen in Figs FIGREF36 and FIGREF37 . On the other hand, the MEPs from these groups are very active Twitter users. As previously stated, MEPs from ENL and EFDD post the largest number of retweets. Moreover, 63% of the retweets outside of ENL are retweets of NI. This effect is even more pronounced with MEPs from EFDD, whose retweets of ECR account for 74% of their retweets from other groups. 
In addition to these strong coalitions on the right wing, we find coalition patterns to be very similar to the voting coalitions observed in the European Parliament, seen in Figs FIGREF36 and FIGREF37 . The strongest coalitions, which come immediately after the right-wing coalitions, are between Greens-EFA on the one hand, and GUE-NGL and S&D on the other, as well as ALDE on the one hand, and EPP and S&D on the other. These results corroborate the role of ALDE and Greens-EFA as intermediaries in the European Parliament, not only in the legislative process, but also in the debate on social media. To better understand the formation of coalitions in the European Parliament and on Twitter, we examine the strongest cooperation between political groups at three different thresholds. For co-voting coalitions in the European Parliament we choose a high threshold of INLINEFORM0 , a medium threshold of INLINEFORM1 , and a negative threshold of INLINEFORM2 (which corresponds to strong oppositions). In this way we observe the overall patterns of coalition and opposition formation in the European Parliament and in the two specific policy areas. For cooperation on Twitter, we choose a high threshold of INLINEFORM3 , a medium threshold of INLINEFORM4 , and a very low threshold of INLINEFORM5 . The strongest cooperations in the European Parliament over all policy areas are shown in Fig FIGREF42 G. It comes as no surprise that the strongest cooperations are within the groups (in the diagonal). Moreover, we again observe GUE-NGL, S&D, Greens-EFA, ALDE, and EPP as the most cohesive groups. In Fig FIGREF42 H, we observe coalitions forming along the diagonal, which represents the seating order in the European Parliament. Within this pattern, we observe four blocks of coalitions: on the left, between GUE-NGL, S&D, and Greens-EFA; in the center, between S&D, Greens-EFA, ALDE, and EPP; on the right-center between ALDE, EPP, and ECR; and finally, on the far-right between ECR, EFDD, ENL, and NI. Fig FIGREF42 I shows the strongest opposition between groups that systematically disagree in voting. The strongest disagreements are between left- and right-aligned groups, but not between the left-most and right-most groups, in particular, between GUE-NGL and ECR, but also between S&D and Greens-EFA on one side, and ENL and NI on the other. In the area of Economic and monetary system we see a strong cooperation between EPP and S&D (Fig FIGREF42 A), which is on a par with the cohesion of the most cohesive groups (GUE-NGL, S&D, Greens-EFA, ALDE, and EPP), and is above the cohesion of the other groups. As pointed out in the section “sec:coalitionpolicy”, there is a strong separation in two blocks between supporters and opponents of European integration, which is even more clearly observed in Fig FIGREF42 B. On one hand, we observe cooperation between S&D, ALDE, EPP, and ECR, and on the other, cooperation between GUE-NGL, Greens-EFA, EFDD, ENL, and NI. This division in blocks is seen again in Fig FIGREF42 C, which shows the strongest disagreements. Here, we observe two blocks composed of S&D, EPP, and ALDE on one hand, and GUE-NGL, EFDD, ENL, and NI on the other, which are in strong opposition to each other. In the area of State and Evolution of the Union we again observe a strong division in two blocks (see Fig FIGREF42 E). This is different to the Economic and monetary system, however, where we observe a far-left and far-right cooperation, where the division is along the traditional left-right axis. 
The patterns of coalitions forming on Twitter closely resemble those in the European Parliament. In Fig FIGREF42 J we see that the strongest degrees of cooperation on Twitter are within the groups. The only group with low cohesion is the NI, whose members have only seven retweets between them. The coalitions on Twitter follow the seating order in the European Parliament remarkably well (see Fig FIGREF42 K). What is striking is that the same blocks form on the left, center, and on the center-right, both in the European Parliament and on Twitter. The largest difference between the coalitions in the European Parliament and on Twitter is on the far-right, where we observe ENL and NI as isolated blocks. The results shown in Fig FIGREF44 quantify the extent to which communication in one social context (Twitter) can explain cooperation in another social context (co-voting in the European Parliament). A positive value indicates that the matching behavior in the retweet network is similar to the one in the co-voting network, specific for an individual policy area. On the other hand, a negative value implies a negative “correlation” between the retweeting and co-voting of MEPs in the two different contexts. The bars in Fig FIGREF44 correspond to the coefficients from the edge covariate terms of the ERGM, describing the relationship between the retweeting and co-voting behavior of MEPs. The coefficients are aggregated for individual policy areas by means of a meta-analysis. Overall, we observe a positive correlation between retweeting and co-voting, which is significantly different from zero. The strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets. Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union. The only exception, with a significantly negative coefficient, is the area Economic and monetary system. This implies that in the area Economic and monetary system we observe a significant deviation from the usual co-voting patterns. Results from section “sec:coalitionpolicy”, confirm that this is indeed the case. Especially noteworthy are the coalitions between GUE-NGL and Greens-EFA on the left wing, and EFDD and ENL on the right wing. In the section “sec:coalitionpolicy” we interpret these results as a combination of Euroscepticism on both sides, motivated on the left by a skeptical attitude towards the market orientation of the EU, and on the right by a reluctance to give up national sovereignty. Discussion We study cohesion and coalitions in the Eighth European Parliament by analyzing, on one hand, MEPs' co-voting tendencies and, on the other, their retweeting behavior. We reveal that the most cohesive political group in the European Parliament, when it comes to co-voting, is Greens-EFA, closely followed by S&D and EPP. This is consistent with what VoteWatch BIBREF37 reported for the Seventh European Parliament. The non-aligned (NI) come out as the least cohesive group, followed by the Eurosceptic EFDD. Hix and Noury BIBREF11 also report that nationalists and Eurosceptics form the least cohesive groups in the Sixth European Parliament. We reaffirm most of these results with both of the two employed methodologies. 
The only point where the two methodologies disagree is in the level of cohesion for the left-wing GUE-NGL, which is portrayed by ERGM as a much less cohesive group, due to their relatively lower attendance rate. The level of cohesion of the political groups is quite stable across different policy areas and similar conclusions apply. On Twitter we can see results that are consistent with the RCV results for the left-to-center political spectrum. The exception, which clearly stands out, is the right-wing groups ENL and EFDD that seem to be the most cohesive ones. This is the direct opposite of what was observed in the RCV data. We speculate that this phenomenon can be attributed to the fact that European right-wing groups, on a European but also on a national level, rely to a large degree on social media to spread their narratives critical of European integration. We observed the same phenomenon recently during the Brexit campaign BIBREF38 . Along our interpretation the Brexit was “won” to some extent due to these social media activities, which are practically non-existent among the pro-EU political groups. The fact that ENL and EFDD are the least cohesive groups in the European Parliament can be attributed to their political focus. It seems more important for the group to agree on its anti-EU stance and to call for independence and sovereignty, and much less important to agree on other issues put forward in the parliament. The basic pattern of coalition formation, with respect to co-voting, can already be seen in Fig FIGREF2 A: the force-based layout almost completely corresponds to the seating order in the European Parliament (from the left- to the right-wing groups). A more thorough examination shows that the strongest cooperation can be observed, for both methodologies, between EPP, S&D, and ALDE, where EPP and S&D are the two largest groups, while the liberal ALDE plays the role of an intermediary in this context. On the other hand, the role of an intermediary between the far-left GUE-NGL and its center-left neighbor, S&D, is played by the Greens-EFA. These three parties also form a strong coalition in the European Parliament. On the far right of the spectrum, the non-aligned, EFDD, and ENL form another coalition. This behavior was also observed by Hix et al. BIBREF12 , stating that alignments on the left and right have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the European Parliament. When looking at the policy area Economic and monetary system, we see the same coalitions. However, interestingly, EFDD, ENL, and NI often co-vote with the far-left GUE-NGL. This can be attributed to a certain degree of Euroscepticism on both sides: as a criticism of capitalism, on one hand, and as the main political agenda, on the other. This pattern was also discovered by Hix et al. BIBREF12 , who argued that these coalitions emerge from a form of government-opposition dynamics, rooted at the national level, but also reflected at the European level. When studying coalitions on Twitter, the strongest coalitions can be observed on the right of the spectrum (between EFDD, ECR, ENL, and NI). This is, yet again, in contrast to what was observed in the RCV data. The reason lies in the anti-EU messages they tend to collectively spread (retweet) across the network. This behavior forms strong retweet ties, not only within, but also between, these groups. 
For example, MEPs of EFDD mainly retweet MEPs from ECR (with the exception of MEPs from their own group). In contrast to these right-wing coalitions, we find the other coalitions to be consistent with what is observed in the RCV data. The strongest coalitions on the left-to-center part of the axis are those between GUE-NGL, Greens-EFA, and S&D, and between S&D, ALDE, and EPP. These results reaffirm the role of Greens-EFA and ALDE as intermediaries, not only in the European Parliament but also in the debates on social media. Last, but not least, with the ERGM methodology we measure the extent to which the retweet network can explain the co-voting activities in the European Parliament. We compute this for each policy area separately and also over all RCVs. We conclude that the retweet network indeed matches the co-voting behavior, with the exception of one specific policy area. In the area Economic and monetary system, the links in the (overall) retweet network do not match the links in the co-voting network. Moreover, the negative coefficients imply a radically different formation of coalitions in the European Parliament. This is consistent with the results in Figs FIGREF36 and FIGREF37 (the left-hand panels), and is also observed in Fig FIGREF42 (the top charts). From these figures we see that in this particular case, the coalitions are also formed between the right-wing groups and the far-left GUE-NGL. As already explained, we attribute this to the degree of Euroscepticism that these groups share on this particular policy issue. Conclusions In this paper we analyze (co-)voting patterns and social behavior of members of the European Parliament, as well as the interaction between these two systems. More precisely, we analyze a set of 2535 roll-call votes as well as the tweets and retweets of members of the MEPs in the period from October 2014 to February 2016. The results indicate a considerable level of correlation between these two complex systems. This is consistent with previous findings of Cherepnalkoski et al. BIBREF22 , who reconstructed the adherence of MEPs to their respective political or national group solely from their retweeting behavior. We employ two different methodologies to quantify the co-voting patterns: Krippendorff's INLINEFORM0 and ERGM. They were developed in different fields of research, use different techniques, and are based on different assumptions, but in general they yield consistent results. However, there are some differences which have consequences for the interpretation of the results. INLINEFORM0 is a measure of agreement, designed as a generalization of several specialized measures, that can compare different numbers of observations, in our case roll-call votes. It only considers yes/no votes. Absence and abstention by MEPs is ignored. Its baseline ( INLINEFORM1 ), i.e., co-voting by chance, is computed from the yes/no votes of all MEPs on all RCVs. ERGMs are used in social-network analyses to determine factors influencing the edge formation process. In our case an edge between two MEPs is formed when they cast the same yes/no vote within a RCV. It is assumed that a priori each MEP can form a link with any other MEP. No assumptions about the presence or absence of individual MEPs in a voting session are made. Each RCV is analyzed as a separate binary network. The node set is thereby kept constant for each RCV network. 
While the ERGM departs from the originally observed network, where MEPs who did not vote or abstained appear as isolated nodes, links between these nodes are possible within the network sampling process, which is part of the ERGM optimization. The results of several RCVs are aggregated by means of the meta-analysis approach. The baseline (ERGM coefficients INLINEFORM0 ), i.e., co-voting by chance, is computed from a large sample of randomly generated networks. These two different baselines have to be taken into account when interpreting the results of INLINEFORM0 and ERGM. In a typical voting session, 25% of the MEPs are missing or abstaining. When assessing the cohesion of political groups, all INLINEFORM1 values are well above the baseline, and the average INLINEFORM2 . The average ERGM cohesion coefficients, on the other hand, are around the baseline. The difference is even more pronounced for groups with higher non-attendance/abstention rates like GUE-NGL (34%) and NI (40%). When assessing the strength of coalitions between pairs of groups, INLINEFORM3 values are balanced around the baseline, while the ERGM coefficients are mostly negative. The ordering of coalitions from the strongest to the weakest is therefore different when groups with high non-attendance/abstention rates are involved. The choice of the methodology to assess cohesion and coalitions is not obvious. Roll-call voting is used for decisions which demand a simple majority only. One might, however, argue that non-attendance/abstention corresponds to a no vote, or that absence is used strategically. Also, the importance of individual votes, i.e., how high the subject is on the agenda of a political group, affects their attendance, and consequently the perception of their cohesion and their potential to act as a reliable coalition partner. Acknowledgments This work was supported in part by the EC projects SIMPOL (no. 610704) and DOLFINS (no. 640772), and by the Slovenian ARRS programme Knowledge Technologies (no. P2-103).
No
5e057e115f8976bf9fe70ab5321af72eb4b4c0fc
5e057e115f8976bf9fe70ab5321af72eb4b4c0fc_0
Q: Did the authors examine if any of the MEPs used the disclaimer that retweeting does not imply endorsement on their twitter profile? Text: Abstract We study the cohesion within and the coalitions between political groups in the Eighth European Parliament (2014–2019) by analyzing two entirely different aspects of the behavior of the Members of the European Parliament (MEPs) in the policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting behavior. We make use of two diverse datasets in the analysis. The first one is the roll-call vote dataset, where cohesion is regarded as the tendency to co-vote within a group, and a coalition is formed when the members of several groups exhibit a high degree of co-voting agreement on a subject. The second dataset comes from Twitter; it captures the retweeting (i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between groups) from a completely different perspective. We employ two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's Alpha reliability, used to measure the agreement between raters in data-analysis scenarios, and the second one is based on Exponential Random Graph Models, often used in social-network analysis. We give general insights into the cohesion of political groups in the European Parliament, explore whether coalitions are formed in the same way for different policy areas, and examine to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. A novel and interesting aspect of our work is the relationship between the co-voting and retweeting patterns. Introduction Social-media activities often reflect phenomena that occur in other complex systems. By observing social networks and the content propagated through these networks, we can describe or even predict the interplay between the observed social-media activities and another complex system that is more difficult, if not impossible, to monitor. There are numerous studies reported in the literature that successfully correlate social-media activities to phenomena like election outcomes BIBREF0 , BIBREF1 or stock-price movements BIBREF2 , BIBREF3 . In this paper we study the cohesion and coalitions exhibited by political groups in the Eighth European Parliament (2014–2019). We analyze two entirely different aspects of how the Members of the European Parliament (MEPs) behave in policy-making processes. On one hand, we analyze their co-voting patterns and, on the other, their retweeting (i.e., endorsing) behavior. We use two diverse datasets in the analysis: the roll-call votes and the Twitter data. A roll-call vote (RCV) is a vote in the parliament in which the names of the MEPs are recorded along with their votes. The RCV data is available as part of the minutes of the parliament's plenary sessions. From this perspective, cohesion is seen as the tendency to co-vote (i.e., cast the same vote) within a group, and a coalition is formed when members of two or more groups exhibit a high degree of co-voting on a subject. The second dataset comes from Twitter. It captures the retweeting behavior of MEPs and implies cohesion (retweets within the same group) and coalitions (retweets between the groups) from a completely different perspective. With over 300 million monthly active users and 500 million tweets posted daily, Twitter is one of the most popular social networks. 
Twitter allows its users to post short messages (tweets) and to follow other users. A user who follows another user is able to read his/her public tweets. Twitter also supports other types of interaction, such as user mentions, replies, and retweets. Of these, retweeting is the most important activity as it is used to share and endorse content created by other users. When a user retweets a tweet, the information about the original author as well as the tweet's content are preserved, and the tweet is shared with the user's followers. Typically, users retweet content that they agree with and thus endorse the views expressed by the original tweeter. We apply two different methodologies to analyze the cohesion and coalitions. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 which measures the agreement among observers, or voters in our case. The second one is based on Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach and is often used in social-network analyses. Even though these two methodologies come with two different sets of techniques and are based on different assumptions, they provide consistent results. The main contributions of this paper are as follows: (i) We give general insights into the cohesion of political groups in the Eighth European Parliament, both overall and across different policy areas. (ii) We explore whether coalitions are formed in the same way for different policy areas. (iii) We explore to what degree the retweeting behavior of MEPs corresponds to their co-voting patterns. (iv) We employ two statistically sound methodologies and examine the extent to which the results are sensitive to the choice of methodology. While the results are mostly consistent, we show that the difference are due to the different treatment of non-attending and abstaining MEPs by INLINEFORM0 and ERGM. The most novel and interesting aspect of our work is the relationship between the co-voting and the retweeting patterns. The increased use of Twitter by MEPs on days with a roll-call vote session (see Fig FIGREF1 ) is an indicator that these two processes are related. In addition, the force-based layouts of the co-voting network and the retweet network reveal a very similar structure on the left-to-center side of the political spectrum (see Fig FIGREF2 ). They also show a discrepancy on the far-right side of the spectrum, which calls for a more detailed analysis. Related work In this paper we study and relate two very different aspects of how MEPs behave in policy-making processes. First, we look at their co-voting behavior, and second, we examine their retweeting patterns. Thus, we draw related work from two different fields of science. On one hand, we look at how co-voting behavior is analyzed in the political-science literature and, on the other, we explore how Twitter is used to better understand political and policy-making processes. The latter has been more thoroughly explored in the field of data mining (specifically, text mining and network analysis). To the best of our knowledge, this is the first paper that studies legislative behavior in the Eighth European Parliament. The legislative behavior of the previous parliaments was thoroughly studied by Hix, Attina, and others BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . 
These studies found that voting behavior is determined to a large extent—and when viewed over time, increasingly so—by affiliation to a political group, as an organizational reflection of the ideological position. The authors found that the cohesion of political groups in the parliament has increased, while nationality has been less and less of a decisive factor BIBREF12 . The literature also reports that a split into political camps on the left and right of the political spectrum has recently replaced the `grand coalition' between the two big blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. The authors conclude that coalitions are to a large extent formed along the left-to-right axis BIBREF12 . In this paper we analyze the roll-call vote data published in the minutes of the parliament's plenary sessions. For a given subject, the data contains the vote of each MEP present at the respective sitting. Roll-call vote data from the European Parliament has already been extensively studied by other authors, most notably by Hix et al. BIBREF10 , BIBREF13 , BIBREF11 . To be able to study the cohesion and coalitions, authors like Hix, Attina, and Rice BIBREF6 , BIBREF13 , BIBREF14 defined and employed a variety of agreement measures. The most prominent measure is the Agreement Index proposed by Hix et al. BIBREF13 . This measure computes the agreement score from the size of the majority class for a particular vote. The Agreement Index, however, exhibits two drawbacks: (i) it does not account for co-voting by chance, and (ii) without a proper adaptation, it does not accommodate the scenario in which the agreement is to be measured between two different political groups. We employ two statistically sound methodologies developed in two different fields of science. The first one is based on Krippendorff's INLINEFORM0 BIBREF4 . INLINEFORM1 is a measure of the agreement among observers, coders, or measuring instruments that assign values to items or phenomena. It compares the observed agreement to the agreement expected by chance. INLINEFORM2 is used to measure the inter- and self-annotator agreement of human experts when labeling data, and the performance of classification models in machine learning scenarios BIBREF15 . In addition to INLINEFORM3 , we employ Exponential Random Graph Models (ERGM) BIBREF5 . In contrast to the former, ERGM is a network-based approach, often used in social-network analyses. ERGM can be employed to investigate how different network statistics (e.g., number of edges and triangles) or external factors (e.g., political group membership) govern the network-formation process. The second important aspect of our study is related to analyzing the behavior of participants in social networks, specifically Twitter. Twitter is studied by researchers to better understand different political processes, and in some cases to predict their outcomes. Eom et al. BIBREF1 consider the number of tweets by a party as a proxy for the collective attention to the party, explore the dynamics of the volume, and show that this quantity contains information about an election's outcome. Other studies BIBREF16 reach similar conclusions. Conover et al. BIBREF17 predicted the political alignment of Twitter users in the run-up to the 2010 US elections based on content and network structure. They analyzed the polarization of the retweet and mention networks for the same elections BIBREF18 . Borondo et al. 
BIBREF19 analyzed user activity during the Spanish presidential elections. They additionally analyzed the 2012 Catalan elections, focusing on the interplay between the language and the community structure of the network BIBREF20 . Most existing research, as Larsson points out BIBREF21 , focuses on the online behavior of leading political figures during election campaigns. This paper continues our research on communities that MEPs (and their followers) form on Twitter BIBREF22 . The goal of our research was to evaluate the role of Twitter in identifying communities of influence when the actual communities are known. We represent the influence on Twitter by the number of retweets that MEPs “receive”. We construct two networks of influence: (i) core, which consists only of MEPs, and (ii) extended, which also involves their followers. We compare the detected communities in both networks to the groups formed by the political, country, and language membership of MEPs. The results show that the detected communities in the core network closely match the political groups, while the communities in the extended network correspond to the countries of residence. This provides empirical evidence that analyzing retweet networks can reveal real-world relationships and can be used to uncover hidden properties of the networks. Lazer BIBREF23 highlights the importance of network-based approaches in political science in general by arguing that politics is a relational phenomenon at its core. Some researchers have adopted the network-based approach to investigate the structure of legislative work in the US Congress, including committee and sub-committee membership BIBREF24 , bill co-sponsoring BIBREF25 , and roll-call votes BIBREF26 . More recently, Dal Maso et al. BIBREF27 examined the community structure with respect to political coalitions and government structure in the Italian Parliament. Scherpereel et al. BIBREF28 examined the constituency, personal, and strategic characteristics of MEPs that influence their tweeting behavior. They suggested that Twitter's characteristics, like immediacy, interactivity, spontaneity, personality, and informality, are likely to resonate with political parties across Europe. By fitting regression models, the authors find that MEPs from incohesive groups have a greater tendency to retweet. In contrast to most of these studies, we focus on the Eighth European Parliament, and more importantly, we study and relate two entirely different behavioral aspects, co-voting and retweeting. The goal of this research is to better understand the cohesion and coalition formation processes in the European Parliament by quantifying and comparing the co-voting patterns and social behavior. Methods In this section we present the methods to quantify cohesion and coalitions from the roll-call votes and Twitter activities. Co-voting measured by agreement We first show how the co-voting behaviour of MEPs can be quantified by a measure of the agreement between them. We treat individual RCVs as observations, and MEPs as independent observers or raters. When they cast the same vote, there is a high level of agreement, and when they vote differently, there is a high level of disagreement. We define cohesion as the level of agreement within a political group, a coalition as a voting agreement between political groups, and opposition as a disagreement between different groups. There are many well-known measures of agreement in the literature. 
We selected Krippendorff's Alpha-reliability ( INLINEFORM0 ) BIBREF4 , which is a generalization of several specialized measures. It works for any number of observers, and is applicable to different variable types and metrics (e.g., nominal, ordered, interval, etc.). In general, INLINEFORM1 is defined as follows: INLINEFORM2 where INLINEFORM0 is the actual disagreement between observers (MEPs), and INLINEFORM1 is disagreement expected by chance. When observers agree perfectly, INLINEFORM2 INLINEFORM3 , when the agreement equals the agreement by chance, INLINEFORM4 INLINEFORM5 , and when the observers disagree systematically, INLINEFORM6 INLINEFORM7 . The two disagreement measures are defined as follows: INLINEFORM0 INLINEFORM0 The arguments INLINEFORM0 , and INLINEFORM1 are defined below and refer to the values in the coincidence matrix that is constructed from the RCVs data. In roll-call votes, INLINEFORM2 (and INLINEFORM3 ) is a nominal variable with two possible values: yes and no. INLINEFORM4 is a difference function between the values of INLINEFORM5 and INLINEFORM6 , defined as: INLINEFORM7 The RCVs data has the form of a reliability data matrix: INLINEFORM0 where INLINEFORM0 is the number of RCVs, INLINEFORM1 is the number of MEPs, INLINEFORM2 is the number of votes cast in the voting INLINEFORM3 , and INLINEFORM4 is the actual vote of an MEP INLINEFORM5 in voting INLINEFORM6 (yes or no). A coincidence matrix is constructed from the reliability data matrix, and is in general a INLINEFORM0 -by- INLINEFORM1 square matrix, where INLINEFORM2 is the number of possible values of INLINEFORM3 . In our case, where only yes/no votes are relevant, the coincidence matrix is a 2-by-2 matrix of the following form: INLINEFORM4 A cell INLINEFORM0 accounts for all coincidences from all pairs of MEPs in all RCVs where one MEP has voted INLINEFORM1 and the other INLINEFORM2 . INLINEFORM3 and INLINEFORM4 are the totals for each vote outcome, and INLINEFORM5 is the grand total. The coincidences INLINEFORM6 are computed as: INLINEFORM7 where INLINEFORM0 is the number of INLINEFORM1 pairs in vote INLINEFORM2 , and INLINEFORM3 is the number of MEPs that voted in INLINEFORM4 . When computing INLINEFORM5 , each pair of votes is considered twice, once as a INLINEFORM6 pair, and once as a INLINEFORM7 pair. The coincidence matrix is therefore symmetrical around the diagonal, and the diagonal contains all the equal votes. The INLINEFORM0 agreement is used to measure the agreement between two MEPs or within a group of MEPs. When applied to a political group, INLINEFORM1 corresponds to the cohesion of the group. The closer INLINEFORM2 is to 1, the higher the agreement of the MEPs in the group, and hence the higher the cohesion of the group. We propose a modified version of INLINEFORM0 to measure the agreement between two different groups, INLINEFORM1 and INLINEFORM2 . In the case of a voting agreement between political groups, high INLINEFORM3 is interpreted as a coalition between the groups, whereas negative INLINEFORM4 indicates political opposition. Suppose INLINEFORM0 and INLINEFORM1 are disjoint subsets of all the MEPs, INLINEFORM2 , INLINEFORM3 . The respective number of votes cast by both group members in vote INLINEFORM4 is INLINEFORM5 and INLINEFORM6 . The coincidences are then computed as: INLINEFORM7 where the INLINEFORM0 pairs come from different groups, INLINEFORM1 and INLINEFORM2 . The total number of such pairs in vote INLINEFORM3 is INLINEFORM4 . 
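To make this concrete, the following is a minimal sketch of the within-group Alpha computation for yes/no roll-call data, assuming the votes are given as a matrix with one row per RCV and one column per MEP (1 for yes, 0 for no, NaN for non-attendance or abstention); the function name and data layout are illustrative rather than the implementation used in this study. The between-group variant, whose normalization is described above and completed below, differs only in which pairs of votes enter the coincidence matrix.

```python
import numpy as np

def krippendorff_alpha_binary(votes):
    """Krippendorff's Alpha for nominal yes/no roll-call data.

    votes: 2-D array of shape (n_rcvs, n_meps) with entries
           1 (yes), 0 (no), or np.nan (absent or abstaining).
    """
    o = np.zeros((2, 2))                 # coincidence matrix, rows/cols = [no, yes]
    for u in votes:                      # one roll-call vote (one unit)
        cast = u[~np.isnan(u)]
        m_u = len(cast)                  # number of MEPs who voted yes or no
        if m_u < 2:
            continue                     # a single vote yields no pairs
        n_yes = int(cast.sum())
        n_no = m_u - n_yes
        # ordered pairs of distinct MEPs, normalised by (m_u - 1)
        o[1, 1] += n_yes * (n_yes - 1) / (m_u - 1)
        o[0, 0] += n_no * (n_no - 1) / (m_u - 1)
        o[0, 1] += n_yes * n_no / (m_u - 1)
        o[1, 0] += n_yes * n_no / (m_u - 1)
    n_c = o.sum(axis=1)                  # marginal totals for no and yes
    n = o.sum()                          # grand total
    d_o = (o[0, 1] + o[1, 0]) / n        # observed disagreement
    d_e = 2 * n_c[0] * n_c[1] / (n * (n - 1))   # disagreement expected by chance
    if d_e == 0:
        return float("nan")              # Alpha is undefined without any variation
    return 1.0 - d_o / d_e

# cohesion of one political group: pass only the columns of its MEPs, e.g.
# krippendorff_alpha_binary(votes[:, group_member_columns])
```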
The actual number INLINEFORM5 of the pairs is multiplied by INLINEFORM6 so that the total contribution of vote INLINEFORM7 to the coincidence matrix is INLINEFORM8 . A network-based measure of co-voting In this section we describe a network-based approach to analyzing the co-voting behavior of MEPs. For each roll-call vote we form a network, where the nodes in the network are MEPs, and an undirected edge between two MEPs is formed when they cast the same vote. We are interested in the factors that determine the cohesion within political groups and coalition formation between political groups. Furthermore, we investigate to what extent communication in a different social context, i.e., the retweeting behavior of MEPs, can explain the co-voting of MEPs. For this purpose we apply an Exponential Random Graph Model BIBREF5 to individual roll-call vote networks, and aggregate the results by means of the meta-analysis. ERGMs allow us to investigate the factors relevant for the network-formation process. Network metrics, as described in the abundant literature, serve to gain information about the structural properties of the observed network. A model investigating the processes driving the network formation, however, has to take into account that there can be a multitude of alternative networks. If we are interested in the parameters influencing the network formation we have to consider all possible networks and measure their similarity to the originally observed network. The family of ERGMs builds upon this idea. Assume a random graph INLINEFORM0 , in the form of a binary adjacency matrix, made up of a set of INLINEFORM1 nodes and INLINEFORM2 edges INLINEFORM3 where, similar to a binary choice model, INLINEFORM4 if the nodes INLINEFORM5 are connected and INLINEFORM6 if not. Since network data is by definition relational and thus violates assumptions of independence, classical binary choice models, like logistic regression, cannot be applied in this context. Within an ERGM, the probability for a given network is modelled by DISPLAYFORM0 where INLINEFORM0 is the vector of parameters and INLINEFORM1 is the vector of network statistics (counts of network substructures), which are a function of the adjacency matrix INLINEFORM2 . INLINEFORM3 is a normalization constant corresponding to the sample of all possible networks, which ensures a proper probability distribution. Evaluating the above expression allows us to make assertions if and how specific nodal attributes influence the network formation process. These nodal attributes can be endogenous (dyad-dependent parameters) to the network, like the in- and out-degrees of a node, or exogenous (dyad-independent parameters), as the party affiliation, or the country of origin in our case. An alternative formulation of the ERGM provides the interpretation of the coefficients. We introduce the change statistic, which is defined as the change in the network statistics when an edge between nodes INLINEFORM0 and INLINEFORM1 is added or not. 
If INLINEFORM2 and INLINEFORM3 denote the vectors of counts of network substructures when the edge is added or not, the change statistics is defined as follows: INLINEFORM4 With this at hand it can be shown that the distribution of the variable INLINEFORM0 , conditional on the rest of the graph INLINEFORM1 , corresponds to: INLINEFORM2 This implies on the one hand that the probability depends on INLINEFORM0 via the change statistic INLINEFORM1 , and on the other hand, that each coefficient within the vector INLINEFORM2 represents an increase in the conditional log-odds ( INLINEFORM3 ) of the graph when the corresponding element in the vector INLINEFORM4 increases by one. The need to condition the probability on the rest of the network can be illustrated by a simple example. The addition (removal) of a single edge alters the network statistics. If a network has only edges INLINEFORM5 and INLINEFORM6 , the creation of an edge INLINEFORM7 would not only add an additional edge but would also alter the count for other network substructures included in the model. In this example, the creation of the edge INLINEFORM8 also increases the number of triangles by one. The coefficients are transformed into probabilities with the logistic function: INLINEFORM9 For example, in the context of roll-call votes, the probability that an additional co-voting edge is formed between two nodes (MEPs) of the same political group is computed with that equation. In this context, the nodematch (nodemix) coefficients of the ERGM (described in detail bellow) therefore refer to the degree of homophilous (heterophilous) matching of MEPs with regard to their political affiliation, or, expressed differently, the propensity of MEPs to co-vote with other MEPs of their respective political group or another group. A positive coefficient reflects an increased chance that an edge between two nodes with respective properties, like group affiliation, given all other parameters unchanged, is formed. Or, put differently, a positive coefficient implies that the probability of observing a network with a higher number of corresponding pairs relative to the hypothetical baseline network, is higher than to observe the baseline network itself BIBREF31 . For an intuitive interpretation, log-odds value of 0 corresponds to the even chance probability of INLINEFORM0 . Log-odds of INLINEFORM1 correspond to an increase of probability by INLINEFORM2 , whereas log-odds of INLINEFORM3 correspond to a decrease of probability by INLINEFORM4 . The computational challenges of estimating ERGMs is to a large degree due to the estimation of the normalizing constant. The number of possible networks is already extremely large for very small networks and the computation is simply not feasible. Therefore, an appropriate sample has to be found, ideally covering the most probable areas of the probability distribution. For this we make use of a method from the Markov Chain Monte Carlo (MCMC) family, namely the Metropolis-Hastings algorithm. The idea behind this algorithm is to generate and sample highly weighted random networks departing from the observed network. The Metropolis-Hastings algorithm is an iterative algorithm which samples from the space of possible networks by randomly adding or removing edges from the starting network conditional on its density. If the likelihood, in the ERGM context also denoted as weights, of the newly generated network is higher than that of the departure network it is retained, otherwise it is discarded. 
In the former case, the algorithm starts anew from the newly generated network. Otherwise, the departure network is used again. Repeating this procedure sufficiently often and summing the weights associated with the stored (sampled) networks allows us to compute an approximation of the denominator in equation EQREF18 (the normalizing constant). The algorithm starts sampling from the originally observed network INLINEFORM0 . The optimization of the coefficients is done simultaneously, likewise with the Metropolis-Hastings algorithm. At the beginning, starting values have to be supplied. For the study at hand we used the “ergm” library from the statistical R software package BIBREF5 , which implements the Gibbs-sampling algorithm BIBREF32 , a special case of the Metropolis-Hastings algorithm outlined above. In order to answer our question about the importance of the factors that drive the network-formation process in the roll-call co-voting network, the ERGM is specified with the following parameters: nodematch country: This parameter adds one network statistic to the model, i.e., the number of edges INLINEFORM0 where INLINEFORM1 . The coefficient indicates the homophilous mixing behavior of MEPs with respect to their country of origin. In other words, this coefficient indicates how relevant nationality is in the formation of edges in the co-voting network. nodematch national party: This parameter adds one network statistic to the model: the number of edges INLINEFORM0 with INLINEFORM1 . The coefficient indicates the homophilous mixing behavior of the MEPs with regard to their party affiliation at the national level. In the context of this study, this coefficient can be interpreted as an indicator of within-party cohesion at the national level. nodemix EP group: This parameter adds one network statistic for each pair of European political groups. These coefficients shed light on the degree of coalition formation between different groups, as well as on the within-group cohesion. Given that there are nine groups in the European Parliament, this parameter adds in total 81 statistics to the model. edge covariate Twitter: This parameter corresponds to a square matrix with the dimensions of the adjacency matrix of the network, whose entries are the numbers of mutual retweets between the MEPs. It provides insight into the extent to which communication in one social context (Twitter) can explain cooperation in another social context (co-voting in RCVs). An ERGM as specified above is estimated for each of the 2535 roll-call votes. Each roll-call vote is thereby interpreted as a binary network and as an independent study. It is assumed that a priori each MEP could possibly form an edge with each other MEP in the context of a roll-call vote. Assumptions about the presence or absence of individual MEPs in a voting session are not made. In other words, the dimensions of the adjacency matrix (the node set), and therefore the distribution from which new networks are drawn, are kept constant over all RCVs and therefore for every ERGM. The ERGM results therefore implicitly generalize to the case where potentially all MEPs are present and could be voting. Not voting is incorporated implicitly through the disconnectedness of a node. The coefficients of the 2535 roll-call vote studies are aggregated by means of a meta-analysis approach proposed by Lubbers BIBREF33 and Snijders et al. BIBREF34 . We are interested in the average effect sizes of different matching patterns over different topics and overall.
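Before turning to the aggregation across RCVs, the sketch below illustrates, for a single RCV co-voting network, the counts that the four terms above summarise, together with the logistic transformation used to read a fitted coefficient as an edge probability. The model itself was estimated with the R ergm package as described above; the Python data structures and function names here are assumptions made only for illustration.

```python
import math
from collections import Counter
from itertools import combinations

def covote_statistics(adj, country, party, group, retweets):
    """Counts summarised by the four ERGM terms for one RCV co-voting network.

    adj      -- symmetric 0/1 co-voting adjacency matrix for a single RCV
    country, party, group -- per-MEP attributes, in the same order as adj
    retweets -- symmetric matrix of mutual retweet counts (edge covariate)
    """
    same_country = 0          # nodematch country
    same_party = 0            # nodematch national party
    group_mix = Counter()     # nodemix EP group: one count per unordered group pair
    retweet_cov = 0.0         # edge covariate Twitter
    for i, j in combinations(range(adj.shape[0]), 2):
        if adj[i, j]:
            same_country += int(country[i] == country[j])
            same_party += int(party[i] == party[j])
            # pairs with identical groups land on the "diagonal" and
            # correspond to within-group cohesion
            group_mix[tuple(sorted((group[i], group[j])))] += 1
            retweet_cov += retweets[i, j]
    return same_country, same_party, group_mix, retweet_cov

def edge_probability(theta, delta=1.0):
    """Read a fitted coefficient theta (log-odds) as an edge probability,
    given the change delta in the corresponding network statistic."""
    return 1.0 / (1.0 + math.exp(-theta * delta))

# edge_probability(0.0) == 0.5 (even chance); positive coefficients push the
# probability above 0.5, e.g. edge_probability(1.0) is roughly 0.73.
```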
Considering the number of RCVs, it seems straightforward to interpret the different RCV networks as multiplex networks and collapse them into one weighted network, which could then be analysed by means of a valued ERGM BIBREF35 . There are, however, two reasons why we chose the meta-analysis approach instead. First, aggregating the RCV data results into an extremely dense network, leading to severe convergence (degeneracy) problems for the ERGM. Second, the RCV data contains information about the different policy areas the individual votes were about. Since we are interested in how the coalition formation in the European Parliament differs over different areas, a method is needed that allows for an ex-post analysis of the corresponding results. We therefore opted for the meta-analysis approach by Lubbers and Snijders et al. This approach allows us to summarize the results by decomposing the coefficients into average effects and (class) subject-specific deviations. The different ERGM runs for each RCV are thereby regarded as different studies with identical samples that are combined to obtain a general overview of effect sizes. The meta-regression model is defined as: INLINEFORM0 Here INLINEFORM0 is a parameter estimate for class INLINEFORM1 , and INLINEFORM2 is the average coefficient. INLINEFORM3 denotes the normally distributed deviation of the class INLINEFORM4 with a mean of 0 and a variance of INLINEFORM5 . INLINEFORM6 is the estimation error of the parameter value INLINEFORM7 from the ERGM. The meta-analysis model is fitted by an iterated, weighted, least-squares model in which the observations are weighted by the inverse of their variances. For the overall nodematch between political groups, we weighted the coefficients by group sizes. The results from the meta analysis can be interpreted as if they stemmed from an individual ERGM run. In our study, the meta-analysis was performed using the RSiena library BIBREF36 , which implements the method proposed by Lubbers and Snijders et al. BIBREF33 , BIBREF34 . Measuring cohesion and coalitions on Twitter The retweeting behavior of MEPs is captured by their retweet network. Each MEP active on Twitter is a node in this network. An edge in the network between two MEPs exists when one MEP retweeted the other. The weight of the edge is the number of retweets between the two MEPs. The resulting retweet network is an undirected, weighted network. We measure the cohesion of a political group INLINEFORM0 as the average retweets, i.e., the ratio of the number of retweets between the MEPs in the group INLINEFORM1 to the number of MEPs in the group INLINEFORM2 . The higher the ratio, the more each MEP (on average) retweets the MEPs from the same political group, hence, the higher the cohesion of the political group. The definition of the average retweeets ( INLINEFORM3 ) of a group INLINEFORM4 is: INLINEFORM5 This measure of cohesion captures the aggregate retweeting behavior of the group. If we consider retweets as endorsements, a larger number of retweets within the group is an indicator of agreement between the MEPs in the group. It does not take into account the patterns of retweeting within the group, thus ignoring the social sub-structure of the group. This is a potentially interesting direction and we leave it for future work. We employ an analogous measure for the strength of coalitions in the retweet network. 
The coalition strength between two groups INLINEFORM0 and INLINEFORM1 is the ratio of the number of retweets from one group to the other (but not within groups) INLINEFORM2 to the total number of MEPs in both groups, INLINEFORM3 . The definition of the average retweeets ( INLINEFORM4 ) between groups INLINEFORM5 and INLINEFORM6 is: INLINEFORM7 Cohesion of political groups In this section we first report on the level of cohesion of the European Parliament's groups by analyzing the co-voting through the agreement and ERGM measures. Next, we explore two important policy areas, namely Economic and monetary system and State and evolution of the Union. Finally, we analyze the cohesion of the European Parliament's groups on Twitter. Existing research by Hix et al. BIBREF10 , BIBREF13 , BIBREF11 shows that the cohesion of the European political groups has been rising since the 1990s, and the level of cohesion remained high even after the EU's enlargement in 2004, when the number of MEPs increased from 626 to 732. We measure the co-voting cohesion of the political groups in the Eighth European Parliament using Krippendorff's Alpha—the results are shown in Fig FIGREF30 (panel Overall). The Greens-EFA have the highest cohesion of all the groups. This finding is in line with an analysis of previous compositions of the Fifth and Sixth European Parliaments by Hix and Noury BIBREF11 , and the Seventh by VoteWatch BIBREF37 . They are closely followed by the S&D and EPP. Hix and Noury reported on the high cohesion of S&D in the Fifth and Sixth European Parliaments, and we also observe this in the current composition. They also reported a slightly less cohesive EPP-ED. This group split in 2009 into EPP and ECR. VoteWatch reports EPP to have cohesion on a par with Greens-EFA and S&D in the Seventh European Parliament. The cohesion level we observe in the current European Parliament is also similar to the level of Greens-EFA and S&D. The catch-all group of the non-aligned (NI) comes out as the group with the lowest cohesion. In addition, among the least cohesive groups in the European Parliament are the Eurosceptics EFDD, which include the British UKIP led by Nigel Farage, and the ENL whose largest party are the French National Front, led by Marine Le Pen. Similarly, Hix and Noury found that the least cohesive groups in the Seventh European Parliament are the nationalists and Eurosceptics. The Eurosceptic IND/DEM, which participated in the Sixth European Parliament, transformed into the current EFDD, while the nationalistic UEN was dissolved in 2009. We also measure the voting cohesion of the European Parliament groups using an ERGM, a network-based method—the results are shown in Fig FIGREF31 (panel Overall). The cohesion results obtained with ERGM are comparable to the results based on agreement. In this context, the parameters estimated by the ERGM refer to the matching of MEPs who belong to the same political group (one parameter per group). The parameters measure the homophilous matching between MEPs who have the same political affiliation. A positive value for the estimated parameter indicates that the co-voting of MEPs from that group is greater than what is expected by chance, where the expected number of co-voting links by chance in a group is taken to be uniformly random. A negative value indicates that there are fewer co-voting links within a group than expected by chance. 
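The two retweet-based measures defined above (average retweets within a group, and between two groups) can be sketched as follows, assuming a symmetric matrix of retweet counts between pairs of MEPs and lists of member indices per political group; the names and data layout are illustrative rather than the implementation used here.

```python
from itertools import combinations

def average_retweets_within(retweets, members):
    """Cohesion of a group: retweets between its members, per member."""
    rt = sum(retweets[i, j] for i, j in combinations(members, 2))
    return rt / len(members)

def average_retweets_between(retweets, members_g, members_h):
    """Coalition strength: retweets between (not within) two groups, per member."""
    rt = sum(retweets[i, j] for i in members_g for j in members_h)
    return rt / (len(members_g) + len(members_h))

# e.g. average_retweets_within(rt_matrix, groups["EPP"]) for cohesion, and
# average_retweets_between(rt_matrix, groups["EPP"], groups["S&D"]) for a coalition.
```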
Even though INLINEFORM0 and ERGM compute scores relative to what is expected by chance, they refer to different interpretations of chance. INLINEFORM1 's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group, knowing the votes of these MEPs on all RCVs. ERGM's concept of chance is based on the number of expected pair-wise co-votes between MEPs belonging to a group on a given RCV, knowing the network-related properties of the co-voting network on that particular RCV. The main difference between INLINEFORM2 and ERGM, though, is the treatment of non-voting and abstaining MEPs. INLINEFORM3 considers only the yes/no votes, and consequently, the agreement between the voting MEPs of the same group is considerably higher than co-voting by chance. ERGM, on the other hand, always considers all MEPs, and non-voting and abstaining MEPs are treated as disconnected nodes. The level of co-voting by chance is therefore considerably lower, since there is often a large fraction of MEPs that do not attend or abstain. As with INLINEFORM0 , Greens-EFA, S&D, and EPP exhibit the highest cohesion, even though their ranking is permuted when compared to the ranking obtained with INLINEFORM1 . At the other end of the scale, we observe the same situation as with INLINEFORM2 . The non-aligned members NI have the lowest cohesion, followed by EFDD and ENL. The only place where the two methods disagree is the level of cohesion of GUE-NGL. The Alpha attributes GUE-NGL a rather high level of cohesion, on a par with ALDE, whereas the ERGM attributes them a much lower cohesion. The reason for this difference is the relatively high abstention rate of GUE-NGL. Whereas the overall fraction of non-attending and abstaining MEPs across all RCVs and all political groups is 25%, the GUE-NGL abstention rate is 34%. This is reflected in an above-average cohesion by INLINEFORM0 , where only yes/no votes are considered, and in a relatively lower, below-average cohesion by ERGM. In the latter case, the non-attendance is interpreted as non-cohesive voting of a political group as a whole. In addition to the overall cohesion, we also focus on two selected policy areas. The cohesion of the political groups related to these two policy areas is shown in the first two panels in Fig FIGREF30 ( INLINEFORM0 ) and Fig FIGREF31 (ERGM). The most important observation is that the level of cohesion of the political groups is very stable across different policy areas. These results are corroborated by both methodologies. Similar to the overall cohesion, the most cohesive political groups are the S&D, Greens-EFA, and EPP. The least cohesive group is the NI, followed by the ENL and EFDD. The two methodologies agree on the level of cohesion for all the political groups, except for GUE-NGL, due to its lower attendance rate. We determine the cohesion of political groups on Twitter by using the average number of retweets between MEPs within the same group. The results are shown in Fig FIGREF33 . The right-wing ENL and EFDD come out as the most cohesive groups, while all the other groups have a far lower average number of retweets. MEPs from ENL and EFDD post by far the largest number of retweets (over 240), and at the same time over 94% of their retweets are directed to MEPs from the same group. Moreover, these two groups stand out in the way the retweets are distributed within the group. A large portion of the retweets of EFDD (1755) goes to Nigel Farage, the leader of the group.
Likewise, a very large portion of retweets of ENL (2324) go to Marine Le Pen, the leader of the group. Farage and Le Pen are by far the two most retweeted MEPs, with the third one having only 666 retweets. Coalitions in the European Parliament Coalition formation in the European Parliament is largely determined by ideological positions, reflected in the degree of cooperation of parties at the national and European levels. The observation of ideological inclinations in the coalition formation within the European Parliament was already made by other authors BIBREF11 and is confirmed in this study. The basic patterns of coalition formation in the European Parliament can already be seen in the co-voting network in Fig FIGREF2 A. It is remarkable that the degree of attachment between the political groups, which indicates the degree of cooperation in the European Parliament, nearly exactly corresponds to the left-to-right seating order. The liberal ALDE seems to have an intermediator role between the left and right parts of the spectrum in the parliament. Between the extreme (GUE-NGL) and center left (S&D) groups, this function seems to be occupied by Greens-EFA. The non-aligned members NI, as well as the Eurosceptic EFFD and ENL, seem to alternately tip the balance on both poles of the political spectrum. Being ideologically more inclined to vote with other conservative and right-wing groups (EPP, ECR), they sometimes also cooperate with the extreme left-wing group (GUE-NGL) with which they share their Euroscepticism as a common denominator. Figs FIGREF36 and FIGREF37 give a more detailed understanding of the coalition formation in the European Parliament. Fig FIGREF36 displays the degree of agreement or cooperation between political groups measured by Krippendorff's INLINEFORM0 , whereas Fig FIGREF37 is based on the result from the ERGM. We first focus on the overall results displayed in the right-hand plots of Figs FIGREF36 and FIGREF37 . The strongest degrees of cooperation are observed, with both methods, between the two major parties (EPP and S&D) on the one hand, and the liberal ALDE on the other. Furthermore, we see a strong propensity for Greens-EFA to vote with the Social Democrats (5th strongest coalition by INLINEFORM0 , and 3rd by ERGM) and the GUE-NGL (3rd strongest coalition by INLINEFORM1 , and 5th by ERGM). These results underline the role of ALDE and Greens-EFA as intermediaries for the larger groups to achieve a majority. Although the two largest groups together have 405 seats and thus significantly more than the 376 votes needed for a simple majority, the degree of cooperation between the two major groups is ranked only as the fourth strongest by both methods. This suggests that these two political groups find it easier to negotiate deals with smaller counterparts than with the other large group. This observation was also made by Hix et al. BIBREF12 , who noted that alignments on the left and right of the political spectrum have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the parliament. Next, we focus on the coalition formation within the two selected policy areas. The area State and Evolution of the Union is dominated by cooperation between the two major groups, S&D and EPP, as well as ALDE. We also observe a high degree of cooperation between groups that are generally regarded as integration friendly, like Greens-EFA and GUE-NGL. 
We see, particularly in Fig FIGREF36 , a relatively high degree of cooperation between groups considered as Eurosceptic, like ECR, EFFD, ENL, and the group of non-aligned members. The dichotomy between supporters and opponents of European integration is even more pronounced within the policy area Economic and Monetary System. In fact, we are taking a closer look specifically at these two areas as they are, at the same time, both contentious and important. Both methods rank the cooperation between S&D and EPP on the one hand, and ALDE on the other, as the strongest. We also observe a certain degree of unanimity among the Eurosceptic and right-wing groups (EFDD, ENL, and NI) in this policy area. This seems plausible, as these groups were (especially in the aftermath of the global financial crisis and the subsequent European debt crisis) in fierce opposition to further payments to financially troubled member states. However, we also observe a number of strong coalitions that might, at first glance, seem unusual, specifically involving the left-wing group GUE-NGL on the one hand, and the right-wing EFDD, ENL, and NI on the other. These links also show up in the network plot in Fig FIGREF2 A. This might be attributable to a certain degree of Euroscepticism on both sides: rooted in criticism of capitalism on the left, and at least partly a raison d'être on the right. Hix et al. BIBREF11 discovered this pattern as well, and proposed an additional explanation—these coalitions also relate to a form of government-opposition dynamic that is rooted at the national level, but is reflected in voting patterns at the European level. In general, we observe two main differences between the INLINEFORM0 and ERGM results: the baseline cooperation as estimated by INLINEFORM1 is higher, and the ordering of coalitions from the strongest to the weakest is not exactly the same. The reason is the same as for the cohesion, namely different treatment of non-voting and abstaining MEPs. When they are ignored, as by INLINEFORM2 , the baseline level of inter-group co-voting is higher. When non-attending and abstaining is treated as voting differently, as by ERGM, it is considerably more difficult to achieve co-voting coalitions, specially when there are on average 25% MEPs that do not attend or abstain. Groups with higher non-attendance rates, such as GUE-NGL (34%) and NI (40%) are less likely to form coalitions, and therefore have relatively lower ERGM coefficients (Fig FIGREF37 ) than INLINEFORM3 scores (Fig FIGREF36 ). The first insight into coalition formation on Twitter can be observed in the retweet network in Fig FIGREF2 B. The ideological left to right alignment of the political groups is reflected in the retweet network. Fig FIGREF40 shows the strength of the coalitions on Twitter, as estimated by the number of retweets between MEPs from different groups. The strongest coalitions are formed between the right-wing groups EFDD and ECR, as well as ENL and NI. At first, this might come as a surprise, since these groups do not form strong coalitions in the European Parliament, as can be seen in Figs FIGREF36 and FIGREF37 . On the other hand, the MEPs from these groups are very active Twitter users. As previously stated, MEPs from ENL and EFDD post the largest number of retweets. Moreover, 63% of the retweets outside of ENL are retweets of NI. This effect is even more pronounced with MEPs from EFDD, whose retweets of ECR account for 74% of their retweets from other groups. 
In addition to these strong coalitions on the right wing, we find coalition patterns to be very similar to the voting coalitions observed in the European Parliament, seen in Figs FIGREF36 and FIGREF37 . The strongest coalitions, which come immediately after the right-wing coalitions, are between Greens-EFA on the one hand, and GUE-NGL and S&D on the other, as well as ALDE on the one hand, and EPP and S&D on the other. These results corroborate the role of ALDE and Greens-EFA as intermediaries in the European Parliament, not only in the legislative process, but also in the debate on social media. To better understand the formation of coalitions in the European Parliament and on Twitter, we examine the strongest cooperation between political groups at three different thresholds. For co-voting coalitions in the European Parliament we choose a high threshold of INLINEFORM0 , a medium threshold of INLINEFORM1 , and a negative threshold of INLINEFORM2 (which corresponds to strong oppositions). In this way we observe the overall patterns of coalition and opposition formation in the European Parliament and in the two specific policy areas. For cooperation on Twitter, we choose a high threshold of INLINEFORM3 , a medium threshold of INLINEFORM4 , and a very low threshold of INLINEFORM5 . The strongest cooperations in the European Parliament over all policy areas are shown in Fig FIGREF42 G. It comes as no surprise that the strongest cooperations are within the groups (in the diagonal). Moreover, we again observe GUE-NGL, S&D, Greens-EFA, ALDE, and EPP as the most cohesive groups. In Fig FIGREF42 H, we observe coalitions forming along the diagonal, which represents the seating order in the European Parliament. Within this pattern, we observe four blocks of coalitions: on the left, between GUE-NGL, S&D, and Greens-EFA; in the center, between S&D, Greens-EFA, ALDE, and EPP; on the right-center between ALDE, EPP, and ECR; and finally, on the far-right between ECR, EFDD, ENL, and NI. Fig FIGREF42 I shows the strongest opposition between groups that systematically disagree in voting. The strongest disagreements are between left- and right-aligned groups, but not between the left-most and right-most groups, in particular, between GUE-NGL and ECR, but also between S&D and Greens-EFA on one side, and ENL and NI on the other. In the area of Economic and monetary system we see a strong cooperation between EPP and S&D (Fig FIGREF42 A), which is on a par with the cohesion of the most cohesive groups (GUE-NGL, S&D, Greens-EFA, ALDE, and EPP), and is above the cohesion of the other groups. As pointed out in the section “sec:coalitionpolicy”, there is a strong separation in two blocks between supporters and opponents of European integration, which is even more clearly observed in Fig FIGREF42 B. On one hand, we observe cooperation between S&D, ALDE, EPP, and ECR, and on the other, cooperation between GUE-NGL, Greens-EFA, EFDD, ENL, and NI. This division in blocks is seen again in Fig FIGREF42 C, which shows the strongest disagreements. Here, we observe two blocks composed of S&D, EPP, and ALDE on one hand, and GUE-NGL, EFDD, ENL, and NI on the other, which are in strong opposition to each other. In the area of State and Evolution of the Union we again observe a strong division in two blocks (see Fig FIGREF42 E). This is different to the Economic and monetary system, however, where we observe a far-left and far-right cooperation, where the division is along the traditional left-right axis. 
The patterns of coalitions forming on Twitter closely resemble those in the European Parliament. In Fig FIGREF42 J we see that the strongest degrees of cooperation on Twitter are within the groups. The only group with low cohesion is NI, whose members have only seven retweets among them. The coalitions on Twitter follow the seating order in the European Parliament remarkably well (see Fig FIGREF42 K). What is striking is that the same blocks form on the left, in the center, and on the center-right, both in the European Parliament and on Twitter. The largest difference between the coalitions in the European Parliament and on Twitter is on the far right, where we observe ENL and NI as isolated blocks.

The results shown in Fig FIGREF44 quantify the extent to which communication in one social context (Twitter) can explain cooperation in another social context (co-voting in the European Parliament). A positive value indicates that the tie pattern in the retweet network is similar to the tie pattern in the co-voting network for that policy area. A negative value, on the other hand, implies a negative “correlation” between the retweeting and co-voting of MEPs in the two different contexts. The bars in Fig FIGREF44 correspond to the coefficients from the edge covariate terms of the ERGM, describing the relationship between the retweeting and co-voting behavior of MEPs. The coefficients are aggregated for individual policy areas by means of a meta-analysis. Overall, we observe a positive correlation between retweeting and co-voting, which is significantly different from zero. The strongest positive correlations are in the areas Area of freedom, security and justice, External relations of the Union, and Internal markets. Weaker, but still positive, correlations are observed in the areas Economic, social and territorial cohesion, European citizenship, and State and evolution of the Union. The only exception, with a significantly negative coefficient, is the area Economic and monetary system. This implies that in the area Economic and monetary system we observe a significant deviation from the usual co-voting patterns. The results from the section “sec:coalitionpolicy” confirm that this is indeed the case. Especially noteworthy are the coalitions between GUE-NGL and Greens-EFA on the left wing, and EFDD and ENL on the right wing. In the section “sec:coalitionpolicy” we interpret these results as a combination of Euroscepticism on both sides, motivated on the left by a skeptical attitude towards the market orientation of the EU, and on the right by a reluctance to give up national sovereignty.

Discussion

We study cohesion and coalitions in the Eighth European Parliament by analyzing, on the one hand, MEPs' co-voting tendencies and, on the other, their retweeting behavior. We reveal that the most cohesive political group in the European Parliament, when it comes to co-voting, is Greens-EFA, closely followed by S&D and EPP. This is consistent with what VoteWatch BIBREF37 reported for the Seventh European Parliament. The non-aligned (NI) come out as the least cohesive group, followed by the Eurosceptic EFDD. Hix and Noury BIBREF11 also report that nationalists and Eurosceptics formed the least cohesive groups in the Sixth European Parliament. We reaffirm most of these results with both of the employed methodologies.
The only point where the two methodologies disagree is the level of cohesion of the left-wing GUE-NGL, which is portrayed by the ERGM as a much less cohesive group, due to its relatively low attendance rate. The level of cohesion of the political groups is quite stable across different policy areas, and similar conclusions apply. On Twitter, the results are consistent with the RCV results for the left-to-center part of the political spectrum. The exception, which clearly stands out, is the right-wing groups ENL and EFDD, which appear to be the most cohesive ones. This is the direct opposite of what is observed in the RCV data. We speculate that this phenomenon can be attributed to the fact that European right-wing groups, at the European as well as the national level, rely to a large degree on social media to spread their narratives critical of European integration. We recently observed the same phenomenon during the Brexit campaign BIBREF38. In our interpretation, Brexit was “won” to some extent through these social media activities, which are practically non-existent among the pro-EU political groups. The fact that ENL and EFDD are the least cohesive groups in the European Parliament can be attributed to their political focus: it seems more important for these groups to agree on their anti-EU stance and to call for independence and sovereignty, and much less important to agree on the other issues put forward in the parliament.

The basic pattern of coalition formation with respect to co-voting can already be seen in Fig FIGREF2 A: the force-based layout almost completely corresponds to the seating order in the European Parliament (from the left- to the right-wing groups). A more thorough examination shows that the strongest cooperation is observed, with both methodologies, between EPP, S&D, and ALDE, where EPP and S&D are the two largest groups, while the liberal ALDE plays the role of an intermediary in this context. On the other hand, the role of an intermediary between the far-left GUE-NGL and its center-left neighbor, S&D, is played by Greens-EFA. These three groups also form a strong coalition in the European Parliament. On the far right of the spectrum, the non-aligned, EFDD, and ENL form another coalition. This behavior was also observed by Hix et al. BIBREF12, who state that alignments on the left and right have in recent years replaced the “Grand Coalition” between the two large blocks of Christian Conservatives (EPP) and Social Democrats (S&D) as the dominant form of finding majorities in the European Parliament.

When looking at the policy area Economic and monetary system, we see the same coalitions. Interestingly, however, EFDD, ENL, and NI often co-vote with the far-left GUE-NGL. This can be attributed to a certain degree of Euroscepticism on both sides: as a criticism of capitalism on the one hand, and as the main political agenda on the other. This pattern was also discovered by Hix et al. BIBREF12, who argued that these coalitions emerge from a form of government-opposition dynamics that is rooted at the national level but also reflected at the European level.

When we study coalitions on Twitter, the strongest coalitions are observed on the right of the spectrum (between EFDD, ECR, ENL, and NI). This is, yet again, in contrast to what is observed in the RCV data. The reason lies in the anti-EU messages these groups tend to collectively spread (retweet) across the network. This behavior forms strong retweet ties, not only within, but also between, these groups.
For example, apart from MEPs of their own group, MEPs of EFDD mainly retweet MEPs from ECR. In contrast to these right-wing coalitions, we find the other coalitions to be consistent with what is observed in the RCV data. The strongest coalitions on the left-to-center part of the axis are those between GUE-NGL, Greens-EFA, and S&D, and between S&D, ALDE, and EPP. These results reaffirm the role of Greens-EFA and ALDE as intermediaries, not only in the European Parliament, but also in the debates on social media.

Last, but not least, with the ERGM methodology we measure the extent to which the retweet network can explain the co-voting activities in the European Parliament. We compute this for each policy area separately, and also over all RCVs. We conclude that the retweet network indeed matches the co-voting behavior, with the exception of one specific policy area. In the area Economic and monetary system, the links in the (overall) retweet network do not match the links in the co-voting network. Moreover, the negative coefficients imply a radically different formation of coalitions in the European Parliament. This is consistent with the results in Figs FIGREF36 and FIGREF37 (the left-hand panels), and is also observed in Fig FIGREF42 (the top charts). From these figures we see that, in this particular case, coalitions are also formed between the right-wing groups and the far-left GUE-NGL. As already explained, we attribute this to the degree of Euroscepticism that these groups share on this particular policy issue.

Conclusions

In this paper we analyze the (co-)voting patterns and the social behavior of members of the European Parliament, as well as the interaction between these two systems. More precisely, we analyze a set of 2535 roll-call votes, and the tweets and retweets of the MEPs, in the period from October 2014 to February 2016. The results indicate a considerable level of correlation between these two complex systems. This is consistent with previous findings of Cherepnalkoski et al. BIBREF22, who reconstructed the adherence of MEPs to their respective political or national group solely from their retweeting behavior.

We employ two different methodologies to quantify the co-voting patterns: Krippendorff's alpha and ERGM. They were developed in different fields of research, use different techniques, and are based on different assumptions, but in general they yield consistent results. However, there are some differences which have consequences for the interpretation of the results.

Alpha is a measure of agreement, designed as a generalization of several specialized measures, which can handle different numbers of observations, in our case roll-call votes. It only considers yes/no votes; absence and abstention by MEPs are ignored. Its baseline (alpha = 0), i.e., co-voting by chance, is computed from the yes/no votes of all MEPs on all RCVs.

ERGMs are used in social-network analyses to determine the factors influencing the edge-formation process. In our case, an edge between two MEPs is formed when they cast the same yes/no vote within an RCV. It is assumed that a priori each MEP can form a link with any other MEP. No assumptions about the presence or absence of individual MEPs in a voting session are made. Each RCV is analyzed as a separate binary network; the node set is thereby kept constant across RCV networks.
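To make the two methodological recaps concrete, the sketch below shows (i) a plain implementation of Krippendorff's alpha for nominal yes/no data, with absent or abstaining MEPs treated as missing values, and (ii) the construction of the binary co-voting network for a single RCV with a constant node set. Both the data structures and the function names are illustrative assumptions rather than the code used in this study; the ERGM itself would be fitted to such networks with dedicated ERGM software, which is not shown here.

```python
import itertools
from collections import Counter

import networkx as nx

def krippendorff_alpha_nominal(rcvs):
    """Krippendorff's alpha for nominal yes/no votes.

    `rcvs` is a list of roll-call votes; each RCV is a dict mapping an MEP
    identifier to 'yes' or 'no'.  MEPs who are absent or abstain are simply
    left out of the dict, i.e. treated as missing data.
    """
    # Only RCVs with at least two cast votes contribute pairable values.
    units = [Counter(votes.values()) for votes in rcvs if len(votes) >= 2]
    n = sum(sum(u.values()) for u in units)
    if n <= 1:
        return float("nan")

    # Observed disagreement: pairs of differing values within each RCV.
    d_o = sum(
        sum(u[c] * u[k] for c in u for k in u if c != k) / (sum(u.values()) - 1)
        for u in units
    ) / n

    # Expected disagreement: pairs of differing values pooled over all RCVs.
    totals = Counter()
    for u in units:
        totals.update(u)
    d_e = sum(totals[c] * totals[k] for c in totals for k in totals if c != k)
    d_e /= n * (n - 1)

    # Degenerate case: every cast vote has the same value.
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e

def covoting_network(all_meps, votes):
    """Binary co-voting network for a single RCV.

    All MEPs appear as nodes (constant node set across RCVs); an edge joins
    two MEPs who cast the same yes/no vote, while non-voters stay isolated.
    """
    g = nx.Graph()
    g.add_nodes_from(all_meps)
    for a, b in itertools.combinations(votes, 2):
        if votes[a] == votes[b]:
            g.add_edge(a, b)
    return g
```

Under these assumptions, a group's cohesion would correspond to alpha computed over the votes of that group's MEPs alone, and the ERGM coefficients would be estimated per RCV network and then pooled by meta-analysis, as described next.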
While the ERGM starts from the originally observed network, in which MEPs who did not vote or who abstained appear as isolated nodes, links involving these nodes are possible within the network sampling process that is part of the ERGM estimation. The results of several RCVs are aggregated by means of the meta-analysis approach. The baseline (ERGM coefficients equal to 0), i.e., co-voting by chance, is computed from a large sample of randomly generated networks.

These two different baselines have to be taken into account when interpreting the results of alpha and ERGM. In a typical voting session, 25% of the MEPs are missing or abstaining. When assessing the cohesion of political groups, all alpha values are well above the baseline, with an average of INLINEFORM2. The average ERGM cohesion coefficients, on the other hand, are around the baseline. The difference is even more pronounced for groups with higher non-attendance/abstention rates, like GUE-NGL (34%) and NI (40%). When assessing the strength of coalitions between pairs of groups, alpha values are balanced around the baseline, while the ERGM coefficients are mostly negative. The ordering of coalitions from the strongest to the weakest therefore differs when groups with high non-attendance/abstention rates are involved.

The choice of methodology to assess cohesion and coalitions is not obvious. Roll-call voting is used for decisions which demand a simple majority only. One might, however, argue that non-attendance or abstention corresponds to a no vote, or that absence is used strategically. Also, the importance of individual votes, i.e., how high the subject is on a political group's agenda, affects attendance, and consequently the perceived cohesion of the group and its potential to act as a reliable coalition partner.

Acknowledgments

This work was supported in part by the EC projects SIMPOL (no. 610704) and DOLFINS (no. 640772), and by the Slovenian ARRS programme Knowledge Technologies (no. P2-103).