id: stringlengths 40 to 40
pid: stringlengths 42 to 42
input: stringlengths 8.37k to 169k
output: stringlengths 1 to 1.63k
b03e8e9a0cd2a44a215082773c7338f2f3be412a
b03e8e9a0cd2a44a215082773c7338f2f3be412a_0
Q: What baselines are used? Text: A Switching Dynamical System for Narrative Generation In this section, we give a brief overview of Switching Dynamical systems and how they can be used to capture both a scaffold of the narrative as well as the narrative dynamics. We then describe in detail the components of our model and its relation to existing models. A Switching Dynamical System for Narrative Generation ::: Narrative Dynamics in a Dynamical System The specifics of the narrative (characters, setting, etc.), will differ between stories, but as BIBREF0 notes, the way they transition to the next point in the narrative (what we refer to as “narrative dynamics") is often shared. Let's say that, as done often, we represent the `narrative specifics' at time step $i$ with a latent vector $Z_i$. A natural way to explicitly model how this state evolves over time that fits with the above observation is as a Linear Dynamical System: Where $A$ is a matrix, shared across all narratives, and $\Sigma $ is a noise term that takes into consideration idiosyncrasies different narratives will have. The fact that the shared transition matrix $A$ is linear means that narratives will have linearly analogous trajectories through time, despite having different details (comparable to stories with different settings but matching structures such as Ran/King Lear, Ulysses/Odyssey, etc). Of course, the fatal flaw of the model is that it assumes there exists only one transition matrix, and thus only one possible way to transition through a narrative! A Switching Dynamical System for Narrative Generation ::: Narrative Scaffolds as Switching Variables A more fitting model would thus be a Switching Linear Dynamical System BIBREF1, BIBREF2, BIBREF3. In an SLDS, we assume there exists a set of $K$ different sets of dynamics, $\lbrace (A_1, \Sigma _1),...(A_K,\Sigma _K)\rbrace $. At time step $i+1$, one of these sets of dynamics is used. The one used depends on the value of a discrete variable at time step $i+1$ called the switching variable, $S_{i+1} \in \lbrace 1,...K\rbrace $: There is a switching variable $S_i$ associated with each time step. The switching variable value itself evolves over time by a prior Markov process, $P(S_{i+1} | S_{i})$. This top level chain of switching variables thus forms our narrative scaffold, indicating what transitions we must go through in the narrative, with the dynamics matrices indicating how they transition. A Switching Dynamical System for Narrative Generation ::: Narrative Scaffold - Emotional Trajectory What the switching variables actually represent can be chosen by the user. Straightforward narrative scaffolds include event sequences BIBREF6, keywords BIBREF7, or latent template ids BIBREF8. More complex but potentially more informative scaffolds may be created using concepts such as story grammar non-terminals BIBREF9, BIBREF10, or character action taken throughout a story BIBREF11. In our work, we use the sentiment trajectory of the narrative as the scaffold. That is, each $S_i$ for a sentence indicates the overall coarse sentiment of the sentence (Positive, Negative, or Neutral). Though simple, the overall sentiment trajectory of a narrative is important in defining the high level `shape' of a narrative often shared among different narratives BIBREF12, BIBREF13. Furthermore, sentiment trajectory has been shown to be fairly useful in story understanding tasks BIBREF14, BIBREF15. We discuss in the conclusion future directions for using different types of scaffolds. 
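To make the switching-dynamics formulation concrete, the sketch below rolls out a scaffold $S_1,...,S_T$ from a Markov prior and a latent trajectory $Z_1,...,Z_T$ from per-state linear dynamics. It is a minimal illustration under assumed values (state dimension, noise scale, randomly initialized transition matrices), not the authors' implementation.

```python
import numpy as np

def rollout_slds(T, K=3, dim=16, seed=0):
    """Sample a scaffold S_1..S_T and latents Z_1..Z_T from a switching
    linear dynamical system (illustrative sketch; all values assumed)."""
    rng = np.random.default_rng(seed)
    P = rng.dirichlet(np.ones(K), size=K)                  # Markov prior P(S_i | S_{i-1})
    A = [np.eye(dim) + 0.05 * rng.standard_normal((dim, dim)) for _ in range(K)]
    L = [0.1 * np.eye(dim) for _ in range(K)]              # Sigma_k = L_k L_k^T
    S, Z = [int(rng.integers(K))], [rng.standard_normal(dim)]
    for _ in range(1, T):
        S.append(int(rng.choice(K, p=P[S[-1]])))           # next switching state
        Z.append(A[S[-1]] @ Z[-1] + L[S[-1]] @ rng.standard_normal(dim))
    return S, np.stack(Z)                                  # Z_i ~ N(A_{S_i} Z_{i-1}, Sigma_{S_i})

scaffold, latents = rollout_slds(T=5)   # e.g., a five-sentence narrative
```

With a single shared set of dynamics (K=1) this reduces to the plain linear dynamical system criticized above, which admits only one way of transitioning through a narrative.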
A Switching Dynamical System for Narrative Generation ::: The Full Model The final component of the model is a conditional language model that generates sentence $i$ conditioned on the current $Z_i$, and all previous sentences, $X_{:i}$. Generation continues until an <eos> is reached. This conditional language model may be parameterized as desired, but in this work, we parameterize it as an RNN neural network language model. The graphical model for our SLDS is pictured in Figure FIGREF8. The model consists of three sets of variables: (1) Switching variables $S_1,...,S_N$, (2) Latent state variables $Z_1,...,Z_N$ capturing the details of the narrative at sentence $i$, (3) The sentences themselves $X_1,...X_N$, where each sentence $X_i$ has $n_i$ words, $x^i_1,...x^i_{n_i}$. The joint over all variables factorizes as below into the following components ($X_{:i}$ stands for all sentence before $X_i$): ❶ Narrative Scaffold Planner: The factor $P(S_i | S_{i-1})$ is a transition matrix, which we calculate via count based statistics from training. It is fed in as prior knowledge and fixed. ❷ Narrative Dynamics Network: The factor $P(Z_i | Z_{i-1}, S_i)$ is determined like a switching linear dynamical system: which is equivalent to drawing $Z_i$ from a Normal distribution with mean $A_{S_i}Z_{i-1}$ and variance $B_{S_i}B_{S_i}^T$. ❸ Conditional Language model: The factor $P(X_i | Z_i, X_{:i})$ is parameterized by an RNN language model conditioned on the latent $Z_i$. Learning and Posterior Inference Due to the conditionals parameterized by neural networks we use amortized variational inference in a manner similar to Variational AutoEncoders BIBREF16, both to learn an approximate posterior $q(S, Z | X)$ and to learn the generative model parameters by maximizing a lower bound on the data likelihood (ELBO). We assume that the approximate posterior factorizes as follows: Like in VAEs, computing these individual factors is done through a parameterized function called the inference or recognition network whose parameters are trained jointly with the generative model. In our case there are two forms for the factors in our posterior: (1) The first form, $q(S_i | \textbf {X}) = q_{S_i}$ is parameterized by a classifier that takes in the set of sentences $\mathbf {X}$ and outputs a categorical distribution over the switching variables. (2) The second form, $q(Z_i| Z_{i-1}, S_i, X_{:i}, X_{i}) = q_{Z_i}$ is realized by functions $f_{\mu }(Z_{i-1}, S_i, X_{:i}, X_{i})$ and $f_\sigma (Z_{i-1}, S_i, X_{:i}, X_{i})$ that output the mean and variance, respectively, of a Gaussian over $Z_i$. Borrowing terminology from VAEs, the approximate posterior (the factors given above) act as an `encoder', while the generative model from the previous section can be seen as the `decoder'. This type of training has been previously used in BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Learning and Posterior Inference ::: Lower bound formula & exact training algorithm As mentioned previously, we optimize all parameters (including the variational factor functions) by optimizing a lower bound on the data likelihood. The model may be trained either with supervision labels for the switching states (in our case, sentiment labels) or without supervised labels. 
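The sketch below strings the three factors just described into a generative rollout: a count-based scaffold planner, the switching dynamics, and a conditional language model. The `cond_lm.sample` interface and all sizes are assumptions made for illustration; this is not the authors' code.

```python
import torch
import torch.nn as nn

class NarrativeDynamics(nn.Module):
    """P(Z_i | Z_{i-1}, S_i) = N(A_{S_i} Z_{i-1}, B_{S_i} B_{S_i}^T); sizes assumed."""
    def __init__(self, K=3, dim=32):
        super().__init__()
        self.A = nn.Parameter(torch.stack([torch.eye(dim) for _ in range(K)]))
        self.B = nn.Parameter(0.1 * torch.randn(K, dim, dim))

    def forward(self, z_prev, s):
        mean = self.A[s] @ z_prev
        return mean + self.B[s] @ torch.randn_like(z_prev)   # one sample of Z_i

def generate(planner_P, dynamics, cond_lm, z0, s0, n_sents=5):
    """Chain the three factors of the joint to sample a narrative."""
    sents, z, s = [], z0, s0
    for _ in range(n_sents):
        s = torch.multinomial(planner_P[s], 1).item()  # (1) scaffold planner P(S_i | S_{i-1})
        z = dynamics(z, s)                             # (2) narrative dynamics network
        sents.append(cond_lm.sample(z, sents))         # (3) conditional LM P(X_i | Z_i, X_{:i}); hypothetical interface
    return sents
```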
If one is training without the sentiment labels, then the lower bound on the marginal likelihood (and thus our optimization objective) may be written as follows: The derivation for this objective is identical to that found in BIBREF18, BIBREF19, and simply relies on using properties of iterated expectations. All expectations are estimated with Monte Carlo samples. If training with the sentiment labels $S_1,...,S_N$, then the objective is similar (but without the sampling of the switching states), and is augmented with an additional supervision objective as done in BIBREF22: Final training procedure for a single narrative is: For each sentence (starting from the first), sample the switching state $S_i$ from $q(S_i | \textbf {X})$. For each sentence (starting from the first), sample the latent $Z_i$ from $q(Z_i | S_i, Z_{i-1}, X)$. Evaluate the data likelihood and KL term(s) with these samples. Take the gradients of the objective function w.r.t. all parameters, using the reparameterization trick for $q_{Z_i}$ BIBREF16 or the Gumbel-Softmax trick for $q_{S_i}$ BIBREF23, and optimize. Interpolations via Gibbs Sampling One of the benefits of probabilistic formulation is the possibility (if an inference procedure can be found) of generating narratives with specific constraints, where the constraints may be specified as clamped variables in the model. In this section, we show how narratives may be generated conditioned on arbitrary bits and pieces of the narrative already filled in, using approximate Gibbs sampling. This allows one to, for example, interpolate a narrative given the first and the last sentence (similar to how earlier story generation systems were able to generate with a given end goal in mind). Some examples of these interpolations generated by our system can be found in Table TABREF37. We give the equations and summarize the algorithm in the next sections. Interpolations via Gibbs Sampling ::: Conditionals for Gibbs Sampling For our Gibbs sampling algorithm we give the narrative scaffold (switching variables), $S_1,...,S_T \in \mathbf {S}$ and a set of observed sentences, $\mathbf {X^+}$. This may be any set of sentences (the first and last, just the second sentence, etc) as inputs to the system. We wish to find values for the unobserved sentences in set $\mathbf {X^-}$ by sampling from the distribution $P(\mathbf {X^-}, Z_1,...,Z_T | \mathbf {S},\mathbf {X^+})$. We perform this sampling via Gibbs sampling. Two different forms of conditionals need to be derived to do Gibbs sampling. One over some $Z_i$ conditioned on everything else, and one over some $X_i$ conditioned on everything else. By using the d-separation properties of the graph, and substituting the true posterior over $Z_{i}$ with our approximate posterior $q$, we can show the first distribution is approximately proportional to The last line is the product between a Gaussian density over $Z_{i+1}$ and $Z_{i}$, respectively. With some algebraic manipulations, one can show the last line is proportional to a single Gaussian PDF over $Z_i$: To find the second conditional, one can use the d-separation properties of the graph to find that it is proportional to: These two distributions are simply factors of our conditional language model, and both terms can thus be evaluated easily. In theory, one could use this fact to sample the original conditional via Metropolis-Hastings . Unfortunately, we found this approach to be much too slow for practical purposes. 
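For reference, the claim that the product of the two Gaussian factors is itself Gaussian in $Z_i$ follows from the standard product-of-Gaussians identity; the equations below are our reconstruction from the surrounding definitions (with $\Sigma_q$ the variance produced by $f_\sigma$ and $\Sigma_{S_{i+1}} = B_{S_{i+1}}B_{S_{i+1}}^\top$), not the exact display equation omitted from this extract.

```latex
\begin{align}
P(Z_i \mid \cdot) &\propto
  \mathcal{N}\big(Z_i;\, \mu_q,\, \Sigma_q\big)\,
  \mathcal{N}\big(Z_{i+1};\, A_{S_{i+1}} Z_i,\, \Sigma_{S_{i+1}}\big)
  \;\propto\; \mathcal{N}\big(Z_i;\, \mu^{*},\, \Sigma^{*}\big),\\
(\Sigma^{*})^{-1} &= \Sigma_q^{-1} + A_{S_{i+1}}^{\top}\,\Sigma_{S_{i+1}}^{-1} A_{S_{i+1}},\qquad
\mu^{*} = \Sigma^{*}\big(\Sigma_q^{-1}\mu_q + A_{S_{i+1}}^{\top}\,\Sigma_{S_{i+1}}^{-1} Z_{i+1}\big).
\end{align}
```

The analogous conditional over $X_i$ has no such closed form, which is what motivates the greedy-decoding heuristic described next.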
We observed that the simple heuristic of deterministically assigning $X_i$ to be the greedy decoded output of the conditional language model $P(X_{i} | X_{:i}, Z_{i})$ works well, as evidenced by the empirical results. We leave it for future work to research different conditional language model parameterizations that allow easy sampling from this conditional. Interpolations via Gibbs Sampling ::: Gibbs Sampling Interpolation Overview The variables in the Gibbs sampler are first initialized using some heuristics (see Supplemental Materials for details). After initialization, performing the interpolations with Gibbs sampling follows the two-step process below: For each $Z_i$, sample a value $Z^\prime $ from equation $(1)$ and set $Z_i$ to $Z^\prime $. For each $X_i$ in $\mathbf {X}^-$, find a new value for $X_i$ by running greedy decoding using the conditional language model. Training Details ::: Dataset and Preprocessing We use the ROCStories corpora introduced in BIBREF27. It contains 98,159 short commonsense stories in English for training, and 1,570 stories each for validation and test. Each story in the dataset has five sentences and captures causal and temporal commonsense relations. We limit our vocabulary size to 16,983 based on a per-word frequency cutoff set to 5. For sentiment tags, we automatically tag the entirety of the corpus with the rule-based sentiment tagger Vader BIBREF28, and bucket the polarity scores of Vader into three tags: neutral, negative, and positive. These tags form the label set of the $S$ variables in our SLDS model. We tokenize the stories with the Spacy tokenizer. Each sentence in the input narrative has an <eos> tag, except for the S2S model discussed below. Training Details ::: Switching Linear Dynamical System (SLDS) SLDS has RNN encoder and decoder networks with single-layer GRU cells of hidden size 1024. The model uses an embedding size of 300. We train the model using the Adam optimizer with the defaults used by PyTorch. We stop training the models when the validation loss does not decrease for 3 consecutive epochs. Training details remain the same as above unless otherwise mentioned. Training Details ::: Baselines Language Model (LM): We train a two-layer recurrent neural language model with GRU cells of hidden size 512. Sequence-to-Sequence Attention Model (S2S): We train a two-layer neural sequence-to-sequence model equipped with a bi-linear attention function, with GRU cells of hidden size 512. Sentiment tags for a narrative (1 for each sentence) are given as input to the model and the corresponding sentences are concatenated together as the output, with only one <eos> tag at the end. This model is trained with a 0.1 dropout. This model is comparable to the static model of BIBREF7, and to other recent works incorporating a notion of scaffolding into neural generation (albeit adapted for our setting). Linear Dynamical System (LDS): We also train a linear dynamical system as discussed in Section SECREF1 as one of our baselines for fair comparison. Apart from having just a single transition matrix, this model has the same architectural details as SLDS. Semi-Supervised SLDS (SLDS-X%): To gauge the usability of semi-supervision, we also train semi-supervised SLDS models with varying amounts of labelled sentiment tags, unlike the original model, which uses 100% tagged data. We refer to these as SLDS-X%, where X is the percentage of labelled data used for training: 1%, 10%, 25%, and 50%. 
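A minimal sketch of the sentiment-scaffold preprocessing described above: Vader's compound score is bucketed into the three tags used for the $S$ variables. The $\pm 0.05$ cutoffs are Vader's conventional defaults and an assumption here, since the paper does not state its bucketing thresholds; the example story is invented.

```python
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def sentiment_tag(sentence, pos=0.05, neg=-0.05):
    """Bucket Vader's compound polarity score into positive/negative/neutral."""
    score = analyzer.polarity_scores(sentence)["compound"]
    if score >= pos:
        return "positive"
    if score <= neg:
        return "negative"
    return "neutral"

story = ["Tom lost his keys.", "He searched the whole house.",
         "A kind neighbor found them.", "Tom was relieved.", "He thanked her warmly."]
scaffold = [sentiment_tag(s) for s in story]   # label set for the S variables
```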
Evaluations As described above, our model is able to perform narrative interpolations via an approximate Gibbs sampling procedure. At the core of our evaluations is thus a fill-in-the-sentences task. We provide 1 or 2 sentences, and require the model to generate the rest of the narrative. We evaluate this via automatic evaluations as well as with crowd-sourced human evaluations. We also report perplexity to evaluate the models' ability to fit the data. Lastly, we look at whether the transitions learned by the SLDS models capture what they are intended to capture: does using the transition matrix associated with a sentiment tag (positive/negative/neutral) lead to a generated sentence with that sentiment? Evaluations ::: Generating the Interpolations For the SLDS models, the interpolations are generated via the Gibbs sampling algorithm described earlier. In all experiments for the SLDS models we draw 50 samples (including burn-in samples) and output the interpolation that maximizes the probability of the given sentence(s). Since the baselines do not have the means for doing interpolations, we simulate `interpolations' for the baselines; we draw 1000 samples using top-k (with k=15) truncated sampling (conditioned on the given initial sentences, if available). We then output the sample that maximizes the probability of the clamped sentences around which we are interpolating the others. We allow the S2S access to the gold sentiment tags. To give a lower bound on the performance of the SLDS model, we do not provide it with gold tags. We instead provide the SLDS model with the semi-noisy tags that are output from $q(S_i | X)$. Evaluations ::: Automatic Evaluation of Interpolations We automatically evaluate on four different types of interpolations (where different combinations of sentences are removed and the model is forced to regenerate them). We evaluate the generations with the ROUGE BIBREF29 and METEOR BIBREF30 metrics using the true sentences as targets. Table TABREF33 shows the automatic evaluation results from interpolations using our proposed models and baselines. The #Sent(s) column indicates which sentence(s) were removed, and then regenerated by the model. We gave the baselines a slight edge over SLDS because they pick the best out of 1000 samples while SLDS picks from only 50. The SLDS models see their largest gain over the baseline models when at least the first sentence is given as an input. The baseline models do better when the first and second sentences need to be imputed. This is likely due to the fact that having access to the earlier sentences allows a better initialization for the Gibbs sampler. Surprisingly, the semi-supervised variants of the SLDS models achieve higher scores. The reasons for this are discussed below in the Perplexity section. Evaluations ::: Human Evaluation of Interpolations ::: Annotation Scheme As automatic evaluation metrics are not sufficient to assess the quality of any creative task such as narrative generation, we measure the quality of the generations through human evaluation of 200 stories on the Amazon Mechanical Turk platform. We provided Turkers with two generated narratives from two different models, each with five sentences. The first and last sentences were fed to each model as input, and the middle three sentences were generated. 
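The baseline "interpolation" procedure described above amounts to best-of-N selection: draw top-k samples and keep the candidate that scores the clamped sentences highest. The sketch below illustrates that procedure; `sample_narrative` and `log_prob_of` are hypothetical helpers, and only k=15 and the 1000-sample budget come from the text.

```python
import torch

@torch.no_grad()
def top_k_sample(logits, k=15):
    """Truncated (top-k) sampling of one token from a 1-D logits vector."""
    vals, idx = torch.topk(logits, k)
    return idx[torch.multinomial(torch.softmax(vals, dim=-1), 1)].item()

@torch.no_grad()
def simulate_interpolation(model, clamped, n_samples=1000, k=15):
    """Keep the sampled narrative that assigns the highest probability
    to the clamped (given) sentences."""
    best, best_score = None, float("-inf")
    for _ in range(n_samples):
        cand = model.sample_narrative(top_k=k)      # hypothetical helper
        score = model.log_prob_of(clamped, cand)    # hypothetical helper
        if score > best_score:
            best, best_score = cand, score
    return best
```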
Each pair of narratives is graded by 3 annotators, each with two tasks: (1) to rank on a scale of 0-3 each of the sentences except the first one on the basis of its coherency with the previous sentence(s) and (2) to compare and rank the two narratives based on their overall coherency, i.e., how well the story connects the starting/ending sentences. Evaluations ::: Human Evaluation of Interpolations ::: Human Evaluation Results Table TABREF41 reports the results of the human evaluation of SLDS and baseline generations. We can observe that people preferred narratives generated by SLDS over the ones generated by the baseline models (LM and S2S) as they found the former model more coherent, which is an important criterion for narrative generation. SLDS generates better narratives than the LM model 51.3% of the time, while the LM does so only 35.0% of the time; 13.7% of the generations end in a tie. The mean sentence-level coherence score for SLDS is around 12.5% larger than that of the LM, with a slightly lower standard deviation. We see similar results when compared against the S2S model. Evaluations ::: Language Modeling Perplexity Score As our models are essentially language models, we evaluated their per-sentence negative log-likelihood and per-word perplexity scores, which can be viewed as an indirect measure of how well a system works as a generative model of narrative text. For the SLDS and LDS models these scores are approximations, an upper bound (the negative of the ELBO) on the actual values. For the other two models the scores are exact. A good model should assign low perplexity scores to its test set. In Table TABREF44 SLDS achieves the lowest scores, implying that it is able to model the data distribution well. In Table TABREF45 we also calculate the perplexity scores for the semi-supervised SLDS models to assess the effectiveness of semi-supervised training. Surprisingly, the models with less supervision scored better in terms of perplexity. One possibility for this might be the use of the soft Gumbel-Softmax in the semi-supervised models. The soft Gumbel-Softmax variant does not commit to using a single transition matrix at each time step (instead linearly combining them, weighted by the Softmax weights). This fact may permit the model greater flexibility in fitting the training data. While this leads to better scores in metrics such as perplexity or BLEU, it does lead to transitions that are worse in capturing the properties they should be capturing, as we shall see in the next section. Evaluations ::: Evaluation of Transition Dynamics One matter of interest is whether or not the transitions are capturing what they are supposed to capture: appropriate sentiment. Since we used the sentiment tagger Vader for training tags, we again utilize it to evaluate whether using transitions of a certain sentiment actually leads the model to produce outputs with the given sentiment. To perform this evaluation, we give as input to our models (and the S2S baseline) the sentiment tags for a sentence and allow it to generate a sentence conditioned on these sentiment tags. We then tag the generated sentences with Vader and see if the sentiment tags match the originals. We calculate the F1 score across all sentiment tags and report the macro average. In Table TABREF47 we see that having labels is incredibly important for meaningful transitions. There is a large drop in F1 as the amount of labels given to the model is decreased. 
The SLDS model that is trained with 100% of the labels performs a little better than even S2S, despite not having direct access to the sentiment labels (SLDS only uses the sentiment labels to decide which transition to use while the S2S model uses attention directly on the sentiment labels). Related Work Story/narrative generation has a rich history in the field of AI. Many early systems were based on structured formalisms for describing common narrative structures BIBREF9, BIBREF10, BIBREF31, many being inspired by the initial work of BIBREF0. There has been a swath of recent work that has looked to add some semblance of a `narrative scaffold' back into generation methods BIBREF32, BIBREF6, BIBREF7, BIBREF33. Many of these methods work as conditional LMs (conditioned directly on the scaffold). This line of work may also be combined with our formalization, by conditioning the generation on the switching state as well, as done in the model of BIBREF4. Recent work by BIBREF34 has similar goals to ours in permitting more controllability in generation systems, developing an RL-based system that allows users to specify an end goal for a story (by specifying the event class that is desired to appear at the end). Their work differs from ours in that it does not deal with text directly, modeling only the sequences of events in the narrative. It may be possible to utilize this model as the scaffolding component in our model (utilizing their RL policy for the scaffold planner, rather than the simple Markovian distribution used here). Conclusion and Future Work In this paper, we formulated the problem of narrative generation as a switching dynamical system. We showed how this formulation captures notions important in narrative generation, such as narrative dynamics and scaffolds. We developed an approximate Gibbs sampling algorithm for the model that permits the system to generate interpolations conditioned on arbitrary parts of the narrative, and evaluated these interpolations using both human and automatic evaluations. Though in this work we used sentiment tags for our scaffolds/switching variables, future work may look at utilizing different kinds of information to guide the generation of narratives. Utilizing the main predicate of a sentence as a scaffold would be a logical next step, and may prove more informative than the sentiment trajectory. A scaffold such as this can take on many more possible values than a sentiment tag, and as such, it may prove difficult to assign a set of dynamics to each value. Another avenue for future work would be to address this potential problem. One potential solution could be to associate each switching variable value with a (learned) vector in a probability simplex, and use this vector to combine a small set of “primitive" dynamics matrices in order to get that value's associated set of dynamics.
a two layer recurrent neural language model with GRU cells of hidden size 512, a two layer neural sequence to sequence model equipped with bi-linear attention function with GRU cells of hidden size 512, a linear dynamical system, semi-supervised SLDS models with varying amount of labelled sentiment tags
f608fbc7a4a10a79698f340e2948c4c7034642d5
f608fbc7a4a10a79698f340e2948c4c7034642d5_0
Q: Which model is used to capture the implicit structure? Text: Introduction Accepted as a long paper in EMNLP 2019 (Conference on Empirical Methods in Natural Language Processing). Targeted sentiment analysis (TSA) is an important task useful for public opinion mining BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. The task focuses on predicting the sentiment information towards a specific target phrase, which is usually a named entity, in a given input sentence. Currently, TSA in the literature may refer to either of the two possible tasks under two different setups: 1) predicting the sentiment polarity for a given specific target phrase BIBREF5, BIBREF6, BIBREF7, BIBREF8; 2) jointly predicting the targets together with the sentiment polarity assigned to each target BIBREF9, BIBREF10, BIBREF11, BIBREF12. In this paper, we focus on the latter setup which was originally proposed by BIBREF9. Figure FIGREF2 presents an example sentence containing three targets. Each target is associated with a sentiment, where we use $+$ for denoting positive polarity, 0 for neutral and $-$ for negative. Existing research efforts mostly regard this task as a sequence labeling problem by assigning a tag to each word token, where the tags are typically designed in a way that capture both the target boundary as well as the targeted sentiment polarity information together. Existing approaches BIBREF9, BIBREF10, BIBREF12 build models based on conditional random fields (CRF) BIBREF13 or structural support vector machines (SSVM) BIBREF14, BIBREF15 to explicitly model the sentiment information with structured outputs, where each targeted sentiment prediction corresponds to exactly one fixed output. While effective, such models suffer from their inability in capturing certain long-distance dependencies between sentiment keywords and their targets. To remedy this issue, BIBREF11 proposed their “sentiment scope’’ model to learn flexible output representations. For example, three text spans with their corresponding targets in bold are presented in Figure FIGREF2, where each target’s sentiment is characterized by the words appearing in the corresponding text span. They learn from data for each target a latent text span used for attributing its sentiment, resulting in flexible output structures. However, we note there are two major limitations with the approach of BIBREF11. First, their model requires a large number of hand-crafted discrete features. Second, the model relies on a strong assumption that the latent sentiment spans do not overlap with one another. For example, in Figure FIGREF2, their model will not be able to capture the interaction between the target word “OZ” in the first sentiment span and the keyword “amazing” due to the assumptions made on the explicit structures in the output space. One idea to resolve this issue is to design an alternative mechanism to capture such useful structural information that resides in the input space. On the other hand, recent literature shows that feature learning mechanisms such as self-attention have been successful for the task of sentiment prediction when targets are given BIBREF16, BIBREF17, BIBREF18 (i.e., under the first setup mentioned above). Such approaches essentially attempt to learn rich implicit structural information in the input space that captures the interactions between a given target and all other word tokens within the sentence. 
Such implicit structures are then used to generate sentiment summary representation towards the given target, leading to the performance boost. However, to date capturing rich implicit structures in the joint prediction task that we focus on (i.e., the second setup) remains largely unexplored. Unlike the first setup, in our setup the targets are not given, we need to handle exponentially many possible combinations of targets in the joint task. This makes the design of an algorithm for capturing both implicit structural information from the input space and the explicit structural information from the output space challenging. Motivated by the limitations and challenges, we present a novel approach that is able to efficiently and effectively capture the explicit and implicit structural information for TSA. We make the following key contributions in this work: We propose a model that is able to properly integrate both explicit and implicit structural information, called EI. The model is able to learn flexible explicit structural information in the output space while being able to efficiently learn rich implicit structures by LSTM and self-attention for exponentially many possible combinations of targets in a given sentence. We conducted extensive experiments to validate our claim that both explicit and implicit structures are indispensable in such a task, and demonstrate the effectiveness and robustness of our model. Approach Our objective is to design a model to extract targets as well as their associated targeted sentiments for a given sentence in a joint manner. As we mentioned before, we believe that both explicit and implicit structures are crucial for building a successful model for TSA. Specifically, we first present an approach to learn flexible explicit structures based on latent CRF, and next present an approach to efficiently learn the rich implicit structures for exponentially many possible combinations of targets. Approach ::: Explicit Structure Motivated by BIBREF11, we design an approach based on latent CRF to model flexible sentiment spans to capture better explicit structures in the output space. To do so, we firstly integrate target and targeted sentiment information into a label sequence by using 3 types of tags in our EI model: $\mathbf {B}_p$, $\mathbf {A}_p$, and $\mathbf {E}_{\epsilon ,p}$, where $p \in \lbrace +, -, 0\rbrace $ indicates the sentiment polarity and $\epsilon \in \lbrace \textit {B,M,E,S}\rbrace $ denotes the BMES tagging scheme. We explain the meaning of each type of tags as follows. $\mathbf {B}_p$ is used to denote that the current word is part of a sentiment span with polarity $p$, but appears before the target word or exactly as the first word of the target. $\mathbf {A}_p$ is used to denote that the current word is part of a sentiment span with polarity $p$, but appears after the target word or exactly as the last word of the target. $\mathbf {E}_{\epsilon ,p}$ is used to denote the current word is part of a sentiment span with polarity $p$, and is also a part of the target. The BMES sub-tag $\epsilon $ denotes the position information within the target phrase. For example, $\mathbf {E}_{B,+}$ represents that the current word appears as the first word of a target with the positive polarity. We illustrate how to construct the label sequence for a specific combination of sentiment spans of the given example sentence in Figure FIGREF5, where three non-overlapping sentiment spans in yellow are presented. 
Each such sentiment span encodes the sentiment polarity in blue for a target in bold in pink square. At each position, we allow multiple tags in a sequence to appear such that the edge $\mathbf {A}_p\mathbf {B}_{p^{\prime }}$ in red consistently indicates the boundary between two adjacent sentiment spans. The first sentiment span with positive ($+$) polarity contains only one word which is also the target. Such a single word target is also the beginning and the end of the target. We use three tags $\mathbf {B}_+$, $\mathbf {E}_{S,+}$ and $\mathbf {A}_+$ to encode such information above. The second sentiment span with positive ($+$) polarity contains a two-word target “Shin Lim”. The word “and” appearing before such target takes a tag $\mathbf {B}_+$. The words “perform amazing magic” appearing after such target take a tag $\mathbf {A}_+$ at each position. As for the target, the word “Shin” at the beginning of the target takes tags $\mathbf {B}_+$ and $\mathbf {E}_{B,+}$, while the word “Lim” at the end of the target takes tags $\mathbf {E}_{E,+}$ and $\mathbf {A}_+$. The third sentiment span with neutral (0) polarity contains a single-word target “AGT”. Similarly, we use three tags $\mathbf {B}_0$, $\mathbf {E}_{S,0}$ and $\mathbf {A}_0$ to represent such single word target. The word “on” appearing before such target takes a tag $\mathbf {B}_0$. The word “2018” appearing afterwards takes a tag $\mathbf {A}_0$. Note that if there exists a target with length larger than 2, the tag $\mathbf {E}_{M,p}$ will be used. For example in Figure FIGREF5, if the target phrase “Shin Lim” is replaced by “Shin Bob Lim”, we will keep the tags at “Shin” and “Lim” unchanged. We assign a tag $\mathbf {E}_{M,+}$ at the word “Bob” to indicate that “Bob” appears in the middle of the target by following the BMES tagging scheme. Finally, we represent the label sequence by connecting adjacent tags sequentially with edges. Notice that for a given input sentence and the output targets as well as the associated targeted sentiment, there exist exponentially many possible label sequences, each specifying a different possible combinations of sentiment spans. Figure FIGREF11 shows a label sequence for an alternative combination of the sentiment spans. Those label sequences representing the same input and output construct a latent variable in our model, capturing the flexible explicit structures in the output space. We use a log-linear formulation to parameterize our model. Specifically, the probability of predicting a possible output $\mathbf {y}$, which is a list of targets and their associated sentiment information, given an input sentence $\mathbf {x}$, is defined as: where $s(\mathbf {x},\mathbf {y},\mathbf {h})$ is a score function defined over the sentence $\mathbf {x}$ and the output structure $\mathbf {y}$, together with the latent variable $\mathbf {h}$ that provides all the possible combinations of sentiment spans for the $(\mathbf {x,y})$ tuple. We define $E(\mathbf {x},\mathbf {y},\mathbf {h})$ as a set of all the edges appearing in all the label sequences for such combinations of sentiment spans. To compute $s(\mathbf {x},\mathbf {y},\mathbf {h})$, we sum up the scores of each edge in $E(\mathbf {x},\mathbf {y},\mathbf {h})$: where $\phi _{\mathbf {x}}(e)$ is a score function defined over an edge $e$ for the input $\mathbf {x}$. The overall model is analogous to that of a neural CRF BIBREF19, BIBREF20; hence the inference and decoding follow standard marginal and MAP inference procedures. 
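As a concrete companion to the walkthrough above, the sketch below emits the per-word tag sets for one sentiment span of the running example (the two-word target “Shin Lim” with positive polarity), following the $\mathbf {B}_p$/$\mathbf {A}_p$/$\mathbf {E}_{\epsilon ,p}$ scheme; the helper is purely illustrative and not the authors' code.

```python
def target_tags(length, p):
    """Tag sets for the words inside one target, following the BMES sub-tags."""
    if length == 1:
        return [[f"B_{p}", f"E_S,{p}", f"A_{p}"]]          # single-word target
    tags = [[f"B_{p}", f"E_B,{p}"]]                        # first word of the target
    tags += [[f"E_M,{p}"] for _ in range(length - 2)]      # middle words, if any
    tags += [[f"E_E,{p}", f"A_{p}"]]                       # last word of the target
    return tags

# Second sentiment span of the example: "and Shin Lim perform amazing magic" (+).
words = ["and", "Shin", "Lim", "perform", "amazing", "magic"]
tags = [["B_+"]] + target_tags(2, "+") + [["A_+"]] * 3
for w, t in zip(words, tags):
    print(f"{w:10s} {' / '.join(t)}")
```

Connecting adjacent tags with edges yields one label sequence; decoding then searches over all such sequences, as described next.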
For example, the prediction of $\mathbf {y}$ follows the Viterbi-like MAP inference procedure. Approach ::: Implicit Structure We propose a design for EI to efficiently learn rich implicit structures for exponentially many combinations of targets to predict. To do so, we explain the process to assign scores to each edge $e$ from our neural architecture. The three yellow boxes in Figure FIGREF14 compute scores for rich implicit structures from the neural architecture consisting of LSTM and self-attention. Given an input token sequence $\mathbf {x}=\lbrace x_1,x_2,\cdots ,x_{n}\rbrace $ of length $n$, we first compute the concatenated embedding $\mathbf {e}_k=[\mathbf {w}_k;\mathbf {c}_k]$ based on word embedding $\mathbf {w}_k$ and character embedding $\mathbf {c}_k$ at position $k$. As illustrated on the left part in Figure FIGREF14, we then use a Bi-directional LSTM to encode context features and obtain hidden states $\mathbf {h}_k=\mathrm {BiLSTM}(\mathbf {e_1},\mathbf {e_2}, \cdots , \mathbf {e_n})$. We use two different linear layers $f_t$ and $f_s$ to compute scores for target and sentiment respectively. The linear layer $f_t$ returns a vector of length 4, with each value in the vector indicating the score of the corresponding tag under the BMES tagging scheme. The linear layer $f_s$ returns a vector of length 3, with each value representing the score of a certain polarity of $+,0,-$. We assign such scores to each type of edge as follows: Note that the subscript $p$ and $\epsilon $ at the right hand side of above equations denote the corresponding index of the vector that $f_t$ or $f_s$ returns. We apply $f_{t}$ on edges $\mathbf {E}^{k}_{\epsilon ,p}\mathbf {E}^{k+1}_{\epsilon ^{\prime },p}$ and $\mathbf {E}^{k}_{\epsilon ,p}\mathbf {A}^{k}_{p}$, since words at these edges are parts of the target phrase in a sentiment span. Similarly, we apply $f_{s}$ on edges $\mathbf {B}^{k}_{p}\mathbf {B}^{k+1}_{p}$,$\mathbf {A}^{k}_{p}\mathbf {A}^{k+1}_{p}$ and $\mathbf {A}^{k}_{p}\mathbf {B}^{k+1}_{p^{\prime }}$, since words at these edges contribute the sentiment information for the target in the sentiment span. As illustrated in Figure FIGREF14, we calculate $\mathbf {a}_k$, the output of self-attention at position $k$: where $\alpha _{k,j}$ is the normalized weight score for $\mathbf {\beta }_{k,j}$, and $\mathbf {\beta }_{k,j}$ is the weight score calculated by target representation at position $k$ and contextual representation at position $j$. In addition, $W$ and $b$ as well as the attention matrix $U$ are the weights to be learned. Such a vector $\mathbf {a}_k$ encodes the implicit structures between the word $x_k$ and each word in the remaining sentence. Motivated by the character embeddings BIBREF21 which are generated based on hidden states at two ends of a subsequence, we encode such implicit structures for a target similarly. For any target starting at the position $k_1$ and ending at the position $k_2$, we could use $\mathbf {a}_{k_1}$ and $\mathbf {a}_{k_2}$ at two ends to represent the implicit structures of such a target. We encode such information on the edges $\mathbf {B}^{k_1}_{p}\mathbf {E}^{k_1}_{\epsilon ,p}$ and $\mathbf {E}^{k_2}_{\epsilon ,p}\mathbf {A}^{k_2}_{p}$ which appear at the beginning and the end of a target phrase respectively with sentiment polarity $p$. To do so, we assign the scores calculated from the self-attention to such two edges: where $g_{s}$ returns a vector of length 3 with scores of three polarities. 
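A hedged sketch of the implicit-structure features just described: a 2-layer BiLSTM produces $\mathbf {h}_k$, linear layers $f_t$ and $f_s$ score target and sentiment edges, and a simplified bilinear self-attention produces $\mathbf {a}_k$ with polarity scores from $g_s$. The embedding size and the exact attention parameterization (the paper also learns $W$ and $b$) are assumptions; only the hidden size (500 for English) and the attention dimension (300) come from the paper.

```python
import torch
import torch.nn as nn

class ImplicitScorer(nn.Module):
    def __init__(self, emb_dim=350, hidden=500, attn_dim=300):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.f_t = nn.Linear(2 * hidden, 4)    # BMES scores for target edges
        self.f_s = nn.Linear(2 * hidden, 3)    # +/0/- scores for sentiment edges
        self.proj = nn.Linear(2 * hidden, attn_dim)
        self.U = nn.Parameter(torch.randn(attn_dim, attn_dim))  # attention matrix
        self.g_s = nn.Linear(attn_dim, 3)      # polarity scores from a_k

    def forward(self, emb):                    # emb: (1, n, emb_dim) = [w_k; c_k]
        h, _ = self.bilstm(emb)                # h_k for every position k
        q = self.proj(h)
        beta = q @ self.U @ q.transpose(1, 2)  # beta_{k,j}: pairwise weight scores
        alpha = torch.softmax(beta, dim=-1)    # alpha_{k,j}: normalized weights
        a = alpha @ q                          # a_k: implicit structure at position k
        return self.f_t(h), self.f_s(h), self.g_s(a)
```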
Note that $\mathbf {h}_k$ and $\mathbf {a}_k$ could be pre-computed at every position $k$ and assigned to the corresponding edges. Such an approach allows us to maintain the inference time complexity $O(Tn)$, where $T$ is the maximum number of tags at each position which is 9 in this work and $n$ is the number of words in the input sentence. This approach enables EI to efficiently learn rich implicit structures from LSTM and self-attention for exponentially many combinations of targets. Experimental Setup ::: Data We mainly conduct our experiments on the datasets released by BIBREF9. They contain 2,350 English tweets and 7,105 Spanish tweets, with target and targeted sentiment annotated. See Table TABREF15 for corpus statistics. Experimental Setup ::: Evaluation Metrics Following the previous works, we report the precision ($P.$), recall ($R.$) and $F_1$ scores for target recognition and targeted sentiment. Note that a correct target prediction requires the boundary of the target to be correct, and a correct targeted sentiment prediction requires both target boundary and sentiment polarity to be correct. Experimental Setup ::: Hyperparameters We adopt pretrained embeddings from BIBREF22 and BIBREF23 for English data and Spanish data respectively. We use a 2-layer LSTM (for both directions) with a hidden dimension of 500 and 600 for English data and Spanish data respectively. The dimension of the attention weight $U$ is 300. As for optimization, we use the Adam BIBREF24 optimizer to optimize the model with batch size 1 and dropout rate $0.5$. All the neural weights are initialized by Xavier BIBREF25. Experimental Setup ::: Training and Implementation We train our model for a maximal of 6 epochs. We select the best model parameters based on the best $F_1$ score on the development data after each epoch. Note that we split $10\%$ of data from the training data as the development data. The selected model is then applied to the test data for evaluation. During testing, we map words not appearing in the training data to the UNK token. Following the previous works, we perform 10-fold cross validation and report the average results. Our models and variants are implemented using PyTorch BIBREF26. Experimental Setup ::: Baselines We consider the following baselines: Pipeline BIBREF10 and Collapse BIBREF10 both are linear-chain CRF models using discrete features and embeddings. The former predicts targets first and calculate targeted sentiment for each predicted target. The latter outputs a tag at each position by collapsing the target tag and sentiment tag together. Joint BIBREF10 is a linear-chain SSVM model using both discrete features and embeddings. Such a model jointly produces target tags and sentiment tags. Bi-GRU BIBREF12 and MBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings. The former uses bi-directional GRU and the latter uses multi-layer bi-directional GRU. HBi-GRU BIBREF12 and HMBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings and character embedding. The former uses bi-directional GRU and the latter uses multi-layer bi-directional GRU. SS BIBREF11 and SS + emb BIBREF11 are both based on a latent CRF model to learn flexible explicit structures. The former uses discrete features and the latter uses both discrete features and word embeddings. SA-CRF is a linear-chain CRF model with self-attention. Such a model concatenates the hidden state from LSTM and a vector constructed by self-attention at each position, and feeds them into CRF as features. 
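For quick reference, the reported training configuration can be collected into a plain config dict as below; values not stated in the paper (e.g., the learning rate) are deliberately left out, and the key names are our own.

```python
CONFIG = {
    "lstm": {"layers": 2, "bidirectional": True,
             "hidden": {"english": 500, "spanish": 600}},
    "attention_dim": 300,          # dimension of the attention weight U
    "optimizer": "Adam",
    "batch_size": 1,
    "dropout": 0.5,
    "weight_init": "Xavier",
    "max_epochs": 6,
    "dev_split": 0.10,             # 10% of training data held out for model selection
    "cv_folds": 10,                # 10-fold cross validation, results averaged
    "unk_token": "UNK",            # unseen test words are mapped to UNK
}
```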
The model attempts to capture rich implicit structures in the input space, but it does not put effort on explicit structures in the output space. E-I is a weaker version of EI. Such a model removes the BMES sub-tags in the E tag, causing the model to learn less explicit structural information in the output space. EI- is a weaker version of EI. Such a model removes the self-attention from EI, causing the model to learn less expressive implicit structures in the input space. Results and Discussion ::: Main Results The main results are presented in Table TABREF16, where explicit structures as well as implicit structures are indicated for each model for clear comparisons. In general, our model EI outperforms all the baselines. Specifically, it outperforms the strongest baseline EI- significantly with $p < 0.01$ on the English and Spanish datasets in terms of $F_1$ scores. Note that EI- which models flexible explicit structures and less implicit structural information, achieves better performance than most of the baselines, indicating flexible explicit structures contribute a lot to the performance boost. Now let us take a closer look at the differences based on detailed comparisons. First of all, we compare our model EI with the work proposed by BIBREF10. The Pipeline model (based on CRF) as well as Joint and Collapse models (based on SSVM) in their work capture fixed explicit structures. Such two models rely on multi-layer perceptron (MLP) to obtain the local context features for implicit structures. These two models do not put much effort to capture better explicit structures and implicit structures. Our model EI (and even EI-) outperforms these two models significantly. We also compare our work with models in BIBREF12, which also capture fixed explicit structures. Such models leverage different GRUs (single-layer or multi-layer) and different input features (word embeddings and character representations) to learn better contextual features. Their best result by HMBi-GRU is obtained with multi-layer GRU with word embeddings and character embeddings. As we can see, our model EI outperforms HMBi-GRU under all evaluation metrics. On the English data, EI obtains $6.50$ higher $F_1$ score and $2.50$ higher $F_1$ score on target recognition and targeted sentiment respectively. On Spanish, EI obtains $5.16$ higher $F_1$ score and $0.50$ higher $F_1$ score on target recognition and targeted sentiment respectively. Notably, compared with HMBi-GRU, even EI- capturing the flexible explicit structures achieves better performance on most of metrics and obtains the comparable results in terms of precision and $F_1$ score on Spanish. Since both EI and EI- models attempt to capture the flexible explicit structures, the comparisons above imply the importance of modeling such flexible explicit structures in the output space. We also compare EI with E-I. The difference between these two models is that E-I removes the BMES sub-tags. Such a model captures less explicit structural information in the output space. We can see that EI outperforms E-I. Such results show that adopting BMES sub-tags in the output space to capture explicit structural information is beneficial. Now we compare EI with SA-CRF which is a linear-chain CRF model with self-attention. Such a model attempts to capture rich implicit structures, and fixed explicit structures. The difference between EI and SA-CRF is that our model EI captures flexible explicit structures in the output space which model output representations as latent variables. 
We can see that EI outperforms SA-CRF on all the metrics. Such a comparison also implies the importance of capturing flexible explicit structures in the output space. Next, we focus on the comparisons with SS BIBREF11 and SS + emb BIBREF11. Such two models as well as our models all capture the flexible explicit structures. As for the difference, both two SS models rely on hand-crafted discrete features to capture implicit structures, while our model EI and EI- learn better implicit structures by LSTM and self-attention. Furthermore, our models only require word embeddings and character embeddings as the input to our neural architecture to model rich implicit structures, leading to a comparatively simpler and more straightforward design. The comparison here suggests that LSTM and self-attention neural networks are able to capture better implicit structures than hand-crafted features. Finally, we compare EI with EI-. We can see that the $F_1$ scores of targeted sentiment for both English and Spanish produced by EI are $0.95$ and $0.97$ points higher than EI-. The main difference here is that EI makes use of self-attention to capture richer implicit structures between each target phrase and all words in the complete sentence. The comparisons here indicate the importance of capturing rich implicit structures using self-attention on this task. Results and Discussion ::: Main Results ::: Robustness Overall, all these comparisons above based on empirical results show the importance of capturing both flexible explicit structures in the output space and rich implicit structures by LSTM and self-attention in the input space. We analyze the model robustness by assessing the performance on the targeted sentiment for targets of different lengths. For both English and Spanish, we group targets into 4 categories respectively, namely length of 1, 2, 3 and $\ge 4$. Figure FIGREF32 reports the $F_1$ scores of targeted sentiment for such 4 groups on Spanish. See the English results in the supplementary material. As we can see EI outperforms all the baselines on all groups. Furthermore, following the comparisons in BIBREF10, we also measure the precision, recall and $F_1$ of subjectivity and non-neutral polarities on the Spanish dataset. Results are reported in Table TABREF29. The subjectivity measures whether a target phrase expresses an opinion or not according to BIBREF1. Comparing with the best-performing system's results reported in BIBREF10 and BIBREF11, our model EI can achieve higher $F_1$ scores on subjectivity and non-neutral polarities. Results and Discussion ::: Main Results ::: Error Analysis We conducted error analysis for our main model EI. We calculate $F_1$ scores based on the partial match instead of exact match. The $F_1$ scores for target partial match is $76.04$ and $83.82$ for English and Spanish respectively. We compare these two numbers against $63.48$ and $71.17$ which are the $F_1$ scores based on exact match. This comparison indicates that boundaries of many predicted targets do not match exactly with those of the correct targets. Furthermore, we investigate the errors caused by incorrect sentiment polarities. We found that the major type of errors is to incorrectly predict positive targets as neutral targets. Such errors contribute $64\%$ and $36\%$ of total errors for English and Spanish respectively. We believe they are mainly caused by challenging expressions in the tweet input text. 
Challenging expressions such as “below expectations” are very sparse in the data, which makes effective learning for such phrases difficult. Results and Discussion ::: Effect of Implicit Structures In order to understand whether the implicit structures are truly making contributions in terms of the overall performance, we compare the performance among four models: EI and EI- as well as two variants EI (i:MLP) and EI (i:Identity) (where i indicates the implicit structure). These two variants replace the implicit structure with other components: EI (i:MLP) replaces self-attention with a multi-layer perceptron (MLP) for implicit structures. Such a variant attempts to capture implicit structures for a target phrase towards words restricted by a window of size 3 centered at the two ends of the target phrase. EI (i:Identity) replaces self-attention with an identity layer as the implicit structure. Such a variant attempts to capture implicit structures for a target phrase towards words at the two ends of the target phrase exactly. Overall, those variants perform worse than EI on all the metrics. When the self-attention is replaced by MLP or the identity layer for implicit structures, the performance drops considerably on both target and targeted sentiment. These two variants, EI (i:MLP) and EI (i:Identity), consider the words within a small window centered at the two ends of the target phrase, which might not be capable of capturing the desired implicit structures. The EI- model capturing less implicit structural information achieves worse results than EI, but obtains better results than the two variants discussed above. This comparison implies that properly capturing implicit structures as the complement of explicit structural information is essential. Results and Discussion ::: Qualitative Analysis We present an example sentence in the test data in Figure FIGREF38, where the gold targets are in bold, the predicted targets are in the pink boxes, the gold sentiment is in blue and predicted sentiment is in red. EI makes all correct predictions for three targets. EI- predicts correct boundaries for three targets and the targeted sentiment predictions are highlighted in Figure FIGREF38. As we can see, EI- incorrectly predicts the targeted sentiment on the first target as neutral (0). The first target here is far from the sentiment expression “sound good” which is not in the first sentiment span, making EI- incapable of capturing such a sentiment expression. This qualitative analysis helps us to better understand the importance of capturing implicit structures using both LSTM and self-attention. Results and Discussion ::: Additional Experiments We also conducted experiments on multi-lingual Restaurant datasets from SemEval 2016 Task 5 BIBREF28, where aspect target phrases and aspect sentiments are provided. We regard each aspect target phrase as a target and assign such a target the corresponding aspect sentiment polarity in the data. Note that we remove all the instances which contain no targets in the training data. Following the main experiment, we split $10\%$ of the training data as a development set for the selection of the best model during training. We report the $F_1$ scores of target and targeted sentiment for English, Dutch and Russian respectively in Table TABREF43. The results show that EI achieves the best performance. The performance of SS BIBREF11 is much worse on Russian due to the inability of discrete features in SS to capture the complex morphology in Russian. 
Related Work We briefly survey the research efforts on two types of TSA tasks mentioned in the introduction. Note that TSA is related to aspect sentiment analysis which is to determine the sentiment polarity given a target and an aspect describing a property of related topics. Related Work ::: Predicting sentiment for a given target Such a task is typically solved by leveraging sentence structural information, such as syntactic trees BIBREF5, dependency trees BIBREF6 as well as surrounding context based on LSTM BIBREF29, GRU BIBREF7 or CNN BIBREF8. Another line of works leverage self-attention BIBREF30 or memory networks BIBREF31 to encode rich global context information. BIBREF16 adopted the segmental attention BIBREF32 to model the important text segments to compute the targeted sentiment. BIBREF33 studied the issue that the different combinations of target and aspect may result in different sentiment polarity. They proposed a model to distinguish such different combinations based on memory networks to produce the representation for aspect sentiment classification. Related Work ::: Jointly predicting targets and their associated sentiment Such a joint task is usually regarded as sequence labeling problem. BIBREF9 introduced the task of open domain targeted sentiment analysis. They proposed several models based on CRF such as the pipeline model, the collapsed model as well as the joint model to predict both targets and targeted sentiment information. Their experiments showed that the collapsed model and the joint model could achieve better results, implying the benefit of the joint learning on this task. BIBREF10 proposed an approach based on structured SVM BIBREF14, BIBREF15 integrating both discrete features and neural features for this joint task. BIBREF11 proposed the sentiment scope model motivated from a linguistic phenomenon to represent the structure information for both the targets and their associated sentiment polarities. They modelled the latent sentiment scope based on CRF with latent variables, and achieved the best performance among all the existing works. However, they did not explore much on the implicit structural information and their work mostly relied on hand-crafted discrete features. BIBREF12 adopted a multi-layer GRU to learn targets and sentiments jointly by producing the target tag and the sentiment tag at each position. They introduced a constraint forcing the sentiment tag at each position to be consistent with the target tag. However, they did not explore the explicit structural information in the output space as we do in this work. Conclusion and Future Work In this work, we argue that properly modeling both explicit structures in the output space and the implicit structures in the input space are crucial for building a successful targeted sentiment analysis system. Specifically, we propose a new model that captures explicit structures with latent CRF, and uses LSTM and self-attention to capture rich implicit structures in the input space efficiently. Through extensive experiments, we show that our model is able to outperform competitive baseline models significantly, thanks to its ability to properly capture both explicit and implicit structural information. Future work includes exploring approaches to capture explicit and implicit structural information to other sentiment analysis tasks and other structured prediction problems. Acknowledgments We would like to thank the anonymous reviewers for their thoughtful and constructive comments. 
This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156. Appendix ::: Robustness We also report the results for targets of different lengths on English in Figure FIGREF44. As we can see, our model EI outperforms the others except when the length is greater than or equal to 4. Note that according to statistics in the main paper, there exists a small number of targets of length 4. Appendix ::: Additional Experiments We present the data statistics for English, Dutch and Russian in the SemEval 2016 Restaurant dataset BIBREF28 in Table TABREF45.
Bi-directional LSTM, self-attention
9439430ff97c6e927d919860b1cb86a0dcff0038
9439430ff97c6e927d919860b1cb86a0dcff0038_0
Q: How is the robustness of the model evaluated? Text: Introduction Accepted as a long paper in EMNLP 2019 (Conference on Empirical Methods in Natural Language Processing). Targeted sentiment analysis (TSA) is an important task useful for public opinion mining BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. The task focuses on predicting the sentiment information towards a specific target phrase, which is usually a named entity, in a given input sentence. Currently, TSA in the literature may refer to either of the two possible tasks under two different setups: 1) predicting the sentiment polarity for a given specific target phrase BIBREF5, BIBREF6, BIBREF7, BIBREF8; 2) jointly predicting the targets together with the sentiment polarity assigned to each target BIBREF9, BIBREF10, BIBREF11, BIBREF12. In this paper, we focus on the latter setup which was originally proposed by BIBREF9. Figure FIGREF2 presents an example sentence containing three targets. Each target is associated with a sentiment, where we use $+$ for denoting positive polarity, 0 for neutral and $-$ for negative. Existing research efforts mostly regard this task as a sequence labeling problem by assigning a tag to each word token, where the tags are typically designed in a way that capture both the target boundary as well as the targeted sentiment polarity information together. Existing approaches BIBREF9, BIBREF10, BIBREF12 build models based on conditional random fields (CRF) BIBREF13 or structural support vector machines (SSVM) BIBREF14, BIBREF15 to explicitly model the sentiment information with structured outputs, where each targeted sentiment prediction corresponds to exactly one fixed output. While effective, such models suffer from their inability in capturing certain long-distance dependencies between sentiment keywords and their targets. To remedy this issue, BIBREF11 proposed their “sentiment scope’’ model to learn flexible output representations. For example, three text spans with their corresponding targets in bold are presented in Figure FIGREF2, where each target’s sentiment is characterized by the words appearing in the corresponding text span. They learn from data for each target a latent text span used for attributing its sentiment, resulting in flexible output structures. However, we note there are two major limitations with the approach of BIBREF11. First, their model requires a large number of hand-crafted discrete features. Second, the model relies on a strong assumption that the latent sentiment spans do not overlap with one another. For example, in Figure FIGREF2, their model will not be able to capture the interaction between the target word “OZ” in the first sentiment span and the keyword “amazing” due to the assumptions made on the explicit structures in the output space. One idea to resolve this issue is to design an alternative mechanism to capture such useful structural information that resides in the input space. On the other hand, recent literature shows that feature learning mechanisms such as self-attention have been successful for the task of sentiment prediction when targets are given BIBREF16, BIBREF17, BIBREF18 (i.e., under the first setup mentioned above). Such approaches essentially attempt to learn rich implicit structural information in the input space that captures the interactions between a given target and all other word tokens within the sentence. 
Such implicit structures are then used to generate sentiment summary representation towards the given target, leading to the performance boost. However, to date capturing rich implicit structures in the joint prediction task that we focus on (i.e., the second setup) remains largely unexplored. Unlike the first setup, in our setup the targets are not given, we need to handle exponentially many possible combinations of targets in the joint task. This makes the design of an algorithm for capturing both implicit structural information from the input space and the explicit structural information from the output space challenging. Motivated by the limitations and challenges, we present a novel approach that is able to efficiently and effectively capture the explicit and implicit structural information for TSA. We make the following key contributions in this work: We propose a model that is able to properly integrate both explicit and implicit structural information, called EI. The model is able to learn flexible explicit structural information in the output space while being able to efficiently learn rich implicit structures by LSTM and self-attention for exponentially many possible combinations of targets in a given sentence. We conducted extensive experiments to validate our claim that both explicit and implicit structures are indispensable in such a task, and demonstrate the effectiveness and robustness of our model. Approach Our objective is to design a model to extract targets as well as their associated targeted sentiments for a given sentence in a joint manner. As we mentioned before, we believe that both explicit and implicit structures are crucial for building a successful model for TSA. Specifically, we first present an approach to learn flexible explicit structures based on latent CRF, and next present an approach to efficiently learn the rich implicit structures for exponentially many possible combinations of targets. Approach ::: Explicit Structure Motivated by BIBREF11, we design an approach based on latent CRF to model flexible sentiment spans to capture better explicit structures in the output space. To do so, we firstly integrate target and targeted sentiment information into a label sequence by using 3 types of tags in our EI model: $\mathbf {B}_p$, $\mathbf {A}_p$, and $\mathbf {E}_{\epsilon ,p}$, where $p \in \lbrace +, -, 0\rbrace $ indicates the sentiment polarity and $\epsilon \in \lbrace \textit {B,M,E,S}\rbrace $ denotes the BMES tagging scheme. We explain the meaning of each type of tags as follows. $\mathbf {B}_p$ is used to denote that the current word is part of a sentiment span with polarity $p$, but appears before the target word or exactly as the first word of the target. $\mathbf {A}_p$ is used to denote that the current word is part of a sentiment span with polarity $p$, but appears after the target word or exactly as the last word of the target. $\mathbf {E}_{\epsilon ,p}$ is used to denote the current word is part of a sentiment span with polarity $p$, and is also a part of the target. The BMES sub-tag $\epsilon $ denotes the position information within the target phrase. For example, $\mathbf {E}_{B,+}$ represents that the current word appears as the first word of a target with the positive polarity. We illustrate how to construct the label sequence for a specific combination of sentiment spans of the given example sentence in Figure FIGREF5, where three non-overlapping sentiment spans in yellow are presented. 
Each such sentiment span encodes the sentiment polarity in blue for a target in bold in pink square. At each position, we allow multiple tags in a sequence to appear such that the edge $\mathbf {A}_p\mathbf {B}_{p^{\prime }}$ in red consistently indicates the boundary between two adjacent sentiment spans. The first sentiment span with positive ($+$) polarity contains only one word which is also the target. Such a single word target is also the beginning and the end of the target. We use three tags $\mathbf {B}_+$, $\mathbf {E}_{S,+}$ and $\mathbf {A}_+$ to encode such information above. The second sentiment span with positive ($+$) polarity contains a two-word target “Shin Lim”. The word “and” appearing before such target takes a tag $\mathbf {B}_+$. The words “perform amazing magic” appearing after such target take a tag $\mathbf {A}_+$ at each position. As for the target, the word “Shin” at the beginning of the target takes tags $\mathbf {B}_+$ and $\mathbf {E}_{B,+}$, while the word “Lim” at the end of the target takes tags $\mathbf {E}_{E,+}$ and $\mathbf {A}_+$. The third sentiment span with neutral (0) polarity contains a single-word target “AGT”. Similarly, we use three tags $\mathbf {B}_0$, $\mathbf {E}_{S,0}$ and $\mathbf {A}_0$ to represent such single word target. The word “on” appearing before such target takes a tag $\mathbf {B}_0$. The word “2018” appearing afterwards takes a tag $\mathbf {A}_0$. Note that if there exists a target with length larger than 2, the tag $\mathbf {E}_{M,p}$ will be used. For example in Figure FIGREF5, if the target phrase “Shin Lim” is replaced by “Shin Bob Lim”, we will keep the tags at “Shin” and “Lim” unchanged. We assign a tag $\mathbf {E}_{M,+}$ at the word “Bob” to indicate that “Bob” appears in the middle of the target by following the BMES tagging scheme. Finally, we represent the label sequence by connecting adjacent tags sequentially with edges. Notice that for a given input sentence and the output targets as well as the associated targeted sentiment, there exist exponentially many possible label sequences, each specifying a different possible combinations of sentiment spans. Figure FIGREF11 shows a label sequence for an alternative combination of the sentiment spans. Those label sequences representing the same input and output construct a latent variable in our model, capturing the flexible explicit structures in the output space. We use a log-linear formulation to parameterize our model. Specifically, the probability of predicting a possible output $\mathbf {y}$, which is a list of targets and their associated sentiment information, given an input sentence $\mathbf {x}$, is defined as: where $s(\mathbf {x},\mathbf {y},\mathbf {h})$ is a score function defined over the sentence $\mathbf {x}$ and the output structure $\mathbf {y}$, together with the latent variable $\mathbf {h}$ that provides all the possible combinations of sentiment spans for the $(\mathbf {x,y})$ tuple. We define $E(\mathbf {x},\mathbf {y},\mathbf {h})$ as a set of all the edges appearing in all the label sequences for such combinations of sentiment spans. To compute $s(\mathbf {x},\mathbf {y},\mathbf {h})$, we sum up the scores of each edge in $E(\mathbf {x},\mathbf {y},\mathbf {h})$: where $\phi _{\mathbf {x}}(e)$ is a score function defined over an edge $e$ for the input $\mathbf {x}$. The overall model is analogous to that of a neural CRF BIBREF19, BIBREF20; hence the inference and decoding follow standard marginal and MAP inference procedures. 
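To make the label construction above concrete, here is a minimal Python sketch that enumerates the three tag families ($\mathbf {B}_p$, $\mathbf {A}_p$, $\mathbf {E}_{\epsilon ,p}$) and encodes one possible sentiment span for the two-word target “Shin Lim”. It only illustrates the scheme as described in this section and is not the authors' code; the helper name and the plain-string tag notation are our own.

```python
# Illustrative sketch of the explicit-structure tag scheme (not the paper's code).
POLARITIES = ["+", "0", "-"]
BMES = ["B", "M", "E", "S"]

TAGS = ([f"B{p}" for p in POLARITIES]                       # before / first word of the target
        + [f"A{p}" for p in POLARITIES]                     # after / last word of the target
        + [f"E{e},{p}" for e in BMES for p in POLARITIES])  # inside the target, BMES position

def encode_span(words, target_start, target_end, polarity):
    """Tags for one sentiment span: words outside the target get B/A tags,
    words inside get E tags, and the first/last target words additionally
    carry the B/A tags that mark the target boundary."""
    tags = []
    for i, _ in enumerate(words):
        if i < target_start:
            tags.append([f"B{polarity}"])
        elif i > target_end:
            tags.append([f"A{polarity}"])
        else:
            if target_start == target_end:
                sub = "S"
            elif i == target_start:
                sub = "B"
            elif i == target_end:
                sub = "E"
            else:
                sub = "M"
            position_tags = [f"E{sub},{polarity}"]
            if i == target_start:
                position_tags = [f"B{polarity}"] + position_tags
            if i == target_end:
                position_tags = position_tags + [f"A{polarity}"]
            tags.append(position_tags)
    return tags

print(encode_span(["and", "Shin", "Lim", "perform", "amazing", "magic"], 1, 2, "+"))
# [['B+'], ['B+', 'EB,+'], ['EE,+', 'A+'], ['A+'], ['A+'], ['A+']]
```

The printed sequence mirrors the description of the second sentiment span in Figure FIGREF5: “and” takes $\mathbf {B}_+$, “Shin” takes $\mathbf {B}_+$ and $\mathbf {E}_{B,+}$, “Lim” takes $\mathbf {E}_{E,+}$ and $\mathbf {A}_+$, and the remaining words take $\mathbf {A}_+$.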
For example, the prediction of $\mathbf {y}$ follows the Viterbi-like MAP inference procedure. Approach ::: Implicit Structure We propose a design for EI to efficiently learn rich implicit structures for exponentially many combinations of targets to predict. To do so, we explain the process to assign scores to each edge $e$ from our neural architecture. The three yellow boxes in Figure FIGREF14 compute scores for rich implicit structures from the neural architecture consisting of LSTM and self-attention. Given an input token sequence $\mathbf {x}=\lbrace x_1,x_2,\cdots ,x_{n}\rbrace $ of length $n$, we first compute the concatenated embedding $\mathbf {e}_k=[\mathbf {w}_k;\mathbf {c}_k]$ based on word embedding $\mathbf {w}_k$ and character embedding $\mathbf {c}_k$ at position $k$. As illustrated on the left part in Figure FIGREF14, we then use a Bi-directional LSTM to encode context features and obtain hidden states $\mathbf {h}_k=\mathrm {BiLSTM}(\mathbf {e_1},\mathbf {e_2}, \cdots , \mathbf {e_n})$. We use two different linear layers $f_t$ and $f_s$ to compute scores for target and sentiment respectively. The linear layer $f_t$ returns a vector of length 4, with each value in the vector indicating the score of the corresponding tag under the BMES tagging scheme. The linear layer $f_s$ returns a vector of length 3, with each value representing the score of a certain polarity of $+,0,-$. We assign such scores to each type of edge as follows: Note that the subscript $p$ and $\epsilon $ at the right hand side of above equations denote the corresponding index of the vector that $f_t$ or $f_s$ returns. We apply $f_{t}$ on edges $\mathbf {E}^{k}_{\epsilon ,p}\mathbf {E}^{k+1}_{\epsilon ^{\prime },p}$ and $\mathbf {E}^{k}_{\epsilon ,p}\mathbf {A}^{k}_{p}$, since words at these edges are parts of the target phrase in a sentiment span. Similarly, we apply $f_{s}$ on edges $\mathbf {B}^{k}_{p}\mathbf {B}^{k+1}_{p}$,$\mathbf {A}^{k}_{p}\mathbf {A}^{k+1}_{p}$ and $\mathbf {A}^{k}_{p}\mathbf {B}^{k+1}_{p^{\prime }}$, since words at these edges contribute the sentiment information for the target in the sentiment span. As illustrated in Figure FIGREF14, we calculate $\mathbf {a}_k$, the output of self-attention at position $k$: where $\alpha _{k,j}$ is the normalized weight score for $\mathbf {\beta }_{k,j}$, and $\mathbf {\beta }_{k,j}$ is the weight score calculated by target representation at position $k$ and contextual representation at position $j$. In addition, $W$ and $b$ as well as the attention matrix $U$ are the weights to be learned. Such a vector $\mathbf {a}_k$ encodes the implicit structures between the word $x_k$ and each word in the remaining sentence. Motivated by the character embeddings BIBREF21 which are generated based on hidden states at two ends of a subsequence, we encode such implicit structures for a target similarly. For any target starting at the position $k_1$ and ending at the position $k_2$, we could use $\mathbf {a}_{k_1}$ and $\mathbf {a}_{k_2}$ at two ends to represent the implicit structures of such a target. We encode such information on the edges $\mathbf {B}^{k_1}_{p}\mathbf {E}^{k_1}_{\epsilon ,p}$ and $\mathbf {E}^{k_2}_{\epsilon ,p}\mathbf {A}^{k_2}_{p}$ which appear at the beginning and the end of a target phrase respectively with sentiment polarity $p$. To do so, we assign the scores calculated from the self-attention to such two edges: where $g_{s}$ returns a vector of length 3 with scores of three polarities. 
Note that $\mathbf {h}_k$ and $\mathbf {a}_k$ could be pre-computed at every position $k$ and assigned to the corresponding edges. Such an approach allows us to maintain the inference time complexity $O(Tn)$, where $T$ is the maximum number of tags at each position which is 9 in this work and $n$ is the number of words in the input sentence. This approach enables EI to efficiently learn rich implicit structures from LSTM and self-attention for exponentially many combinations of targets. Experimental Setup ::: Data We mainly conduct our experiments on the datasets released by BIBREF9. They contain 2,350 English tweets and 7,105 Spanish tweets, with target and targeted sentiment annotated. See Table TABREF15 for corpus statistics. Experimental Setup ::: Evaluation Metrics Following the previous works, we report the precision ($P.$), recall ($R.$) and $F_1$ scores for target recognition and targeted sentiment. Note that a correct target prediction requires the boundary of the target to be correct, and a correct targeted sentiment prediction requires both target boundary and sentiment polarity to be correct. Experimental Setup ::: Hyperparameters We adopt pretrained embeddings from BIBREF22 and BIBREF23 for English data and Spanish data respectively. We use a 2-layer LSTM (for both directions) with a hidden dimension of 500 and 600 for English data and Spanish data respectively. The dimension of the attention weight $U$ is 300. As for optimization, we use the Adam BIBREF24 optimizer to optimize the model with batch size 1 and dropout rate $0.5$. All the neural weights are initialized by Xavier BIBREF25. Experimental Setup ::: Training and Implementation We train our model for a maximal of 6 epochs. We select the best model parameters based on the best $F_1$ score on the development data after each epoch. Note that we split $10\%$ of data from the training data as the development data. The selected model is then applied to the test data for evaluation. During testing, we map words not appearing in the training data to the UNK token. Following the previous works, we perform 10-fold cross validation and report the average results. Our models and variants are implemented using PyTorch BIBREF26. Experimental Setup ::: Baselines We consider the following baselines: Pipeline BIBREF10 and Collapse BIBREF10 both are linear-chain CRF models using discrete features and embeddings. The former predicts targets first and calculate targeted sentiment for each predicted target. The latter outputs a tag at each position by collapsing the target tag and sentiment tag together. Joint BIBREF10 is a linear-chain SSVM model using both discrete features and embeddings. Such a model jointly produces target tags and sentiment tags. Bi-GRU BIBREF12 and MBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings. The former uses bi-directional GRU and the latter uses multi-layer bi-directional GRU. HBi-GRU BIBREF12 and HMBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings and character embedding. The former uses bi-directional GRU and the latter uses multi-layer bi-directional GRU. SS BIBREF11 and SS + emb BIBREF11 are both based on a latent CRF model to learn flexible explicit structures. The former uses discrete features and the latter uses both discrete features and word embeddings. SA-CRF is a linear-chain CRF model with self-attention. Such a model concatenates the hidden state from LSTM and a vector constructed by self-attention at each position, and feeds them into CRF as features. 
The model attempts to capture rich implicit structures in the input space, but it does not put effort on explicit structures in the output space. E-I is a weaker version of EI. Such a model removes the BMES sub-tags in the E tag, causing the model to learn less explicit structural information in the output space. EI- is a weaker version of EI. Such a model removes the self-attention from EI, causing the model to learn less expressive implicit structures in the input space. Results and Discussion ::: Main Results The main results are presented in Table TABREF16, where explicit structures as well as implicit structures are indicated for each model for clear comparisons. In general, our model EI outperforms all the baselines. Specifically, it outperforms the strongest baseline EI- significantly with $p < 0.01$ on the English and Spanish datasets in terms of $F_1$ scores. Note that EI- which models flexible explicit structures and less implicit structural information, achieves better performance than most of the baselines, indicating flexible explicit structures contribute a lot to the performance boost. Now let us take a closer look at the differences based on detailed comparisons. First of all, we compare our model EI with the work proposed by BIBREF10. The Pipeline model (based on CRF) as well as Joint and Collapse models (based on SSVM) in their work capture fixed explicit structures. Such two models rely on multi-layer perceptron (MLP) to obtain the local context features for implicit structures. These two models do not put much effort to capture better explicit structures and implicit structures. Our model EI (and even EI-) outperforms these two models significantly. We also compare our work with models in BIBREF12, which also capture fixed explicit structures. Such models leverage different GRUs (single-layer or multi-layer) and different input features (word embeddings and character representations) to learn better contextual features. Their best result by HMBi-GRU is obtained with multi-layer GRU with word embeddings and character embeddings. As we can see, our model EI outperforms HMBi-GRU under all evaluation metrics. On the English data, EI obtains $6.50$ higher $F_1$ score and $2.50$ higher $F_1$ score on target recognition and targeted sentiment respectively. On Spanish, EI obtains $5.16$ higher $F_1$ score and $0.50$ higher $F_1$ score on target recognition and targeted sentiment respectively. Notably, compared with HMBi-GRU, even EI- capturing the flexible explicit structures achieves better performance on most of metrics and obtains the comparable results in terms of precision and $F_1$ score on Spanish. Since both EI and EI- models attempt to capture the flexible explicit structures, the comparisons above imply the importance of modeling such flexible explicit structures in the output space. We also compare EI with E-I. The difference between these two models is that E-I removes the BMES sub-tags. Such a model captures less explicit structural information in the output space. We can see that EI outperforms E-I. Such results show that adopting BMES sub-tags in the output space to capture explicit structural information is beneficial. Now we compare EI with SA-CRF which is a linear-chain CRF model with self-attention. Such a model attempts to capture rich implicit structures, and fixed explicit structures. The difference between EI and SA-CRF is that our model EI captures flexible explicit structures in the output space which model output representations as latent variables. 
We can see that EI outperforms SA-CRF on all the metrics. Such a comparison also implies the importance of capturing flexible explicit structures in the output space. Next, we focus on the comparisons with SS BIBREF11 and SS + emb BIBREF11. Such two models as well as our models all capture the flexible explicit structures. As for the difference, both two SS models rely on hand-crafted discrete features to capture implicit structures, while our model EI and EI- learn better implicit structures by LSTM and self-attention. Furthermore, our models only require word embeddings and character embeddings as the input to our neural architecture to model rich implicit structures, leading to a comparatively simpler and more straightforward design. The comparison here suggests that LSTM and self-attention neural networks are able to capture better implicit structures than hand-crafted features. Finally, we compare EI with EI-. We can see that the $F_1$ scores of targeted sentiment for both English and Spanish produced by EI are $0.95$ and $0.97$ points higher than EI-. The main difference here is that EI makes use of self-attention to capture richer implicit structures between each target phrase and all words in the complete sentence. The comparisons here indicate the importance of capturing rich implicit structures using self-attention on this task. Results and Discussion ::: Main Results ::: Robustness Overall, all these comparisons above based on empirical results show the importance of capturing both flexible explicit structures in the output space and rich implicit structures by LSTM and self-attention in the input space. We analyze the model robustness by assessing the performance on the targeted sentiment for targets of different lengths. For both English and Spanish, we group targets into 4 categories respectively, namely length of 1, 2, 3 and $\ge 4$. Figure FIGREF32 reports the $F_1$ scores of targeted sentiment for such 4 groups on Spanish. See the English results in the supplementary material. As we can see EI outperforms all the baselines on all groups. Furthermore, following the comparisons in BIBREF10, we also measure the precision, recall and $F_1$ of subjectivity and non-neutral polarities on the Spanish dataset. Results are reported in Table TABREF29. The subjectivity measures whether a target phrase expresses an opinion or not according to BIBREF1. Comparing with the best-performing system's results reported in BIBREF10 and BIBREF11, our model EI can achieve higher $F_1$ scores on subjectivity and non-neutral polarities. Results and Discussion ::: Main Results ::: Error Analysis We conducted error analysis for our main model EI. We calculate $F_1$ scores based on the partial match instead of exact match. The $F_1$ scores for target partial match is $76.04$ and $83.82$ for English and Spanish respectively. We compare these two numbers against $63.48$ and $71.17$ which are the $F_1$ scores based on exact match. This comparison indicates that boundaries of many predicted targets do not match exactly with those of the correct targets. Furthermore, we investigate the errors caused by incorrect sentiment polarities. We found that the major type of errors is to incorrectly predict positive targets as neutral targets. Such errors contribute $64\%$ and $36\%$ of total errors for English and Spanish respectively. We believe they are mainly caused by challenging expressions in the tweet input text. 
Challenging expressions such as “below expectations” are very sparse in the data, which makes effective learning for such phrases difficult. Results and Discussion ::: Effect of Implicit Structures In order to understand whether the implicit structures truly contribute to the overall performance, we compare the performance among four models: EI and EI- as well as two variants EI (i:MLP) and EI (i:Identity) (where i indicates the implicit structure). These two variants replace the implicit structure with other components: EI (i:MLP) replaces self-attention with a multi-layer perceptron (MLP) for implicit structures. Such a variant attempts to capture implicit structures for a target phrase towards words restricted to a window of size 3 centered at the two ends of the target phrase. EI (i:Identity) replaces self-attention with an identity layer as the implicit structure. Such a variant attempts to capture implicit structures for a target phrase towards words at the two ends of the target phrase exactly. Overall, those variants perform worse than EI on all the metrics. When the self-attention is replaced by MLP or the identity layer for implicit structures, the performance drops considerably on both target and targeted sentiment. The two variants EI (i:MLP) and EI (i:Identity) consider only the words within a small window centered at the two ends of the target phrase, which might not be capable of capturing the desired implicit structures. The EI- model, which captures less implicit structural information, achieves worse results than EI, but obtains better results than the two variants discussed above. This comparison implies that properly capturing implicit structures as the complement of explicit structural information is essential. Results and Discussion ::: Qualitative Analysis We present an example sentence in the test data in Figure FIGREF38, where the gold targets are in bold, the predicted targets are in the pink boxes, the gold sentiment is in blue and the predicted sentiment is in red. EI makes all correct predictions for the three targets. EI- predicts correct boundaries for the three targets, and its targeted sentiment predictions are highlighted in Figure FIGREF38. As we can see, EI- incorrectly predicts the targeted sentiment on the first target as neutral (0). The first target here is far from the sentiment expression “sound good”, which is not in the first sentiment span, so EI- is not able to capture such a sentiment expression. This qualitative analysis helps us to better understand the importance of capturing implicit structures using both LSTM and self-attention. Results and Discussion ::: Additional Experiments We also conducted experiments on the multi-lingual Restaurant datasets from SemEval 2016 Task 5 BIBREF28, where aspect target phrases and aspect sentiments are provided. We regard each aspect target phrase as a target and assign it the corresponding aspect sentiment polarity in the data. Note that we remove all the instances which contain no targets from the training data. Following the main experiment, we split $10\%$ of the training data as the development set for the selection of the best model during training. We report the $F_1$ scores of target and targeted sentiment for English, Dutch and Russian respectively in Table TABREF43. The results show that EI achieves the best performance. The performance of SS BIBREF11 is much worse on Russian due to the inability of the discrete features in SS to capture the complex morphology of Russian.
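Before turning to related work, the exact-match criterion used in the evaluation above can be made concrete with a few lines of code. The sketch below is illustrative only: it assumes predictions and gold annotations for one sentence are given as (start, end, polarity) triples, and it is not the authors' evaluation script, which additionally averages results over 10 folds.

```python
# Illustrative sketch of the exact-match evaluation criterion (not the paper's script).
def prf(num_correct, num_pred, num_gold):
    p = num_correct / num_pred if num_pred else 0.0
    r = num_correct / num_gold if num_gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def evaluate(pred, gold):
    """pred, gold: lists of (start, end, polarity) triples for one sentence."""
    pred_spans = {(s, e) for s, e, _ in pred}
    gold_spans = {(s, e) for s, e, _ in gold}
    target_correct = len(pred_spans & gold_spans)    # boundary must match exactly
    sentiment_correct = len(set(pred) & set(gold))   # boundary and polarity must both match
    return (prf(target_correct, len(pred), len(gold)),
            prf(sentiment_correct, len(pred), len(gold)))

target_prf, sentiment_prf = evaluate(
    pred=[(0, 0, "+"), (3, 4, "0")], gold=[(0, 0, "+"), (3, 4, "+")])
print(target_prf)     # (1.0, 1.0, 1.0): both predicted boundaries match
print(sentiment_prf)  # (0.5, 0.5, 0.5): only one prediction also has the correct polarity
```

A partial-match variant, as used in the error analysis, would relax the span equality to an overlap test; since the exact overlap rule is not specified in the text, we do not attempt it here.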
Related Work We briefly survey the research efforts on two types of TSA tasks mentioned in the introduction. Note that TSA is related to aspect sentiment analysis which is to determine the sentiment polarity given a target and an aspect describing a property of related topics. Related Work ::: Predicting sentiment for a given target Such a task is typically solved by leveraging sentence structural information, such as syntactic trees BIBREF5, dependency trees BIBREF6 as well as surrounding context based on LSTM BIBREF29, GRU BIBREF7 or CNN BIBREF8. Another line of works leverage self-attention BIBREF30 or memory networks BIBREF31 to encode rich global context information. BIBREF16 adopted the segmental attention BIBREF32 to model the important text segments to compute the targeted sentiment. BIBREF33 studied the issue that the different combinations of target and aspect may result in different sentiment polarity. They proposed a model to distinguish such different combinations based on memory networks to produce the representation for aspect sentiment classification. Related Work ::: Jointly predicting targets and their associated sentiment Such a joint task is usually regarded as sequence labeling problem. BIBREF9 introduced the task of open domain targeted sentiment analysis. They proposed several models based on CRF such as the pipeline model, the collapsed model as well as the joint model to predict both targets and targeted sentiment information. Their experiments showed that the collapsed model and the joint model could achieve better results, implying the benefit of the joint learning on this task. BIBREF10 proposed an approach based on structured SVM BIBREF14, BIBREF15 integrating both discrete features and neural features for this joint task. BIBREF11 proposed the sentiment scope model motivated from a linguistic phenomenon to represent the structure information for both the targets and their associated sentiment polarities. They modelled the latent sentiment scope based on CRF with latent variables, and achieved the best performance among all the existing works. However, they did not explore much on the implicit structural information and their work mostly relied on hand-crafted discrete features. BIBREF12 adopted a multi-layer GRU to learn targets and sentiments jointly by producing the target tag and the sentiment tag at each position. They introduced a constraint forcing the sentiment tag at each position to be consistent with the target tag. However, they did not explore the explicit structural information in the output space as we do in this work. Conclusion and Future Work In this work, we argue that properly modeling both explicit structures in the output space and the implicit structures in the input space are crucial for building a successful targeted sentiment analysis system. Specifically, we propose a new model that captures explicit structures with latent CRF, and uses LSTM and self-attention to capture rich implicit structures in the input space efficiently. Through extensive experiments, we show that our model is able to outperform competitive baseline models significantly, thanks to its ability to properly capture both explicit and implicit structural information. Future work includes exploring approaches to capture explicit and implicit structural information to other sentiment analysis tasks and other structured prediction problems. Acknowledgments We would like to thank the anonymous reviewers for their thoughtful and constructive comments. 
This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156. Appendix ::: Robustness We also report the results for targets of different lengths on English in Figure FIGREF44. As we can see, our model EI outperforms the others except when the length is greater than or equal to 4. Note that, according to the statistics in the main paper, there are only a small number of targets in this length group. Appendix ::: Additional Experiments We present the data statistics for English, Dutch and Russian in the SemEval 2016 Restaurant dataset BIBREF28 in Table TABREF45.
10-fold cross validation
00d6228bcd6b839529e52d0d622bf787a9356158
00d6228bcd6b839529e52d0d622bf787a9356158_0
Q: How is the effectiveness of the model evaluated? Text: Introduction Accepted as a long paper in EMNLP 2019 (Conference on Empirical Methods in Natural Language Processing). Targeted sentiment analysis (TSA) is an important task useful for public opinion mining BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. The task focuses on predicting the sentiment information towards a specific target phrase, which is usually a named entity, in a given input sentence. Currently, TSA in the literature may refer to either of the two possible tasks under two different setups: 1) predicting the sentiment polarity for a given specific target phrase BIBREF5, BIBREF6, BIBREF7, BIBREF8; 2) jointly predicting the targets together with the sentiment polarity assigned to each target BIBREF9, BIBREF10, BIBREF11, BIBREF12. In this paper, we focus on the latter setup which was originally proposed by BIBREF9. Figure FIGREF2 presents an example sentence containing three targets. Each target is associated with a sentiment, where we use $+$ for denoting positive polarity, 0 for neutral and $-$ for negative. Existing research efforts mostly regard this task as a sequence labeling problem by assigning a tag to each word token, where the tags are typically designed in a way that capture both the target boundary as well as the targeted sentiment polarity information together. Existing approaches BIBREF9, BIBREF10, BIBREF12 build models based on conditional random fields (CRF) BIBREF13 or structural support vector machines (SSVM) BIBREF14, BIBREF15 to explicitly model the sentiment information with structured outputs, where each targeted sentiment prediction corresponds to exactly one fixed output. While effective, such models suffer from their inability in capturing certain long-distance dependencies between sentiment keywords and their targets. To remedy this issue, BIBREF11 proposed their “sentiment scope’’ model to learn flexible output representations. For example, three text spans with their corresponding targets in bold are presented in Figure FIGREF2, where each target’s sentiment is characterized by the words appearing in the corresponding text span. They learn from data for each target a latent text span used for attributing its sentiment, resulting in flexible output structures. However, we note there are two major limitations with the approach of BIBREF11. First, their model requires a large number of hand-crafted discrete features. Second, the model relies on a strong assumption that the latent sentiment spans do not overlap with one another. For example, in Figure FIGREF2, their model will not be able to capture the interaction between the target word “OZ” in the first sentiment span and the keyword “amazing” due to the assumptions made on the explicit structures in the output space. One idea to resolve this issue is to design an alternative mechanism to capture such useful structural information that resides in the input space. On the other hand, recent literature shows that feature learning mechanisms such as self-attention have been successful for the task of sentiment prediction when targets are given BIBREF16, BIBREF17, BIBREF18 (i.e., under the first setup mentioned above). Such approaches essentially attempt to learn rich implicit structural information in the input space that captures the interactions between a given target and all other word tokens within the sentence. 
Such implicit structures are then used to generate sentiment summary representation towards the given target, leading to the performance boost. However, to date capturing rich implicit structures in the joint prediction task that we focus on (i.e., the second setup) remains largely unexplored. Unlike the first setup, in our setup the targets are not given, we need to handle exponentially many possible combinations of targets in the joint task. This makes the design of an algorithm for capturing both implicit structural information from the input space and the explicit structural information from the output space challenging. Motivated by the limitations and challenges, we present a novel approach that is able to efficiently and effectively capture the explicit and implicit structural information for TSA. We make the following key contributions in this work: We propose a model that is able to properly integrate both explicit and implicit structural information, called EI. The model is able to learn flexible explicit structural information in the output space while being able to efficiently learn rich implicit structures by LSTM and self-attention for exponentially many possible combinations of targets in a given sentence. We conducted extensive experiments to validate our claim that both explicit and implicit structures are indispensable in such a task, and demonstrate the effectiveness and robustness of our model. Approach Our objective is to design a model to extract targets as well as their associated targeted sentiments for a given sentence in a joint manner. As we mentioned before, we believe that both explicit and implicit structures are crucial for building a successful model for TSA. Specifically, we first present an approach to learn flexible explicit structures based on latent CRF, and next present an approach to efficiently learn the rich implicit structures for exponentially many possible combinations of targets. Approach ::: Explicit Structure Motivated by BIBREF11, we design an approach based on latent CRF to model flexible sentiment spans to capture better explicit structures in the output space. To do so, we firstly integrate target and targeted sentiment information into a label sequence by using 3 types of tags in our EI model: $\mathbf {B}_p$, $\mathbf {A}_p$, and $\mathbf {E}_{\epsilon ,p}$, where $p \in \lbrace +, -, 0\rbrace $ indicates the sentiment polarity and $\epsilon \in \lbrace \textit {B,M,E,S}\rbrace $ denotes the BMES tagging scheme. We explain the meaning of each type of tags as follows. $\mathbf {B}_p$ is used to denote that the current word is part of a sentiment span with polarity $p$, but appears before the target word or exactly as the first word of the target. $\mathbf {A}_p$ is used to denote that the current word is part of a sentiment span with polarity $p$, but appears after the target word or exactly as the last word of the target. $\mathbf {E}_{\epsilon ,p}$ is used to denote the current word is part of a sentiment span with polarity $p$, and is also a part of the target. The BMES sub-tag $\epsilon $ denotes the position information within the target phrase. For example, $\mathbf {E}_{B,+}$ represents that the current word appears as the first word of a target with the positive polarity. We illustrate how to construct the label sequence for a specific combination of sentiment spans of the given example sentence in Figure FIGREF5, where three non-overlapping sentiment spans in yellow are presented. 
Each such sentiment span encodes the sentiment polarity in blue for a target in bold in pink square. At each position, we allow multiple tags in a sequence to appear such that the edge $\mathbf {A}_p\mathbf {B}_{p^{\prime }}$ in red consistently indicates the boundary between two adjacent sentiment spans. The first sentiment span with positive ($+$) polarity contains only one word which is also the target. Such a single word target is also the beginning and the end of the target. We use three tags $\mathbf {B}_+$, $\mathbf {E}_{S,+}$ and $\mathbf {A}_+$ to encode such information above. The second sentiment span with positive ($+$) polarity contains a two-word target “Shin Lim”. The word “and” appearing before such target takes a tag $\mathbf {B}_+$. The words “perform amazing magic” appearing after such target take a tag $\mathbf {A}_+$ at each position. As for the target, the word “Shin” at the beginning of the target takes tags $\mathbf {B}_+$ and $\mathbf {E}_{B,+}$, while the word “Lim” at the end of the target takes tags $\mathbf {E}_{E,+}$ and $\mathbf {A}_+$. The third sentiment span with neutral (0) polarity contains a single-word target “AGT”. Similarly, we use three tags $\mathbf {B}_0$, $\mathbf {E}_{S,0}$ and $\mathbf {A}_0$ to represent such single word target. The word “on” appearing before such target takes a tag $\mathbf {B}_0$. The word “2018” appearing afterwards takes a tag $\mathbf {A}_0$. Note that if there exists a target with length larger than 2, the tag $\mathbf {E}_{M,p}$ will be used. For example in Figure FIGREF5, if the target phrase “Shin Lim” is replaced by “Shin Bob Lim”, we will keep the tags at “Shin” and “Lim” unchanged. We assign a tag $\mathbf {E}_{M,+}$ at the word “Bob” to indicate that “Bob” appears in the middle of the target by following the BMES tagging scheme. Finally, we represent the label sequence by connecting adjacent tags sequentially with edges. Notice that for a given input sentence and the output targets as well as the associated targeted sentiment, there exist exponentially many possible label sequences, each specifying a different possible combinations of sentiment spans. Figure FIGREF11 shows a label sequence for an alternative combination of the sentiment spans. Those label sequences representing the same input and output construct a latent variable in our model, capturing the flexible explicit structures in the output space. We use a log-linear formulation to parameterize our model. Specifically, the probability of predicting a possible output $\mathbf {y}$, which is a list of targets and their associated sentiment information, given an input sentence $\mathbf {x}$, is defined as: where $s(\mathbf {x},\mathbf {y},\mathbf {h})$ is a score function defined over the sentence $\mathbf {x}$ and the output structure $\mathbf {y}$, together with the latent variable $\mathbf {h}$ that provides all the possible combinations of sentiment spans for the $(\mathbf {x,y})$ tuple. We define $E(\mathbf {x},\mathbf {y},\mathbf {h})$ as a set of all the edges appearing in all the label sequences for such combinations of sentiment spans. To compute $s(\mathbf {x},\mathbf {y},\mathbf {h})$, we sum up the scores of each edge in $E(\mathbf {x},\mathbf {y},\mathbf {h})$: where $\phi _{\mathbf {x}}(e)$ is a score function defined over an edge $e$ for the input $\mathbf {x}$. The overall model is analogous to that of a neural CRF BIBREF19, BIBREF20; hence the inference and decoding follow standard marginal and MAP inference procedures. 
For example, the prediction of $\mathbf {y}$ follows the Viterbi-like MAP inference procedure. Approach ::: Implicit Structure We propose a design for EI to efficiently learn rich implicit structures for exponentially many combinations of targets to predict. To do so, we explain the process to assign scores to each edge $e$ from our neural architecture. The three yellow boxes in Figure FIGREF14 compute scores for rich implicit structures from the neural architecture consisting of LSTM and self-attention. Given an input token sequence $\mathbf {x}=\lbrace x_1,x_2,\cdots ,x_{n}\rbrace $ of length $n$, we first compute the concatenated embedding $\mathbf {e}_k=[\mathbf {w}_k;\mathbf {c}_k]$ based on word embedding $\mathbf {w}_k$ and character embedding $\mathbf {c}_k$ at position $k$. As illustrated on the left part in Figure FIGREF14, we then use a Bi-directional LSTM to encode context features and obtain hidden states $\mathbf {h}_k=\mathrm {BiLSTM}(\mathbf {e_1},\mathbf {e_2}, \cdots , \mathbf {e_n})$. We use two different linear layers $f_t$ and $f_s$ to compute scores for target and sentiment respectively. The linear layer $f_t$ returns a vector of length 4, with each value in the vector indicating the score of the corresponding tag under the BMES tagging scheme. The linear layer $f_s$ returns a vector of length 3, with each value representing the score of a certain polarity of $+,0,-$. We assign such scores to each type of edge as follows: Note that the subscript $p$ and $\epsilon $ at the right hand side of above equations denote the corresponding index of the vector that $f_t$ or $f_s$ returns. We apply $f_{t}$ on edges $\mathbf {E}^{k}_{\epsilon ,p}\mathbf {E}^{k+1}_{\epsilon ^{\prime },p}$ and $\mathbf {E}^{k}_{\epsilon ,p}\mathbf {A}^{k}_{p}$, since words at these edges are parts of the target phrase in a sentiment span. Similarly, we apply $f_{s}$ on edges $\mathbf {B}^{k}_{p}\mathbf {B}^{k+1}_{p}$,$\mathbf {A}^{k}_{p}\mathbf {A}^{k+1}_{p}$ and $\mathbf {A}^{k}_{p}\mathbf {B}^{k+1}_{p^{\prime }}$, since words at these edges contribute the sentiment information for the target in the sentiment span. As illustrated in Figure FIGREF14, we calculate $\mathbf {a}_k$, the output of self-attention at position $k$: where $\alpha _{k,j}$ is the normalized weight score for $\mathbf {\beta }_{k,j}$, and $\mathbf {\beta }_{k,j}$ is the weight score calculated by target representation at position $k$ and contextual representation at position $j$. In addition, $W$ and $b$ as well as the attention matrix $U$ are the weights to be learned. Such a vector $\mathbf {a}_k$ encodes the implicit structures between the word $x_k$ and each word in the remaining sentence. Motivated by the character embeddings BIBREF21 which are generated based on hidden states at two ends of a subsequence, we encode such implicit structures for a target similarly. For any target starting at the position $k_1$ and ending at the position $k_2$, we could use $\mathbf {a}_{k_1}$ and $\mathbf {a}_{k_2}$ at two ends to represent the implicit structures of such a target. We encode such information on the edges $\mathbf {B}^{k_1}_{p}\mathbf {E}^{k_1}_{\epsilon ,p}$ and $\mathbf {E}^{k_2}_{\epsilon ,p}\mathbf {A}^{k_2}_{p}$ which appear at the beginning and the end of a target phrase respectively with sentiment polarity $p$. To do so, we assign the scores calculated from the self-attention to such two edges: where $g_{s}$ returns a vector of length 3 with scores of three polarities. 
Note that $\mathbf {h}_k$ and $\mathbf {a}_k$ could be pre-computed at every position $k$ and assigned to the corresponding edges. Such an approach allows us to maintain the inference time complexity $O(Tn)$, where $T$ is the maximum number of tags at each position which is 9 in this work and $n$ is the number of words in the input sentence. This approach enables EI to efficiently learn rich implicit structures from LSTM and self-attention for exponentially many combinations of targets. Experimental Setup ::: Data We mainly conduct our experiments on the datasets released by BIBREF9. They contain 2,350 English tweets and 7,105 Spanish tweets, with target and targeted sentiment annotated. See Table TABREF15 for corpus statistics. Experimental Setup ::: Evaluation Metrics Following the previous works, we report the precision ($P.$), recall ($R.$) and $F_1$ scores for target recognition and targeted sentiment. Note that a correct target prediction requires the boundary of the target to be correct, and a correct targeted sentiment prediction requires both target boundary and sentiment polarity to be correct. Experimental Setup ::: Hyperparameters We adopt pretrained embeddings from BIBREF22 and BIBREF23 for English data and Spanish data respectively. We use a 2-layer LSTM (for both directions) with a hidden dimension of 500 and 600 for English data and Spanish data respectively. The dimension of the attention weight $U$ is 300. As for optimization, we use the Adam BIBREF24 optimizer to optimize the model with batch size 1 and dropout rate $0.5$. All the neural weights are initialized by Xavier BIBREF25. Experimental Setup ::: Training and Implementation We train our model for a maximal of 6 epochs. We select the best model parameters based on the best $F_1$ score on the development data after each epoch. Note that we split $10\%$ of data from the training data as the development data. The selected model is then applied to the test data for evaluation. During testing, we map words not appearing in the training data to the UNK token. Following the previous works, we perform 10-fold cross validation and report the average results. Our models and variants are implemented using PyTorch BIBREF26. Experimental Setup ::: Baselines We consider the following baselines: Pipeline BIBREF10 and Collapse BIBREF10 both are linear-chain CRF models using discrete features and embeddings. The former predicts targets first and calculate targeted sentiment for each predicted target. The latter outputs a tag at each position by collapsing the target tag and sentiment tag together. Joint BIBREF10 is a linear-chain SSVM model using both discrete features and embeddings. Such a model jointly produces target tags and sentiment tags. Bi-GRU BIBREF12 and MBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings. The former uses bi-directional GRU and the latter uses multi-layer bi-directional GRU. HBi-GRU BIBREF12 and HMBi-GRU BIBREF12 are both linear-chain CRF models using word embeddings and character embedding. The former uses bi-directional GRU and the latter uses multi-layer bi-directional GRU. SS BIBREF11 and SS + emb BIBREF11 are both based on a latent CRF model to learn flexible explicit structures. The former uses discrete features and the latter uses both discrete features and word embeddings. SA-CRF is a linear-chain CRF model with self-attention. Such a model concatenates the hidden state from LSTM and a vector constructed by self-attention at each position, and feeds them into CRF as features. 
The model attempts to capture rich implicit structures in the input space, but it does not put effort on explicit structures in the output space. E-I is a weaker version of EI. Such a model removes the BMES sub-tags in the E tag, causing the model to learn less explicit structural information in the output space. EI- is a weaker version of EI. Such a model removes the self-attention from EI, causing the model to learn less expressive implicit structures in the input space. Results and Discussion ::: Main Results The main results are presented in Table TABREF16, where explicit structures as well as implicit structures are indicated for each model for clear comparisons. In general, our model EI outperforms all the baselines. Specifically, it outperforms the strongest baseline EI- significantly with $p < 0.01$ on the English and Spanish datasets in terms of $F_1$ scores. Note that EI- which models flexible explicit structures and less implicit structural information, achieves better performance than most of the baselines, indicating flexible explicit structures contribute a lot to the performance boost. Now let us take a closer look at the differences based on detailed comparisons. First of all, we compare our model EI with the work proposed by BIBREF10. The Pipeline model (based on CRF) as well as Joint and Collapse models (based on SSVM) in their work capture fixed explicit structures. Such two models rely on multi-layer perceptron (MLP) to obtain the local context features for implicit structures. These two models do not put much effort to capture better explicit structures and implicit structures. Our model EI (and even EI-) outperforms these two models significantly. We also compare our work with models in BIBREF12, which also capture fixed explicit structures. Such models leverage different GRUs (single-layer or multi-layer) and different input features (word embeddings and character representations) to learn better contextual features. Their best result by HMBi-GRU is obtained with multi-layer GRU with word embeddings and character embeddings. As we can see, our model EI outperforms HMBi-GRU under all evaluation metrics. On the English data, EI obtains $6.50$ higher $F_1$ score and $2.50$ higher $F_1$ score on target recognition and targeted sentiment respectively. On Spanish, EI obtains $5.16$ higher $F_1$ score and $0.50$ higher $F_1$ score on target recognition and targeted sentiment respectively. Notably, compared with HMBi-GRU, even EI- capturing the flexible explicit structures achieves better performance on most of metrics and obtains the comparable results in terms of precision and $F_1$ score on Spanish. Since both EI and EI- models attempt to capture the flexible explicit structures, the comparisons above imply the importance of modeling such flexible explicit structures in the output space. We also compare EI with E-I. The difference between these two models is that E-I removes the BMES sub-tags. Such a model captures less explicit structural information in the output space. We can see that EI outperforms E-I. Such results show that adopting BMES sub-tags in the output space to capture explicit structural information is beneficial. Now we compare EI with SA-CRF which is a linear-chain CRF model with self-attention. Such a model attempts to capture rich implicit structures, and fixed explicit structures. The difference between EI and SA-CRF is that our model EI captures flexible explicit structures in the output space which model output representations as latent variables. 
We can see that EI outperforms SA-CRF on all the metrics. Such a comparison also implies the importance of capturing flexible explicit structures in the output space. Next, we focus on the comparisons with SS BIBREF11 and SS + emb BIBREF11. Such two models as well as our models all capture the flexible explicit structures. As for the difference, both two SS models rely on hand-crafted discrete features to capture implicit structures, while our model EI and EI- learn better implicit structures by LSTM and self-attention. Furthermore, our models only require word embeddings and character embeddings as the input to our neural architecture to model rich implicit structures, leading to a comparatively simpler and more straightforward design. The comparison here suggests that LSTM and self-attention neural networks are able to capture better implicit structures than hand-crafted features. Finally, we compare EI with EI-. We can see that the $F_1$ scores of targeted sentiment for both English and Spanish produced by EI are $0.95$ and $0.97$ points higher than EI-. The main difference here is that EI makes use of self-attention to capture richer implicit structures between each target phrase and all words in the complete sentence. The comparisons here indicate the importance of capturing rich implicit structures using self-attention on this task. Results and Discussion ::: Main Results ::: Robustness Overall, all these comparisons above based on empirical results show the importance of capturing both flexible explicit structures in the output space and rich implicit structures by LSTM and self-attention in the input space. We analyze the model robustness by assessing the performance on the targeted sentiment for targets of different lengths. For both English and Spanish, we group targets into 4 categories respectively, namely length of 1, 2, 3 and $\ge 4$. Figure FIGREF32 reports the $F_1$ scores of targeted sentiment for such 4 groups on Spanish. See the English results in the supplementary material. As we can see EI outperforms all the baselines on all groups. Furthermore, following the comparisons in BIBREF10, we also measure the precision, recall and $F_1$ of subjectivity and non-neutral polarities on the Spanish dataset. Results are reported in Table TABREF29. The subjectivity measures whether a target phrase expresses an opinion or not according to BIBREF1. Comparing with the best-performing system's results reported in BIBREF10 and BIBREF11, our model EI can achieve higher $F_1$ scores on subjectivity and non-neutral polarities. Results and Discussion ::: Main Results ::: Error Analysis We conducted error analysis for our main model EI. We calculate $F_1$ scores based on the partial match instead of exact match. The $F_1$ scores for target partial match is $76.04$ and $83.82$ for English and Spanish respectively. We compare these two numbers against $63.48$ and $71.17$ which are the $F_1$ scores based on exact match. This comparison indicates that boundaries of many predicted targets do not match exactly with those of the correct targets. Furthermore, we investigate the errors caused by incorrect sentiment polarities. We found that the major type of errors is to incorrectly predict positive targets as neutral targets. Such errors contribute $64\%$ and $36\%$ of total errors for English and Spanish respectively. We believe they are mainly caused by challenging expressions in the tweet input text. 
Challenging expressions such as “below expectations” are very sparse in the data, which makes effective learning for such phrases difficult. Results and Discussion ::: Effect of Implicit Structures In order to understand whether the implicit structures truly contribute to the overall performance, we compare the performance among four models: EI and EI- as well as two variants EI (i:MLP) and EI (i:Identity) (where i indicates the implicit structure). These two variants replace the implicit structure with other components: EI (i:MLP) replaces self-attention with a multi-layer perceptron (MLP) for implicit structures. Such a variant attempts to capture implicit structures for a target phrase towards words restricted to a window of size 3 centered at the two ends of the target phrase. EI (i:Identity) replaces self-attention with an identity layer as the implicit structure. Such a variant attempts to capture implicit structures for a target phrase towards words at the two ends of the target phrase exactly. Overall, those variants perform worse than EI on all the metrics. When the self-attention is replaced by MLP or the identity layer for implicit structures, the performance drops considerably on both target and targeted sentiment. The two variants EI (i:MLP) and EI (i:Identity) consider only the words within a small window centered at the two ends of the target phrase, which might not be capable of capturing the desired implicit structures. The EI- model, which captures less implicit structural information, achieves worse results than EI, but obtains better results than the two variants discussed above. This comparison implies that properly capturing implicit structures as the complement of explicit structural information is essential. Results and Discussion ::: Qualitative Analysis We present an example sentence in the test data in Figure FIGREF38, where the gold targets are in bold, the predicted targets are in the pink boxes, the gold sentiment is in blue and the predicted sentiment is in red. EI makes all correct predictions for the three targets. EI- predicts correct boundaries for the three targets, and its targeted sentiment predictions are highlighted in Figure FIGREF38. As we can see, EI- incorrectly predicts the targeted sentiment on the first target as neutral (0). The first target here is far from the sentiment expression “sound good”, which is not in the first sentiment span, so EI- is not able to capture such a sentiment expression. This qualitative analysis helps us to better understand the importance of capturing implicit structures using both LSTM and self-attention. Results and Discussion ::: Additional Experiments We also conducted experiments on the multi-lingual Restaurant datasets from SemEval 2016 Task 5 BIBREF28, where aspect target phrases and aspect sentiments are provided. We regard each aspect target phrase as a target and assign it the corresponding aspect sentiment polarity in the data. Note that we remove all the instances which contain no targets from the training data. Following the main experiment, we split $10\%$ of the training data as the development set for the selection of the best model during training. We report the $F_1$ scores of target and targeted sentiment for English, Dutch and Russian respectively in Table TABREF43. The results show that EI achieves the best performance. The performance of SS BIBREF11 is much worse on Russian due to the inability of the discrete features in SS to capture the complex morphology of Russian.
Related Work We briefly survey the research efforts on the two types of TSA tasks mentioned in the introduction. Note that TSA is related to aspect sentiment analysis, which aims to determine the sentiment polarity given a target and an aspect describing a property of related topics. Related Work ::: Predicting sentiment for a given target This task is typically solved by leveraging sentence structural information, such as syntactic trees BIBREF5 and dependency trees BIBREF6, as well as surrounding context based on LSTM BIBREF29, GRU BIBREF7 or CNN BIBREF8. Another line of work leverages self-attention BIBREF30 or memory networks BIBREF31 to encode rich global context information. BIBREF16 adopted segmental attention BIBREF32 to model the important text segments when computing the targeted sentiment. BIBREF33 studied the issue that different combinations of target and aspect may result in different sentiment polarities. They proposed a model based on memory networks to distinguish such combinations and produce the representation for aspect sentiment classification. Related Work ::: Jointly predicting targets and their associated sentiment This joint task is usually regarded as a sequence labeling problem. BIBREF9 introduced the task of open domain targeted sentiment analysis. They proposed several models based on CRF, such as the pipeline model, the collapsed model and the joint model, to predict both targets and targeted sentiment information. Their experiments showed that the collapsed model and the joint model achieve better results, implying the benefit of joint learning on this task. BIBREF10 proposed an approach based on structured SVM BIBREF14, BIBREF15 integrating both discrete features and neural features for this joint task. BIBREF11 proposed the sentiment scope model, motivated by a linguistic phenomenon, to represent the structural information for both the targets and their associated sentiment polarities. They modelled the latent sentiment scope based on a CRF with latent variables, and achieved the best performance among all existing works. However, they did not explore the implicit structural information in depth, and their work mostly relied on hand-crafted discrete features. BIBREF12 adopted a multi-layer GRU to learn targets and sentiments jointly by producing a target tag and a sentiment tag at each position. They introduced a constraint forcing the sentiment tag at each position to be consistent with the target tag. However, they did not explore the explicit structural information in the output space as we do in this work. Conclusion and Future Work In this work, we argue that properly modeling both explicit structures in the output space and implicit structures in the input space is crucial for building a successful targeted sentiment analysis system. Specifically, we propose a new model that captures explicit structures with a latent CRF, and uses LSTM and self-attention to capture rich implicit structures in the input space efficiently. Through extensive experiments, we show that our model is able to outperform competitive baseline models significantly, thanks to its ability to properly capture both explicit and implicit structural information. Future work includes exploring approaches to capture explicit and implicit structural information for other sentiment analysis tasks and other structured prediction problems. Acknowledgments We would like to thank the anonymous reviewers for their thoughtful and constructive comments.
This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156. Appendix ::: Robustness We also report the results for targets of different lengths on English in Figure FIGREF44. As we can see, our model EI outperforms the others except when the length is greater than or equal to 4. Note that, according to the statistics in the main paper, only a small number of targets fall into this group. Appendix ::: Additional Experiments We present the data statistics for English, Dutch and Russian in the SemEval 2016 Restaurant dataset BIBREF28 in Table TABREF45.
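As a concrete reference for how such a length-based robustness breakdown can be computed, the following is a small illustrative sketch, not the authors' evaluation script; the representation of targets as (start, end, polarity) tuples and the exact-match criterion are assumptions for illustration.

# Illustrative sketch: group gold/predicted targets by length and compute
# per-group F1. Spans are assumed to be (start, end, polarity) tuples.
from collections import defaultdict

def length_bucket(span):
    n = span[1] - span[0]                   # number of words in the target
    return n if n < 4 else 4                # buckets: 1, 2, 3, >=4

def f1_by_length(gold, pred):
    tp, g_count, p_count = defaultdict(int), defaultdict(int), defaultdict(int)
    for s in gold:
        g_count[length_bucket(s)] += 1
    for s in pred:
        p_count[length_bucket(s)] += 1
        if s in gold:                        # exact match on boundaries and polarity
            tp[length_bucket(s)] += 1
    scores = {}
    for b in sorted(set(g_count) | set(p_count)):
        prec = tp[b] / p_count[b] if p_count[b] else 0.0
        rec = tp[b] / g_count[b] if g_count[b] else 0.0
        scores[b] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores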
precision ($P.$), recall ($R.$) and $F_1$ scores for target recognition and targeted sentiment
c3d50f1e6942c9894f9a344e7cbc411af01e419c
c3d50f1e6942c9894f9a344e7cbc411af01e419c_0
Q: Do they assume sentence-level supervision? Text: Introduction Humans spend countless hours extracting structured machine readable information from unstructured information in a multitude of domains. Promising to automate this, information extraction (IE) is one of the most sought-after industrial applications of natural language processing. However, despite substantial research efforts, in practice, many applications still rely on manual effort to extract the relevant information. One of the main bottlenecks is a shortage of the data required to train state-of-the-art IE models, which rely on sequence tagging BIBREF0 , BIBREF1 . Such models require sufficient amounts of training data that is labeled at the token-level, i.e., with one label for each word. The reason token-level labels are in short supply is that they are not the intended output of human IE tasks. Creating token-level labels thus requires an additional effort, essentially doubling the work required to process each item. This additional effort is expensive and infeasible for many production systems: estimates put the average cost for a sentence at about 3 dollars, and about half an hour annotator time BIBREF2 . Consequently, state-of-the-art IE approaches, relying on sequence taggers, cannot be applied to many real life IE tasks. What is readily available in abundance and at no additional costs, is the raw, unstructured input and machine-readable output to a human IE task. Consider the transcription of receipts, checks, or business documents, where the input is an unstructured PDF and the output a row in a database (due date, payable amount, etc). Another example is flight bookings, where the input is a natural language request from the user, and the output a HTTP request, sent to the airline booking API. To better exploit such existing data sources, we propose an end-to-end (E2E) model based on pointer networks with attention, which can be trained end-to-end on the input/output pairs of human IE tasks, without requiring token-level annotations. We evaluate our model on three traditional IE data sets. Note that our model and the baselines are competing in two dimensions. The first is cost and applicability. The baselines require token-level labels, which are expensive and unavailable for many real life tasks. Our model does not require such token-level labels. Given the time and money required for these annotations, our model clearly improves over the baselines in this dimension. The second dimension is the accuracy of the models. Here we show that our model is competitive with the baseline models on two of the data sets and only slightly worse on the last data set, all despite fewer available annotations. Model Our proposed model is based on pointer networks BIBREF3 . A pointer network is a sequence-to-sequence model with attention in which the output is a position in the input sequence. The input position is "pointed to" using the attention mechanism. See figure 1 for an overview. Our formulation of the pointer network is slightly different from the original: Our output is some content from the input rather than a position in the input. An input sequence of $N$ words $\mathbf {x} = x_1,...,x_N$ is encoded into another sequence of length $N$ using an Encoder. $$e_i &= \text{Encoder}(x_i, e_{i-1})$$ (Eq. 3) We use a single shared encoder, and $k = 1..K$ decoders, one for each piece of information we wish to extract. 
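As a rough picture of the setup described so far, the sketch below wires one shared encoder to K field-specific decoders in PyTorch. The dimensions follow the description later in this section (96-dimensional word embeddings, a Bi-LSTM with 128 hidden units), while the class and method names and the concatenation-based scoring are illustrative assumptions rather than the authors' code; the attention step each decoder performs is formalized in the equations that follow.

# Minimal sketch (not the authors' code): a shared Bi-LSTM encoder and K
# decoders, one per field to extract. Each decoder scores every encoder
# position; the softmax over those scores is both the attention and the
# output distribution over input words (the "pointer").
import torch
import torch.nn as nn

class E2EExtractor(nn.Module):
    def __init__(self, vocab_size, num_fields, emb_dim=96, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.decoders = nn.ModuleList(
            [nn.LSTMCell(emb_dim, hidden_dim) for _ in range(num_fields)]
        )
        # a simple concatenation score, standing in for the additive attention
        self.attn = nn.ModuleList(
            [nn.Linear(2 * hidden_dim + hidden_dim, 1) for _ in range(num_fields)]
        )

    def encode(self, x):
        # x: (batch, N) word ids -> (batch, N, 2 * hidden_dim) encodings
        return self.encoder(self.embed(x))[0]

    def point(self, k, enc, dec_h):
        # Unnormalized score for every input position, then softmax: the
        # resulting distribution over input words is decoder k's output.
        expanded = dec_h.unsqueeze(1).expand(-1, enc.size(1), -1)
        scores = self.attn[k](torch.cat([enc, expanded], dim=-1)).squeeze(-1)
        return torch.softmax(scores, dim=-1)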
At each step $j$ each decoder calculate an unnormalized scalar attention score $a_{kji}$ over each input position $i$ . The $k$ 'th decoder output at step $j$ , $o_{kj}$ , is then the weighted sum of inputs, weighted with the normalized attention scores $att_{kji}$ . $$d_{kj} &= \text{Decoder}_k(o_{k,j-1}, d_{k,j-1}) \\ a_{kji} &= \text{Attention}_k(d_{kj}, e_i) \text{ for } i = 1..N \\ att_{kji} &= \text{softmax}(a_{kji}) \text{ for } i = 1..N \\ o_{kj} &= \sum _{i=1}^N att_{kji} \, x_i \ .$$ (Eq. 4) Since each $x_i$ is a one-hot encoded word, and the $att_{kji}$ sum to one over $i$ , $o_{kj}$ is a probability distribution over words. The loss function is the sum of the negative cross entropy for each of the expected outputs $y_{kj}$ and decoder outputs $o_{kj}$ . $$\mathcal {L}(\bf {x}, \bf {y}) &= -\sum _{k=1}^K \frac{1}{M_k} \sum _{j=1}^{M_k} y_{kj} \log \left(o_{kj}\right) \ ,$$ (Eq. 5) where $M_k$ is the sequence length of expected output $y_k$ . The specific architecture depends on the choice of $\text{Encoder}$ , $\text{Decoder}$ and $\text{Attention}$ . For the encoder, we use a Bi-LSTM with 128 hidden units and a word embedding of 96 dimensions. We use a separate decoder for each of the fields. Each decoder has a word embedding of 96 dimensions, a LSTM with 128 units, with a learned first hidden state and its own attention mechanism. Our attention mechanism follows BIBREF4 $$a_{ji} &= v^T \tanh (W_{e} \, enc_i + W_{d} \, dec_j) \ .$$ (Eq. 6) The attention parameters $W_e$ , $W_d$ and $v$ for each attention mechanism are all 128-dimensional. During training we use teacher forcing for the decoders BIBREF5 , such that $o_{k,j-1} = y_{k,j-1}$ . During testing we use argmax to select the most probable output for each step $j$ and run each decoder until the first end of sentence (EOS) symbol. Data sets To compare our model to baselines relying on token-level labels we use existing data sets for which token level-labels are available. We measure our performance on the ATIS data set BIBREF6 (4978 training samples, 893 testing samples) and the MIT restaurant (7660 train, 1521 test) and movie corpus (9775 train, 2443 test) BIBREF7 . These data sets contains token-level labels in the Beginning-Inside-Out format (BIO). The ATIS data set consists of natural language requests to a simulated airline booking system. Each word is labeled with one of several classes, e.g. departure city, arrival city, cost, etc. The MIT restaurant and movie corpus are similar, except for a restaurant and movie domain respectively. See table 1 for samples. Since our model does not need token-level labels, we create an E2E version of each data set without token-level labels by chunking the BIO-labeled words and using the labels as fields to extract. If there are multiple outputs for a single field, e.g. multiple destination cities, we join them with a comma. For the ATIS data set, we choose the 10 most common labels, and we use all the labels for the movie and restaurant corpus. The movie data set has 12 fields and the restaurant has 8. See Table 2 for an example of the E2E ATIS data set. Baselines For the baselines, we use a two layer neural network model. The first layer is a Bi-directional Long Short Term Memory network BIBREF8 (Bi-LSTM) and the second layer is a forward-only LSTM. Both layers have 128 hidden units. We use a trained word embedding of size 128. The baseline is trained with Adam BIBREF9 on the BIO labels and uses early stopping on a held out validation set. 
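For reference, a minimal sketch of this token-level baseline (a Bi-LSTM followed by a forward-only LSTM, then a per-token softmax over BIO labels) could look as follows; the hyperparameters are taken from the description above, and everything else (names, batching) is illustrative.

# Sketch of the token-level baseline: Bi-LSTM -> forward LSTM -> per-token
# classification over BIO labels. Both recurrent layers have 128 hidden units
# and the word embedding has size 128, as described in the text.
import torch.nn as nn

class BioTagger(nn.Module):
    def __init__(self, vocab_size, num_labels, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.lstm = nn.LSTM(2 * hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_labels)   # one BIO label per token

    def forward(self, tokens):
        h, _ = self.bilstm(self.embed(tokens))
        h, _ = self.lstm(h)
        return self.out(h)                             # (batch, seq_len, num_labels) logits

Training would minimize a per-token cross-entropy against the BIO labels, for example nn.CrossEntropyLoss()(logits.transpose(1, 2), labels), with Adam and early stopping as described above.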
This baseline architecture achieves a fairly strong F1 score of 0.9456 on the ATIS data set. For comparison, the published state-of-the-art is 0.9586 BIBREF1. These numbers are for the traditional BIO token-level measure of performance using the publicly available conlleval script. They should not be confused with the E2E performance reported later. We present them here so that readers familiar with the ATIS data set can evaluate the strength of our baselines using a well-known measure. For the E2E performance measure we train the baseline models using token-level BIO labels and predict BIO labels on the test set. Given the predicted BIO labels, we create the E2E output for the baseline models in the same way we created the E2E data sets, i.e. by chunking and extracting labels as fields. We evaluate our model and the baselines using the MUC-5 definitions of precision, recall and F1, without partial matches BIBREF10. We use bootstrap sampling to estimate the probability that the model with the best micro-averaged F1 score on the entire test set is worse on a randomly sampled subset of the test data. Our model Since our decoders can only output values that are present in the input, we prepend a single comma to every input sequence. We optimize our model using Adam and use early stopping on a held-out validation set. The model quickly converges to optimal performance, usually after around 5000 updates, after which it starts overfitting. For the restaurant data set, to increase performance, we double the sizes of all the parameters and use embedding and recurrent dropout following BIBREF11. Further, we add a summarizer LSTM to each decoder. The summarizer LSTM reads the entire encoded input, and its last hidden state is then concatenated to each input to the decoder. Results We see in Table 3 that our model is competitive with the baseline models in terms of micro-averaged F1 for two of the three data sets. This is a remarkable result given that the baselines are trained on token-level labels, whereas our model is trained end-to-end. For the restaurant data set, our model is slightly worse than the baseline. Related work Event extraction (EE) is similar to the E2E IE task we propose, except that it can have several event types and multiple events per input. In our E2E IE task, we only have a single event type and assume there is zero or one event mentioned in the input, which is an easier task. Recently, BIBREF12 achieved state-of-the-art results on the ACE 2005 EE data set using a recurrent neural network to jointly model event triggers and argument roles. Other approaches have addressed the need for token-level labels when only raw output values are available. Mintz et al. (2009) introduced distant supervision, which heuristically generates the token-level labels from the output values. This is done by searching for input tokens that match output values; the matching tokens are then assigned the labels of the matching outputs. One drawback is that the quality of the labels crucially depends on the search algorithm and on how closely the tokens match the output values, which makes it brittle. Our method is trained end-to-end, thus not relying on brittle heuristics. Sutskever et al. (2014) opened up the sequence-to-sequence paradigm. With the addition of attention BIBREF4, these models achieved state-of-the-art results in machine translation BIBREF13. We are broadly inspired by these results to investigate E2E models for IE.
The idea of copying words from the input to the output has been used in machine translation to overcome problems with out-of-vocabulary words BIBREF14, BIBREF15. Discussion We present an end-to-end IE model that does not require detailed token-level labels. Despite being trained end-to-end, it is competitive with baseline models relying on token-level labels. In contrast to them, our model can be used on many real-life IE tasks where intermediate token-level labels are not available and creating them is not feasible. In our experiments, our model and the baselines had access to the same number of training samples. In a real-life scenario it is likely that token-level labels only exist for a subset of all the data. It would be interesting to investigate the quantity/quality trade-off of the labels, and a multi-task extension of the model, which could make use of available token-level labels. Our model is remarkably stable to hyperparameter changes. On the restaurant dataset we tried several different architectures and hyperparameters before settling on the reported one. The difference between the worst and the best was approximately 2 percentage points. A major limitation of the proposed model is that it can only output values that are present in the input. This is a problem for outputs that are normalized before being submitted as machine-readable data, which is a common occurrence. For instance, dates might appear as 'Jan 17 2012' in the input and as '17-01-2012' in the machine-readable output. While it is clear that this model does not solve all the problems present in real-life IE tasks, we believe it is an important step towards applicable E2E IE systems. In the future, we will experiment with adding character-level models on top of the pointer network outputs, so the model can focus on an input and then normalize it to fit the normalized outputs. Acknowledgments We would like to thank the reviewers who helped make the paper more concise. Dirk Hovy was supported by the Eurostars grant E10138 ReProsis. This research was supported by the NVIDIA Corporation with the donation of TITAN X GPUs.
No
602396d1f5a3c172e60a10c7022bcfa08fa6cbc9
602396d1f5a3c172e60a10c7022bcfa08fa6cbc9_0
Q: By how much do they outperform BiLSTMs in Sentiment Analysis? Text: Introduction Recurrent neural networks (RNNs) live at the heart of many sequence modeling problems. In particular, the incorporation of gated additive recurrent connections is extremely powerful, leading to the pervasive adoption of models such as Gated Recurrent Units (GRU) BIBREF0 or Long Short-Term Memory (LSTM) BIBREF1 across many NLP applications BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . In these models, the key idea is that the gating functions control information flow and compositionality over time, deciding how much information to read/write across time steps. This not only serves as a protection against vanishing/exploding gradients but also enables greater relative ease in modeling long-range dependencies. There are two common ways to increase the representation capability of RNNs. Firstly, the number of hidden dimensions could be increased. Secondly, recurrent layers could be stacked on top of each other in a hierarchical fashion BIBREF6 , with each layer's input being the output of the previous, enabling hierarchical features to be captured. Notably, the wide adoption of stacked architectures across many applications BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 signify the need for designing complex and expressive encoders. Unfortunately, these strategies may face limitations. For example, the former might run a risk of overfitting and/or hitting a wall in performance. On the other hand, the latter might be faced with the inherent difficulties of going deep such as vanishing gradients or difficulty in feature propagation across deep RNN layers BIBREF11 . This paper proposes Recurrently Controlled Recurrent Networks (RCRN), a new recurrent architecture and a general purpose neural building block for sequence modeling. RCRNs are characterized by its usage of two key components - a recurrent controller cell and a listener cell. The controller cell controls the information flow and compositionality of the listener RNN. The key motivation behind RCRN is to provide expressive and powerful sequence encoding. However, unlike stacked architectures, all RNN layers operate jointly on the same hierarchical level, effectively avoiding the need to go deeper. Therefore, RCRNs provide a new alternate way of utilizing multiple RNN layers in conjunction by allowing one RNN to control another RNN. As such, our key aim in this work is to show that our proposed controller-listener architecture is a viable replacement for the widely adopted stacked recurrent architecture. To demonstrate the effectiveness of our proposed RCRN model, we conduct extensive experiments on a plethora of diverse NLP tasks where sequence encoders such as LSTMs/GRUs are highly essential. These tasks include sentiment analysis (SST, IMDb, Amazon Reviews), question classification (TREC), entailment classification (SNLI, SciTail), answer selection (WikiQA, TrecQA) and reading comprehension (NarrativeQA). Experimental results show that RCRN outperforms BiLSTMs and multi-layered/stacked BiLSTMs on all 26 datasets, suggesting that RCRNs are viable replacements for the widely adopted stacked recurrent architectures. Additionally, RCRN achieves close to state-of-the-art performance on several datasets. 
Related Work RNN variants such as LSTMs and GRUs are ubiquitous and indispensable building blocks in many NLP applications such as question answering BIBREF12, BIBREF9, machine translation BIBREF2, entailment classification BIBREF13 and sentiment analysis BIBREF14, BIBREF15. In recent years, many RNN variants have been proposed, ranging from multi-scale models BIBREF16, BIBREF17, BIBREF18 to tree-structured encoders BIBREF19, BIBREF20. Models targeted at improving the internals of the RNN cell have also been proposed BIBREF21, BIBREF22. Given the importance of sequence encoding in NLP, the design of effective RNN units for this purpose remains an active area of research. Stacking RNN layers is the most common way to improve representation power. This has been used in many highly performant models ranging from speech recognition BIBREF7 to machine reading BIBREF9. The BCN model BIBREF5 similarly uses multiple BiLSTM layers within its architecture. Models that use shortcut/residual connections in conjunction with stacked RNN layers are also notable BIBREF11, BIBREF14, BIBREF10, BIBREF23. Notably, a recent emerging trend is to model sequences without recurrence. This is primarily motivated by the fact that recurrence is an inherent prohibitor of parallelism. To this end, many works have explored the possibility of using attention as a replacement for recurrence. In particular, self-attention BIBREF24 has been a popular choice. This has sparked many innovations, including general purpose encoders such as DiSAN BIBREF25 and Block Bi-DiSAN BIBREF26. The key idea in these works is to use multi-headed self-attention and positional encodings to model temporal information. While attention-only models may come close in performance, some domains may still require complex and expressive recurrent encoders. Moreover, we note that in BIBREF25, BIBREF26, the scores on multiple benchmarks (e.g., SST, TREC, SNLI, MultiNLI) do not outperform (or even approach) the state-of-the-art, most of which are models that still rely heavily on bidirectional LSTMs BIBREF27, BIBREF20, BIBREF5, BIBREF10. While self-attentive RNN-less encoders have recently been popular, our work moves in an orthogonal and possibly complementary direction, advocating a stronger RNN unit for sequence encoding instead. Nevertheless, it is also worth noting that our RCRN model outperforms DiSAN in all our experiments. Another line of work is also concerned with eliminating recurrence. SRUs (Simple Recurrent Units) BIBREF28 are recently proposed networks that remove the sequential dependencies in RNNs. SRUs can be considered a special case of Quasi-RNNs BIBREF29, which perform incremental pooling using pre-learned convolutional gates. A recent work, Multi-range Reasoning Units (MRU) BIBREF30, follows the same paradigm, trading convolutional gates for features learned via expressive multi-granular reasoning. BIBREF31 proposed sentence-state LSTMs (S-LSTM) that exchange incremental reading for a single global state. Our work proposes a new way of enhancing the representation capability of RNNs without going deep. For the first time, we propose a controller-listener architecture that uses one recurrent unit to control another recurrent unit. Our proposed RCRN consistently outperforms stacked BiLSTMs and achieves state-of-the-art results on several datasets. We outperform the above-mentioned competitors such as DiSAN, SRUs, stacked BiLSTMs and sentence-state LSTMs.
Recurrently Controlled Recurrent Networks (RCRN) This section formally introduces the RCRN architecture. Our model is split into two main components: a controller cell and a listener cell. Figure FIGREF1 illustrates the model architecture. Controller Cell The goal of the controller cell is to learn gating functions in order to influence the target cell. In order to control the target cell, the controller cell constructs a forget gate and an output gate which are then used to influence the information flow of the listener cell. For each gate (output and forget), we use a separate RNN cell. As such, the controller cell comprises two cell states and an additional set of parameters. The equations of the controller cell are defined as follows: $i^1_t = \sigma (W^1_i x_t + U^1_i h^1_{t-1} + b^1_i)$ and $i^2_t = \sigma (W^2_i x_t + U^2_i h^2_{t-1} + b^2_i)$, $f^1_t = \sigma (W^1_f x_t + U^1_f h^1_{t-1} + b^1_f)$ and $f^2_t = \sigma (W^2_f x_t + U^2_f h^2_{t-1} + b^2_f)$, $o^1_t = \sigma (W^1_o x_t + U^1_o h^1_{t-1} + b^1_o)$ and $o^2_t = \sigma (W^2_o x_t + U^2_o h^2_{t-1} + b^2_o)$, $c^1_t = f^1_t \odot c^1_{t-1} + i^1_t \odot \tanh (W^1_c x_t + U^1_c h^1_{t-1} + b^1_c)$, $c^2_t = f^2_t \odot c^2_{t-1} + i^2_t \odot \tanh (W^2_c x_t + U^2_c h^2_{t-1} + b^2_c)$, $h^1_t = o^1_t \odot \tanh (c^1_t)$ and $h^2_t = o^2_t \odot \tanh (c^2_t)$, where $x_t$ is the input to the model at time step $t$, and $W^{1,2}_{\lbrace i,f,o,c\rbrace }$, $U^{1,2}_{\lbrace i,f,o,c\rbrace }$ and $b^{1,2}_{\lbrace i,f,o,c\rbrace }$ are the parameters of the model. $\sigma $ is the sigmoid function and $\tanh $ is the tanh nonlinearity. $\odot $ is the Hadamard product. The controller RNN has two cell states, denoted $c^1_t$ and $c^2_t$ respectively. $h^1_t$ and $h^2_t$ are the outputs of the unidirectional controller cell at time step $t$. Next, we consider a bidirectional adaptation of the controller cell. Let the controller equations above be represented by the function $\text{CT}$; the bidirectional adaptation is then $\overrightarrow{h^1_t}, \overrightarrow{h^2_t} = \overrightarrow{\text{CT}}(\overrightarrow{h^1_{t-1}}, \overrightarrow{h^2_{t-1}}, x_t)$ for $t = 1, \dots , M$, $\overleftarrow{h^1_t}, \overleftarrow{h^2_t} = \overleftarrow{\text{CT}}(\overleftarrow{h^1_{t+1}}, \overleftarrow{h^2_{t+1}}, x_t)$ for $t = M, \dots , 1$, with $h^1_t = [\overrightarrow{h^1_t}; \overleftarrow{h^1_t}]$ and $h^2_t = [\overrightarrow{h^2_t}; \overleftarrow{h^2_t}]$. The outputs of the bidirectional controller cell are $h^1_t$ and $h^2_t$ for time step $t$. These hidden outputs act as gates for the listener cell. Listener Cell The listener cell is another recurrent cell. The final output of the RCRN is generated by the listener cell, which is influenced by the controller cell. First, the listener cell uses a base recurrent model to process the sequence input. The equations of this base recurrent model are defined as follows: $i^3_t = \sigma (W^3_i x_t + U^3_i h^3_{t-1} + b^3_i)$, $f^3_t = \sigma (W^3_f x_t + U^3_f h^3_{t-1} + b^3_f)$, $o^3_t = \sigma (W^3_o x_t + U^3_o h^3_{t-1} + b^3_o)$, $c^3_t = f^3_t \odot c^3_{t-1} + i^3_t \odot \tanh (W^3_c x_t + U^3_c h^3_{t-1} + b^3_c)$, $h^3_t = o^3_t \odot \tanh (c^3_t)$. Similarly, a bidirectional adaptation is used, obtaining $h^3_t$. Next, using $h^1_t$ and $h^2_t$ (the outputs of the controller cell), we define another recurrent operation as follows: $c^4_t = \sigma (h^1_t) \odot c^4_{t-1} + (1 - \sigma (h^1_t)) \odot h^3_t$ and $h^4_t = h^2_t \odot \tanh (c^4_t)$, where $c^4_t$ and $h^4_t$ are the cell and hidden states at time step $t$, and $W^3_{\lbrace i,f,o,c\rbrace }$, $U^3_{\lbrace i,f,o,c\rbrace }$ and $b^3_{\lbrace i,f,o,c\rbrace }$ are the parameters of the listener cell. Note that $h^1_t$ and $h^2_t$ are the outputs of the controller cell. In this formulation, $\sigma (h^1_t)$ acts as the forget gate for the listener cell. Likewise, $h^2_t$ acts as the output gate for the listener. Overall RCRN Architecture, Variants and Implementation Intuitively, the overall architecture of the RCRN model can be explained as follows: the controller cell can be thought of as two BiRNN models whose hidden states are used as the forget and output gates for another recurrent model, i.e., the listener. The listener uses a single BiRNN model for sequence encoding and then allows this representation to be altered by listening to the controller.
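To make the controller-listener interaction concrete, the following is a condensed, single-direction sketch of the RCRN computation. It is an illustrative approximation rather than the authors' optimized implementation (the paper uses bidirectional cells and cuDNN/CUDA-level kernels), and the class name and the explicit Python loop are choices made only for readability.

# Condensed sketch of one direction of RCRN: two controller LSTMs provide the
# forget and output gates that drive the listener's final recurrence.
import torch
import torch.nn as nn

class RCRNCell(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.forget_ctrl = nn.LSTM(input_dim, hidden_dim, batch_first=True)   # produces h^1_t
        self.output_ctrl = nn.LSTM(input_dim, hidden_dim, batch_first=True)   # produces h^2_t
        self.listener = nn.LSTM(input_dim, hidden_dim, batch_first=True)      # produces h^3_t

    def forward(self, x):
        # x: (batch, seq_len, input_dim)
        h1, _ = self.forget_ctrl(x)
        h2, _ = self.output_ctrl(x)
        h3, _ = self.listener(x)
        outputs, c = [], torch.zeros_like(h3[:, 0])
        for t in range(x.size(1)):
            f = torch.sigmoid(h1[:, t])
            c = f * c + (1.0 - f) * h3[:, t]            # controlled cell state c^4_t
            outputs.append(h2[:, t] * torch.tanh(c))    # listener output h^4_t
        return torch.stack(outputs, dim=1)

Because the final recurrence is purely element-wise, it can be parallelized along the feature dimension, which is what makes the SRU-style CUDA-level optimization discussed below possible.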
An alternative interpretation to our model architecture is that it is essentially a `recurrent-over-recurrent' model. Clearly, the formulation we have used above uses BiLSTMs as the atomic building block for RCRN. Hence, we note that it is also possible to have a simplified variant of RCRN that uses GRUs as the atomic block which we found to have performed slightly better on certain datasets. For efficiency purposes, we use the cuDNN optimized version of the base recurrent unit (LSTMs/GRUs). Additionally, note that the final recurrent cell (Equation ( SECREF3 )) can be subject to cuda-level optimization following simple recurrent units (SRU) BIBREF28 . The key idea is that this operation can be performed along the dimension axis, enabling greater parallelization on the GPU. For the sake of brevity, we refer interested readers to BIBREF28 . Note that this form of cuda-level optimization was also performed in the Quasi-RNN model BIBREF29 , which effectively subsumes the SRU model. Note that a single RCRN model is equivalent to a stacked BiLSTM of 3 layers. This is clear when we consider how two controller BiRNNs are used to control a single listener BiRNN. As such, for our experiments, when considering only the encoder and keeping all other components constant, 3L-BiLSTM has equal parameters to RCRN while RCRN and 3L-BiLSTM are approximately three times larger than BiLSTM. Experiments This section discusses the overall empirical evaluation of our proposed RCRN model. Tasks and Datasets In order to verify the effectiveness of our proposed RCRN architecture, we conduct extensive experiments across several tasks in the NLP domain. Sentiment analysis is a text classification problem in which the goal is to determine the polarity of a given sentence/document. We conduct experiments on both sentence and document level. More concretely, we use 16 Amazon review datasets from BIBREF32 , the well-established Stanford Sentiment TreeBank (SST-5/SST-2) BIBREF33 and the IMDb Sentiment dataset BIBREF34 . All tasks are binary classification tasks with the exception of SST-5. The metric is the accuracy score. The goal of this task is to classify questions into fine-grained categories such as number or location. We use the TREC question classification dataset BIBREF35 . The metric is the accuracy score. This is a well-established and popular task in the field of natural language understanding and inference. Given two sentences INLINEFORM0 and INLINEFORM1 , the goal is to determine if INLINEFORM2 entails or contradicts INLINEFORM3 . We use two popular benchmark datasets, i.e., the Stanford Natural Language Inference (SNLI) corpus BIBREF36 , and SciTail (Science Entailment) BIBREF37 datasets. This is a pairwise classsification problem in which the metric is also the accuracy score. This is a standard problem in information retrieval and learning-to-rank. Given a question, the task at hand is to rank candidate answers. We use the popular WikiQA BIBREF38 and TrecQA BIBREF39 datasets. For TrecQA, we use the cleaned setting as denoted by BIBREF40 . The evaluation metrics are the MAP (Mean Average Precision) and Mean Reciprocal Rank (MRR) ranking metrics. This task involves reading documents and answering questions about these documents. We use the recent NarrativeQA BIBREF41 dataset which involves reasoning and answering questions over story summaries. We follow the original paper and report scores on BLEU-1, BLEU-4, Meteor and Rouge-L. 
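Since two of the benchmarks are ranking tasks, the following are small reference implementations of the MAP and MRR metrics used for answer selection. The list-of-labels input format is an assumption for illustration, and conventions such as skipping questions with no relevant candidates vary across official scripts.

# Illustrative implementations of MAP and MRR. `ranked_labels` is a list with
# one entry per question: the candidate answers' relevance labels (1/0),
# sorted by the model's score in descending order.
def mean_average_precision(ranked_labels):
    ap = []
    for labels in ranked_labels:
        hits, precisions = 0, []
        for rank, rel in enumerate(labels, start=1):
            if rel:
                hits += 1
                precisions.append(hits / rank)
        if precisions:
            ap.append(sum(precisions) / len(precisions))
    return sum(ap) / len(ap) if ap else 0.0

def mean_reciprocal_rank(ranked_labels):
    rr = [next((1.0 / rank for rank, rel in enumerate(labels, start=1) if rel), 0.0)
          for labels in ranked_labels]
    return sum(rr) / len(rr) if rr else 0.0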
Task-Specific Model Architectures and Implementation Details In this section, we describe the task-specific model architectures for each task. This architecture is used for all text classification tasks (sentiment analysis and question classification datasets). We use 300D GloVe BIBREF42 vectors with 600D CoVe BIBREF5 vectors as pretrained embedding vectors. An optional character-level word representation is also added (constructed with a standard BiGRU model). The output of the embedding layer is passed into the RCRN model directly without using any projection layer. Word embeddings are not updated during training. Given the hidden output states of the INLINEFORM0 dimensional RCRN cell, we take the concatenation of the max, mean and min pooling of all hidden states to form the final feature vector. This feature vector is passed into a single dense layer with ReLU activations of INLINEFORM1 dimensions. The output of this layer is then passed into a softmax layer for classification. This model optimizes the cross entropy loss. We train this model using Adam BIBREF43 and learning rate is tuned amongst INLINEFORM2 . This architecture is used for entailment tasks. This is a pairwise classification models with two input sequences. Similar to the singleton classsification model, we utilize the identical input encoder (GloVe, CoVE and character RNN) but include an additional part-of-speech (POS tag) embedding. We pass the input representation into a two layer highway network BIBREF44 of 300 hidden dimensions before passing into the RCRN encoder. The feature representation of INLINEFORM0 and INLINEFORM1 is the concatentation of the max and mean pooling of the RCRN hidden outputs. To compare INLINEFORM2 and INLINEFORM3 , we pass INLINEFORM4 into a two layer highway network. This output is then passed into a softmax layer for classification. We train this model using Adam and learning rate is tuned amongst INLINEFORM5 . We mainly focus on the encoder-only setting which does not allow cross sentence attention. This is a commonly tested setting on the SNLI dataset. This architecture is used for the ranking tasks (i.e., answer selection). We use the model architecture from Attentive Pooling BiLSTMs (AP-BiLSTM) BIBREF45 as our base and swap the RNN encoder with our RCRN encoder. The dimensionality is set to 200. The similarity scoring function is the cosine similarity and the objective function is the pairwise hinge loss with a margin of INLINEFORM0 . We use negative sampling of INLINEFORM1 to train our model. We train our model using Adadelta BIBREF46 with a learning rate of INLINEFORM2 . We use R-NET BIBREF9 as the base model. Since R-NET uses three Bidirectional GRU layers as the encoder, we replaced this stacked BiGRU layer with RCRN. For fairness, we use the GRU variant of RCRN instead. The dimensionality of the encoder is set to 75. We train both models using Adam with a learning rate of INLINEFORM0 . For all datasets, we include an additional ablative baselines, swapping the RCRN with (1) a standard BiLSTM model and (2) a stacked BiLSTM of 3 layers (3L-BiLSTM). This is to fairly observe the impact of different encoder models based on the same overall model framework. Overall Results This section discusses the overall results of our experiments. On the 16 review datasets (Table TABREF22 ) from BIBREF32 , BIBREF31 , our proposed RCRN architecture achieves the highest score on all 16 datasets, outperforming the existing state-of-the-art model - sentence state LSTMs (SLSTM) BIBREF31 . 
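As a concrete illustration of the classification head described above (concatenated max, mean and min pooling over the encoder states, a ReLU dense layer, then a softmax output layer), a minimal sketch could look as follows. The class name and dimensionalities are assumptions, and the softmax is folded into the cross-entropy loss as is standard practice.

# Sketch of the text classification head on top of the RCRN hidden states.
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    def __init__(self, hidden_dim, num_classes):
        super().__init__()
        self.dense = nn.Linear(3 * hidden_dim, hidden_dim)   # ReLU dense layer
        self.out = nn.Linear(hidden_dim, num_classes)        # softmax layer (via the loss)

    def forward(self, states):
        # states: (batch, seq_len, hidden_dim) hidden outputs of the encoder
        pooled = torch.cat(
            [states.max(dim=1).values, states.mean(dim=1), states.min(dim=1).values],
            dim=-1,
        )
        return self.out(torch.relu(self.dense(pooled)))      # class logits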
The macro average performance gain over BiLSTMs ( INLINEFORM0 ) and Stacked (2 X BiLSTM) ( INLINEFORM1 ) is also notable. On the same architecture, our RCRN outperforms ablative baselines BiLSTM by INLINEFORM2 and 3L-BiLSTM by INLINEFORM3 on average across 16 datasets. Results on SST-5 (Table TABREF22 ) and SST-2 (Table TABREF22 ) are also promising. More concretely, our RCRN architecture achieves state-of-the-art results on SST-5 and SST-2. RCRN also outperforms many strong baselines such as DiSAN BIBREF25 , a self-attentive model and Bi-Attentive classification network (BCN) BIBREF5 that also use CoVe vectors. On SST-2, strong baselines such as Neural Semantic Encoders BIBREF53 and similarly the BCN model are also outperformed by our RCRN model. Finally, on the IMDb sentiment classification dataset (Table TABREF25 ), RCRN achieved INLINEFORM0 accuracy. Our proposed RCRN outperforms Residual BiLSTMs BIBREF14 , 4-layered Quasi Recurrent Neural Networks (QRNN) BIBREF29 and the BCN model which can be considered to be very competitive baselines. RCRN also outperforms ablative baselines BiLSTM ( INLINEFORM1 ) and 3L-BiLSTM ( INLINEFORM2 ). Our results on the TREC question classification dataset (Table TABREF25 ) is also promising. RCRN achieved a state-of-the-art score of INLINEFORM0 on this dataset. A notable baseline is the Densely Connected BiLSTM BIBREF23 , a deep residual stacked BiLSTM model which RCRN outperforms ( INLINEFORM1 ). Our model also outperforms BCN (+0.4%) and SRU ( INLINEFORM2 ). Our ablative BiLSTM baselines achieve reasonably high score, posssibly due to CoVe Embeddings. However, our RCRN can further increase the performance score. Results on entailment classification are also optimistic. On SNLI (Table TABREF26 ), RCRN achieves INLINEFORM0 accuracy, which is competitive to Gumbel LSTM. However, RCRN outperforms a wide range of baselines, including self-attention based models as multi-head BIBREF24 and DiSAN BIBREF25 . There is also performance gain of INLINEFORM1 over Bi-SRU even though our model does not use attention at all. RCRN also outperforms shortcut stacked encoders, which use a series of BiLSTM connected by shortcut layers. Post review, as per reviewer request, we experimented with adding cross sentence attention, in particular adding the attention of BIBREF61 on 3L-BiLSTM and RCRN. We found that they performed comparably (both at INLINEFORM2 ). We did not have resources to experiment further even though intuitively incorporating different/newer variants of attention BIBREF65 , BIBREF63 , BIBREF13 and/or ELMo BIBREF50 can definitely raise the score further. However, we hypothesize that cross sentence attention forces less reliance on the encoder. Therefore stacked BiLSTMs and RCRNs perform similarly. The results on SciTail similarly show that RCRN is more effective than BiLSTM ( INLINEFORM0 ). Moreover, RCRN outperforms several baselines in BIBREF37 including models that use cross sentence attention such as DecompAtt BIBREF61 and ESIM BIBREF13 . However, it still falls short to recent state-of-the-art models such as OpenAI's Generative Pretrained Transformer BIBREF64 . Results on the answer selection (Table TABREF26 ) task show that RCRN leads to considerable improvements on both WikiQA and TrecQA datasets. We investigate two settings. The first, we reimplement AP-BiLSTM and swap the BiLSTM for RCRN encoders. Secondly, we completely remove all attention layers from both models to test the ability of the standalone encoder. 
Without attention, RCRN gives an improvement of INLINEFORM0 on both datasets. With attentive pooling, RCRN maintains a INLINEFORM1 improvement in terms of MAP score. However, the gains on MRR are greater ( INLINEFORM2 ). Notably, AP-RCRN model outperforms the official results reported in BIBREF45 . Overall, we observe that RCRN is much stronger than BiLSTMs and 3L-BiLSTMs on this task. Results (Table TABREF26 ) show that enhancing R-NET with RCRN can lead to considerable improvements. This leads to an improvement of INLINEFORM0 on all four metrics. Note that our model only uses a single layered RCRN while R-NET uses 3 layered BiGRUs. This empirical evidence might suggest that RCRN is a better way to utilize multiple recurrent layers. Across all 26 datasets, RCRN outperforms not only standard BiLSTMs but also 3L-BiLSTMs which have approximately equal parameterization. 3L-BiLSTMs were overall better than BiLSTMs but lose out on a minority of datasets. RCRN outperforms a wide range of competitive baselines such as DiSAN, Bi-SRUs, BCN and LSTM-CNN, etc. We achieve (close to) state-of-the-art performance on SST, TREC question classification and 16 Amazon review datasets. Runtime Analysis This section aims to get a benchmark on model performance with respect to model efficiency. In order to do that, we benchmark RCRN along with BiLSTMs and 3 layered BiLSTMs (with and without cuDNN optimization) on different sequence lengths (i.e., INLINEFORM0 ). We use the IMDb sentiment task. We use the same standard hardware (a single Nvidia GTX1070 card) and an identical overarching model architecture. The dimensionality of the model is set to 200 with a fixed batch size of 32. Finally, we also benchmark a CUDA optimized adaptation of RCRN which has been described earlier (Section SECREF4 ). Table TABREF32 reports training/inference times of all benchmarked models. The fastest model is naturally the 1 layer BiLSTM (cuDNN). Intuitively, the speed of RCRN should be roughly equivalent to using 3 BiLSTMs. Surprisingly, we found that the cuda optimized RCRN performs consistently slightly faster than the 3 layer BiLSTM (cuDNN). At the very least, RCRN provides comparable efficiency to using stacked BiLSTM and empirically we show that there is nothing to lose in this aspect. However, we note that cuda-level optimizations have to be performed. Finally, the non-cuDNN optimized BiLSTM and stacked BiLSTMs are also provided for reference. Conclusion and Future Directions We proposed Recurrently Controlled Recurrent Networks (RCRN), a new recurrent architecture and encoder for a myriad of NLP tasks. RCRN operates in a novel controller-listener architecture which uses RNNs to learn the gating functions of another RNN. We apply RCRN to a potpourri of NLP tasks and achieve promising/highly competitive results on all tasks and 26 benchmark datasets. Overall findings suggest that our controller-listener architecture is more effective than stacking RNN layers. Moreover, RCRN remains equally (or slightly more) efficient compared to stacked RNNs of approximately equal parameterization. There are several potential interesting directions for further investigating RCRNs. Firstly, investigating RCRNs controlling other RCRNs and secondly, investigating RCRNs in other domains where recurrent models are also prevalent for sequence modeling. The source code of our model can be found at https://github.com/vanzytay/NIPS2018_RCRN. 
Acknowledgements We thank the anonymous reviewers and area chair from NIPS 2018 for their constructive and high quality feedback.
The proposed RCRN outperforms the ablative BiLSTM baseline by +2.9% and the 3L-BiLSTM baseline by +1.1% on average across 16 datasets.
b984612ceac5b4cf5efd841af2afddd244ee497a
b984612ceac5b4cf5efd841af2afddd244ee497a_0
Q: Does their model have more parameters than other models?
Related Work RNN variants such as LSTMs and GRUs are ubiquitous and indispensible building blocks in many NLP applications such as question answering BIBREF12 , BIBREF9 , machine translation BIBREF2 , entailment classification BIBREF13 and sentiment analysis BIBREF14 , BIBREF15 . In recent years, many RNN variants have been proposed, ranging from multi-scale models BIBREF16 , BIBREF17 , BIBREF18 to tree-structured encoders BIBREF19 , BIBREF20 . Models that are targetted at improving the internals of the RNN cell have also been proposed BIBREF21 , BIBREF22 . Given the importance of sequence encoding in NLP, the design of effective RNN units for this purpose remains an active area of research. Stacking RNN layers is the most common way to improve representation power. This has been used in many highly performant models ranging from speech recognition BIBREF7 to machine reading BIBREF9 . The BCN model BIBREF5 similarly uses multiple BiLSTM layers within their architecture. Models that use shortcut/residual connections in conjunctin with stacked RNN layers are also notable BIBREF11 , BIBREF14 , BIBREF10 , BIBREF23 . Notably, a recent emerging trend is to model sequences without recurrence. This is primarily motivated by the fact that recurrence is an inherent prohibitor of parallelism. To this end, many works have explored the possibility of using attention as a replacement for recurrence. In particular, self-attention BIBREF24 has been a popular choice. This has sparked many innovations, including general purpose encoders such as DiSAN BIBREF25 and Block Bi-DiSAN BIBREF26 . The key idea in these works is to use multi-headed self-attention and positional encodings to model temporal information. While attention-only models may come close in performance, some domains may still require the complex and expressive recurrent encoders. Moreover, we note that in BIBREF25 , BIBREF26 , the scores on multiple benchmarks (e.g., SST, TREC, SNLI, MultiNLI) do not outperform (or even approach) the state-of-the-art, most of which are models that still heavily rely on bidirectional LSTMs BIBREF27 , BIBREF20 , BIBREF5 , BIBREF10 . While self-attentive RNN-less encoders have recently been popular, our work moves in an orthogonal and possibly complementary direction, advocating a stronger RNN unit for sequence encoding instead. Nevertheless, it is also good to note that our RCRN model outperforms DiSAN in all our experiments. Another line of work is also concerned with eliminating recurrence. SRUs (Simple Recurrent Units) BIBREF28 are recently proposed networks that remove the sequential dependencies in RNNs. SRUs can be considered a special case of Quasi-RNNs BIBREF29 , which performs incremental pooling using pre-learned convolutional gates. A recent work, Multi-range Reasoning Units (MRU) BIBREF30 follows the same paradigm, trading convolutional gates with features learned via expressive multi-granular reasoning. BIBREF31 proposed sentence-state LSTMs (S-LSTM) that exchanges incremental reading for a single global state. Our work proposes a new way of enhancing the representation capability of RNNs without going deep. For the first time, we propose a controller-listener architecture that uses one recurrent unit to control another recurrent unit. Our proposed RCRN consistently outperforms stacked BiLSTMs and achieves state-of-the-art results on several datasets. We outperform above-mentioned competitors such as DiSAN, SRUs, stacked BiLSTMs and sentence-state LSTMs. 
Recurrently Controlled Recurrent Networks (RCRN) This section formally introduces the RCRN architecture. Our model is split into two main components - a controller cell and a listener cell. Figure FIGREF1 illustrates the model architecture. Controller Cell The goal of the controller cell is to learn gating functions in order to influence the target cell. In order to control the target cell, the controller cell constructs a forget gate and an output gate which are then used to influence the information flow of the listener cell. For each gate (output and forget), we use a separate RNN cell. As such, the controller cell comprises two cell states and an additional set of parameters. The equations of the controller cell are defined as follows: i1t = s(W1ixt + U1ih1t-1 + b1i) and i2t = s(W2ixt + U2ih2t-1 + b2i) f1t = s(W1fxt + U1fh1t-1 + b1f) and f2t = s(W2fxt + U2fh2t-1 + b2f) o1t = s(W1oxt + U1oh1t-1 + b1o) and o2t = s(W2oxt + U2oh2t-1 + b2o) c1t = f1t c1t-1 + i1t (W1cxt + U1ch1t-1 + b1c) c2t = f2t c2t-1 + i2t (W2cxt + U2ch2t-1 + b2c) h1t = o1t (c1t) and h2t = o2t (c2t) where INLINEFORM0 is the input to the model at time step INLINEFORM1 . INLINEFORM2 are the parameters of the model where INLINEFORM3 and INLINEFORM4 . INLINEFORM5 is the sigmoid function and INLINEFORM6 is the tanh nonlinearity. INLINEFORM7 is the Hadamard product. The controller RNN has two cell states denoted as INLINEFORM8 and INLINEFORM9 respectively. INLINEFORM10 are the outputs of the unidirectional controller cell at time step INLINEFORM11 . Next, we consider a bidirectional adaptation of the controller cell. Let Equations ( SECREF2 - SECREF2 ) be represented by the function INLINEFORM12 , the bidirectional adaptation is represented as: h1t,h2t = CT(h1t-1, h2t-1, xt) t=1, h1t,h2t = CT(h1t+1, h2t+1, xt) t=M, 1 h1t = [h1t; h1t] and h2t = [h2t; h2t] The outputs of the bidirectional controller cell are INLINEFORM0 for time step INLINEFORM1 . These hidden outputs act as gates for the listener cell. Listener Cell The listener cell is another recurrent cell. The final output of the RCRN is generated by the listener cell which is being influenced by the controller cell. First, the listener cell uses a base recurrent model to process the sequence input. The equations of this base recurrent model are defined as follows: i3t = s(W3ixt + U3ih3t-1 + b3i) f3t = s(W3fxt + U3fh3t-1 + b3f) o3t = s(W3oxt + U3oh3t-1 + b3o) c3t = f3t c3t-1 + i3t (W3cxt + U3ch3t-1 + b3c) h3t = o3t (c3t) Similarly, a bidirectional adaptation is used, obtaining INLINEFORM0 . Next, using INLINEFORM1 (outputs of the controller cell), we define another recurrent operation as follows: c4t = s(h1t) c4t-1 + (1-s(h1t)) h3t h4t = h2t c3t where INLINEFORM0 and INLINEFORM1 are the cell and hidden states at time step INLINEFORM2 . INLINEFORM3 are the parameters of the listener cell where INLINEFORM4 . Note that INLINEFORM5 and INLINEFORM6 are the outputs of the controller cell. In this formulation, INLINEFORM7 acts as the forget gate for the listener cell. Likewise INLINEFORM8 acts as the output gate for the listener. Overall RCRN Architecture, Variants and Implementation Intuitively, the overall architecture of the RCRN model can be explained as follows: Firstly, the controller cell can be thought of as two BiRNN models which hidden states are used as the forget and output gates for another recurrent model, i.e., the listener. The listener uses a single BiRNN model for sequence encoding and then allows this representation to be altered by listening to the controller. 
An alternative interpretation to our model architecture is that it is essentially a `recurrent-over-recurrent' model. Clearly, the formulation we have used above uses BiLSTMs as the atomic building block for RCRN. Hence, we note that it is also possible to have a simplified variant of RCRN that uses GRUs as the atomic block which we found to have performed slightly better on certain datasets. For efficiency purposes, we use the cuDNN optimized version of the base recurrent unit (LSTMs/GRUs). Additionally, note that the final recurrent cell (Equation ( SECREF3 )) can be subject to cuda-level optimization following simple recurrent units (SRU) BIBREF28 . The key idea is that this operation can be performed along the dimension axis, enabling greater parallelization on the GPU. For the sake of brevity, we refer interested readers to BIBREF28 . Note that this form of cuda-level optimization was also performed in the Quasi-RNN model BIBREF29 , which effectively subsumes the SRU model. Note that a single RCRN model is equivalent to a stacked BiLSTM of 3 layers. This is clear when we consider how two controller BiRNNs are used to control a single listener BiRNN. As such, for our experiments, when considering only the encoder and keeping all other components constant, 3L-BiLSTM has equal parameters to RCRN while RCRN and 3L-BiLSTM are approximately three times larger than BiLSTM. Experiments This section discusses the overall empirical evaluation of our proposed RCRN model. Tasks and Datasets In order to verify the effectiveness of our proposed RCRN architecture, we conduct extensive experiments across several tasks in the NLP domain. Sentiment analysis is a text classification problem in which the goal is to determine the polarity of a given sentence/document. We conduct experiments on both sentence and document level. More concretely, we use 16 Amazon review datasets from BIBREF32 , the well-established Stanford Sentiment TreeBank (SST-5/SST-2) BIBREF33 and the IMDb Sentiment dataset BIBREF34 . All tasks are binary classification tasks with the exception of SST-5. The metric is the accuracy score. The goal of this task is to classify questions into fine-grained categories such as number or location. We use the TREC question classification dataset BIBREF35 . The metric is the accuracy score. This is a well-established and popular task in the field of natural language understanding and inference. Given two sentences INLINEFORM0 and INLINEFORM1 , the goal is to determine if INLINEFORM2 entails or contradicts INLINEFORM3 . We use two popular benchmark datasets, i.e., the Stanford Natural Language Inference (SNLI) corpus BIBREF36 , and SciTail (Science Entailment) BIBREF37 datasets. This is a pairwise classsification problem in which the metric is also the accuracy score. This is a standard problem in information retrieval and learning-to-rank. Given a question, the task at hand is to rank candidate answers. We use the popular WikiQA BIBREF38 and TrecQA BIBREF39 datasets. For TrecQA, we use the cleaned setting as denoted by BIBREF40 . The evaluation metrics are the MAP (Mean Average Precision) and Mean Reciprocal Rank (MRR) ranking metrics. This task involves reading documents and answering questions about these documents. We use the recent NarrativeQA BIBREF41 dataset which involves reasoning and answering questions over story summaries. We follow the original paper and report scores on BLEU-1, BLEU-4, Meteor and Rouge-L. 
Task-Specific Model Architectures and Implementation Details In this section, we describe the task-specific model architectures for each task. This architecture is used for all text classification tasks (sentiment analysis and question classification datasets). We use 300D GloVe BIBREF42 vectors with 600D CoVe BIBREF5 vectors as pretrained embedding vectors. An optional character-level word representation is also added (constructed with a standard BiGRU model). The output of the embedding layer is passed into the RCRN model directly without using any projection layer. Word embeddings are not updated during training. Given the hidden output states of the INLINEFORM0 dimensional RCRN cell, we take the concatenation of the max, mean and min pooling of all hidden states to form the final feature vector. This feature vector is passed into a single dense layer with ReLU activations of INLINEFORM1 dimensions. The output of this layer is then passed into a softmax layer for classification. This model optimizes the cross entropy loss. We train this model using Adam BIBREF43 and learning rate is tuned amongst INLINEFORM2 . This architecture is used for entailment tasks. This is a pairwise classification models with two input sequences. Similar to the singleton classsification model, we utilize the identical input encoder (GloVe, CoVE and character RNN) but include an additional part-of-speech (POS tag) embedding. We pass the input representation into a two layer highway network BIBREF44 of 300 hidden dimensions before passing into the RCRN encoder. The feature representation of INLINEFORM0 and INLINEFORM1 is the concatentation of the max and mean pooling of the RCRN hidden outputs. To compare INLINEFORM2 and INLINEFORM3 , we pass INLINEFORM4 into a two layer highway network. This output is then passed into a softmax layer for classification. We train this model using Adam and learning rate is tuned amongst INLINEFORM5 . We mainly focus on the encoder-only setting which does not allow cross sentence attention. This is a commonly tested setting on the SNLI dataset. This architecture is used for the ranking tasks (i.e., answer selection). We use the model architecture from Attentive Pooling BiLSTMs (AP-BiLSTM) BIBREF45 as our base and swap the RNN encoder with our RCRN encoder. The dimensionality is set to 200. The similarity scoring function is the cosine similarity and the objective function is the pairwise hinge loss with a margin of INLINEFORM0 . We use negative sampling of INLINEFORM1 to train our model. We train our model using Adadelta BIBREF46 with a learning rate of INLINEFORM2 . We use R-NET BIBREF9 as the base model. Since R-NET uses three Bidirectional GRU layers as the encoder, we replaced this stacked BiGRU layer with RCRN. For fairness, we use the GRU variant of RCRN instead. The dimensionality of the encoder is set to 75. We train both models using Adam with a learning rate of INLINEFORM0 . For all datasets, we include an additional ablative baselines, swapping the RCRN with (1) a standard BiLSTM model and (2) a stacked BiLSTM of 3 layers (3L-BiLSTM). This is to fairly observe the impact of different encoder models based on the same overall model framework. Overall Results This section discusses the overall results of our experiments. On the 16 review datasets (Table TABREF22 ) from BIBREF32 , BIBREF31 , our proposed RCRN architecture achieves the highest score on all 16 datasets, outperforming the existing state-of-the-art model - sentence state LSTMs (SLSTM) BIBREF31 . 
Overall Results

This section discusses the overall results of our experiments. On the 16 review datasets (Table TABREF22) from BIBREF32, BIBREF31, our proposed RCRN architecture achieves the highest score on all 16 datasets, outperforming the existing state-of-the-art model, sentence-state LSTMs (SLSTM) BIBREF31. The macro average performance gain over BiLSTMs (INLINEFORM0) and Stacked (2 X BiLSTM) (INLINEFORM1) is also notable. On the same architecture, our RCRN outperforms the ablative baselines BiLSTM by INLINEFORM2 and 3L-BiLSTM by INLINEFORM3 on average across the 16 datasets.

Results on SST-5 (Table TABREF22) and SST-2 (Table TABREF22) are also promising. More concretely, our RCRN architecture achieves state-of-the-art results on SST-5 and SST-2. RCRN also outperforms many strong baselines such as DiSAN BIBREF25, a self-attentive model, and the Bi-Attentive Classification Network (BCN) BIBREF5, which also uses CoVe vectors. On SST-2, strong baselines such as Neural Semantic Encoders BIBREF53 and, similarly, the BCN model are also outperformed by our RCRN model.

Finally, on the IMDb sentiment classification dataset (Table TABREF25), RCRN achieved INLINEFORM0 accuracy. Our proposed RCRN outperforms Residual BiLSTMs BIBREF14, 4-layered Quasi-Recurrent Neural Networks (QRNN) BIBREF29 and the BCN model, which can be considered very competitive baselines. RCRN also outperforms the ablative baselines BiLSTM (INLINEFORM1) and 3L-BiLSTM (INLINEFORM2).

Our results on the TREC question classification dataset (Table TABREF25) are also promising. RCRN achieved a state-of-the-art score of INLINEFORM0 on this dataset. A notable baseline is the Densely Connected BiLSTM BIBREF23, a deep residual stacked BiLSTM model which RCRN outperforms (INLINEFORM1). Our model also outperforms BCN (+0.4%) and SRU (INLINEFORM2). Our ablative BiLSTM baselines achieve reasonably high scores, possibly due to the CoVe embeddings. However, our RCRN further increases the performance score.

Results on entailment classification are also encouraging. On SNLI (Table TABREF26), RCRN achieves INLINEFORM0 accuracy, which is competitive with the Gumbel LSTM. However, RCRN outperforms a wide range of baselines, including self-attention based models such as the multi-head attention model BIBREF24 and DiSAN BIBREF25. There is also a performance gain of INLINEFORM1 over Bi-SRU, even though our model does not use attention at all. RCRN also outperforms shortcut stacked encoders, which use a series of BiLSTMs connected by shortcut layers. Post review, as per reviewer request, we experimented with adding cross-sentence attention, in particular adding the attention of BIBREF61 on 3L-BiLSTM and RCRN. We found that they performed comparably (both at INLINEFORM2). We did not have the resources to experiment further, even though intuitively incorporating different/newer variants of attention BIBREF65, BIBREF63, BIBREF13 and/or ELMo BIBREF50 could raise the score further. However, we hypothesize that cross-sentence attention forces less reliance on the encoder, which is why stacked BiLSTMs and RCRNs perform similarly. The results on SciTail similarly show that RCRN is more effective than BiLSTM (INLINEFORM0). Moreover, RCRN outperforms several baselines in BIBREF37, including models that use cross-sentence attention such as DecompAtt BIBREF61 and ESIM BIBREF13. However, it still falls short of recent state-of-the-art models such as OpenAI's Generative Pretrained Transformer BIBREF64.

Results on the answer selection task (Table TABREF26) show that RCRN leads to considerable improvements on both the WikiQA and TrecQA datasets. We investigate two settings. In the first, we reimplement AP-BiLSTM and swap the BiLSTM for RCRN encoders. In the second, we completely remove all attention layers from both models to test the ability of the standalone encoder.
Without attention, RCRN gives an improvement of INLINEFORM0 on both datasets. With attentive pooling, RCRN maintains a INLINEFORM1 improvement in terms of MAP score. However, the gains on MRR are greater (INLINEFORM2). Notably, the AP-RCRN model outperforms the official results reported in BIBREF45. Overall, we observe that RCRN is much stronger than BiLSTMs and 3L-BiLSTMs on this task.

Results on reading comprehension (Table TABREF26) show that enhancing R-NET with RCRN leads to considerable improvements: an improvement of INLINEFORM0 on all four metrics. Note that our model only uses a single-layer RCRN while R-NET uses 3 layers of BiGRUs. This empirical evidence might suggest that RCRN is a better way to utilize multiple recurrent layers.

Across all 26 datasets, RCRN outperforms not only standard BiLSTMs but also 3L-BiLSTMs, which have approximately equal parameterization. 3L-BiLSTMs were overall better than BiLSTMs but lose out on a minority of datasets. RCRN outperforms a wide range of competitive baselines such as DiSAN, Bi-SRUs, BCN and LSTM-CNN. We achieve (close to) state-of-the-art performance on SST, TREC question classification and the 16 Amazon review datasets.

Runtime Analysis

This section benchmarks model performance with respect to efficiency. To do so, we benchmark RCRN along with BiLSTMs and 3-layer BiLSTMs (with and without cuDNN optimization) on different sequence lengths (i.e., INLINEFORM0). We use the IMDb sentiment task. We use the same standard hardware (a single Nvidia GTX1070 card) and an identical overarching model architecture. The dimensionality of the model is set to 200 with a fixed batch size of 32. Finally, we also benchmark a CUDA-optimized adaptation of RCRN, which has been described earlier (Section SECREF4).

Table TABREF32 reports the training/inference times of all benchmarked models. The fastest model is naturally the 1-layer BiLSTM (cuDNN). Intuitively, the speed of RCRN should be roughly equivalent to using 3 BiLSTMs. Surprisingly, we found that the CUDA-optimized RCRN performs consistently slightly faster than the 3-layer BiLSTM (cuDNN). At the very least, RCRN provides comparable efficiency to using stacked BiLSTMs, and empirically we show that there is nothing to lose in this aspect. However, we note that CUDA-level optimizations have to be performed. Finally, the non-cuDNN-optimized BiLSTM and stacked BiLSTMs are also provided for reference.

Conclusion and Future Directions

We proposed Recurrently Controlled Recurrent Networks (RCRN), a new recurrent architecture and encoder for a myriad of NLP tasks. RCRN operates in a novel controller-listener architecture which uses RNNs to learn the gating functions of another RNN. We apply RCRN to a potpourri of NLP tasks and achieve promising/highly competitive results on all tasks and 26 benchmark datasets. Overall findings suggest that our controller-listener architecture is more effective than stacking RNN layers. Moreover, RCRN remains equally (or slightly more) efficient compared to stacked RNNs of approximately equal parameterization. There are several potentially interesting directions for further investigating RCRNs: first, investigating RCRNs controlling other RCRNs, and second, investigating RCRNs in other domains where recurrent models are also prevalent for sequence modeling. The source code of our model can be found at https://github.com/vanzytay/NIPS2018_RCRN.
Acknowledgements

We thank the anonymous reviewers and area chair from NIPS 2018 for their constructive and high-quality feedback.
approximately equal parameterization
Q: What state-of-the-art accuracy did they obtain?

Text:

Introduction

State-of-the-art deep neural networks leverage task-specific architectures to develop hierarchical representations of their input, with each layer building a refined abstraction of the layer that came before it BIBREF0. For text classification, one can think of this as a single reader building up an increasingly refined understanding of the content. In a departure from this philosophy, we propose a divide-and-conquer approach, where a team of readers each focus on different aspects of the text, and then combine their representations to make a joint decision.

More precisely, the proposed Multi-View Network (MVN) for text classification learns to generate several views of its input text. Each view is formed by focusing on different sets of words through a view-specific attention mechanism. These views are arranged sequentially, so each subsequent view can build upon or deviate from previous views as appropriate. The final representation that concatenates these diverse views should be more robust to noise than any one of its components. Furthermore, different sentences may look similar under one view but different under another, allowing the network to devote particular views to distinguishing between subtle differences in sentences, resulting in more discriminative representations.

Unlike existing multi-view neural network approaches for image processing BIBREF1, BIBREF2, where multiple views are provided as part of the input, our MVN learns to automatically create views from its input text by focusing on different sets of words. Compared to deep Convolutional Networks (CNN) for text BIBREF3, BIBREF0, the MVN strategy emphasizes network width over depth. Shorter connections between each view and the loss function enable better gradient flow in the networks, which makes the system easier to train. Our use of multiple views is similar in spirit to the weak learners used in ensemble methods BIBREF4, BIBREF5, BIBREF6, but our views produce vector-valued intermediate representations instead of classification scores, and all our views are trained jointly with feedback from the final classifier.

Experiments on two benchmark data sets, the Stanford Sentiment Treebank BIBREF7 and the AG English news corpus BIBREF3, show that 1) our method achieves very competitive accuracy, 2) some views distinguish themselves from others by better categorizing specific classes, and 3) when our base bag-of-words feature set is augmented with convolutional features, the method establishes a new state-of-the-art for both data sets.

Multi-View Networks for Text

The MVN architecture is depicted in Figure FIGREF1. First, individual selection vectors INLINEFORM0 are created, each formed by a distinct softmax weighted sum over the word vectors of the input text. Next, these selections are sequentially transformed into views INLINEFORM1, with each view influencing the views that come after it. Finally, all views are concatenated and fed into a two-layer perceptron for classification.

Multiple Attentions for Selection

Each selection INLINEFORM0 is constructed by focusing on a different subset of words from the original text, as determined by a softmax weighted sum BIBREF8. Given a piece of text with INLINEFORM1 words, we represent it as a bag-of-words feature matrix INLINEFORM2 INLINEFORM3. Each row of the matrix corresponds to one word, which is represented by a INLINEFORM4-dimensional vector, as provided by a learned word embedding table.
The selection INLINEFORM5 for the INLINEFORM6 view is the softmax weighted sum of features: DISPLAYFORM0 where the weight INLINEFORM0 is computed by: DISPLAYFORM0 DISPLAYFORM1 Here, INLINEFORM0 (a vector) and INLINEFORM1 (a matrix) are learned selection parameters. By varying the weights INLINEFORM2, the selection for each view can focus on different words from INLINEFORM3, as illustrated by the different color curves connecting to INLINEFORM4 in Figure FIGREF1.

Aggregating Selections into Views

Having built one INLINEFORM0 for each of our INLINEFORM1 views, the actual views are then created as follows: DISPLAYFORM0 where INLINEFORM0 are learned parameter matrices, and INLINEFORM1 represents concatenation. The first and last views are formed solely from INLINEFORM2; however, they play very different roles in our network. INLINEFORM3 is completely disconnected from the others, an independent attempt at good feature selection, intended to increase view diversity BIBREF9, BIBREF10, BIBREF11, BIBREF12. Conversely, INLINEFORM4 forms the base of a structure similar to a multi-layer perceptron with short-cutting, as defined by the recurrence in Equation EQREF7. Here, the concatenation of all previous views implements short-cutting, while the recursive definition of each view implements stacking, forming a deep network depicted by the horizontal arrows in Figure FIGREF1. This structure makes each view aware of the information in those previous to it, allowing them to build upon each other. Note that the INLINEFORM5 matrices are view-specific and grow with each view, making the overall parameter count quadratic in the number of views.

Classification with Views

The final step is to transform our views into a classification of the input text. The MVN does so by concatenating its view vectors, which are then fed into a fully connected projection followed by a softmax function to produce a distribution over the possible classes. Dropout regularization BIBREF13 can be applied at this softmax layer, as in BIBREF14.

Beyond Bags of Words

The MVN's selection layer operates on a matrix of feature vectors INLINEFORM0, which has thus far corresponded to a bag of word vectors. Each view's selection makes intuitive sense when features correspond to words, as it is easy to imagine different readers of a text focusing on different words, with each reader arriving at a useful interpretation. However, there is a wealth of knowledge on how to construct powerful feature representations for text, such as those used by convolutional neural networks (CNNs). To demonstrate the utility of having views that weight arbitrary feature vectors, we augment our bag-of-words representation with vectors built by INLINEFORM1 -gram filters max-pooled over the entire text BIBREF14, with one feature vector for each INLINEFORM2 -gram order, INLINEFORM3. The augmented INLINEFORM4 matrix has INLINEFORM5 rows. Unlike our word vectors, the 4 CNN vectors each provide representations of the entire text. Returning to our reader analogy, one could imagine these to correspond to quick (INLINEFORM6) or careful (INLINEFORM7) skims of the text. Regardless of whether a feature vector is built by an embedding table or by max-pooled INLINEFORM8 -gram filters, we always back-propagate through all feature construction layers, so they become specialized to our end task.
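A minimal sketch of the selection and view-construction machinery described above follows. Because the actual equations appear only as DISPLAYFORM placeholders in the text, the tanh-based attention scoring and the exact view recurrence used here are assumptions rather than the authors' formulas; the sketch only illustrates the overall structure: per-view softmax selections over word vectors, middle views that concatenate all previous views (short-cutting), and a classifier over the concatenated views.

```python
import torch
import torch.nn as nn

class MVNSketch(nn.Module):
    """Illustrative multi-view encoder: per-view attention selections over word
    vectors, sequentially built views, and a classifier over their concatenation.
    The scoring function and view recurrence are plausible stand-ins, not the
    authors' exact equations."""
    def __init__(self, word_dim: int, view_dim: int, num_views: int, num_classes: int):
        super().__init__()
        self.num_views = num_views
        # Per-view selection parameters: a matrix (sel_proj) and a vector (sel_score).
        self.sel_proj = nn.ModuleList([nn.Linear(word_dim, view_dim) for _ in range(num_views)])
        self.sel_score = nn.ModuleList([nn.Linear(view_dim, 1, bias=False) for _ in range(num_views)])
        # Per-view transformation matrices; middle views also see all previous views.
        self.view_proj = nn.ModuleList()
        for k in range(num_views):
            if k == 0 or k == num_views - 1:
                in_dim = word_dim                      # first/last views: selection only
            else:
                in_dim = word_dim + k * view_dim       # selection + all previous views
            self.view_proj.append(nn.Linear(in_dim, view_dim))
        self.classifier = nn.Linear(num_views * view_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, num_words, word_dim] bag-of-words feature matrix.
        views = []
        for k in range(self.num_views):
            scores = self.sel_score[k](torch.tanh(self.sel_proj[k](x)))   # [B, T, 1]
            alpha = torch.softmax(scores, dim=1)                          # attention weights
            selection = (alpha * x).sum(dim=1)                            # [B, word_dim]
            if k == 0 or k == self.num_views - 1:
                inp = selection
            else:
                inp = torch.cat(views + [selection], dim=-1)              # short-cutting
            views.append(torch.tanh(self.view_proj[k](inp)))
        return self.classifier(torch.cat(views, dim=-1))                  # class logits

# Toy usage: 8 views of 200 dimensions over 300-d word vectors, 5 classes.
logits = MVNSketch(word_dim=300, view_dim=200, num_views=8, num_classes=5)(torch.randn(4, 30, 300))
```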
Stanford Sentiment Treebank

The Stanford Sentiment Treebank contains 11,855 sentences from movie reviews. We use the same splits for training, dev, and test data as in BIBREF7 to predict the fine-grained 5-class sentiment categories of the sentences. For comparison purposes, following BIBREF14, BIBREF15, BIBREF16, we train the models using both phrases and sentences, but only evaluate sentences at test time.

We initialized all of the word embeddings BIBREF17, BIBREF18 using the publicly available 300-dimensional pre-trained vectors from GloVe BIBREF19. We learned 8 views with 200 dimensions each, which requires projecting the 300-dimensional word vectors; we implemented this using a linear transformation, whose weight matrix and bias term are shared across all words, followed by a INLINEFORM0 activation. For optimization, we used Adadelta BIBREF20, with a starting learning rate of 0.0005 and a mini-batch of size 50. Also, we used dropout (with a rate of 0.2) to avoid overfitting. All of these MVN hyperparameters were determined through experiments measuring validation-set accuracy.

The test-set accuracies obtained by different learning methods, including the current state-of-the-art results, are presented in Table TABREF11. The results indicate that the bag-of-words MVN outperforms most methods, but obtains lower accuracy than the state-of-the-art results achieved by the tree-LSTM BIBREF21, BIBREF22 and the high-order CNN BIBREF16. However, when augmented with 4 convolutional features as described in Section SECREF9, the MVN strategy surpasses both of these, establishing a new state-of-the-art on this benchmark.

In Figure FIGREF12, we present the test-set accuracies obtained while varying the number of views in our MVN with convolutional features. These results indicate that better predictive accuracy can be achieved by increasing the number of views up to eight. After eight, the accuracy starts to drop. The number of MVN views should be tuned for each new application, but it is good to see that not too many views are required to achieve optimal performance on this task.

To better understand the benefits of the MVN method, we further analyzed the eight views constructed by our best model. After training, we obtained the view representation vectors for both the training and testing data, and then independently trained a very simple, but fast and stable, Naïve Bayes classifier BIBREF23 for each view. We report class-specific F-measures for each view in Figure FIGREF13. From this figure, we can observe that different views focus on different target classes. For example, the first two views perform poorly on the 0 (very negative) and 1 (negative) classes, but achieve the highest F-measures on the 2 (neutral) class. Meanwhile, the non-neutral classes each have a different view that achieves the highest F-measure. This suggests that some views have specialized in order to better separate subsets of the training data.
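The per-view analysis just described can be reproduced schematically as follows: extract each view's representation vectors, fit a Naïve Bayes classifier per view, and report class-specific F-measures. The Gaussian variant is an assumption (the text does not specify which Naïve Bayes model was used), and the random arrays below stand in for the actual view vectors.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score

def per_view_f_measures(train_views, y_train, test_views, y_test):
    """train_views/test_views: lists of [n_examples, view_dim] arrays, one per view.
    Returns per-view, per-class F1 scores as a [num_views, num_classes] array."""
    labels = np.unique(y_train)
    results = []
    for k, (tr, te) in enumerate(zip(train_views, test_views)):
        clf = GaussianNB().fit(tr, y_train)                # simple, fast, stable probe
        f1 = f1_score(y_test, clf.predict(te), labels=labels, average=None)
        results.append(f1)
        print(f"view {k}: " + " ".join(f"{s:.3f}" for s in f1))
    return np.stack(results)

# Toy usage with random stand-ins for the extracted view vectors (8 views, 200-d, 5 classes).
rng = np.random.default_rng(0)
train_views = [rng.normal(size=(100, 200)) for _ in range(8)]
test_views = [rng.normal(size=(40, 200)) for _ in range(8)]
scores = per_view_f_measures(train_views, rng.integers(0, 5, 100),
                             test_views, rng.integers(0, 5, 40))
```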
We provide an ablation study in Table TABREF14. First, we construct a traditional ensemble model: we independently train eight MVN models, each with a single view, to serve as weak learners, and have them vote with equal weight for the final classification, obtaining a test-set accuracy of 50.2. Next, we restrict the views in the MVN to be unaware of each other; that is, we replace Equation EQREF7 with INLINEFORM0, which removes all horizontal links in Figure FIGREF1. This drops performance to 49.0. Finally, we experiment with a variant of MVN where each view is only connected to the most recent previous view, replacing Equation EQREF7 with INLINEFORM1 and leading to a version where the parameter count grows linearly in the number of views. This drops the test-set performance to 50.5. These experiments suggest that enabling the views to build upon each other is crucial for achieving the best performance.

AG's English News Categorization

The AG corpus BIBREF3, BIBREF0 contains categorized news articles from more than 2,000 news outlets on the web. The task has four classes, and for each class there are 30,000 training documents and 1,900 test documents. A random sample of the training set was used for hyper-parameter tuning. The training and testing settings of this task are exactly the same as those presented for the Stanford Sentiment Treebank task in Section SECREF10, except that the mini-batch size is reduced to 23 and each view has a dimension of 100.

The test errors obtained by various methods are presented in Table TABREF16. These results show that the bag-of-words MVN outperforms the state-of-the-art accuracy obtained by the non-neural INLINEFORM0 -gram TFIDF approach BIBREF3, as well as several very deep CNNs BIBREF0. Accuracy was further improved when the MVN was augmented with 4 convolutional features. In Figure FIGREF17, we show how accuracy and loss evolve on the validation set during MVN training. These curves show that training is quite stable. The MVN achieves its best results in just a few thousand iterations.

Conclusion and Future Work

We have presented a novel multi-view neural network for text classification, which creates multiple views of the input text, each represented as a weighted sum of a base set of feature vectors. These views work together to produce a discriminative feature representation for text classification. Unlike many neural approaches to classification, our architecture emphasizes network width in addition to depth, enhancing gradient flow during training. We have used the multi-view network architecture to establish new state-of-the-art results on two benchmark text classification tasks. In the future, we wish to better understand the benefits of generating multiple views, explore new sources of base features, and apply this technique to other NLP problems such as translation or tagging.
51.5
Q: what models did they compare to?
High-order CNN, Tree-LSTM, DRNN, DCNN, CNN-MC, NBoW and SVM
Q: which benchmark tasks did they experiment on?
They used the Stanford Sentiment Treebank benchmark for the sentiment classification task and the AG English news corpus for the text classification task.
Q: Are recurrent neural networks trained on perturbed data?

Text:

Introduction

At the core of Natural Language Processing (NLP) neural models are pre-trained word embeddings like Word2Vec BIBREF0, GloVe BIBREF1 and ELMo BIBREF2. They help initialize the neural models, lead to faster convergence and improve performance for numerous applications such as Question Answering BIBREF3, Summarization BIBREF4 and Sentiment Analysis BIBREF5. While word embeddings are powerful when computation and memory are unconstrained, their large size makes them challenging to deploy on-device. This led to interesting research by BIBREF6, BIBREF7, BIBREF8, who showed that word embeddings can be replaced with lightweight binary LSH projections learned on-the-fly.

The projection approach BIBREF9, BIBREF10 removes the need to store any embedding matrices, since the projections are dynamically computed. This further enables user privacy by performing inference directly on device, without sending user data (e.g., personal information) to the server. The computation of the representation is linear in the number of inputs in the sentence, removes the need to maintain and look up a global vocabulary, and reduces the memory size to $O(|T \cdot d|)$. The projection representations can operate at the word and character level, and can be used to represent a sentence or a word depending on the NLP application. BIBREF6 have shown that on-device LSH projections lead to state-of-the-art results in dialog act classification and significant improvements over prior LSTM and CNN neural models.

Despite this success, there are no studies showing the properties and power of LSH projections. In this paper, we address that by studying two questions: What makes projection models effective? And are these projection models resistant to perturbations and misspellings in input text? To answer these questions, we conduct a series of experimental studies and analyses. First, by studying the collisions of the learned projection representations, we verify the effectiveness of the produced representations. Our study shows that LSH projections have low collision, meaning that the representations are well separated, allowing the model to capture the meaning of words instead of colliding everything into one meaning. Next, by analyzing different character perturbations, we show the robustness of LSH projections when modeling word- or sentence-level representations. The intuition is that the projection should map word misspellings close together, yet keep semantically dissimilar terms apart. We show that Self-Governing Neural Network (SGNN) models BIBREF6 evaluated with perturbed LSH projections are resistant to misspellings and transformation attacks, while LSTMs dropped in performance with increased perturbations. Overall, the studies showcase the robustness of LSH projection representations and their resistance to misspellings and transformations, and also explain why they lead to better performance.
Background: LSH projections for text representations

The projection function $\mathbb {P}$ (Figure FIGREF1) BIBREF9 used in SGNN models BIBREF6 extracts token (or character) n-gram and skip-gram features from a raw input text $\textbf {x}$ and dynamically generates a binary projection representation $\mathbb {P}(\mathbf {x}) \in \lbrace 0,1\rbrace ^{T \cdot d}$ after a Locality-Sensitive Hashing (LSH) based transformation $\mathbb {L}$, as in $\mathbb {P}(\mathbf {x}) = \mathbb {L}(\mathbb {F}(\mathbf {x}))$, where $\mathbb {F}$ extracts n-grams (or skip-grams) $[f_1, \cdots , f_n]$ from the input text. Here, $[f_1, \cdots , f_n]$ could refer to either character-level or token-level n-gram (or skip-gram) features.

Collision Study

Before diving into the actual collision studies, it is important to understand what the properties of good projections are. Good projections should be as separated as possible, while still capturing the inherent n-gram features. Words with similar character n-gram feature vectors should be close to each other (e.g., cat and cats), yet still separable, so that the network can learn that cat and cats are related but different. Such properties are not evident from the projections themselves; one way to understand them is by looking at the collision rates. If there are too many projection collisions, the network is fundamentally incapable of learning and will not be able to generalize.

For this purpose, we test how spread out the projections are for word and sentence representations. We take a large corpus, enwik9, and analyze the average Hamming distance of the words and sentences in the corpus. Intuitively, good projections should have fewer collisions. Our study shows that there are almost no collisions. On average, the Hamming distance between words is 557 bits, which is around 50% of the projection dimension. Standard deviations are one order of magnitude lower than the average Hamming distances between words, which means that on average the projections are well spread out. A high deviation would mean that too many words are either too close to or too far away from each other.

To understand the properties of word and sentence projections, we conduct two experiments, one in which we compute the word projections and another in which we compute the sentence projections. For our experiments, we fix the projection dimension, $dim(\mathbb {P}(w)) = 1120$ ($T=80, \, d=14$), following BIBREF6. Results are shown in Table TABREF3 and Table TABREF4, respectively. Table TABREF3 shows the collision results of the word-level projections. On the left, we list different projection configurations obtained by varying the number of projection functions $T$, the dimensionality $d$, turning character-level projections on or off, and varying the size of the n-gram and skip-gram features. For each projection configuration, we show the average Hamming distance and the standard deviation. As can be seen, by increasing the number of n-gram and skip-gram features, the words become more spread out, with a lower standard deviation. We recommend using a higher number of n-gram and skip-gram features for better model performance. Table TABREF4 shows the collision results of the sentence-level projections. Similarly to Table TABREF3, the left side shows the different projection configurations. For each configuration, we show the average Hamming distance and standard deviation.
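A minimal sketch of how such a binary projection and the collision statistics might be computed is shown below. The signed feature-hashing scheme used here is an illustrative stand-in for the on-the-fly LSH transformation $\mathbb{L}$, not the exact SGNN implementation; the 1120-bit dimension follows the $T=80, d=14$ configuration mentioned above, and skip-gram features are omitted for brevity.

```python
import hashlib
import itertools
import numpy as np

def char_ngrams(text: str, n_max: int = 3):
    """Character n-gram features (n = 1..n_max); skip-grams omitted for brevity."""
    return [text[i:i + n] for n in range(1, n_max + 1) for i in range(len(text) - n + 1)]

def lsh_projection(text: str, bits: int = 1120) -> np.ndarray:
    """Binary projection P(x): each extracted feature contributes a pseudo-random
    sign pattern over `bits` directions; the sign of the accumulated sum gives the
    bit vector. An illustrative stand-in for the LSH transformation above."""
    acc = np.zeros(bits)
    for feat in char_ngrams(text):
        h = int.from_bytes(hashlib.md5(feat.encode()).digest(), "little")
        rng = np.random.default_rng(h % (2**32))       # feature-seeded random signs
        acc += rng.choice([-1.0, 1.0], size=bits)
    return (acc >= 0).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.sum(a != b))

# Average pairwise Hamming distance over a tiny word sample (enwik9 in the study).
words = ["cat", "cats", "dog", "projection", "network"]
projs = {w: lsh_projection(w) for w in words}
pairs = list(itertools.combinations(words, 2))
avg = np.mean([hamming(projs[a], projs[b]) for a, b in pairs])
print(f"avg Hamming distance: {avg:.1f} / 1120 bits; cat vs cats: {hamming(projs['cat'], projs['cats'])}")
```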
In the sentence-level projection study, we observe that when we consider only word-level features, the projections are insensitive to sentence length. But with the character projections on, they are sensitive to the sentence length. This happens because the character projection space is smaller than the word space, as we see fewer variations for the sentence projections with n-grams and skip-grams compared to the word level. In sentence-level projections with word-level features, the dimensionality of the feature space is high, hence applying projections to it leads to discriminative representations. More concretely, this means that projections with large feature spaces are able to capture the distinctions between any two observed pairs, and adding more words to the sentence is not going to change that. On the other hand, with character-level features, the number of unique character n-grams observed in short sentences can differ from the number observed in longer sentences.

Perturbation Study

To further test the robustness of the projections, we conduct a perturbation study. A good projection should separate out a perturbed word like baank from an unrelated word like cats: the average Hamming distance between words from the collision study should be greater than the Hamming distance between a word's projections with and without perturbation.

Perturbation Study ::: Character & Word Perturbations

In this section, we analyze the Hamming distance between the projections of the sentences from the enwik9 dataset and the corresponding projections of the same sentences after applying character-level perturbations. We experiment with three types of character-level perturbation BIBREF11 and two types of word-level perturbation operations.

Perturbation Study ::: Character Level Perturbation Operations

insert(word, n): We randomly choose n characters from the character vocabulary and insert them at random locations into the input word. However, we retain the first and last characters of the word as is. Ex. transformation: $sample \rightarrow samnple$.

swap(word, n): We randomly swap the location of two characters in the word n times. As with the insert operation, we retain the first and last characters of the word as is and only apply the swap operation to the remaining characters. Ex. transformation: $sample \rightarrow sapmle$.

duplicate(word, n): We randomly duplicate a character in the word n times. Ex. transformation: $sample \rightarrow saample$.

Perturbation Study ::: Word Level Perturbation Operations

drop(sentence, n): We randomly drop n words from the sentence. Ex. transformation: This is a big cat. $\rightarrow $ This is a cat.

duplicate(sentence, n): Similar to duplicate(word, n) above, we randomly duplicate a word in the sentence n times. Ex. transformation: This is a big cat. $\rightarrow $ This is a big big cat.

swap(sentence, n): Similar to swap(word, n), we randomly swap the location of two words in the sentence n times. Ex. transformation: This is a big cat. $\rightarrow $ This cat is big.

For both character- and word-level perturbations, we decide whether or not to perturb each word in a sentence with a fixed probability. For the character-level perturbations, once a word is chosen for perturbation, we randomly pick one of the perturbation operations from {insert, swap, duplicate} and randomly pick the number of characters to transform, $n \in \lbrace 1,\;3\rbrace $. For the word-level perturbations, we randomly apply one of the operations from {drop, duplicate, swap}. We consider perturbation probabilities of $0.05$ and $0.1$ for our experiments.
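The character-level operations above can be sketched as follows (the character vocabulary, random choices and edge-case handling are illustrative; the word-level drop/duplicate/swap operations follow the same per-word pattern and are omitted for brevity):

```python
import random
import string

def insert_chars(word: str, n: int) -> str:
    """Insert n random characters at random interior positions (first/last kept)."""
    chars = list(word)
    for _ in range(n):
        pos = random.randint(1, max(1, len(chars) - 1))
        chars.insert(pos, random.choice(string.ascii_lowercase))
    return "".join(chars)

def swap_chars(word: str, n: int) -> str:
    """Swap two interior characters, n times (first/last kept)."""
    chars = list(word)
    for _ in range(n):
        if len(chars) > 3:
            i, j = random.sample(range(1, len(chars) - 1), 2)
            chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

def duplicate_char(word: str, n: int) -> str:
    """Duplicate one randomly chosen character n times."""
    i = random.randrange(len(word))
    return word[:i] + word[i] * (n + 1) + word[i + 1:]

def perturb_sentence(sentence: str, p: float = 0.1) -> str:
    """Character-level perturbation: each word is perturbed with probability p,
    using a randomly chosen operation and n in {1, 3}, as described above."""
    ops = [insert_chars, swap_chars, duplicate_char]
    words = []
    for w in sentence.split():
        if random.random() < p:
            w = random.choice(ops)(w, random.choice([1, 3]))
        words.append(w)
    return " ".join(words)

random.seed(0)
print(perturb_sentence("this is a sample sentence about a big cat", p=0.4))
```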
Perturbation Study ::: Discussion

We show results for multiple perturbation studies: sentences receive both word- and character-level perturbations, while words receive character-level perturbations only. We evaluate the impact of the word and character projections for sentence- and word-level projections on the enwik9 dataset. Table TABREF13 shows the character and word perturbations with sentence-level projections. Table TABREF14 shows the character perturbations for word-level projections. We observe that the Hamming distances between the projections of the perturbed versions of the same words are significantly smaller than the average distance of the word projections measured in the collision study in Section SECREF3. This shows that the words are well separated in the projection space and could potentially be less susceptible to misspellings and omissions.

Based on the results in Tables 1 to 4, we found a linear relationship between the Hamming distance, the projection dimension and the amount of perturbation. As can be seen in the results, the Hamming distance between the projections before and after perturbation is directly proportional to the product of the projection dimension and the percentage of perturbation, as follows: $ \Delta _{\mathbb {P}_{m}} = K_{m} \cdot T \cdot d \cdot P_{perturb}, \; m \in \lbrace word, \, character\rbrace , \; K_{m} > 0$, where $\Delta _{\mathbb {P}_{m}}$ refers to the Hamming distance between the projections before and after perturbation, $m$ refers to the mode of projection, {word, character}, $T \cdot d$ refers to the projection space dimension, and $P_{perturb}$ refers to the probability of perturbation. $K_{m} > 0$ is a proportionality constant which depends on the projection mode. We observe that $K_{word} > K_{char}$ in our experiments. Character-mode projections are relatively more robust to perturbations; however, we would also want to include word-level n-gram and skip-gram features to generate a holistic representation. This establishes a tradeoff between choosing word- and character-level features. Ideally, one would like to reserve some bits for word-level and some bits for character-level features. We leave the design of the right bit division to future work.

Effect of Perturbation on Classification

We evaluate LSH projections with text transformations to test whether the projections are robust to input perturbations by nature. We use the character-level operations from Section SECREF4.

Effect of Perturbation on Classification ::: Evaluation Setup

For evaluation, we used widely popular dialog act and intent prediction datasets. MRDA BIBREF12 is a dialog corpus of multi-party meetings with 6 classes, 78K training and 15K test examples; ATIS BIBREF13 is an intent prediction dataset for flight reservations with 21 classes, 4.4K training and 893 test examples; and SWDA BIBREF14, BIBREF15 is an open-domain dialog corpus between two speakers with 42 classes, 193K training and 5K test examples. For a fair comparison, we train an LSTM baseline with sub-words and a vocabulary size of 240 on MRDA, ATIS and SWDA, with uniformly randomly initialized input word embeddings. We also trained the on-device SGNN model BIBREF6. Then, we created test sets with varying levels of perturbation operations - $\lbrace 20\%,40\%,60\%\rbrace $.
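Building the perturbed test sets just described can be sketched by applying the character-level perturbation function from the earlier snippet to each test utterance at a fixed rate. The toy dataset and `dummy_predict` below are placeholders for the real corpora and a trained SGNN or LSTM classifier, and the snippet assumes `perturb_sentence` from the previous sketch is in scope.

```python
# Build perturbed copies of a test set at several perturbation levels,
# reusing the perturb_sentence sketch above. `test_set` is assumed to be
# a list of (text, label) pairs.

def make_perturbed_sets(test_set, levels=(0.2, 0.4, 0.6)):
    """Return {level: perturbed copy of the test set}."""
    return {
        p: [(perturb_sentence(text, p=p), label) for text, label in test_set]
        for p in levels
    }

def accuracy(predict_fn, dataset):
    """predict_fn maps raw text to a predicted label."""
    correct = sum(predict_fn(text) == label for text, label in dataset)
    return correct / len(dataset)

# Example with a toy test set and a dummy classifier.
toy_test = [("what flights leave from boston tomorrow", "flight"),
            ("show me the fares for delta", "airfare")]
perturbed = make_perturbed_sets(toy_test)
dummy_predict = lambda text: "flight"
for p, data in perturbed.items():
    print(f"perturbation {int(p * 100)}%: accuracy {accuracy(dummy_predict, data):.2f}")
```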
Overall, SGNN models are consistently more robust to perturbations across all three datasets and tasks. One of the reasons is that SGNN relies on word and character level n-gram features, while for LSTMs, the character perturbations result in sub-words being mapped to the unknown embedding. This leads the LSTM to learn to map inputs with many unknown words to the majority class. We observed the same when we perturbed $100\%$ of the words in the input. As shown in Table TABREF18, the standard deviations of the accuracy with LSTMs are much higher compared to SGNN. This further reinforces the fact that SGNNs are fundamentally more robust to both word misspellings and black-box attacks. In the future, we plan to benchmark SGNN with more aggressive and exploitative black-box attacks. Conclusion In this work, we perform a detailed study analyzing why recent LSH-based projection neural networks are effective for language classification tasks. Through extensive analyses, including perturbation studies and experiments on multiple tasks, we show that projection-based neural models are more resistant to text transformations than widely-used approaches like LSTMs with embeddings.
No
5e41516a27c587aa2f80dba8cf4c3f616174099b
5e41516a27c587aa2f80dba8cf4c3f616174099b_0
Q: How does their perturbation algorithm work? Text: Introduction At the core of Natural Language Processing (NLP) neural models are pre-trained word embeddings like Word2Vec BIBREF0, GloVe BIBREF1 and ELMo BIBREF2. They help initialize the neural models, lead to faster convergence and have improved performance for numerous applications such as Question Answering BIBREF3, Summarization BIBREF4, and Sentiment Analysis BIBREF5. While word embeddings are powerful when computation and memory are unconstrained, it becomes challenging to deploy them on-device due to their huge size. This led to interesting research by BIBREF6, BIBREF7, BIBREF8, who showed that word embeddings can be replaced with lightweight binary LSH projections learned on-the-fly. The projection approach BIBREF9, BIBREF10 surmounts the need to store any embedding matrices, since the projections are dynamically computed. This further enables user privacy by performing inference directly on device without sending user data (e.g., personal information) to the server. The computation of the representation is linear in the number of inputs in the sentence, surmounting the need to maintain and look up a global vocabulary and reducing the memory size to $O(|T \cdot d|)$. The projection representations can operate at the word and character level, and can be used to represent a sentence or a word depending on the NLP application. BIBREF6 have shown that on-device LSH projections lead to state-of-the-art results in dialog act classification and reach significant improvements over prior LSTM and CNN neural models. Despite this success, there are no studies showing the properties and power of LSH projections. In this paper, we address that by studying What makes projection models effective? and Are these projection models resistant to perturbations and misspellings in input text? To answer these questions, we conduct a series of experimental studies and analyses. For instance, by studying the collision of the learned projection representations, we verify the effectiveness of the produced representations. Our study shows that LSH projections have low collision, meaning that the representations are well separated, allowing the model to capture the meaning of words instead of colliding everything into one meaning. Next, by analyzing different character perturbations, we show the robustness of LSH projections when modeling word or sentence level representations. The intuition is that the projection should map word misspellings close to the original word, yet keep semantically dissimilar terms apart. We show that Self-Governing Neural Networks (SGNN) models BIBREF6 evaluated with perturbed LSH projections are resistant to misspellings and transformation attacks, while LSTMs drop in performance as perturbations increase. Overall, the studies showcase the robustness of LSH projection representations and their resistance to misspellings and transformations, and also explain why they lead to better performance.
Background: LSH projections for text representations The Projection function, $\mathbb {P}$ (Figure FIGREF1) BIBREF9, used in SGNN models BIBREF6 extracts token (or character) n-gram & skip-gram features from a raw input text, $\textbf {x}$, and dynamically generates a binary projection representation, $\mathbb {P}(\mathbf {x}) \in [0,1]^{T \cdot d}$, after a Locality-Sensitive Hashing (LSH) based transformation, $\mathbb {L}$, as in $\mathbb {P}(\mathbf {x}) = \mathbb {L}(\mathbb {F}(\mathbf {x}))$, where $\mathbb {F}$ extracts n-grams (or skip-grams), $[f_1, \cdots , f_n]$, from the input text. Here, $[f_1, \cdots , f_n]$ could refer to either character level or token level n-gram (or skip-gram) features. Collision Study Before diving into the actual collision studies, it is important to understand what the properties of good projections are. For instance, good projections should be as separate as possible, while still capturing the inherent n-gram features. Words with similar character n-gram feature vectors should be close to each other, e.g. cat and cats, yet still distinct, so that the network can learn that cat and cats are related but different. Such observations are not directly evident from the projections. One way to understand them is by looking at the collision rates. For instance, if there are too many projection collisions, the network is fundamentally incapable of learning and will not be able to generalize. For this purpose, we test how spread out the projections are for word and sentence representations. We take a large corpus, enwik9, and analyze the average Hamming distance of the words and sentences in the corpus. Intuitively, good projections should have fewer collisions. Our study shows that there is almost no collision. On average, the Hamming distance between words is 557 bits, which is around 50% of the projection dimension. Standard deviations are one order of magnitude lower compared to the average Hamming distances between words, which means that on average the projections are more or less spread out. A high deviation would mean that too many words are either too close to each other or too far away from each other. To understand the properties of word and sentence projections, we conduct two experiments, one in which we compute the word projections and another in which we compute the sentence projections. For our experiments, we fix the projection dimension, $dim(\mathbb {P}(w)) = 1120$ ($T=80, \, d=14$), following BIBREF6. Results are shown in Table TABREF3 and Table TABREF4 respectively. Table TABREF3 shows the collision results of the word level projections. On the left we list different projection configurations, varying the number of projection functions $T$, the dimensionality $d$, turning character level projections on or off, and varying the size of the n-gram and skip-gram features. For each projection configuration, we show the average Hamming distance and the standard deviation. As can be seen, increasing the number of n-gram and skip-gram features makes the words more spread out, with lower standard deviation. We recommend using a higher number of n-gram and skip-gram features for better model performance. Table TABREF4 shows the collision results of the sentence level projections. Similarly to Table TABREF3, the left side shows the different projection configurations. For each configuration, we show the average Hamming distance and standard deviation.
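For intuition, the following sketch shows one common way to realize such a binary LSH projection over character n-gram features (a random-hyperplane style sign hash), together with the collision statistics used in this study. It is an illustrative approximation rather than the authors' projection operator; the hashing scheme, boundary markers and n-gram range are our own choices.

```python
import hashlib
import numpy as np

T, d = 80, 14                                    # T projection functions of d bits each
N_BITS = T * d

def char_ngrams(text, n_min=1, n_max=5):
    text = f"^{text}$"                           # assumed word-boundary markers
    return [text[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(text) - n + 1)]

def signed_hash(feature, bit):
    """Pseudo-random +/-1 value for a (feature, bit) pair."""
    digest = hashlib.md5(f"{bit}:{feature}".encode()).digest()
    return 1 if digest[0] % 2 == 0 else -1

def project(text):
    feats = char_ngrams(text)
    bits = np.zeros(N_BITS, dtype=np.int8)
    for b in range(N_BITS):
        s = sum(signed_hash(f, b) for f in feats)
        bits[b] = 1 if s >= 0 else 0             # sign of a random hyperplane projection
    return bits

def collision_stats(words):
    """Average and standard deviation of pairwise Hamming distances between projections."""
    projs = [project(w) for w in words]
    dists = [int(np.sum(projs[i] != projs[j]))
             for i in range(len(projs)) for j in range(i + 1, len(projs))]
    return float(np.mean(dists)), float(np.std(dists))

mean_d, std_d = collision_stats(["cat", "cats", "dog", "house", "houses"])
print(f"avg Hamming distance: {mean_d:.1f} +/- {std_d:.1f} of {N_BITS} bits")
```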
In the sentence level projection study, we observe that when we consider only word level features, the projections are insensitive to sentence length. But with the character projections on, they are sensitive to the sentence length. This happens because the character projection space is smaller than the words space, as we see only fewer variations for the sentence projections with n-gram and skip-gram compared to word level. In sentence level projection with word level features, the dimensionality of the spacer vector is high, hence applying projections on this leads to discriminative representations. More concretely, this means that projections with large feature spaces are able to capture the distinctions between any two observed pairs and adding more words to the sentence is not going to change that. On the other hand for short sentences with character level features, the number of possible observed unique char ngrams vs those observed in longer sentences can differ. Perturbation Study To further test the robustness of the projections, we conduct perturbation study. A good projection should separate out perturbed word like baank from cats. Meaning that the average Hamming distance from the collision study should be greater than the Hamming distance with and without perturbations. Perturbation Study ::: Character & Word Perturbations In this section, we analyze the Hamming distance between the projections of the sentences from the enwik9 dataset and the corresponding projections of the same sentences after applying character level perturbations. We experiment with three types of character level perturbation BIBREF11 and two types of word level perturbation operations. Perturbation Study ::: Character Level Perturbation Operations insert(word, n) : We randomly choose n characters from the character vocabulary and insert them at random locations into the input word. We however retain the first and last characters of the word as is. Ex. transformation: $sample \rightarrow samnple$. swap(word, n): We randomly swap the location of two characters in the word n times. As with the insert operation, we retain the first and last characters of the word as is and only apply the swap operation to the remaining characters. Ex. transformation: $sample \rightarrow sapmle$. duplicate(word, n): We randomly duplicate a character in the word by n times. Ex. transformation: $sample \rightarrow saample$. Perturbation Study ::: Character Level Perturbation Operations ::: Word Level Perturbation Operations drop(sentence, n): We randomly drop n words from the sentence. Ex. transformation: This is a big cat. $\rightarrow $ This is a cat. duplicate(sentence, n): Similar to duplicate(word, n) above, we randomly duplicate a word in the sentence n times. Ex. transformation: This is a big cat. $\rightarrow $ This is a big big cat. swap(sentence, n): Similar to swap(word, n), we randomly swap the location of two words in the sentence n times. Ex. transformation: This is a big cat. $\rightarrow $ This cat is big. For both character and word level perturbations, we decide whether or not to perturb each word in a sentence with a fixed probability. For the character level perturbations, once a word is chosen for perturbation, we randomly pick one of the perturbation operations from {insert, swap, duplicate} and randomly pick the number of characters to transform $n \in \lbrace 1,\;3\rbrace $. For the word level perturbations, we randomly apply one of the operations from {drop, duplicate, swap}. 
We consider perturbation probabilities of $0.05$ and $0.1$ for our experiments. Perturbation Study ::: Discussion We show results on multiple perturbation studies. For instance, sentence has word and character level perturbations, while word has character only perturbation. We evaluate the impact of the word and character projections for sentence and word level projections on the enwik9 dataset. Table TABREF13 shows the character and word perturbation with sentence level projections. Table TABREF14 shows the character perturbation for word level projections. We observe that the hamming distances between the projections of the perturbed versions of the same words are significantly smaller than the average distance of the word projections measured in the collision study in Section SECREF3. This shows that the words are well separated in the projection space and could potentially be less susceptible to misspellings and omissions. Based on the results in all Tables 1 to 4, we found a nice linear relationship between the hamming distance, the projection dimension and the amount of perturbation. As it can be seen in the results, the hamming distance between the projections before and after perturbation is directly proportional to the product of the projection dimension and percentage of perturbation as follows: $ \Delta _{\mathbb {P}_{m}} = K_{m}\, \cdot T \, \cdot \, d \cdot P_{perturb} \; , m \in \lbrace word, \,character\rbrace , \; K_{m} > 0$ where $\Delta _{\mathbb {P}_{m}}$ refers to the hamming distance between the projections before and after perturbations and $m$ refers to the mode of projection - {word, character}. $T \cdot d$ refers to the projection space dimension and $P_{perturb}$ refers to the probability of perturbation. $K_{m} > 0$ is a proportionality constant which depends on the projection mode. We observe that $K_{word} > K_{char}$ from our experiments. Character mode projections are relatively more robust to perturbations, however we would also want to include word level n-gram and skipgram features to generate a holistic representation. This establishes a tradeoff between choosing word and character level features. Ideally, one would like to reserve some bits for word and some bits for character level features. We leave the design of the right bit division to future work. Effect of Perturbation on Classification We evaluate LSH projections with text transformations to test whether the projections are robust to input perturbations by nature. We use the character level operations from Section SECREF4. Effect of Perturbation on Classification ::: Evaluation Setup For evaluation, we used the widely popular dialog act and intent prediction datasets. MRDA BIBREF12 is a dialog corpus of multi-party meetings with 6 classes, 78K training and 15K test data; ATIS BIBREF13 is intent prediction dataset for flight reservations with 21 classes, 4.4K training and 893 test examples; and SWDA BIBREF14, BIBREF15 is an open domain dialog corpus between two speakers with 42 classes, 193K training and 5K test examples. For fair comparison, we train LSTM baseline with sub-words and 240 vocabulary size on MRDA, ATIS and SWDA. We uniformly randomly initialized the input word embeddings. We also trained the on-device SGNN model BIBREF6. Then, we created test sets with varying levels of perturbation operations - $\lbrace 20\%,40\%,60\%\rbrace $. Effect of Perturbation on Classification ::: Results Table TABREF15 shows the accuracy results of LSTM and on-device SGNN models. 
Overall, SGNN models are consistently more robust to perturbations across all three datasets and tasks. One of the reasons is that SGNN relies on word and character level n-gram features, while for LSTMs, the character perturbations result in sub-words being mapped to unknown embedding. This leads LSTM to learn to map inputs with many unknown words to the majority class. We observed the same when we perturbed $100\%$ of the words in the input. As shown in Table TABREF18, the standard deviations of the accuracy with LSTMs are much higher compared to SGNN. This further reinforces the fact that SGNNs are fundamentally more robust to both word misspellings and black box attacks. In the future, we are plan to benchmark SGNN with more aggressive and exploitative black box based attacks. Conclusion In this work, we perform a detailed study analyzing why recent LSH-based projection neural networks are effective for language classification tasks. Through extensive analyses including perturbation studies and experiments on multiple tasks, we show that projection-based neural models are resistant to text transformations compared to widely-used approaches like LSTMs with embeddings.
same sentences after applying character level perturbations
edc43e1b75c0970b7003deeabfe3ad247cb1ed83
edc43e1b75c0970b7003deeabfe3ad247cb1ed83_0
Q: Which language is divided into six dialects in the task mentioned in the paper? Text: Introduction As discussed in a recent survey BIBREF0 , discriminating between similar languages, national language varieties, and dialects is an important challenge faced by state-of-the-art language identification systems. The topic has attracted more and more attention from the CL/NLP community in recent years with publications on similar languages of the Iberian peninsula BIBREF1 , and varieties and dialects of several languages such as Greek BIBREF2 and Romanian BIBREF3 to name a few. As evidenced in Section "Related Work" , the focus of most of these studies is the identification of languages and dialects using contemporary data. A few exceptions include the work by trieschnigg2012exploration, who applied language identification methods to historical varieties of Dutch, and the work by CLIarxiv on languages written in cuneiform script: Sumerian and Akkadian. Cuneiform is an ancient writing system invented by the Sumerians and used for more than three millennia. In this paper we describe computational approaches to language identification on texts written in cuneiform script. For this purpose we use the dataset made available by CLIarxiv to participants of the Cuneiform Language Identification (CLI) shared task organized at VarDial 2019 BIBREF4 . Our submission, under the team name PZ, is an adaptation of an n-gram-based meta-classifier system which showed very good performance in previous language identification shared tasks BIBREF5 , BIBREF6 . Furthermore, we compare the performance of the meta-classifier to the submissions to the CLI shared task and, in particular, to a deep learning approach submitted by the team ghpaetzold. It has been shown in previous language identification studies BIBREF7 , BIBREF8 that deep learning approaches do not outperform n-gram-based methods, and we were interested in investigating whether this is also true for the languages and dialects included in CLI. Related Work Since its first edition in 2014, shared tasks on similar language and dialect identification have been organized together with the VarDial workshop co-located with international conferences such as COLING, EACL, and NAACL. The first and most well-attended of these competitions was the Discriminating between Similar Languages (DSL) shared task, which was organized between 2014 and 2017 BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . The DSL provided the first benchmark for evaluation of language identification systems developed for similar languages and language varieties using the DSL Corpus Collection (DSLCC) BIBREF13 , a multilingual benchmarked dataset compiled for this purpose. In 2017 and 2018, VarDial featured evaluation campaigns with multiple shared tasks not only on language and dialect identification but also on other NLP tasks related to language and dialect variation (e.g. morphosyntactic tagging, and cross-lingual dependency parsing). With the exception of the DSL, the language and dialect identification competitions organized at VarDial focused on groups of dialects from the same language such as Arabic (ADI shared task) and German (GDI shared task). The focus of the aforementioned language and dialect identification competitions was diatopic variation and thus the data made available in these competitions was synchronic contemporary corpora. In the 2019 edition of the workshop, for the first time, a task including historical languages was organized.
The CLI shared task provided participants with a dataset containing languages and dialects written in cuneiform script: Sumerian and Akkadian. Akkadian is divided into six dialects in the dataset: Old Babylonian, Middle Babylonian peripheral, Standard Babylonian, Neo Babylonian, Late Babylonian, and Neo Assyrian BIBREF14 . The CLI shared task is an innovative initiative that opens new perspectives in the computational processing of languages written in cuneiform script. There have been a number of studies applying computational methods to process these languages (e.g. Sumerian BIBREF15 ), but with the exception of CLIarxiv, to the best of our knowledge, no language identification studies have been published. CLI is the first competition organized on cuneiform script texts in particular and in historical language identification in general. Methodology and Data The dataset used in the CLI shared task is described in detail in CLIarxiv. All of the data included in the dataset was collected from the Open Richly Annotated Cuneiform Corpus (Oracc) which contains transliterated texts. CLIarxiv created a tool to transform the texts back to the cuneiform script. The dataset features texts from seven languages and dialects amounting to a little over 13,000 texts. The list of languages and dialects is presented in Table 1 . System Description Our submission to the CLI shared task is a system based on a meta-classifier trained on several SVM models. Meta-classifiers BIBREF16 and ensemble learning methods have proved to deliver competitive performance not only in language identification BIBREF5 , BIBREF6 but also in many other text classification tasks BIBREF17 , BIBREF18 . The meta-classifier is an adaptation of previous submissions to VarDial shared tasks described in BIBREF6 . It is essentially a bagging ensemble trained on the outputs of linear SVM classifiers. As features, the system uses the following character n-gram and character skip-gram features: character $n$ -grams of order 1–5; 1-skip character bigrams and trigrams; 2-skip character bigrams and trigrams; 3-skip character bigrams and trigrams. Each feature class is used to train a single linear SVM classifier using LIBLINEAR BIBREF19 . The outputs of these SVM classifiers on the training data are then used to train the meta-classifier. Results Table 2 showcases the results obtained by our team (PZ in bold) and the best submission by each of the eight teams which participating in the CLI shared task. Even though the competition allowed the use of other datasets (open submission), we have used only the dataset provided by the shared task organizers to train our model. Our submission was ranked 4th in the shared task, only a few percentage points below the top-3 systems: NRC-CNRC, tearsofjoy, and Twist Bytes. The meta-classifier achieved much higher performance at distinguishing between these Mesopotamian languages and dialects than the neural model by ghpaetzold, which ranked 6th in the competition. We present this neural model in more detail comparing its performance to our meta-classifier in Section "Comparison to a Neural Model" . Comparison to a Neural Model We take the opportunity to compare the performance of our system with an entirely different type of model submitted by team ghpaetzold. This comparison was motivated by the lower performance obtained by the neural models in comparison to traditional machine learning models in previous VarDial shared tasks BIBREF20 . 
It was made possible due to the collaboration between the ghpaetzold team and ours. As demonstrated by ling2015finding, compositional recurrent neural networks can offer very reliable performance on a variety of NLP tasks. Previous language identification and dialect studies BIBREF7 , BIBREF8 , BIBREF21 and the results of the previous shared tasks organized at VarDial BIBREF12 , BIBREF20 , however, showed that deep learning approaches do not outperform more linear n-gram-based methods so we were interested in comparing the performance of a neural model to the meta-classifier for this dataset. A compositional network is commonly described as a model that builds numerical representations of words based on the sequence of characters that compose them. They are inherently more time-consuming to train than typical neural models that use traditional word vectors because of the added parameters, but they compensate by being able to handle any conceivable word passed as input with very impressive robustness BIBREF22 , BIBREF23 . The model takes as input a sentence and produces a corresponding label as output. First, the model vectorizes each character of each word in the sentence using a typical character embedding layer. It then passes the sequence of vectors through a set of 2 layers of Gated Recurrent Units (GRUs) and produces a numerical representation for each word as a whole. This set of representations is then passed through another 2-layer set of GRUs to produce a final vector for the sentence as a whole, and then a dense layer is used to produce a softmax distribution over the label set. The model uses 25 dimensions for character embeddings, 30 nodes for each GRU layer and 50% dropout. A version of each model was saved after each epoch so that the team could choose the one with the lowest error on the development set as their submission. Inspecting the two confusion matrices depicted in Figures 1 and 1 , we found that the neural model did not do very well at differentiating between Standard Babylonian and Neo Assyrian, as well as between Neo Babylonian and Neo Assyrian, leading to many misclassifications. These two language pairs were also the most challenging for the meta-classifier, however, the number of missclassified instances by the meta-classifier was much lower. Conclusion and Future Work In this paper we presented a meta-classifier system submitted by the team PZ to the Cuneiform Language Identification shared task organized at VarDial 2019. Our submission is an adaptation of a sophisticated meta-classifier which achieved high performance in previous language and dialect identification shared tasks at VarDial BIBREF6 . The meta-classifier combines the output of multiple SVM classifers trained on character-based features. The meta-classifier ranked 4th in the competition among eight teams only a few percentage points below the top-3 systems in the competition. Finally, we compared the performance of the meta-classifier with a compositional RNN model that uses only the text from the instance as input trained on the same dataset. The comparison shows that, while the neural model does offer competitive performance against some of the systems submitted to the shared task, the more elaborate features used by the meta-classifier allows it to much more proficiently distinguish between very similar language pairs, such as Neo Babylonian and Neo Assyrian, leading to a performance gain of 18.2% F-score and 2 positions in the shared task rankings. 
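A PyTorch sketch of the compositional model described in this comparison is given below: 25-dimensional character embeddings, two 30-unit GRU layers that compose characters into word vectors, two more GRU layers that compose word vectors into a sentence vector, and a dense output layer (the softmax is applied through the loss at training time). This is our reading of the description above, not the ghpaetzold team's actual code, and the batching strategy is simplified to one sentence at a time.

```python
import torch
import torch.nn as nn

class CompositionalGRU(nn.Module):
    def __init__(self, n_chars, n_labels, char_dim=25, hidden=30, dropout=0.5):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.char_gru = nn.GRU(char_dim, hidden, num_layers=2,
                               batch_first=True, dropout=dropout)
        self.word_gru = nn.GRU(hidden, hidden, num_layers=2,
                               batch_first=True, dropout=dropout)
        self.out = nn.Linear(hidden, n_labels)

    def forward(self, sentence):
        # sentence: list of words, each word a 1-D LongTensor of character ids
        word_vecs = []
        for word in sentence:
            _, h = self.char_gru(self.char_emb(word).unsqueeze(0))
            word_vecs.append(h[-1])                # final state of last layer = word vector
        word_seq = torch.stack(word_vecs, dim=1)   # (1, n_words, hidden)
        _, h = self.word_gru(word_seq)
        return self.out(h[-1])                     # logits over the label set

# Hypothetical usage with 7 labels (the seven CLI languages/dialects):
# model = CompositionalGRU(n_chars=95, n_labels=7)
# logits = model([torch.tensor([3, 9, 4]), torch.tensor([7, 2, 5, 8])])
```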
The results obtained by the meta-classifier in comparison to the neural model corroborate the findings of previous studies BIBREF7 in the last two VarDial evaluation campaigns BIBREF12 , BIBREF20 . In the future we would like to analyze the results obtained by the highest performing teams in the CLI shared task. The top team achieved the best performance in the competition using a neural-based method. This is, to the best of our knowledge, the first time in which a deep learning approach outperforms traditional machine learning methods in one of the VarDial shared tasks. The great performance obtained by the NRC-CNRC team might be explained by the use of more suitable deep learning methods such as BERT BIBREF24 . Acknowledgements We would like to thank Shervin Malmasi for his valuable suggestions and feedback. We further thank the CLI shared task organizer, Tommi Jauhiainen, for organizing this interesting shared task. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research.
Akkadian.
0c3924214572579ddbc1b4a87c7f7842ef20ff1b
0c3924214572579ddbc1b4a87c7f7842ef20ff1b_0
Q: What is one of the first writing systems in the world? Text: Introduction As discussed in a recent survey BIBREF0 , discriminating between similar languages, national language varieties, and dialects is an important challenge faced by state-of-the-art language identification systems. The topic has attracted more and more attention from the CL/NLP community in recent years with publications on similar languages of the Iberian peninsula BIBREF1 , and varieties and dialects of several languages such as Greek BIBREF2 and Romanian BIBREF3 to name a few. As evidenced in Section "Related Work" , the focus of most of these studies is the identification of languages and dialects using contemporary data. A few exceptions include the work by trieschnigg2012exploration who applied language identification methods to historical varieties of Dutch and the work by CLIarxiv on languages written in cuneiform script: Sumerian and Akkadian. Cuneiform is an ancient writing system invented by the Sumerians for more than three millennia. In this paper we describe computational approaches to language identification on texts written in cuneiform script. For this purpose we use the dataset made available by CLIarxiv to participants of the Cuneiform Language Identification (CLI) shared task organized at VarDial 2019 BIBREF4 . Our submission, under the team name PZ, is an adaptation of an n-gram-based meta-classifier system which showed very good performance in previous language identification shared tasks BIBREF5 , BIBREF6 . Furthermore, we compare the performance of the meta-classifier to the submissions to the CLI shared task and, in particular, to a deep learning approach submitted by the team ghpaetzold. It has been shown in previous language identification studies BIBREF7 , BIBREF8 that deep learning approaches do not outperform n-gram-based methods and we were interested in investigating whether this is also true for the languages and dialects included in CLI. Related Work Since its first edition in 2014, shared tasks on similar language and dialect identification have been organized together with the VarDial workshop co-located with international conferences such as COLING, EACL, and NAACL. The first and most well-attended of these competitions was the Discrminating between Similar Languages (DSL) shared task which has been organized between 2014 and 2017 BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . The DSL provided the first benchmark for evaluation of language identification systems developed for similar languages and language varieties using the DSL Corpus Collection (DSLCC) BIBREF13 , a multilingual benchmarked dataset compiled for this purpose. In 2017 and 2018, VarDial featured evaluation campaigns with multiple shared tasks not only on language and dialect identification but also on other NLP tasks related to language and dialect variation (e.g. morphosyntactic tagging, and cross-lingual dependency parsing). With the exception of the DSL, the language and dialect identification competitions organized at VarDial focused on groups of dialects from the same language such as Arabic (ADI shared task) and German (GDI shared task). The focus of the aforementioned language and dialect identification competitions was diatopic variation and thus the data made available in these competitions was synchronic contemporary corpora. In the 2019 edition of the workshop, for the first time, a task including historical languages was organized. 
The CLI shared task provided participants with a dataset containing languages and dialects written in cuneiform script: Sumerian and Akkadian. Akkadian is divided into six dialects in the dataset: Old Babylonian, Middle Babylonian peripheral, Standard Babylonian, Neo Babylonian, Late Babylonian, and Neo Assyrian BIBREF14 . The CLI shared task is an innovative initiative that opens new perspectives in the computational processing of languages written in cuneiform script. There have been a number of studies applying computational methods to process these languages (e.g. Sumerian BIBREF15 ), but with the exception of CLIarxiv, to the best of our knowledge, no language identification studies have been published. CLI is the first competition organized on cuneiform script texts in particular and in historical language identification in general. Methodology and Data The dataset used in the CLI shared task is described in detail in CLIarxiv. All of the data included in the dataset was collected from the Open Richly Annotated Cuneiform Corpus (Oracc) which contains transliterated texts. CLIarxiv created a tool to transform the texts back to the cuneiform script. The dataset features texts from seven languages and dialects amounting to a little over 13,000 texts. The list of languages and dialects is presented in Table 1 . System Description Our submission to the CLI shared task is a system based on a meta-classifier trained on several SVM models. Meta-classifiers BIBREF16 and ensemble learning methods have proved to deliver competitive performance not only in language identification BIBREF5 , BIBREF6 but also in many other text classification tasks BIBREF17 , BIBREF18 . The meta-classifier is an adaptation of previous submissions to VarDial shared tasks described in BIBREF6 . It is essentially a bagging ensemble trained on the outputs of linear SVM classifiers. As features, the system uses the following character n-gram and character skip-gram features: character $n$ -grams of order 1–5; 1-skip character bigrams and trigrams; 2-skip character bigrams and trigrams; 3-skip character bigrams and trigrams. Each feature class is used to train a single linear SVM classifier using LIBLINEAR BIBREF19 . The outputs of these SVM classifiers on the training data are then used to train the meta-classifier. Results Table 2 showcases the results obtained by our team (PZ in bold) and the best submission by each of the eight teams which participating in the CLI shared task. Even though the competition allowed the use of other datasets (open submission), we have used only the dataset provided by the shared task organizers to train our model. Our submission was ranked 4th in the shared task, only a few percentage points below the top-3 systems: NRC-CNRC, tearsofjoy, and Twist Bytes. The meta-classifier achieved much higher performance at distinguishing between these Mesopotamian languages and dialects than the neural model by ghpaetzold, which ranked 6th in the competition. We present this neural model in more detail comparing its performance to our meta-classifier in Section "Comparison to a Neural Model" . Comparison to a Neural Model We take the opportunity to compare the performance of our system with an entirely different type of model submitted by team ghpaetzold. This comparison was motivated by the lower performance obtained by the neural models in comparison to traditional machine learning models in previous VarDial shared tasks BIBREF20 . 
It was made possible due to the collaboration between the ghpaetzold team and ours. As demonstrated by ling2015finding, compositional recurrent neural networks can offer very reliable performance on a variety of NLP tasks. Previous language identification and dialect studies BIBREF7 , BIBREF8 , BIBREF21 and the results of the previous shared tasks organized at VarDial BIBREF12 , BIBREF20 , however, showed that deep learning approaches do not outperform more linear n-gram-based methods so we were interested in comparing the performance of a neural model to the meta-classifier for this dataset. A compositional network is commonly described as a model that builds numerical representations of words based on the sequence of characters that compose them. They are inherently more time-consuming to train than typical neural models that use traditional word vectors because of the added parameters, but they compensate by being able to handle any conceivable word passed as input with very impressive robustness BIBREF22 , BIBREF23 . The model takes as input a sentence and produces a corresponding label as output. First, the model vectorizes each character of each word in the sentence using a typical character embedding layer. It then passes the sequence of vectors through a set of 2 layers of Gated Recurrent Units (GRUs) and produces a numerical representation for each word as a whole. This set of representations is then passed through another 2-layer set of GRUs to produce a final vector for the sentence as a whole, and then a dense layer is used to produce a softmax distribution over the label set. The model uses 25 dimensions for character embeddings, 30 nodes for each GRU layer and 50% dropout. A version of each model was saved after each epoch so that the team could choose the one with the lowest error on the development set as their submission. Inspecting the two confusion matrices depicted in Figures 1 and 1 , we found that the neural model did not do very well at differentiating between Standard Babylonian and Neo Assyrian, as well as between Neo Babylonian and Neo Assyrian, leading to many misclassifications. These two language pairs were also the most challenging for the meta-classifier, however, the number of missclassified instances by the meta-classifier was much lower. Conclusion and Future Work In this paper we presented a meta-classifier system submitted by the team PZ to the Cuneiform Language Identification shared task organized at VarDial 2019. Our submission is an adaptation of a sophisticated meta-classifier which achieved high performance in previous language and dialect identification shared tasks at VarDial BIBREF6 . The meta-classifier combines the output of multiple SVM classifers trained on character-based features. The meta-classifier ranked 4th in the competition among eight teams only a few percentage points below the top-3 systems in the competition. Finally, we compared the performance of the meta-classifier with a compositional RNN model that uses only the text from the instance as input trained on the same dataset. The comparison shows that, while the neural model does offer competitive performance against some of the systems submitted to the shared task, the more elaborate features used by the meta-classifier allows it to much more proficiently distinguish between very similar language pairs, such as Neo Babylonian and Neo Assyrian, leading to a performance gain of 18.2% F-score and 2 positions in the shared task rankings. 
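For illustration, the following scikit-learn sketch mirrors the two-stage design described in the system description above: one linear SVM per character n-gram or skip-gram feature class, whose out-of-fold predictions form the input of a bagging meta-classifier. The skip-gram extractor, the cross-validation scheme and all hyperparameters are simplified stand-ins rather than the submitted system's configuration.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import LabelEncoder

def char_skipgrams(text, k, n):
    """Simplified k-skip character n-grams (fixed gap of k between characters)."""
    grams = []
    for i in range(len(text)):
        idx = [i + j * (k + 1) for j in range(n)]
        if idx[-1] < len(text):
            grams.append("".join(text[j] for j in idx))
    return grams

def build_feature_classes():
    classes = {"char_1-5": CountVectorizer(analyzer="char", ngram_range=(1, 5))}
    for k in (1, 2, 3):
        for n in (2, 3):
            classes[f"{k}-skip-{n}gram"] = CountVectorizer(
                analyzer=lambda t, k=k, n=n: char_skipgrams(t, k, n))
    return classes

def train_meta_classifier(texts, labels):
    y = LabelEncoder().fit_transform(labels)
    meta_features, base_models = [], {}
    for name, vectorizer in build_feature_classes().items():
        X = vectorizer.fit_transform(texts)
        svm = LinearSVC()
        # Out-of-fold predictions so the meta-classifier never sees fitted outputs.
        preds = cross_val_predict(svm, X, y, cv=5)
        meta_features.append(preds.reshape(-1, 1))
        base_models[name] = (vectorizer, svm.fit(X, y))
    meta = BaggingClassifier(n_estimators=50)      # bagging ensemble over the SVM outputs
    meta.fit(np.hstack(meta_features), y)
    return base_models, meta
```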
The results obtained by the meta-classifier in comparison to the neural model corroborate the findings of previous studies BIBREF7 in the last two VarDial evaluation campaigns BIBREF12 , BIBREF20 . In the future we would like to analyze the results obtained by the highest performing teams in the CLI shared task. The top team achieved the best performance in the competition using a neural-based method. This is, to the best of our knowledge, the first time in which a deep learning approach outperforms traditional machine learning methods in one of the VarDial shared tasks. The great performance obtained by the NRC-CNRC team might be explained by the use of more suitable deep learning methods such as BERT BIBREF24 . Acknowledgements We would like to thank Shervin Malmasi for his valuable suggestions and feedback. We further thank the CLI shared task organizer, Tommi Jauhiainen, for organizing this interesting shared task. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research.
Cuneiform
4519afe91b1042876d7c021487d98e2d72a09861
4519afe91b1042876d7c021487d98e2d72a09861_0
Q: How do they obtain distant supervision rules for predicting relations? Text: Introduction This work discusses two information extraction systems for identifying temporal information in clinical text, submitted to SemEval-2016 Task 12 : Clinical TempEval BIBREF0 . We participated in tasks from both phases: (1) identifying text spans of time and event mentions; and (2) predicting relations between clinical events and document creation time. Temporal information extraction is the task of constructing a timeline or ordering of all events in a given document. In the clinical domain, this is a key requirement for medical reasoning systems as well as longitudinal research into the progression of disease. While timestamps and the structured nature of the electronic medical record (EMR) directly capture some aspects of time, a large amount of information on the progression of disease is found in the unstructured text component of the EMR where temporal structure is less obvious. We examine a deep-learning approach to sequence labeling using a vanilla recurrent neural network (RNN) with word embeddings, as well as a joint inference, structured prediction approach using Stanford's knowledge base construction framework DeepDive BIBREF1 . Our DeepDive application outperformed the RNN and scored similarly to 2015's best-in-class extraction systems, even though it only used a small set of context window and dictionary features. Extraction performance, however lagged this year's best system submission. For document creation time relations, we again use DeepDive. Our system examined a simple temporal distant supervision rule for labeling time expressions and linking them to nearby event mentions via inference rules. Overall system performance was better than this year's median submission, but again fell short of the best system. Methods and Materials Phase 1 of the challenge required parsing clinical documents to identify Timex3 and Event temporal entity mentions in text. Timex3 entities are expressions of time, ranging from concrete dates to phrases describing intervals like “the last few months." Event entities are broadly defined as anything relevant to a patient's clinical timeline, e.g., diagnoses, illnesses, procedures. Entity mentions are tagged using a document collection of clinic and pathology notes from the Mayo Clinic called the THYME (Temporal History of Your Medical Events) corpus BIBREF2 . We treat Phase 1 as a sequence labeling task and examine several models for labeling entities. We discuss our submitted tagger which uses a vanilla RNN and compare its performance to a DeepDive-based system, which lets us encode domain knowledge and sequence structure into a probabilistic graphic model. For Phase 2, we are given all test set entities and asked to identify the temporal relationship between an Event mention and corresponding document creation time. This relation is represented as a classification problem, assigning event attributes from the label set {Before, Overlap, Before/Overlap, After}. We use DeepDive to define several inference rules for leveraging neighboring pairs of Event and Timex3 mentions to better reason about temporal labels. Recurrent Neural Networks Vanilla (or Elman-type) RNNs are recursive neural networks with a linear chain structure BIBREF3 . RNNs are similar to classical feedforward neural networks, except that they incorporate an additional hidden context layer that forms a time-lagged, recurrent connection (a directed cycle) to the primary hidden layer. 
In the canonical RNN design, the output of the hidden layer at time step $t$ is retained in the context layer and fed back into the hidden layer at time step $t+1$; this enables the RNN to explicitly model some aspects of sequence history (see Figure FIGREF4 ). Each word in our vocabulary is represented as a $d$-dimensional vector in a lookup table of $|V| \times d$ parameters (i.e., our learned embedding matrix), where $|V|$ is the vocabulary size. Input features then consist of a concatenation of these embeddings to represent a context window surrounding our target word. The output layer then emits a probability distribution in the dimension of the candidate label set. The lookup table is shared across all input instances and updated during training. Formally our RNN definition follows BIBREF4 : $h(t) = f(W_{xh}\,x(t) + W_{hh}\,h(t-1))$, where $x(t)$ is our concatenated context window of word embeddings, $h(t)$ is our hidden layer, $W_{xh}$ is the input-to-hidden layer matrix, $W_{hh}$ is the hidden layer-to-context layer matrix, and $f$ is the activation function (logistic in this work), $f(z) = 1/(1 + e^{-z})$. The output layer $y(t)$ consists of a softmax activation function $g$, $y(t) = g(W_{hy}\,h(t))$ with $g(z_m) = e^{z_m} / \sum _k e^{z_k}$, where $W_{hy}$ is the output layer matrix. Training is done using batch gradient descent with one sentence per batch. Our RNN implementation is based on code available as part of Theano v0.7 BIBREF5 . For baseline RNN models, all embedding parameters are initialized randomly in the range [-1.0, 1.0]. For all other word-based models, embedding vectors are initialized or pre-trained with parameters trained on different clinical corpora. Pre-training generally improves classification performance over random initialization and provides a mechanism to leverage large collections of unlabeled data for use in semi-supervised learning BIBREF6 . We create word embeddings using two collections of clinical documents: the MIMIC-III database containing 2.4M notes from critical care patients at Beth Israel Deaconess Medical Center BIBREF7 ; and the University of Iowa Hospitals and Clinics (UIHC) corpus, containing 15M predominantly inpatient notes (see Table TABREF6 ). All word embeddings in this document are trained with word2vec BIBREF8 using the Skip-gram model with a 10 token window size. We generated 100 and 300 dimensional embeddings based on prior work tuning representation sizes in clinical domains BIBREF9 . We train RNN models for three tasks in Phase 1: a character-level RNN for tokenization; and two word-level RNNs for POS tagging and entity labeling. Word-level RNNs are pre-trained with the embeddings described above, while character-level RNNs are randomly initialized. All words are normalized by lowercasing tokens and replacing digits with N, e.g., 01-Apr-2016 becomes NN-apr-NNNN, to improve generalizability and restrict vocabulary size. Characters are left as unnormalized input. In the test data set, unknown words/characters are represented using the special token <UNK> . All hyperparameters were selected using a randomized grid search. Tokenization: Word tokenization and sentence boundary detection are done simultaneously using a character-level RNN. Each character is assigned a tag from 3 classes: WORD(W) if a character is a member of a token that does not end a sentence; END(E) for a token that does end a sentence; and whitespace O. We use IOB2 tagging to encode the range of token spans. Models are trained using THYME syntactic annotations from colon and brain cancer notes.
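A minimal numpy sketch of the Elman-style forward pass defined above is shown below; the matrix names mirror the equations (input-to-hidden, hidden-to-context, hidden-to-output), while the batching, training loop and Theano specifics are omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_forward(X, W_xh, W_hh, W_hy):
    """X: (seq_len, context_window * embedding_dim) concatenated context windows."""
    h = np.zeros(W_hh.shape[1])             # initial context layer
    outputs = []
    for x_t in X:
        h = sigmoid(x_t @ W_xh + h @ W_hh)  # hidden state fed back through the context layer
        outputs.append(softmax(h @ W_hy))   # distribution over the IOB2 label set
    return np.array(outputs)
```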
Training data consists of all sentences, padded with 5 characters from the left and right neighboring sentences. Each character is represented by a 16-dimensional embedding (from an alphabet of 90 characters) and an 11 character context window. The final prediction task input is one long character sequence per document. We found that the tokenizer consistently made errors conflating E and W classes (e.g., B-W, I-E, I-E), so after tagging we enforce an additional consistency constraint on B-* and I-* tags so that contiguous BEGIN/INSIDE spans share the same class. Part-of-speech Tagging: We trained a POS tagger using THYME syntactic annotations. A model using 100-dimensional UIHC-CN embeddings (clinic notes) and a context window of $\pm $2 words performed best on held out test data, with an accuracy of 97.67% and F$_1$ = 0.973. TIMEX3 and EVENT Span Tagging: We train separate models for each entity type, testing different pre-training schemes using 100 and 300-dimensional embeddings trained on our large, unlabeled clinical corpora. Both tasks use context windows of $\pm $2 words (i.e., a concatenated input of 5 $d$-dimensional word embeddings) and a learning rate of 0.01. We use 80 hidden units for 100-dimensional embedding models and 256 units for 300-dimensional models. Output tags are in the IOB2 tagging format. DeepDive DeepDive developers build domain knowledge into applications using a combination of distant supervision rules, which use heuristics to generate noisy training examples, and inference rules, which use factors to define relationships between random variables. This design pattern allows us to quickly encode domain knowledge into a probabilistic graphical model and do joint inference over a large space of random variables. For example, we want to capture the relationship between Event entities and their closest Timex3 mentions in text, since that provides some information about when the Event occurred relative to document creation time. Timex3s lack a class DocRelTime, but we can use a distant supervision rule to generate a noisy label that we then leverage to predict neighboring Event labels. We also know that the set of all Event/Timex3 mentions within a given note section, such as patient history, provides discriminative information that should be shared across labels in that section. DeepDive lets us easily define these structures by linking random variables (in this case all entity class labels) with factors, directly encoding domain knowledge into our learning algorithm. Phase 1: Our baseline tagger consists of three inference rules: logistic regression, conditional random fields (CRF), and skip-chain CRF BIBREF10 . In CRFs, factor edges link adjoining words in a linear chain structure, capturing label dependencies between neighboring words. Skip-chain CRFs generalize this idea to include skip edges, which can connect non-neighboring words. For example, we can link labels for all identical words in a given window of sentences. We use DeepDive's feature library, ddlib, to generate common textual features like context windows and dictionary membership. We explored combinations of left/right windows of 2 neighboring words and POS tags, letter case, and entity dictionaries for all vocabulary identified by the challenge's baseline memorization rule, i.e., all phrases that are labeled as true entities $\ge $50% of the time in the training set.
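As a concrete example, the baseline memorization rule used to build these entity dictionaries can be sketched as follows; the data structure and the exact threshold handling are our own illustration of the rule, not the challenge's reference implementation.

```python
from collections import defaultdict

def build_entity_dictionary(training_mentions, threshold=0.5):
    """training_mentions: iterable of (phrase, is_entity) pairs over all training occurrences."""
    counts = defaultdict(lambda: [0, 0])          # phrase -> [entity count, total count]
    for phrase, is_entity in training_mentions:
        counts[phrase][0] += int(is_entity)
        counts[phrase][1] += 1
    return {p for p, (k, n) in counts.items() if k / n >= threshold}

# Hypothetical usage:
# entity_dict = build_entity_dictionary([("chest pain", True), ("chest pain", True), ("the", False)])
```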
Feature Ablation Tests We evaluate 3 feature set combinations to determine how each contributes predictive power to our system. Run 1: dictionary features, letter case Run 2: dictionary features, letter case, context window ($\pm $2 normalized words) Run 3: dictionary features, letter case, context window ($\pm $2 normalized words), POS tags Phase 2: In order to predict the relationship between an event and the creation time of its parent document, we assign a DocRelTime random variable to every Timex3 and Event mention. For Events, these values are provided by the training data; for Timex3s we have to compute class labels. Around 42% of Timex3 mentions are simple dates (“12/29/08", “October 16", etc.) and can be naively canonicalized to a universal timestamp. This is done using regular expressions to identify common date patterns and heuristics to deal with incomplete dates. The missing year in “October 16", for example, can be filled in using the nearest preceding date mention; if that isn't available we use the document creation year. These mentions are then assigned a class using the parent document's DocTime value and any revision timestamps. Other Timex3 mentions are more ambiguous, so we use a distant supervision approach. Phrases like “currently" and “today's" tend to occur near Events that overlap the current document creation time, while “ago" or “$n$-years" indicate past events. These dominant temporal associations can be learned from training data and then used to label Timex3s. Finally, we define a logistic regression rule to predict entity DocRelTime values as well as specify a linear skip-chain factor over Event mentions and their nearest Timex3 neighbor, encoding the baseline system heuristic directly as an inference rule. Phase 1 Word tokenization performance was high, F$_1$ = 0.993, while sentence boundary detection was lower with F$_1$ = 0.938 (document micro average F$_1$ = 0.985). Tokenization errors were largely confined to splitting numbers and hyphenated words (“ex-smoker" vs. “ex - smoker"), which has minimal impact on upstream sequence labeling. Sentence boundary errors were largely missed terminal words, creating longer sentences, which is preferable to short, less informative sequences in terms of impact on RNN mini-batches. Tables TABREF13 and TABREF14 contain results for all sequence labeling models. For Timex3 spans, the best RNN ensemble model performed poorly compared to the winning system (0.706 vs. 0.795). DeepDive runs 2-3 performed as well as 2015's best system, but also fell short of the top system (0.730 vs. 0.795). Event spans were easier to tag and RNN models compared favorably with DeepDive, the former scoring higher recall and the latter higher precision. Both approaches scored below this year's best system (0.885 vs. 0.903). Phase 2 Finally, Table TABREF16 contains our DocRelTime relation extraction results. Our simple distant supervision rule leads to better performance than the median system submission, but also falls substantially short of the current state of the art. Discussion Randomly initialized RNNs generally weren't competitive with our best performing structured prediction models (DeepDive runs 2-3), which isn't surprising considering the small amount of training data available compared to typical deep-learning contexts.
There was a statistically significant improvement for RNNs pre-trained with clinical text word2vec embeddings, reflecting the consensus that embeddings capture some syntactic and semantic information that must otherwise be manually encoded as features. Performance was virtually the same across all embedding types, independent of corpus size, note type, etc. While embeddings trained on more data perform better in semantic tasks like synonym detection, it's unclear if that representational strength is important here. Similar performance might also just reflect the general ubiquity with which temporal vocabulary occurs in all clinical note contexts. Alternatively, vanilla RNNs rarely achieve state-of-the-art performance in sequence labeling tasks due to well-known issues surrounding the vanishing or exploding gradient effect BIBREF12 . More sophisticated recurrent architectures with gated units such as the Long Short-Term Memory (LSTM) BIBREF13 and gated recurrent unit BIBREF14 , or recursive structures like the Tree-LSTM BIBREF15 , have shown strong representational power in other sequence labeling tasks. Such approaches might perform better in this setting. DeepDive's feature generator libraries let us easily create a large space of binary features and then let regularization address overfitting. In our extraction system, just using a context window of $\pm $2 words and dictionaries representing the baseline memorization rules was enough to achieve median system performance. POS tag features had no statistically significant impact on performance in either Event or Timex3 extraction. For classifying an Event's document creation time relation, our DeepDive application essentially implements the joint inference version of the baseline memorization rule, leveraging entity proximity to increase predictive performance. A simple distant supervision rule that canonicalizes Timex3 timestamps and predicts nearby Events leads to a slight performance boost, suggesting that using a larger collection of unlabeled note data could lead to further increases. While our systems did not achieve current state-of-the-art performance, DeepDive matched last year's top submission for Timex3 and Event tagging with very little upfront engineering – around a week of dedicated development time. One of the primary goals of this work was to avoid an over-engineered extraction pipeline, instead relying on feature generation libraries or deep learning approaches to model underlying structure. Both systems explored in this work were successful to some extent, though future work remains in order to close the performance gap between these approaches and current state-of-the-art systems. Acknowledgments This work was supported by the Mobilize Center, a National Institutes of Health Big Data to Knowledge (BD2K) Center of Excellence supported through Grant U54EB020405.
dominant temporal associations can be learned from training data
0cfaca6f3f33ebdb338c5f991f6a7a33ff33844d
0cfaca6f3f33ebdb338c5f991f6a7a33ff33844d_0
Q: Which structured prediction approach do they adopt for temporal entity extraction? Text: Introduction This work discusses two information extraction systems for identifying temporal information in clinical text, submitted to SemEval-2016 Task 12 : Clinical TempEval BIBREF0 . We participated in tasks from both phases: (1) identifying text spans of time and event mentions; and (2) predicting relations between clinical events and document creation time. Temporal information extraction is the task of constructing a timeline or ordering of all events in a given document. In the clinical domain, this is a key requirement for medical reasoning systems as well as longitudinal research into the progression of disease. While timestamps and the structured nature of the electronic medical record (EMR) directly capture some aspects of time, a large amount of information on the progression of disease is found in the unstructured text component of the EMR where temporal structure is less obvious. We examine a deep-learning approach to sequence labeling using a vanilla recurrent neural network (RNN) with word embeddings, as well as a joint inference, structured prediction approach using Stanford's knowledge base construction framework DeepDive BIBREF1 . Our DeepDive application outperformed the RNN and scored similarly to 2015's best-in-class extraction systems, even though it only used a small set of context window and dictionary features. Extraction performance, however lagged this year's best system submission. For document creation time relations, we again use DeepDive. Our system examined a simple temporal distant supervision rule for labeling time expressions and linking them to nearby event mentions via inference rules. Overall system performance was better than this year's median submission, but again fell short of the best system. Methods and Materials Phase 1 of the challenge required parsing clinical documents to identify Timex3 and Event temporal entity mentions in text. Timex3 entities are expressions of time, ranging from concrete dates to phrases describing intervals like “the last few months." Event entities are broadly defined as anything relevant to a patient's clinical timeline, e.g., diagnoses, illnesses, procedures. Entity mentions are tagged using a document collection of clinic and pathology notes from the Mayo Clinic called the THYME (Temporal History of Your Medical Events) corpus BIBREF2 . We treat Phase 1 as a sequence labeling task and examine several models for labeling entities. We discuss our submitted tagger which uses a vanilla RNN and compare its performance to a DeepDive-based system, which lets us encode domain knowledge and sequence structure into a probabilistic graphic model. For Phase 2, we are given all test set entities and asked to identify the temporal relationship between an Event mention and corresponding document creation time. This relation is represented as a classification problem, assigning event attributes from the label set {Before, Overlap, Before/Overlap, After}. We use DeepDive to define several inference rules for leveraging neighboring pairs of Event and Timex3 mentions to better reason about temporal labels. Recurrent Neural Networks Vanilla (or Elman-type) RNNs are recursive neural networks with a linear chain structure BIBREF3 . RNNs are similar to classical feedforward neural networks, except that they incorporate an additional hidden context layer that forms a time-lagged, recurrent connection (a directed cycle) to the primary hidden layer. 
In the canonical RNN design, the output of the hidden layer at time step INLINEFORM0 is retained in the context layer and fed back into the hidden layer at INLINEFORM1 this enables the RNN to explicitly model some aspects of sequence history. (see Figure FIGREF4 ). Each word in our vocabulary is represented as an INLINEFORM0 -dimensional vector in a lookup table of INLINEFORM1 x INLINEFORM2 parameters (i.e., our learned embedding matrix). Input features then consist of a concatenation of these embeddings to represent a context window surrounding our target word. The output layer then emits a probability distribution in the dimension of the candidate label set. The lookup table is shared across all input instances and updated during training. Formally our RNN definition follows BIBREF4 : INLINEFORM0 where INLINEFORM0 is our concatenated context window of word embeddings, INLINEFORM1 is our hidden layer, INLINEFORM2 is the input-to-hidden layer matrix, INLINEFORM3 is the hidden layer-to-context layer matrix, and INLINEFORM4 is the activation function (logistic in this work). INLINEFORM0 The output layer INLINEFORM0 consists of a softmax activation function INLINEFORM1 INLINEFORM0 INLINEFORM1 where INLINEFORM0 is the output layer matrix. Training is done using batch gradient descent using one sentence per batch. Our RNN implementation is based on code available as part of Theano v0.7 BIBREF5 . For baseline RNN models, all embedding parameters are initialized randomly in the range [-1.0, 1.0]. For all other word-based models, embedding vectors are initialized or pre-trained with parameters trained on different clinical corpora. Pre-training generally improves classification performance over random initialization and provides a mechanism to leverage large collections of unlabeled data for use in semi-supervised learning BIBREF6 . We create word embeddings using two collections of clinical documents: the MIMIC-III database containing 2.4M notes from critical care patients at Beth Israel Deaconess Medical Center BIBREF7 ; and the University of Iowa Hospitals and Clinics (UIHC) corpus, containing 15M predominantly inpatient notes (see Table TABREF6 ). All word embeddings in this document are trained with word2vec BIBREF8 using the Skip-gram model, trained with a 10 token window size. We generated 100 and 300 dimensional embeddings based on prior work tuning representation sizes in clinical domains BIBREF9 . We train RNN models for three tasks in Phase 1: a character-level RNN for tokenization; and two word-level RNNs for POS tagging and entity labeling. Word-level RNNs are pre-trained with the embeddings described above, while character-level RNNs are randomly initialized. All words are normalized by lowercasing tokens and replacing digits with N, e.g., 01-Apr-2016 becomes NN-apr-NNNN to improve generalizability and restrict vocabulary size. Characters are left as unnormalized input. In the test data set, unknown words/characters are represented using the special token <UNK> . All hyperparameters were selected using a randomized grid search. Tokenization: Word tokenization and sentence boundary detection are done simultaneously using a character-level RNN. Each character is assigned a tag from 3 classes: WORD(W) if a character is a member of a token that does not end a sentence; END(E) for a token that does end a sentence, and whitespace O. We use IOB2 tagging to encode the range of token spans. Models are trained using THYME syntactic annotations from colon and brain cancer notes. 
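The RNN equations above were lost to INLINEFORM placeholders during extraction; the surrounding description amounts to an Elman recurrence in which the hidden state is a logistic function of the current context-window embedding plus the previous hidden state, followed by a softmax output layer. The NumPy sketch below reconstructs that forward pass under those assumptions; the matrix names (W_in, W_rec, W_out) and the toy dimensions are illustrative, not the paper's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def elman_forward(X, W_in, W_rec, W_out):
    """Run an Elman-type RNN over a sequence of input vectors X.

    X     : (T, d_in)   concatenated context-window embeddings, one row per step
    W_in  : (d_h, d_in) input-to-hidden matrix
    W_rec : (d_h, d_h)  context(previous hidden)-to-hidden matrix
    W_out : (d_lab, d_h) hidden-to-output matrix
    Returns a (T, d_lab) matrix of per-step label distributions.
    """
    T, _ = X.shape
    h = np.zeros(W_in.shape[0])                 # context layer, initially empty
    outputs = []
    for t in range(T):
        h = sigmoid(W_in @ X[t] + W_rec @ h)    # logistic activation
        outputs.append(softmax(W_out @ h))      # distribution over IOB2 tags
    return np.stack(outputs)

# toy usage: 5-word windows of 100-d embeddings (500-d input), 80 hidden units, 7 tags
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 500))
probs = elman_forward(X,
                      rng.normal(scale=0.1, size=(80, 500)),
                      rng.normal(scale=0.1, size=(80, 80)),
                      rng.normal(scale=0.1, size=(7, 80)))
print(probs.shape)  # (12, 7)
```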
Training data consists of all sentences, padded with 5 characters from the left and right neighboring sentences. Each character is represented by a 16-dimensional embedding (from an alphabet of 90 characters) and an 11 character context window. The final prediction task input is one, long character sequence per-document. We found that the tokenizer consistently made errors conflating E and W classes (e.g., B-W, I-E, I-E) so after tagging, we enforce an additional consistency constraint on B-* and I-* tags so that contiguous BEGIN/INSIDE spans share the same class. Part-of-speech Tagging: We trained a POS tagger using THYME syntactic annotations. A model using 100-dimensional UIHC-CN embeddings (clinic notes) and a context window of INLINEFORM0 2 words performed best on held out test data, with an accuracy of 97.67% and F INLINEFORM1 = 0.973. TIMEX3 and EVENT Span Tagging: We train separate models for each entity type, testing different pre-training schemes using 100 and 300-dimensional embeddings trained on our large, unlabeled clinical corpora. Both tasks use context windows of INLINEFORM0 2 words (i.e., concatenated input of 5 INLINEFORM1 -d word embeddings) and a learning rate of 0.01. We use 80 hidden units for 100-dimensional embeddings models and 256 units for 300-dimensional models. Output tags are in the IOB2 tagging format. DeepDive DeepDive developers build domain knowledge into applications using a combination of distant supervision rules, which use heuristics to generate noisy training examples, and inference rules which use factors to define relationships between random variables. This design pattern allows us to quickly encode domain knowledge into a probabilistic graphical model and do joint inference over a large space of random variables. For example, we want to capture the relationship between Event entities and their closest Timex3 mentions in text since that provides some information about when the Event occurred relative to document creation time. Timex3s lack a class DocRelTime, but we can use a distant supervision rule to generate a noisy label that we then leverage to predict neighboring Event labels. We also know that the set of all Event/Timex3 mentions within a given note section, such as patient history, provides discriminative information that should be shared across labels in that section. DeepDive lets us easily define these structures by linking random variables (in this case all entity class labels) with factors, directly encoding domain knowledge into our learning algorithm. Phase 1: Our baseline tagger consists of three inference rules: logistic regression, conditional random fields (CRF), and skip-chain CRF BIBREF10 . In CRFs, factor edges link adjoining words in a linear chain structure, capturing label dependencies between neighboring words. Skip-chain CRFs generalize this idea to include skip edges, which can connect non-neighboring words. For example, we can link labels for all identical words in a given window of sentences. We use DeepDive's feature library, ddlib, to generate common textual features like context windows and dictionary membership. We explored combinations of left/right windows of 2 neighboring words and POS tags, letter case, and entity dictionaries for all vocabulary identified by the challenge's baseline memorization rule, i.e., all phrases that are labeled as true entities INLINEFORM0 50% of the time in the training set. 
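To make the dictionary features concrete, here is a minimal sketch of the memorization-rule dictionary (phrases labeled as true entities at least 50% of the time in the training set) together with simple window/dictionary/case features; the feature names and data layout are assumptions for illustration, not the actual ddlib calls.

```python
from collections import Counter

def build_memorization_dictionary(training_phrases, threshold=0.5):
    """Keep phrases that are labeled as true entities at least `threshold`
    of the times they occur. `training_phrases` is an iterable of
    (phrase, is_entity) pairs over training-set occurrences."""
    seen, positive = Counter(), Counter()
    for phrase, is_entity in training_phrases:
        key = phrase.lower()
        seen[key] += 1
        if is_entity:
            positive[key] += 1
    return {p for p in seen if positive[p] / seen[p] >= threshold}

def dictionary_features(tokens, i, entity_dict, window=2):
    """Binary features for token i: dictionary membership, letter case,
    and a +/- `window` context of normalized neighboring words."""
    feats = {
        "in_dict": tokens[i].lower() in entity_dict,
        "is_capitalized": tokens[i][:1].isupper(),
    }
    for off in range(-window, window + 1):
        j = i + off
        word = tokens[j].lower() if 0 <= j < len(tokens) else "<PAD>"
        feats[f"w[{off}]={word}"] = True
    return feats
```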
Feature Ablation Tests We evaluate 3 feature set combinations to determine how each contributes predictive power to our system. Run 1: dictionary features, letter case Run 2: dictionary features, letter case, context window ( INLINEFORM0 2 normalized words) Run 3: dictionary features, letter case, context window ( INLINEFORM0 2 normalized words), POS tags Phase 2: In order to predict the relationship between an event and the creation time of its parent document, we assign a DocRelTime random variable to every Timex3 and Event mention. For Events, these values are provided by the training data, for Timex3s we have to compute class labels. Around 42% of Timex3 mentions are simple dates (“12/29/08", “October 16", etc.) and can be naively canonicalized to a universal timestamp. This is done using regular expressions to identify common date patterns and heuristics to deal with incomplete dates. The missing year in “October 16", for example, can be filled in using the nearest preceding date mention; if that isn't available we use the document creation year. These mentions are then assigned a class using the parent document's DocTime value and any revision timestamps. Other Timex3 mentions are more ambiguous so we use a distant supervision approach. Phrases like “currently" and “today's" tend to occur near Events that overlap the current document creation time, while “ago" or “ INLINEFORM0 -years" indicate past events. These dominant temporal associations can be learned from training data and then used to label Timex3s. Finally, we define a logistic regression rule to predict entity DocRelTime values as well as specify a linear skip-chain factor over Event mentions and their nearest Timex3 neighbor, encoding the baseline system heuristic directly as an inference rule. Phase 1 Word tokenization performance was high, F INLINEFORM0 =0.993 while sentence boundary detection was lower with F INLINEFORM1 = 0.938 (document micro average F INLINEFORM2 = 0.985). Tokenization errors were largely confined to splitting numbers and hyphenated words (“ex-smoker" vs. “ex - smoker") which has minimal impact on upstream sequence labeling. Sentence boundary errors were largely missed terminal words, creating longer sentences, which is preferable to short, less informative sequences in terms of impact on RNN mini-batches. Tables TABREF13 and TABREF14 contain results for all sequence labeling models. For Timex3 spans, the best RNN ensemble model performed poorly compared to the winning system (0.706 vs. 0.795). DeepDive runs 2-3 performed as well as 2015's best system, but also fell short of the top system (0.730 vs. 0.795). Event spans were easier to tag and RNN models compared favorably with DeepDive, the former scoring higher recall and the latter higher precision. Both approaches scored below this year's best system (0.885 vs. 0.903). Phase 2 Finally, Table TABREF16 contains our DocRelTime relation extraction. Our simple distant supervision rule leads to better performance than then median system submission, but also falls substantially short of current state of the art. Discussion Randomly initialized RNNs generally weren't competitive to our best performing structured prediction models (DeepDive runs 2-3) which isn't surprising considering the small amount of training data available compared to typical deep-learning contexts. 
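To make the Phase 2 heuristic above concrete before continuing the discussion, the following is a minimal sketch of regular-expression date canonicalization and of the DocRelTime comparison against document creation time; the patterns, the two-digit-year fix and the label mapping are simplified assumptions rather than the actual DeepDive rules.

```python
import re
from datetime import date

MONTHS = {m: i + 1 for i, m in enumerate(
    ["january", "february", "march", "april", "may", "june", "july",
     "august", "september", "october", "november", "december"])}
NUMERIC_DATE = re.compile(r"\b(\d{1,2})/(\d{1,2})/(\d{2,4})\b")   # e.g. 12/29/08
MONTH_DAY = re.compile(r"\b([A-Za-z]+)\s+(\d{1,2})\b")            # e.g. October 16

def canonicalize_timex(text, doc_time, fallback_year=None):
    """Map a simple TIMEX3 mention to a date, or None if too ambiguous.
    Missing years are filled from `fallback_year` (e.g. the nearest preceding
    date mention) and otherwise from the document creation year."""
    m = NUMERIC_DATE.search(text)
    if m:
        mm, dd, yy = (int(g) for g in m.groups())
        yy = yy + 2000 if yy < 100 else yy        # crude century fix, assumption
        return date(yy, mm, dd)
    m = MONTH_DAY.search(text)
    if m and m.group(1).lower() in MONTHS:
        year = fallback_year or doc_time.year
        return date(year, MONTHS[m.group(1).lower()], int(m.group(2)))
    return None

def doc_rel_time(timex_date, doc_time):
    """Assign a DocRelTime class by comparing the canonical date to DocTime."""
    if timex_date is None:
        return None
    if timex_date < doc_time:
        return "BEFORE"
    if timex_date > doc_time:
        return "AFTER"
    return "OVERLAP"

doc_time = date(2010, 3, 1)
print(doc_rel_time(canonicalize_timex("seen on 12/29/08", doc_time), doc_time))  # BEFORE
```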
There was a statistically significant improvement for RNNs pre-trained with clinical text word2vec embeddings, reflecting the consensus that embeddings capture some syntactic and semantic information that must otherwise be manually encoded as features. Performance was virtually the same across all embedding types, independent of corpus size, note type, etc. While embeddings trained on more data perform better in semantic tasks like synonym detection, its unclear if that representational strength is important here. Similar performance might also just reflect the general ubiquity with which temporal vocabulary occurs in all clinical note contexts. Alternatively, vanilla RNNs rarely achieve state-of-the-art performance in sequence labeling tasks due to well-known issues surrounding the vanishing or exploding gradient effect BIBREF12 . More sophisticated recurrent architectures with gated units such as Long Short-Term Memory (LSTM), BIBREF13 and gated recurrent unit BIBREF14 or recursive structures like Tree-LSTM BIBREF15 have shown strong representational power in other sequence labeling tasks. Such approaches might perform better in this setting. DeepDive's feature generator libraries let us easily create a large space of binary features and then let regularization address overfitting. In our extraction system, just using a context window of INLINEFORM0 2 words and dictionaries representing the baseline memorization rules was enough to achieve median system performance. POS tag features had no statistically significant impact on performance in either Event/Timex3 extraction. For classifying an Event's document creation time relation, our DeepDive application essentially implements the joint inference version of the baseline memorization rule, leveraging entity proximity to increase predictive performance. A simple distant supervision rule that canonicalizes Timex3 timestamps and predicts nearby Event's lead to a slight performance boost, suggesting that using a larger collection of unlabeled note data could lead to further increases. While our systems did not achieve current state-of-the-art performance, DeepDive matched last year's top submission for Timex3 and Event tagging with very little upfront engineering – around a week of dedicated development time. One of the primary goals of this work was to avoid an over-engineered extraction pipeline, instead relying on feature generation libraries or deep learning approaches to model underlying structure. Both systems explored in this work were successful to some extent, though future work remains in order to close the performance gap between these approaches and current state-of-the-art systems. Acknowledgments This work was supported by the Mobilize Center, a National Institutes of Health Big Data to Knowledge (BD2K) Center of Excellence supported through Grant U54EB020405.
DeepDive BIBREF1
70c2dc170a73185c9d1a16953f85aca834ead6d3
70c2dc170a73185c9d1a16953f85aca834ead6d3_0
Q: Which evaluation metric has been measured? Text: Introduction In Information Retrieval (IR), the searched query has always been an integral part. When a user enters a query in the information retrieval system the keywords they use might be different from the ones used in the documents or they might be expressing it in a different form. Considering this situation, the information retrieval systems should be intelligent and provide the requested information to the user. According to Spink (2001), each user in the web uses 2.4 words in their query; having said that, the probability of the input query being close to those of the documents is extremely low [22]. The latest algorithms implement query indexing techniques and covers only the user's history of search. This simply brings the problem of keywords mismatch; the queries entered by user don't match with the ones in the documents, this problem is called the lexical problem. The lexical problem originates from synonymy. Synonymy is the state that two or more words have the same meaning. Thus, expanding the query by enriching each word with their synonyms will enhance the IR results. This paper is organized as follows. In section II, we discuss some previous researches conducted on IR. In section III, the proposed method is described. Section IV, represents the evaluation and results of proposed method; and finally, in section V, we conclude the remarks and discuss some possible future works. Previous Works One of the first researchers who used the method for indexing was Maron (1960) [11]. Aforementioned paper described a meticulous and novel method to retrieve information from the books in the library. This paper is also one of the pioneers of the relevance and using probabilistic indexing. Relevance feedback is the process to involve user in the retrieved documents. It was mentioned in Rocchio (1971) [15], Ide (1971) [8], and Salton (1971) [19]. In the Relevance feedback the user's opinion for the retrieved documents is asked, then by the help of the user's feedbacks the relevance and irrelevance of the documents is decided. In the later researches, relevance feedback has been used in combination with other methods. For instance, Rahimi (2014) [14] used relevance feedback and Latent Semantic Analysis (LSA) to increase user's satisfaction. Other researches regarding the usage of relevance feedback are Salton (1997) [18], Rui (1997) [16], and Rui (1998) [17]. In the next approaches, the usage of thesauri was increased. Zazo used thesauri to "reformulate" user's input query [23]. Then came the WordNet. WordNet was one the paradigm shifting resources. It was first created at Princeton University's Cognitive Science Laboratory in 1995 [12]. It is a lexical database of English which includes: Nouns, Adjectives, Verbs, and Adverbs. The structure of WordNet is a semantic network which has several relations such as: synonymy, hypernymy, hyponymy, meronymy, holonymy, and etc. WordNet contains more than 155,000 entries. Using WordNet for query expansion was first introduced in Gong (2005) [5]. They implemented query expansion via WordNet to improve one token search in images and improved precision. Another research conducted by Pal (2014) showed that the results from query expansion using standard TREC collections improves the results on overall [13]. Zhang (2009) reported 7 percent improvement in precision in comparison to the queries without being expanded [24]. 
Using WordNet for query expansion improved 23 to 31 percent improvement on TREC 9, 10, and 12 [10]. Liu (2004) used a knowledge database called ConceptNet which contained 1.6 million commonsense knowledge [9]. ConceptNet is used for Topic Gisting, Analogy-Making, and other context-oriented inferences. Later, Hsu (2006) used WordNet and ConceptNet to expand queries and the results were better than not using query expansion method [6]. FarsNet [20] [21] is the first WordNet for Persian, developed by the NLP Lab at Shahid Beheshti University and it follows the same structure as the original WordNet. The first version of FarsNet contained more than 10,000 synsets while version 2.0 and 2.5 contained 20,000 synsets. Currently, FarsNet version 3 is under release and contains more than 40,000 synsets [7]. Proposed Method Each word in FarsNet has a Word ID (WID). Each WID is then related to other WIDs e.g. words and their synonyms are related to each other in groups called synsets. As mentioned before, often the user input doesn't match with the ones used in the documents and therefore the information retrieval system fails to fulfil user's request. Having said that; the present paper utilizes FarsNet and its synonymy relations to use in query expansion. We use the original synsets of FarsNet 2.5 as dataset. However, the data is first cleaned and normalized. Normalization refers to the process where the /ی/ is replaced with Unicode code point of 06CC and /ک/ is replaced by Unicode code point of 06A9. The input of the algorithm is the string of input queries. Then the input string is tokenized. Tokenization is the process of separating each word token by white space characters. In the next step, each token is searched in FarsNet and if it is found, the WID of the token will be searched in the database of synonyms; in other words, FarsNet Synsets. Finally, each word is concatenated to its synonyms and they are searched in the collection. Snippet below shows the pseudo code of the query expansion method. Sample input and output are: Input: [Casualties of drought] Output: [drought Casualties waterless dry dried up] بي آب خشک خشکيده خسارات خشك سالي GET input_query L <- an empty list FOR token IN input_query: Wid <- find token's WID in FarsNet INSERT(Wid , L) Expanded_Query <- input@query FOR wid IN L: Syns <- find synonym of wid in Synset CONCAT(Expanded_Query, Syns) Search Expanded_Query in Collection END Experimental Results In the evaluation phase, we used Hamshahri Corpus [2] which is one of the biggest collections of documents for Persian, suitable for Information Retrieval tasks. This corpus was first created by Database Research Group at Tehran University. The name Hamshahri comes from the Persian newspaper Hamshahri, one of the biggest Persian language newspapers. Hamshahri corpus contains 166,000 documents from Hamshahri newspaper in 65 categories. On average, each document contains 380 words and in general the corpus contains 400,000 distinct words. This corpus is built with TREC standards and contains list of standard judged queries. These queries are judged to be relevant or irrelevant to the queries based on real judgments. The judgment list contains 65 standard queries along with the judgements and some descriptions of the queries. Sample queries include: [women basketball] [teaching gardening flower] [news about jungles' fires] [status of Iran's carpet export] [air bicycle] In the present paper, the information retrieval experiments are based on standard queries of Hamshahri corpus. 
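A runnable Python rendering of the expansion pseudo code above might look as follows; the dictionaries standing in for the FarsNet WID and synset lookups are invented for illustration.

```python
def expand_query(query, farsnet_wids, synsets):
    """Expand a query with FarsNet synonyms, following the pseudo code above.

    farsnet_wids: dict mapping a surface word form to its FarsNet WID
    synsets:      dict mapping a WID to the list of synonym words in its synset
    """
    tokens = query.split()                      # whitespace tokenization
    wids = [farsnet_wids[t] for t in tokens if t in farsnet_wids]
    expanded = list(tokens)
    for wid in wids:
        expanded.extend(synsets.get(wid, []))   # concatenate synonyms
    return " ".join(expanded)

# toy illustration with invented WIDs (real lookups come from FarsNet 2.5)
farsnet_wids = {"drought": 101, "casualties": 202}
synsets = {101: ["dry", "dried", "waterless"], 202: []}
print(expand_query("casualties of drought", farsnet_wids, synsets))
# -> "casualties of drought dry dried waterless"
```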
For assessment of the proposed algorithm in a real information retrieval situation we used Elasticsearch database [1]. Elasticsearch is a noSQL database which its base is document, hence called document based database. Elasticsearch uses Lucene as its engine. The evaluation process started with normalizing all the documents in Hamshahri corpus. Then some articles that were incomplete or had errors were removed so that they could be indexed in Elasticsearch. In the end, the total number of 165,000 documents were indexed in Elasticsearch. Code snippet below shows a sample of index structure in Elasticsearch database. _index: "Hamshahri" [Default-Elasticsearch Index] _type: "articles" [Default-All our types are Hamshahri document] _id : "AV9Np3YfvUqJXrCluoHe" [random generated ID] DID: "1S1" [Document ID in Hamshahri Corpus] Date: "75\\04\\02" [Document date in Iranian Calendar, \\ is for character escape] Cat: "adabh" [Document category e.g. adab-honar] Body: "&" [Document body] We arranged two sets of experiments for evaluation of the algorithm: without query expansion (baseline) and with query expansion (proposed). First, for each query in the standard query list of Hamshahri corpus, we searched in Elasticsearch database and retrieved the results. In the next step, we expanded each query using proposed method and searched each expanded query in Elasticsearch. In order to evaluate the precision of the retrieved documents in each experiment, we used "TREC_Eval" tool [3]. TREC_Eval is a standard tool for evaluation of IR tasks and its name is a short form of Text REtrieval Conference (TREC) Evaluation tool. The Mean Average Precision (MAP) reported by TREC_Eval was 27.99% without query expansion and 37.10% with query expansion which shows more than 9 percent improvement. Table 1 and Figure 1 show the precision at the first n retrieved documents (P@n) for different numbers of n in two sets of experiments. In all P@n states the precision of Query Expansion algorithm was higher than the baseline. Figure 1 shows the plot of precision vs recall for two sets of experiments. This plot shows that our method will improve the overall quality of Information Retrieval system. Conclusions In this paper, we proposed a method for query expansion in IR systems using FarsNet. Results from this approach showed about 9% improvement in Mean Average Precision (MAP) for document retrieval. In the future researches, we will use FarsNet 3.0 and also, we will modify and revise some synsets in the FarsNet, in order toincrease the precision for Information Retrieval.
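As a rough sketch of the retrieval step described above, the baseline and expanded queries can be issued against the Body field through the Elasticsearch REST interface and dumped in the run format that TREC_Eval expects; the host, index and field names follow the sample index structure shown earlier, but the exact deployment details are assumptions.

```python
import json
import requests

ES = "http://localhost:9200"   # assumed local Elasticsearch instance

def search_hamshahri(query, size=100):
    """Full-text match query against the Body field of the hamshahri index;
    returns (DID, score) pairs, best first."""
    body = {"size": size, "query": {"match": {"Body": query}}}
    r = requests.post(f"{ES}/hamshahri/_search",
                      data=json.dumps(body),
                      headers={"Content-Type": "application/json"})
    r.raise_for_status()
    return [(h["_source"]["DID"], h["_score"]) for h in r.json()["hits"]["hits"]]

baseline = search_hamshahri("casualties of drought")
expanded = search_hamshahri("casualties of drought dry dried waterless")

# write the expanded run in the format expected by trec_eval
with open("run_expanded.txt", "w") as f:
    for rank, (did, score) in enumerate(expanded, start=1):
        f.write(f"1 Q0 {did} {rank} {score} qe_run\n")
```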
Mean Average Precision
38854255dbdf2f36eebefc0d9826aa76df9637c6
38854255dbdf2f36eebefc0d9826aa76df9637c6_0
Q: What is the WordNet counterpart for Persian? Text: Introduction In Information Retrieval (IR), the searched query has always been an integral part. When a user enters a query in the information retrieval system the keywords they use might be different from the ones used in the documents or they might be expressing it in a different form. Considering this situation, the information retrieval systems should be intelligent and provide the requested information to the user. According to Spink (2001), each user in the web uses 2.4 words in their query; having said that, the probability of the input query being close to those of the documents is extremely low [22]. The latest algorithms implement query indexing techniques and covers only the user's history of search. This simply brings the problem of keywords mismatch; the queries entered by user don't match with the ones in the documents, this problem is called the lexical problem. The lexical problem originates from synonymy. Synonymy is the state that two or more words have the same meaning. Thus, expanding the query by enriching each word with their synonyms will enhance the IR results. This paper is organized as follows. In section II, we discuss some previous researches conducted on IR. In section III, the proposed method is described. Section IV, represents the evaluation and results of proposed method; and finally, in section V, we conclude the remarks and discuss some possible future works. Previous Works One of the first researchers who used the method for indexing was Maron (1960) [11]. Aforementioned paper described a meticulous and novel method to retrieve information from the books in the library. This paper is also one of the pioneers of the relevance and using probabilistic indexing. Relevance feedback is the process to involve user in the retrieved documents. It was mentioned in Rocchio (1971) [15], Ide (1971) [8], and Salton (1971) [19]. In the Relevance feedback the user's opinion for the retrieved documents is asked, then by the help of the user's feedbacks the relevance and irrelevance of the documents is decided. In the later researches, relevance feedback has been used in combination with other methods. For instance, Rahimi (2014) [14] used relevance feedback and Latent Semantic Analysis (LSA) to increase user's satisfaction. Other researches regarding the usage of relevance feedback are Salton (1997) [18], Rui (1997) [16], and Rui (1998) [17]. In the next approaches, the usage of thesauri was increased. Zazo used thesauri to "reformulate" user's input query [23]. Then came the WordNet. WordNet was one the paradigm shifting resources. It was first created at Princeton University's Cognitive Science Laboratory in 1995 [12]. It is a lexical database of English which includes: Nouns, Adjectives, Verbs, and Adverbs. The structure of WordNet is a semantic network which has several relations such as: synonymy, hypernymy, hyponymy, meronymy, holonymy, and etc. WordNet contains more than 155,000 entries. Using WordNet for query expansion was first introduced in Gong (2005) [5]. They implemented query expansion via WordNet to improve one token search in images and improved precision. Another research conducted by Pal (2014) showed that the results from query expansion using standard TREC collections improves the results on overall [13]. Zhang (2009) reported 7 percent improvement in precision in comparison to the queries without being expanded [24]. 
Using WordNet for query expansion improved 23 to 31 percent improvement on TREC 9, 10, and 12 [10]. Liu (2004) used a knowledge database called ConceptNet which contained 1.6 million commonsense knowledge [9]. ConceptNet is used for Topic Gisting, Analogy-Making, and other context-oriented inferences. Later, Hsu (2006) used WordNet and ConceptNet to expand queries and the results were better than not using query expansion method [6]. FarsNet [20] [21] is the first WordNet for Persian, developed by the NLP Lab at Shahid Beheshti University and it follows the same structure as the original WordNet. The first version of FarsNet contained more than 10,000 synsets while version 2.0 and 2.5 contained 20,000 synsets. Currently, FarsNet version 3 is under release and contains more than 40,000 synsets [7]. Proposed Method Each word in FarsNet has a Word ID (WID). Each WID is then related to other WIDs e.g. words and their synonyms are related to each other in groups called synsets. As mentioned before, often the user input doesn't match with the ones used in the documents and therefore the information retrieval system fails to fulfil user's request. Having said that; the present paper utilizes FarsNet and its synonymy relations to use in query expansion. We use the original synsets of FarsNet 2.5 as dataset. However, the data is first cleaned and normalized. Normalization refers to the process where the /ی/ is replaced with Unicode code point of 06CC and /ک/ is replaced by Unicode code point of 06A9. The input of the algorithm is the string of input queries. Then the input string is tokenized. Tokenization is the process of separating each word token by white space characters. In the next step, each token is searched in FarsNet and if it is found, the WID of the token will be searched in the database of synonyms; in other words, FarsNet Synsets. Finally, each word is concatenated to its synonyms and they are searched in the collection. Snippet below shows the pseudo code of the query expansion method. Sample input and output are: Input: [Casualties of drought] Output: [drought Casualties waterless dry dried up] بي آب خشک خشکيده خسارات خشك سالي GET input_query L <- an empty list FOR token IN input_query: Wid <- find token's WID in FarsNet INSERT(Wid , L) Expanded_Query <- input@query FOR wid IN L: Syns <- find synonym of wid in Synset CONCAT(Expanded_Query, Syns) Search Expanded_Query in Collection END Experimental Results In the evaluation phase, we used Hamshahri Corpus [2] which is one of the biggest collections of documents for Persian, suitable for Information Retrieval tasks. This corpus was first created by Database Research Group at Tehran University. The name Hamshahri comes from the Persian newspaper Hamshahri, one of the biggest Persian language newspapers. Hamshahri corpus contains 166,000 documents from Hamshahri newspaper in 65 categories. On average, each document contains 380 words and in general the corpus contains 400,000 distinct words. This corpus is built with TREC standards and contains list of standard judged queries. These queries are judged to be relevant or irrelevant to the queries based on real judgments. The judgment list contains 65 standard queries along with the judgements and some descriptions of the queries. Sample queries include: [women basketball] [teaching gardening flower] [news about jungles' fires] [status of Iran's carpet export] [air bicycle] In the present paper, the information retrieval experiments are based on standard queries of Hamshahri corpus. 
For assessment of the proposed algorithm in a real information retrieval situation we used Elasticsearch database [1]. Elasticsearch is a noSQL database which its base is document, hence called document based database. Elasticsearch uses Lucene as its engine. The evaluation process started with normalizing all the documents in Hamshahri corpus. Then some articles that were incomplete or had errors were removed so that they could be indexed in Elasticsearch. In the end, the total number of 165,000 documents were indexed in Elasticsearch. Code snippet below shows a sample of index structure in Elasticsearch database. _index: "Hamshahri" [Default-Elasticsearch Index] _type: "articles" [Default-All our types are Hamshahri document] _id : "AV9Np3YfvUqJXrCluoHe" [random generated ID] DID: "1S1" [Document ID in Hamshahri Corpus] Date: "75\\04\\02" [Document date in Iranian Calendar, \\ is for character escape] Cat: "adabh" [Document category e.g. adab-honar] Body: "&" [Document body] We arranged two sets of experiments for evaluation of the algorithm: without query expansion (baseline) and with query expansion (proposed). First, for each query in the standard query list of Hamshahri corpus, we searched in Elasticsearch database and retrieved the results. In the next step, we expanded each query using proposed method and searched each expanded query in Elasticsearch. In order to evaluate the precision of the retrieved documents in each experiment, we used "TREC_Eval" tool [3]. TREC_Eval is a standard tool for evaluation of IR tasks and its name is a short form of Text REtrieval Conference (TREC) Evaluation tool. The Mean Average Precision (MAP) reported by TREC_Eval was 27.99% without query expansion and 37.10% with query expansion which shows more than 9 percent improvement. Table 1 and Figure 1 show the precision at the first n retrieved documents (P@n) for different numbers of n in two sets of experiments. In all P@n states the precision of Query Expansion algorithm was higher than the baseline. Figure 1 shows the plot of precision vs recall for two sets of experiments. This plot shows that our method will improve the overall quality of Information Retrieval system. Conclusions In this paper, we proposed a method for query expansion in IR systems using FarsNet. Results from this approach showed about 9% improvement in Mean Average Precision (MAP) for document retrieval. In the future researches, we will use FarsNet 3.0 and also, we will modify and revise some synsets in the FarsNet, in order toincrease the precision for Information Retrieval.
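For completeness, the two headline metrics reported by TREC_Eval can be approximated in a few lines; this is a minimal sketch of average precision over binary relevance judgments and of P@n, not the tool itself, and the toy runs are invented.

```python
def average_precision(ranked_docs, relevant):
    """Average precision for one query: mean of the precision values at the
    ranks where a relevant document is retrieved."""
    hits, precisions = 0, []
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def precision_at_n(ranked_docs, relevant, n):
    return sum(1 for d in ranked_docs[:n] if d in relevant) / n

def mean_average_precision(runs):
    """`runs` is a list of (ranked_docs, relevant_set) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# toy example with two queries
runs = [(["d3", "d1", "d7"], {"d1", "d7"}),
        (["d2", "d9"], {"d9"})]
print(round(mean_average_precision(runs), 3))   # 0.542
```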
FarsNet
827b5bd215599623a3125afe331b56b89b42bf09
827b5bd215599623a3125afe331b56b89b42bf09_0
Q: What large corpus is used for experiments? Text: Introduction Since early times of computer-based speech synthesis research, voice quality (the perceived timbre of speech) analysis/modification has attracted interest of researchers BIBREF0. The topic of voice quality analysis finds application in various areas of speech processing such as high-quality parametric speech synthesis, expressive/emotional speech synthesis, speaker identification, emotion recognition, prosody analysis, speech therapy. Due to availability of reviews such as BIBREF1 and space limitations, a review of voice quality analysis methods will not be presented here. For voice quality analysis of speech corpora, it is common practice to estimate spectral parameters directly from speech signals such as relative harmonic amplitudes, or Harmonic to Noise Ratio (HNR). Although the voice quality variations are mainly considered to be controlled by the glottal source, glottal source estimation is considered to be problematic and hence avoided in the parameter estimation procedures for processing large speech corpora. In this work, we follow the not so common path and study the differences present in the glottal source signal parameters estimated via an automatic algorithm when a given speaker produces different voice qualities. Based on a parametric analysis of these latter (Section SECREF2), we further investigate the use of the information extracted from a large corpus, for voice quality modification of other speech databases in a HMM-based speech synthesizer (Section SECREF3). Excitation-based Voice quality analysis The goal of this part is to highlight the differences present in the excitation when a given speaker produces different voice qualities. The De7 database used for this study was designed by Marc Schroeder as one of the first attempts of creating diphone databases for expressive speech synthesis BIBREF2. The database contains three voice qualities (modal, soft and loud) uttered by a German female speaker, with about 50 minutes of speech available for each voice quality. In Section SECREF1, the glottal flow estimation method and glottal flow parametrization used in this work are briefly presented. The harmonicity of speech is studied via the maximum voiced frequency in Section SECREF3. As an important perceptual charactersitic, spectral tilt is analyzed in Section SECREF4. Section SECREF6 compares the so-called eigenresiduals BIBREF3 of the different voice qualities. Finally Section SECREF8 quantifies the separability between the three voice qualities for the extracted excitation features. Excitation-based Voice quality analysis ::: Glottal source We recently showed that complex cepstrum can be efficiently used for glottal flow estimation BIBREF4. This method aims at separating the minimum and maximum-phase components of the speech signal. Indeed it has been shown previously BIBREF5 that speech is a mixed-phase signal where the maximum-phase (i.e anti-causal) contribution corresponds to the glottal open phase, while the minimum-phase component is related to the vocal tract transmittance (assuming an abrupt glottal return phase). Isolating the maximum-phase component of speech then provides a reliable estimation of the glottal source, which can be achieved by the complex cepstrum. The glottal flow open phase is then parametrized by three features: the glottal formant frequency ($F_g$), the Normalized Amplitude Quotient ($NAQ$) and the Quasi-Open Quotient ($QOQ$). 
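As a rough illustration of the two time-domain measures, under their standard definitions (NAQ as the AC flow amplitude divided by the product of the negative derivative peak and the period; QOQ as the fraction of the period during which the flow stays above half its peak-to-peak amplitude), a per-cycle sketch could look as follows. The Aparat toolkit used in the paper implements more careful versions, and the test pulse here is synthetic.

```python
import numpy as np

def naq_and_qoq(glottal_cycle, fs, f0):
    """Rough NAQ and QOQ estimates for one glottal-flow cycle.

    glottal_cycle : one pitch period of the estimated glottal flow (array)
    fs            : sampling rate in Hz
    f0            : fundamental frequency in Hz
    """
    T = 1.0 / f0
    f_ac = glottal_cycle.max() - glottal_cycle.min()        # AC flow amplitude
    d_peak = np.abs(np.min(np.diff(glottal_cycle) * fs))    # |min| of derivative
    naq = f_ac / (d_peak * T)
    threshold = glottal_cycle.min() + 0.5 * f_ac
    qoq = np.mean(glottal_cycle > threshold)                # fraction of samples
    return naq, qoq

# toy cycle: an idealized raised-cosine pulse at 200 Hz sampled at 16 kHz
fs, f0 = 16000, 200.0
n = int(fs / f0)
t = np.arange(n) / n
pulse = 0.5 * (1 - np.cos(2 * np.pi * t)) * (t < 0.6)       # crude open phase
print(naq_and_qoq(pulse, fs, f0))
```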
The glottal formant was tracked using the method described in BIBREF6. Figure FIGREF2(a) shows the histograms of $Fg/F_0$ for the three voice qualities. Significant differences between the distributions are observed. Among others it turns out that a louder (softer) voice results in the production of a higher (lower) glottal formant frequency. Another observation that can be drawn from this figure is the presence of two modes for the modal and loud voices. This may be explained by the fact that the estimated glottal source sometimes comprises a ripple both in the time and frequency domains, which in turn may have two possible causes: an incomplete separation between $Fg$ and the first formant $F_1$ BIBREF6, and/or a non-linear interaction between the vocal tract and the glottis BIBREF7. This ripple may therefore affect the detection of the glottal formant frequency and in this way explain the parasitical peak in the $Fg/F_0$ histogram for the modal and loud voices. In previous works BIBREF8, BIBREF9, Alku et al. proposed the Normalized Amplitude Quotient and the Quasi-Open Quotient as two efficient time-domain parameters characterizing respectively the closing and open phase of the glottal flow. These parameters are extracted using the Aparat toolkit BIBREF10 from the glottal source estimated here by the complex cepstrum . Figures FIGREF2(b) and FIGREF2(c) display the histograms of these two features for the three voice qualities. Notable differences between histograms may be observed. Excitation-based Voice quality analysis ::: Maximum Voiced Frequency Some approaches, such as the Harmonic plus Noise Model (HNM ,BIBREF11), consider that the speech signal can be modeled by a non-periodic component beyond a given frequency. In the case of HNM, this maximum voiced frequency ($F_m$) demarcates the boundary between two distinct spectral bands, where respectively an harmonic and a stochastic modeling are supposed to hold. The higher the $F_m$, the stronger the harmonicity, and consequently the weaker the presence of noise in speech. In this paper, $F_m$ was estimated using the algorithm described in BIBREF11. Figure FIGREF2(d) displays the histograms of $F_m$ for the three voice qualities. It can be observed that, in general, the soft voice has a low $F_m$ (as a result of its breathy nature) and that the stronger the vocal effort, the more harmonic the speech signal, and consequently the higher $F_m$. Excitation-based Voice quality analysis ::: Spectral Tilt Spectral tilt of speech is known to play an important role in the perception of a voice quality BIBREF12. To capture this crucial feature, an averaged spectrum is obtained on the whole corpus by a process independent of the prosody and the vocal tract variations. For this, voiced speech frames are extracted by applying a two pitch period-long Hanning windowing centered on the current Glottal Closure Instant (GCI). GCI locations are determined using the technique described in BIBREF13. These frames are then resampled on a fixed number of points (corresponding to two mean pitch periods) and normalized in energy. The averaged spectrum is finally achieved by averaging the spectra of the normalized speech frames. The averaged amplitude spectrum then contains a mix of the average glottal and vocal tract contributions. The averaged spectrum for the three voice qualities is exhibited in Figure FIGREF5. 
Since these spectra were computed for the same speaker, it is reasonable to think that the main difference between them is due to the spectral tilt regarding the produced voice quality. Among others it can be noticed from Figure FIGREF5 that the stronger the vocal effort, the richer the spectral content in the [1kHz-5kHz] band. Excitation-based Voice quality analysis ::: Eigenresiduals In BIBREF3 we proposed to model the residual signal by decomposing speaker-dependent pitch-synchronous residual frames on an orthonormal basis. It was also shown that the first so-obtained eigenvector (called eigenresidual) can be efficiently used in parametric speech synthesis. As eigenresiduals are employed in our voice quality modification application in Section SECREF3, Figure FIGREF7 displays the differences present in this signal depending on the produced voice quality. It can be noticed that the conclusions we drew in Section SECREF1 about the glottal open phase are corroborated. It is indeed observed that the stronger the vocal effort, the faster the response of the eigenresidual open phase. Excitation-based Voice quality analysis ::: Separability between Distributions Important differences in the distributions of the features have been presented in the previous subsections (which are in line with the conclusions presented in various studies BIBREF0, BIBREF8, BIBREF12). In this section, we quantify how these differences between voice qualities are significative. For this, the Kullback-Leibler (KL) divergence BIBREF14 is known to measure the separability between two discrete density functions $A$ and $B$. But since this measure is non-symmetric (and consequently is not a true distance), its symmetrised version, called Jensen-Shannon divergence BIBREF14, is often prefered. It consists of a sum of two KL measures: where $M$ is the average of the two distributions ($M=0.5*(A+B)$). Table TABREF10 shows the results for the four features we previously presented. Among others it can be noticed that the loud and soft voices are highly separable, while the loud type is closer to the modal voice than the soft one. It is also seen that $F_g$ and $NAQ$ are highly informative for voice quality labeling. Voice quality modification In a previous work BIBREF3, we proposed a Deterministic plus Stochastic Model (DSM) of the residual signal. In this approach, the excitation is divided into two distinct spectral bands delimited by the maximum voiced frequency $F_m$. The deterministic part concerns the low-frequency contents and is modeled by the first eigenresidual as explained in Section SECREF6. As for the stochastic component, it is a high-pass filtered noise similarly to what is used in the HNM BIBREF11. The residual signal is then passed through a LPC-like filter to obtain the synthetic speech. This section aims at applying voice quality modification as a post-process to HMM-based speech synthesis BIBREF15 using the DSM of the residual signal. More precisely, a HMM-based synthesizer is trained on a corpus of modal voice for a given speaker. The goal is then to transform the synthetic voice so that it is perceived as soft or loud, while avoiding a degradation of quality in the produced speech. Since no dataset of expressive voice is available for the considered speaker, modifications are extrapolated from the prototypes described for speaker De7 in Section SECREF2, assuming that other speakers modify their voice quality in the same way. 
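Before detailing the transformations, note that the divergence formula referenced in the separability analysis above was lost in extraction; under one common convention it is ½·KL(A‖M) + ½·KL(B‖M) with M = (A+B)/2, which the following sketch computes over two invented histograms.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence KL(p || q)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def jensen_shannon(a, b):
    """Symmetrised divergence: 0.5*KL(A||M) + 0.5*KL(B||M), M = (A+B)/2."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a, b = a / a.sum(), b / b.sum()
    m = 0.5 * (a + b)
    return 0.5 * kl(a, m) + 0.5 * kl(b, m)

# e.g. compare Fg/F0 histograms (same bins) for loud vs. soft voice
loud_hist = [2, 10, 30, 40, 15, 3]
soft_hist = [25, 35, 25, 10, 4, 1]
print(round(jensen_shannon(loud_hist, soft_hist), 3))
```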
Three main transformations are here considered: The eigenresiduals presented in Section SECREF6 are used for the deterministic part of the DSM. These waveforms implicitly convey the modifications of glottal open phase that were underlined in Section SECREF1. The maximum voiced frequency $F_m$ is fixed for a given voice quality according to Section SECREF3 by taking its mean value: 4600 Hz for the loud, 3990 Hz for the modal (confirming the 4 kHz we used in BIBREF3), and 2460 Hz for the soft voice. The spectral tilt is modified using the inverse of the process described in Section SECREF4. For this, the averaged spectrum of the voiced segments is transformed, in the pitch-normalized domain, by a filter expressed as a ratio between auto-regressive modelings of the source and target voice qualities (cf Fig.FIGREF5). Residual frames are then resampled to the target pitch at synthesis time. This latter transformation is then pitch-dependent. To evaluate the technique, a subjective test was submitted to 10 people. The test consisted of 27 sentences generated by our system for three speakers (two males and one female). One third of these sentences were converted to a softer voice, and one third to a louder one. For each sentence, participants were asked to assess the vocal effort they perceive (0 = very soft, 100 = very tensed), and to give a MOS score. Results are displayed in Table TABREF14 with their 95% confidence intervals. Interestingly it can be noticed that voice quality modifications are perceived as expected while the overall quality is not significantly altered (although listeners have a natural tendency to prefer softer voices). Conclusion In this study we show that a glottal flow estimation algorithm BIBREF4 can be effectively used for voice quality analysis on large speech corpora where most of glottal flow estimation literature are based on tests with sustained vowels. We study the variations in parameters for different voice qualities and conclude that the two glottal flow parameters $F_g$ and $NAQ$ are highly informative for voice quality labeling. We further show that the information extracted from one speech database can be applied to other speech databases for voice quality modification and the quality achieved in a speech synthesis application is fairly high. Acknowledgments Thomas Drugman is supported by the “Fonds National de la Recherche Scientifique” (FNRS). The authors would like to thank M. Schroeder for the De7 database, as well as Y. Stylianou for providing the algorithm extracting $F_m$.
The De7 database
2159062595f24ec29826d517429e1b809ba068b3
2159062595f24ec29826d517429e1b809ba068b3_0
Q: Are any of the utterances ungrammatical? Text: Introduction We have acquired large sets of both written and spoken data during the implementation of campaigns aimed at assessing the proficiency, at school, of Italian pupils learning both German and English. Part of the acquired data has been included in a corpus, named "Trentino Language Testing" in schools (TLT-school), that will be described in the following. All the collected sentences have been annotated by human experts in terms of some predefined “indicators” which, in turn, were used to assign the proficiency level to each student undertaking the assigned test. This level is expressed according to the well-known Common European Framework of Reference for Languages (Council of Europe, 2001) scale. The CEFR defines 6 levels of proficiency: A1 (beginner), A2, B1, B2, C1 and C2. The levels considered in the evaluation campaigns where the data have been collected are: A1, A2 and B1. The indicators measure the linguistic competence of test takers both in relation to the content (e.g. grammatical correctness, lexical richness, semantic coherence, etc.) and to the speaking capabilities (e.g. pronunciation, fluency, etc.). Refer to Section SECREF2 for a description of the adopted indicators. The learners are Italian students, between 9 and 16 years old. They took proficiency tests by answering question prompts provided in written form. The “TLT-school” corpus, that we are going to make publicly available, contains part of the spoken answers (together with the respective manual transcriptions) recorded during some of the above mentioned evaluation campaigns. We will release the written answers in future. Details and critical issues found during the acquisition of the answers of the test takers will be discussed in Section SECREF2. The tasks that can be addressed by using the corpus are very challenging and pose many problems, which have only partially been solved by the interested scientific community. From the ASR perspective, major difficulties are represented by: a) recognition of both child and non-native speech, i.e. Italian pupils speaking both English and German, b) presence of a large number of spontaneous speech phenomena (hesitations, false starts, fragments of words, etc.), c) presence of multiple languages (English, Italian and German words are frequently uttered in response to a single question), d) presence of a significant level of background noise due to the fact that the microphone remains open for a fixed time interval (e.g. 20 seconds - depending on the question), and e) presence of non-collaborative speakers (students often joke, laugh, speak softly, etc.). Refer to Section SECREF6 for a detailed description of the collected spoken data set. Furthermore, since the sets of data from which “TLT-school” was derived were primarily acquired for measuring proficiency of second language (L2) learners, it is quite obvious to exploit the corpus for automatic speech rating. To this purpose, one can try to develop automatic approaches to reliably estimate the above-mentioned indicators used by the human experts who scored the answers of the pupils (such an approach is described in BIBREF0). However, it has to be noticed that scientific literature proposes to use several features and indicators for automatic speech scoring, partly different from those adopted in “TLT-school” corpus (see below for a brief review of the literature). 
Hence, we believe that adding new annotations to the corpus, related to particular aspects of language proficiency, can stimulate research and experimentation in this area. Finally, it is worth mentioning that also written responses of “TLT-school” corpus are characterised by a high level of noise due to: spelling errors, insertion of word fragments, presence of words belonging to multiple languages, presence of off-topic answers (e.g. containing jokes, comments not related to the questions, etc.). This set of text data will allow scientists to investigate both language and behaviour of pupils learning second languages at school. Written data are described in detail in Section SECREF5 Relation to prior work. Scientific literature is rich in approaches for automated assessment of spoken language proficiency. Performance is directly dependent on ASR accuracy which, in turn, depends on the type of input, read or spontaneous, and on the speakers' age, adults or children (see BIBREF1 for an overview of spoken language technology for education). A recent publication reporting an overview of state-of-the-art automated speech scoring technology as it is currently used at Educational Testing Service (ETS) can be found in BIBREF2. In order to address automatic assessment of complex spoken tasks requiring more general communication capabilities from L2 learners, the AZELLA data set BIBREF3, developed by Pearson, has been collected and used as benchmark for some researches BIBREF4, BIBREF3. The corpus contains $1,500$ spoken tests, each double graded by human professionals, from a variety of tasks. A public set of spoken data has been recently distributed in a spoken CALL (Computer Assisted Language Learning) shared task where Swiss students learning English had to answer to both written and spoken prompts. The goal of this challenge is to label students' spoken responses as “accept” or “reject”. Refer to BIBREF5 for details of the challenge and of the associated data sets. Many non-native speech corpora (mostly in English as target language) have been collected during the years. A list, though not recent, as well as a brief description of most of them can be found in BIBREF6. The same paper also gives information on how the data sets are distributed and can be accessed (many of them are available through both LDC and ELDA agencies). Some of the corpora also provide proficiency ratings to be used in CALL applications. Among them, we mention the ISLE corpus BIBREF7, which also contains transcriptions at the phonetic level and was used in the experiments reported in BIBREF0. Note that all corpora mentioned in BIBREF6 come from adult speech while, to our knowledge, the access to publicly available non-native children's speech corpora, as well as of children's speech corpora in general, is still scarce. Specifically concerning non-native children's speech, we believe worth mentioning the following corpora. The PF-STAR corpus (see BIBREF8) contains English utterances read by both Italian and German children, between 6 and 13 years old. The same corpus also contains utterances read by English children. The ChildIt corpus BIBREF9 contains English utterances (both read and imitated) by Italian children. By distributing “TLT-school” corpus, we hope to help researchers to investigate novel approaches and models in the areas of both non-native and children's speech and to build related benchmarks. 
Data Acquisition In Trentino, an autonomous region in northern Italy, there is a series of evaluation campaigns underway for testing L2 linguistic competence of Italian students taking proficiency tests in both English and German. A set of three evaluation campaigns is underway, two having been completed in 2016 and 2018, and a final one scheduled in 2020. Note that the “TLT-school” corpus refers to only the 2018 campaign, that was split in two parts: 2017 try-out data set (involving about 500 pupils) and the actual 2018 data (about 2500 pupils). Each of the three campaigns (i.e. 2016, 2018 and 2020) involves about 3000 students ranging from 9 to 16 years, belonging to four different school grade levels and three proficiency levels (A1, A2, B1). The schools involved in the evaluations are located in most part of the Trentino region, not only in its main towns; Table highlights some information about the pupils that took part to the campaigns. Several tests, aimed at assessing the language learning skills of the students, were carried out by means of multiple-choice questions, which can be evaluated automatically. However, a detailed linguistic evaluation cannot be performed without allowing the students to express themselves in both written sentences and spoken utterances, which typically require the intervention of human experts to be scored. Tables and report some statistics extracted from both the written and spoken data collected so far in all the campaigns. Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively. The list of the indicators used by the experts to score written sentences and spoken utterances in the evaluations, grouped by similarity, is reported in Table . Since every utterance was scored by only one expert, it was not possible to evaluate any kind of agreement among experts. For future evaluations, more experts are expected to provide independent scoring on same data sets, so this kind of evaluation will be possible. Data Acquisition ::: Prompts The speaking part of the proficiency tests in 2017/2018 consists of 47 question prompts provided in written form: 24 in English and 23 in German, divided according to CEFR levels. Apart from A1 level, which differs in the number of questions (11 for English; 10 for German), both English and German A2 and B1 levels have respectively 6 and 7 questions each. As for A1 level, the first four introductory questions are the same (How old are you?, Where do you live?, What are your hobbies?, Wie alt bist du?, Wo wohnst du?, Was sind deine Hobbys?) or slightly different (What's your favourite pet?, Welche Tiere magst du?) in both languages, whereas the second part of the test puts the test-takers in the role of a customer in a pizzeria (English) or in a bar (German). A2 level test is composed of small talk questions which relate to everyday life situations. In this case, questions are more open-ended than the aforementioned ones and allow the test-takers to interact by means of a broader range of answers. Finally, as for B1 level, questions are similar to A2 ones, but they include a role-play activity in the final part, which allows a good amount of freedom and creativity in answering the question. 
Data Acquisition ::: Written Data Table reports some statistics extracted from the written data collected so far. In this table, the number of pupils taking part in the English and German evaluation is reported, along with the number of sentences and tokens, identified as character sequences bounded by spaces. It is worth mentioning that the collected texts contain a large quantity of errors of several types: orthographic, syntactic, code-switched words (i.e. words not in the required language), jokes, etc. Hence, the original written sentences have been processed in order to produce “cleaner” versions, in order to make the data usable for some research purposes (e.g. to train language models, to extract features for proficiency assessment, ...). To do this, we have applied some text processing, that in sequence: $\bullet $ removes strange characters; $\bullet $ performs some text normalisation (lowercase, umlaut, numbers, ...) and tokenisation (punctuation, etc.) $\bullet $ removes / corrects non words (e.g. hallooooooooooo becomes hallo; aaaaaaaaeeeeeeeeiiiiiiii is removed) $\bullet $ identifies the language of each word, choosing among Italian, English, German; $\bullet $ corrects common typing errors (e.g. ai em becomes i am) $\bullet $ replaces unknown words, with respect to a large lexicon, with the label $<$unk$>$. Table reports some samples of written answers. Data Acquisition ::: Spoken Data Table reports some statistics extracted from the acquired spoken data. Speech was recorded in classrooms, whose equipment depended on each school. In general, around 20 students took the test together, at the same time and in the same classrooms, so it is quite common that speech of mates or teachers often overlaps with the speech of the student speaking in her/his microphone. Also, the type of microphone depends on the equipment of the school. On average, the audio signal quality is nearly good, while the main problem is caused by a high percentage of extraneous speech. This is due to the fact that organisers decided to use a fixed duration - which depends on the question - for recording spoken utterances, so that all the recordings for a given question have the same length. However, while it is rare that a speaker has not enough time to answer, it is quite common that, especially after the end of the utterance, some other speech (e.g. comments, jokes with mates, indications from the teachers, etc.) is captured. In addition, background noise is often present due to several sources (doors, steps, keyboard typing, background voices, street noises if the windows are open, etc). Finally, it has to be pointed out that many answers are whispered and difficult to understand. Manual Transcriptions In order to create both an adaptation and an evaluation set for ASR, we manually transcribed part of the 2017 data sets. We defined an initial set of guidelines for the annotation, which were used by 5 researchers to manually transcribe about 20 minutes of audio data. This experience led to a discussion, from which a second set of guidelines originated, aiming at reaching a reasonable trade-off between transcription accuracy and speed. 
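A minimal sketch of the kind of cleaning pipeline listed above for the written answers is given below; the regular expressions, the tiny correction table and the stand-in lexicon are illustrative assumptions, not the project's actual scripts.

```python
import re

COMMON_FIXES = {"ai em": "i am"}                        # illustrative correction table
LEXICON = {"i", "am", "ten", "years", "old", "hallo"}   # stand-in for a large lexicon

def clean_answer(text):
    # 1. drop strange characters, lowercase
    text = re.sub(r"[^\w\s'äöüß]", " ", text.lower())
    # 2. collapse whitespace (crude tokenisation)
    text = re.sub(r"\s+", " ", text).strip()
    # 3. collapse exaggerated letter repetitions (hallooooooooooo -> hallo)
    text = re.sub(r"(\w)\1{2,}", r"\1", text)
    # 4. fix a few common typing errors
    for wrong, right in COMMON_FIXES.items():
        text = text.replace(wrong, right)
    # 5. replace out-of-lexicon tokens with <unk>
    tokens = [w if w in LEXICON else "<unk>" for w in text.split()]
    return " ".join(tokens)

print(clean_answer("Hallooooooooooo!! ai em 10 years old"))
# -> "hallo i am <unk> years old"
```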
As a consequence, we decided to apply the following transcription rules:
$\bullet $ only the main speaker has to be transcribed;
$\bullet $ the presence of other voices (schoolmates, teacher) is reported only with the label “@voices”;
$\bullet $ whispered speech was found to be significant, so it is explicitly marked with the label “()”;
$\bullet $ badly pronounced words are marked with a “#” sign, without trying to phonetically transcribe the pronounced sounds;
$\bullet $ “#*” marks incomprehensible speech;
$\bullet $ speech in a language different from the target language is reported by means of an explicit marker, e.g. “I am 10 years old @it(io ho già risposto)”.
Next, we concatenated the utterances to be transcribed into blocks of about 5 minutes each. We noticed that knowing the question and hearing several answers could be of great help for transcribing some poorly pronounced words or phrases. Therefore, each block contains only answers to the same question, explicitly reported at the beginning of the block. We engaged about 30 students from two Italian linguistic high schools (namely “C” and “S”) to perform the manual transcriptions. After a joint training session, we paired students together. Each pair first transcribed, individually, the same block of 5 minutes. Then, they went through a comparison phase, in which each pair of students discussed their choices and agreed on a single transcription for the assigned data. Transcriptions made before the comparison phase were retained to evaluate inter-annotator agreement. Apart from this first 5-minute block, each utterance was transcribed by only one transcriber. Inter-annotator agreement for the 5-minute blocks is shown in Table in terms of words (after removing hesitations and other labels related to background voices, noises, etc.). The low level of agreement reflects the difficulty of the task. In order to assure the quality of the manual transcriptions, every sentence transcribed by the high school students was automatically processed to detect possible formal errors, and manually validated by researchers in our lab. Speakers were assigned either to the training or to the evaluation set, with proportions of $\frac{2}{3}$ and $\frac{1}{3}$, respectively; training and evaluation lists were then built accordingly. Table reports statistics from the spoken data set. The id All identifies the whole data set, while Clean identifies the subset obtained by excluding sentences containing background voices, incomprehensible speech and word fragments.
Usage of the Data
From the above description it appears that the corpus can be effectively used in many research directions.
Usage of the Data ::: ASR-related Challenges
The spoken corpus features non-native speech recordings in real classrooms and, consequently, peculiar phenomena appear and can be investigated. Phonological and cross-language interference requires specific approaches for accurate acoustic modelling. Moreover, to cope with cross-language interference it is important to consider alternative ways to represent specific words (e.g. words of two languages with the same graphemic representation). Table , extracted from BIBREF0, reports WERs obtained on the evaluation data sets with a strongly adapted ASR system, demonstrating the difficulty of the related speech recognition task for both languages.
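Both the inter-annotator agreement above (measured in terms of words) and the WERs reported in the table are word-level edit-distance measures. The sketch below shows the standard WER computation; it illustrates the metric itself and is not the scoring script actually used by the authors, and the example word sequences are invented.

```python
def word_error_rate(reference, hypothesis):
    """Standard WER: (substitutions + deletions + insertions) / len(reference),
    computed with a dynamic-programming edit distance over words."""
    n, m = len(reference), len(hypothesis)
    # d[i][j] = edit distance between reference[:i] and hypothesis[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[n][m] / n

# Invented example: two transcribers' versions of the same utterance,
# after hesitations and labels such as "@voices" have been removed.
ref = "i am ten years old".split()
hyp = "i am then years".split()
print(round(word_error_rate(ref, hyp), 2))  # 0.4 (one substitution and one deletion over 5 words)
```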
Refer to BIBREF10 for comparisons with a different non-native children's speech data set, and to the scientific literature BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 for detailed descriptions of children's speech recognition and related issues. Important, although not exhaustive, references on non-native speech recognition can be found in BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, BIBREF29.
As for language models, accurate transcription of spoken responses demands models able to cope with ill-formed expressions (due to students' grammatical errors). The presence of code-switched words, word fragments and spontaneous speech phenomena also requires specific investigation to reduce their impact on the final performance. We believe that this particular domain and set of data pave the way for investigating various ASR topics, such as: non-native speech, children's speech, spontaneous speech, code-switching, multiple pronunciations, etc.
Usage of the Data ::: Data Annotation
The corpus has been (partly) annotated using the guidelines presented in Section SECREF3, on the basis of a preliminary analysis of the most common acoustic phenomena appearing in the data sets. Additional annotations could be included to address other spurious segments, for example: understandable words pronounced in other languages or by other students, detection of phonological interference, detection of spontaneous speech phenomena, detection of overlapped speech, etc. In order to measure specific proficiency indicators, e.g. related to pronunciation and fluency, suprasegmental annotations can also be inserted in the corpus.
Usage of the Data ::: Proficiency Assessment of L2 Learners
The corpus is a valuable resource for training and evaluating a scoring classifier based on different approaches. Preliminary results BIBREF0 show that suitable linguistic features, mainly based on statistical language models, allow the scores assigned by the human experts to be predicted. The evaluation campaign was conceived to verify the expected proficiency level according to class grade; as a result, although the proposed test cannot be used to assign a precise score to a given student, it allows the study of typical error patterns according to the age and level of the students. Furthermore, the fine-grained annotation, at sentence level, of the indicators described above is particularly suitable for creating a test bed for approaches based on “word embeddings” BIBREF30, BIBREF31, BIBREF32 to automatically estimate the language learner's proficiency. Indeed, the experiments reported in BIBREF30 demonstrate the superior performance of word embeddings for speech scoring with respect to the well-known (feature-based) SpeechRater system BIBREF33, BIBREF2. In this regard, we believe that additional, specific annotations can be developed and included in the “TLT-school” corpus.
Usage of the Data ::: Modelling Pronunciation
By looking at the manual transcriptions, it is straightforward to detect the most problematic words, i.e. frequently occurring words that were often marked as mispronounced (preceded by the label “#”). This allows a data set of well-pronounced vs. badly pronounced words to be prepared.
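One possible way to assemble such a data set, sketched under the assumption that transcriptions follow the markers described earlier (the transcription lines below are invented, and this is not the authors' actual preparation script), is to count, for each word, how often it carries a leading “#” out of its total occurrences.

```python
from collections import Counter

def pronunciation_counts(transcriptions):
    """For every word, count total occurrences and occurrences marked as
    mispronounced (leading '#'); '#*' (incomprehensible speech) and labels
    such as '@voices' or '()' are skipped."""
    total, bad = Counter(), Counter()
    for line in transcriptions:
        for tok in line.split():
            if tok == "#*" or tok == "()" or tok.startswith("@"):
                continue
            if tok.startswith("#"):
                word = tok[1:]
                bad[word] += 1
            else:
                word = tok
            total[word] += 1
    return total, bad

# Invented transcription lines following the markers described above.
lines = [
    "i am ten #years old",
    "@voices i am eleven years old",
    "my #favourite pet is a dog",
]
total, bad = pronunciation_counts(lines)
print(bad["years"], "/", total["years"])  # 1 / 2
```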
A list of words, partly mispronounced, is shown in Table , from which one can try to model typical pronunciation errors (note that other occurrences of the selected words could easily be extracted from the non-annotated data). Finally, as mentioned above, further manual checking and annotation could be introduced to improve the modelling of pronunciation errors.
Distribution of the Corpus
The corpus to be released is still under preparation, given the huge amount of spoken and written data; in particular, some checks are in progress in order to:
$\bullet $ remove from the data responses with personal or inadequate content (e.g. bad language);
$\bullet $ normalise the written responses (e.g. upper/lower case, punctuation, evident typos);
$\bullet $ normalise and verify the consistency of the transcription of spoken responses;
$\bullet $ check the available human scores and - if possible - merge or map the scores according to more general performance categories (e.g. delivery, language use, topic development) and an acknowledged scale (e.g. from 0 to 4); a purely illustrative sketch of such a mapping is given below.
In particular, a proposal for an international challenge focused on non-native children's speech recognition is being submitted; an English subset will be released, and prospective participants will be invited to propose and evaluate state-of-the-art techniques for dealing with the multiple issues related to this challenging ASR scenario (acoustic and language models, non-native lexicon, noisy recordings, etc.).
Conclusions and Future Works
We have described “TLT-school”, a corpus of both spoken and written answers collected during language evaluation campaigns carried out in schools of northern Italy. The procedure used for data acquisition and for their annotation in terms of proficiency indicators has also been reported. Part of the data has been manually transcribed according to some guidelines: this set of data is going to be made publicly available. With regard to data acquisition, some limitations of the corpus have been observed that might easily be overcome during the next campaigns. Special attention should be paid to enhancing the elicitation techniques, starting from adjusting the questions presented to test-takers. Some of the question prompts show shortcomings that can be addressed without major difficulty: on the one hand, in the spoken part, questions do not require test-takers to shift tense and some are too suggestive and close-ended; on the other hand, in the written part, some question prompts are presented both in the source and the target language, thus causing or encouraging code-mixing and negative transfer phenomena. The elicitation techniques in a broader sense will be the object of revision (see BIBREF34 and, specifically on children's speech, BIBREF35) in order to maximise the quality of the corpus. As for the proficiency indicators, a first step that could be taken to increase accuracy in the evaluation phase, both for human and automatic scoring, would be to divide the second indicator (pronunciation and fluency) into two different indicators, since fluent students might not necessarily have good pronunciation skills and vice versa, drawing for example on the IELTS Speaking band descriptors. Also, the next campaigns might consider an additional indicator specifically addressing prosody (in particular intonation and rhythm), especially for A2 and B1 level test-takers.
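As a purely illustrative note on the score-mapping check listed under Distribution of the Corpus above (the actual mapping, if any, has not yet been defined), summed indicator totals in the 0-12 range of the 2017/2018 campaigns could be binned onto an acknowledged 0-4 scale as follows; the bin boundaries are an assumption of this sketch.

```python
def map_total_to_coarse(total, max_total=12, levels=5):
    """Map a summed indicator score (0..max_total) onto a coarse scale with
    `levels` values (here 0..4) by proportional binning. The bin boundaries
    are an illustrative choice, not the mapping adopted for the released corpus."""
    if not 0 <= total <= max_total:
        raise ValueError("total score out of range")
    return round(total * (levels - 1) / max_total)

print([map_total_to_coarse(t) for t in range(13)])
# [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4]
```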
Considering the scope of the evaluation campaign, it is important to be aware of the limitations of the associated data sets: proficiency levels limited to A1, A2 and B1 (CEFR); custom indicators conceived for expert evaluation (not particularly suitable for automated evaluation); a limited number of responses per speaker. Nevertheless, as already discussed, the fact that the TLT campaign was carried out in 2016 and 2018 in the whole Trentino region makes the corpus a valuable linguistic resource for a number of studies associated with second language acquisition and evaluation. In particular, besides the already introduced proposal for an ASR challenge in 2020, other initiatives for the international community can be envisaged: a study of a fully automated evaluation procedure without the need for expert supervision; the investigation of end-to-end classifiers that directly use the spoken response as input and produce proficiency scores according to suitable rubrics.
Acknowledgements
This work has been partially funded by IPRASE (http://www.iprase.tn.it) under the project “TLT - Trentino Language Testing 2018”. We thank ISIT (http://www.isit.tn.it) for having provided the data and the reference scores.
Yes
9ebb2adf92a0f8db99efddcade02a20a219ca7d9
9ebb2adf92a0f8db99efddcade02a20a219ca7d9_0
Q: How is the proficiency score calculated?
Text: Introduction
We have acquired large sets of both written and spoken data during the implementation of campaigns aimed at assessing the proficiency, at school, of Italian pupils learning both German and English. Part of the acquired data has been included in a corpus, named “Trentino Language Testing in schools” (TLT-school), which is described in the following. All the collected sentences have been annotated by human experts in terms of some predefined “indicators” which, in turn, were used to assign a proficiency level to each student undertaking the assigned test. This level is expressed according to the well-known Common European Framework of Reference for Languages (Council of Europe, 2001) scale. The CEFR defines 6 levels of proficiency: A1 (beginner), A2, B1, B2, C1 and C2. The levels considered in the evaluation campaigns where the data have been collected are A1, A2 and B1. The indicators measure the linguistic competence of test takers both in relation to the content (e.g. grammatical correctness, lexical richness, semantic coherence, etc.) and to the speaking capabilities (e.g. pronunciation, fluency, etc.). Refer to Section SECREF2 for a description of the adopted indicators.
The learners are Italian students, between 9 and 16 years old. They took proficiency tests by answering question prompts provided in written form. The “TLT-school” corpus, which we are going to make publicly available, contains part of the spoken answers (together with the respective manual transcriptions) recorded during some of the above-mentioned evaluation campaigns. We will release the written answers in the future. Details and critical issues found during the acquisition of the test takers' answers will be discussed in Section SECREF2.
The tasks that can be addressed by using the corpus are very challenging and pose many problems, which have only partially been solved by the interested scientific community. From the ASR perspective, major difficulties are represented by: a) recognition of both child and non-native speech, i.e. Italian pupils speaking both English and German; b) the presence of a large number of spontaneous speech phenomena (hesitations, false starts, fragments of words, etc.); c) the presence of multiple languages (English, Italian and German words are frequently uttered in response to a single question); d) the presence of a significant level of background noise, due to the fact that the microphone remains open for a fixed time interval (e.g. 20 seconds, depending on the question); and e) the presence of non-collaborative speakers (students often joke, laugh, speak softly, etc.). Refer to Section SECREF6 for a detailed description of the collected spoken data set.
Furthermore, since the sets of data from which “TLT-school” was derived were primarily acquired for measuring the proficiency of second language (L2) learners, it is natural to exploit the corpus for automatic speech rating. To this purpose, one can try to develop automatic approaches to reliably estimate the above-mentioned indicators used by the human experts who scored the answers of the pupils (such an approach is described in BIBREF0). However, it should be noted that the scientific literature proposes several features and indicators for automatic speech scoring, partly different from those adopted in the “TLT-school” corpus (see below for a brief review of the literature).
Hence, we believe that adding new annotations to the corpus, related to particular aspects of language proficiency, can stimulate research and experimentation in this area.
Finally, it is worth mentioning that the written responses of the “TLT-school” corpus are also characterised by a high level of noise, due to: spelling errors, insertion of word fragments, presence of words belonging to multiple languages, presence of off-topic answers (e.g. containing jokes, comments not related to the questions, etc.). This set of text data will allow scientists to investigate both the language and the behaviour of pupils learning second languages at school. Written data are described in detail in Section SECREF5.
Relation to prior work. The scientific literature is rich in approaches for the automated assessment of spoken language proficiency. Performance is directly dependent on ASR accuracy which, in turn, depends on the type of input, read or spontaneous, and on the speakers' age, adults or children (see BIBREF1 for an overview of spoken language technology for education). A recent overview of the state-of-the-art automated speech scoring technology currently used at Educational Testing Service (ETS) can be found in BIBREF2. In order to address the automatic assessment of complex spoken tasks requiring more general communication capabilities from L2 learners, the AZELLA data set BIBREF3, developed by Pearson, has been collected and used as a benchmark in several studies BIBREF4, BIBREF3. The corpus contains $1,500$ spoken tests, each double graded by human professionals, from a variety of tasks. A public set of spoken data has recently been distributed in a spoken CALL (Computer Assisted Language Learning) shared task, where Swiss students learning English had to answer both written and spoken prompts. The goal of this challenge is to label students' spoken responses as “accept” or “reject”. Refer to BIBREF5 for details of the challenge and of the associated data sets.
Many non-native speech corpora (mostly with English as the target language) have been collected over the years. A list, though not recent, as well as a brief description of most of them, can be found in BIBREF6. The same paper also gives information on how the data sets are distributed and can be accessed (many of them are available through both the LDC and ELDA agencies). Some of the corpora also provide proficiency ratings to be used in CALL applications. Among them, we mention the ISLE corpus BIBREF7, which also contains transcriptions at the phonetic level and was used in the experiments reported in BIBREF0. Note that all the corpora mentioned in BIBREF6 come from adult speech while, to our knowledge, publicly available non-native children's speech corpora, as well as children's speech corpora in general, are still scarce. Specifically concerning non-native children's speech, we believe the following corpora are worth mentioning. The PF-STAR corpus (see BIBREF8) contains English utterances read by both Italian and German children, between 6 and 13 years old. The same corpus also contains utterances read by English children. The ChildIt corpus BIBREF9 contains English utterances (both read and imitated) by Italian children.
By distributing the “TLT-school” corpus, we hope to help researchers investigate novel approaches and models in the areas of both non-native and children's speech, and to build related benchmarks.
They used 6 indicators for proficiency (the same for written and spoken), each marked as bad, medium or good by one expert.
973f6284664675654cc9881745880a0e88f3280e
973f6284664675654cc9881745880a0e88f3280e_0
Q: What proficiency indicators are used to score the utterances?
Hence, we believe that adding new annotations to the corpus, related to particular aspects of language proficiency, can stimulate research and experimentation in this area. Finally, it is worth mentioning that also written responses of “TLT-school” corpus are characterised by a high level of noise due to: spelling errors, insertion of word fragments, presence of words belonging to multiple languages, presence of off-topic answers (e.g. containing jokes, comments not related to the questions, etc.). This set of text data will allow scientists to investigate both language and behaviour of pupils learning second languages at school. Written data are described in detail in Section SECREF5 Relation to prior work. Scientific literature is rich in approaches for automated assessment of spoken language proficiency. Performance is directly dependent on ASR accuracy which, in turn, depends on the type of input, read or spontaneous, and on the speakers' age, adults or children (see BIBREF1 for an overview of spoken language technology for education). A recent publication reporting an overview of state-of-the-art automated speech scoring technology as it is currently used at Educational Testing Service (ETS) can be found in BIBREF2. In order to address automatic assessment of complex spoken tasks requiring more general communication capabilities from L2 learners, the AZELLA data set BIBREF3, developed by Pearson, has been collected and used as benchmark for some researches BIBREF4, BIBREF3. The corpus contains $1,500$ spoken tests, each double graded by human professionals, from a variety of tasks. A public set of spoken data has been recently distributed in a spoken CALL (Computer Assisted Language Learning) shared task where Swiss students learning English had to answer to both written and spoken prompts. The goal of this challenge is to label students' spoken responses as “accept” or “reject”. Refer to BIBREF5 for details of the challenge and of the associated data sets. Many non-native speech corpora (mostly in English as target language) have been collected during the years. A list, though not recent, as well as a brief description of most of them can be found in BIBREF6. The same paper also gives information on how the data sets are distributed and can be accessed (many of them are available through both LDC and ELDA agencies). Some of the corpora also provide proficiency ratings to be used in CALL applications. Among them, we mention the ISLE corpus BIBREF7, which also contains transcriptions at the phonetic level and was used in the experiments reported in BIBREF0. Note that all corpora mentioned in BIBREF6 come from adult speech while, to our knowledge, the access to publicly available non-native children's speech corpora, as well as of children's speech corpora in general, is still scarce. Specifically concerning non-native children's speech, we believe worth mentioning the following corpora. The PF-STAR corpus (see BIBREF8) contains English utterances read by both Italian and German children, between 6 and 13 years old. The same corpus also contains utterances read by English children. The ChildIt corpus BIBREF9 contains English utterances (both read and imitated) by Italian children. By distributing “TLT-school” corpus, we hope to help researchers to investigate novel approaches and models in the areas of both non-native and children's speech and to build related benchmarks. 
Data Acquisition In Trentino, an autonomous region in northern Italy, there is a series of evaluation campaigns underway for testing L2 linguistic competence of Italian students taking proficiency tests in both English and German. A set of three evaluation campaigns is underway, two having been completed in 2016 and 2018, and a final one scheduled in 2020. Note that the “TLT-school” corpus refers to only the 2018 campaign, that was split in two parts: 2017 try-out data set (involving about 500 pupils) and the actual 2018 data (about 2500 pupils). Each of the three campaigns (i.e. 2016, 2018 and 2020) involves about 3000 students ranging from 9 to 16 years, belonging to four different school grade levels and three proficiency levels (A1, A2, B1). The schools involved in the evaluations are located in most part of the Trentino region, not only in its main towns; Table highlights some information about the pupils that took part to the campaigns. Several tests, aimed at assessing the language learning skills of the students, were carried out by means of multiple-choice questions, which can be evaluated automatically. However, a detailed linguistic evaluation cannot be performed without allowing the students to express themselves in both written sentences and spoken utterances, which typically require the intervention of human experts to be scored. Tables and report some statistics extracted from both the written and spoken data collected so far in all the campaigns. Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively. The list of the indicators used by the experts to score written sentences and spoken utterances in the evaluations, grouped by similarity, is reported in Table . Since every utterance was scored by only one expert, it was not possible to evaluate any kind of agreement among experts. For future evaluations, more experts are expected to provide independent scoring on same data sets, so this kind of evaluation will be possible. Data Acquisition ::: Prompts The speaking part of the proficiency tests in 2017/2018 consists of 47 question prompts provided in written form: 24 in English and 23 in German, divided according to CEFR levels. Apart from A1 level, which differs in the number of questions (11 for English; 10 for German), both English and German A2 and B1 levels have respectively 6 and 7 questions each. As for A1 level, the first four introductory questions are the same (How old are you?, Where do you live?, What are your hobbies?, Wie alt bist du?, Wo wohnst du?, Was sind deine Hobbys?) or slightly different (What's your favourite pet?, Welche Tiere magst du?) in both languages, whereas the second part of the test puts the test-takers in the role of a customer in a pizzeria (English) or in a bar (German). A2 level test is composed of small talk questions which relate to everyday life situations. In this case, questions are more open-ended than the aforementioned ones and allow the test-takers to interact by means of a broader range of answers. Finally, as for B1 level, questions are similar to A2 ones, but they include a role-play activity in the final part, which allows a good amount of freedom and creativity in answering the question. 
Data Acquisition ::: Written Data Table reports some statistics extracted from the written data collected so far. In this table, the number of pupils taking part in the English and German evaluation is reported, along with the number of sentences and tokens, identified as character sequences bounded by spaces. It is worth mentioning that the collected texts contain a large quantity of errors of several types: orthographic, syntactic, code-switched words (i.e. words not in the required language), jokes, etc. Hence, the original written sentences have been processed in order to produce “cleaner” versions, in order to make the data usable for some research purposes (e.g. to train language models, to extract features for proficiency assessment, ...). To do this, we have applied some text processing, that in sequence: $\bullet $ removes strange characters; $\bullet $ performs some text normalisation (lowercase, umlaut, numbers, ...) and tokenisation (punctuation, etc.) $\bullet $ removes / corrects non words (e.g. hallooooooooooo becomes hallo; aaaaaaaaeeeeeeeeiiiiiiii is removed) $\bullet $ identifies the language of each word, choosing among Italian, English, German; $\bullet $ corrects common typing errors (e.g. ai em becomes i am) $\bullet $ replaces unknown words, with respect to a large lexicon, with the label $<$unk$>$. Table reports some samples of written answers. Data Acquisition ::: Spoken Data Table reports some statistics extracted from the acquired spoken data. Speech was recorded in classrooms, whose equipment depended on each school. In general, around 20 students took the test together, at the same time and in the same classrooms, so it is quite common that speech of mates or teachers often overlaps with the speech of the student speaking in her/his microphone. Also, the type of microphone depends on the equipment of the school. On average, the audio signal quality is nearly good, while the main problem is caused by a high percentage of extraneous speech. This is due to the fact that organisers decided to use a fixed duration - which depends on the question - for recording spoken utterances, so that all the recordings for a given question have the same length. However, while it is rare that a speaker has not enough time to answer, it is quite common that, especially after the end of the utterance, some other speech (e.g. comments, jokes with mates, indications from the teachers, etc.) is captured. In addition, background noise is often present due to several sources (doors, steps, keyboard typing, background voices, street noises if the windows are open, etc). Finally, it has to be pointed out that many answers are whispered and difficult to understand. Manual Transcriptions In order to create both an adaptation and an evaluation set for ASR, we manually transcribed part of the 2017 data sets. We defined an initial set of guidelines for the annotation, which were used by 5 researchers to manually transcribe about 20 minutes of audio data. This experience led to a discussion, from which a second set of guidelines originated, aiming at reaching a reasonable trade-off between transcription accuracy and speed. 
As a consequence, we decided to apply the following transcription rules: only the main speaker has to be transcribed; presence of other voices (schoolmates, teacher) should be reported only with the label “@voices”, presence of whispered speech was found to be significant, so it should be explicitly marked with the label “()”, badly pronounced words have to be marked by a “#” sign, without trying to phonetically transcribe the pronounced sounds; “#*” marks incomprehensible speech; speech in a different language from the target language has to be reported by means of an explicit marker “I am 10 years old @it(io ho già risposto)”. Next, we concatenated utterances to be transcribed into blocks of about 5 minutes each. We noticed that knowing the question and hearing several answers could be of great help for transcribing some poorly pronounced words or phrases. Therefore, each block contains only answers to the same question, explicitly reported at the beginning of the block. We engaged about 30 students from two Italian linguistic high schools (namely “C” and “S”) to perform manual transcriptions. After a joint training session, we paired students together. Each pair first transcribed, individually, the same block of 5 minutes. Then, they went through a comparison phase, where each pair of students discussed their choices and agreed on a single transcription for the assigned data. Transcriptions made before the comparison phase were retained to evaluate inter-annotator agreement. Apart from this first 5 minute block, each utterance was transcribed by only one transcriber. Inter-annotator agreement for the 5-minute blocks is shown in Table in terms of words (after removing hesitations and other labels related to background voices and noises, etc.). The low level of agreement reflects the difficulty of the task. In order to assure quality of the manual transcriptions, every sentence transcribed by the high school students was automatically processed to find out possible formal errors, and manually validated by researchers in our lab. Speakers were assigned either to training or evaluation sets, with proportions of $\frac{2}{3}$ and $\frac{1}{3}$, respectively; then training and evaluation lists were built, accordingly. Table reports statistics from the spoken data set. The id All identifies the whole data set, while Clean defines the subset in which sentences containing background voices, incomprehensible speech and word fragments were excluded. Usage of the Data From the above description it appears that the corpus can be effectively used in many research directions. Usage of the Data ::: ASR-related Challenges The spoken corpus features non-native speech recordings in real classrooms and, consequently, peculiar phenomena appear and can be investigated. Phonological and cross-language interference requires specific approaches for accurate acoustic modelling. Moreover, for coping with cross-language interference it is important to consider alternative ways to represent specific words (e.g. words of two languages with the same graphemic representation). Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages. 
Refer to BIBREF10 for comparisons with a different non-native children speech data set and to scientific literature BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 for detailed descriptions of children speech recognition and related issues. Important, although not exhaustive of the topic, references on non-native speech recognition can be found in BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, BIBREF29. As for language models, accurate transcriptions of spoken responses demand for models able to cope with not well-formed expressions (due to students' grammatical errors). Also the presence of code-switched words, words fragments and spontaneous speech phenomena requires specific investigations to reduce their impact on the final performance. We believe that the particular domain and set of data pave the way to investigate into various ASR topics, such as: non-native speech, children speech, spontaneous speech, code-switching, multiple pronunciation, etc. Usage of the Data ::: Data Annotation The corpus has been (partly) annotated using the guidelines presented in Section SECREF3 on the basis of a preliminary analysis of the most common acoustic phenomena appearing in the data sets. Additional annotations could be included to address topics related to other spurious segments, as for example: understandable words pronounced in other languages or by other students, detection of phonological interference, detection of spontaneous speech phenomena, detection of overlapped speech, etc. In order to measure specific proficiency indicators, e.g. related to pronunciation and fluency, suprasegmental annotations can be also inserted in the corpus. Usage of the Data ::: Proficiency Assessment of L2 Learners The corpus is a valuable resource for training and evaluating a scoring classifier based on different approaches. Preliminary results BIBREF0 show that the usage of suitable linguistic features mainly based on statistical language models allow to predict the scores assigned by the human experts. The evaluation campaign has been conceived to verify the expected proficiency level according to class grade; as a result, although the proposed test cannot be used to assign a precise score to a given student, it allows to study typical error patterns according to age and level of the students. Furthermore, the fine-grained annotation, at sentence level, of the indicators described above is particularly suitable for creating a test bed for approaches based on “word embeddings” BIBREF30, BIBREF31, BIBREF32 to automatically estimate the language learner proficiency. Actually, the experiments reported in BIBREF30 demonstrate superior performance of word-embeddings for speech scoring with respect to the well known (feature-based) SpeechRater system BIBREF33, BIBREF2. In this regard, we believe that additional, specific annotations can be developed and included in the “TLT-school” corpus. Usage of the Data ::: Modelling Pronunciation By looking at the manual transcriptions, it is straightforward to detect the most problematic words, i.e. frequently occurring words, which were often marked as mispronounced (preceded by label “#”). This allows to prepare a set of data composed by good pronounced vs. bad pronounced words. 
A list of words, partly mispronounced, is shown in Table , from which one can try to model typical pronunciation errors (note that other occurrences of the selected words could be easily extracted from the non-annotated data). Finally, as mentioned above, further manual checking and annotation could be introduced to improve modelling of pronunciation errors. Distribution of the Corpus The corpus to be released is still under preparation, given the huge amount of spoken and written data; in particular, some checks are in progress in order to: remove from the data responses with personal or inadequate content (e.g. bad language); normalise the written responses (e.g. upper/lower case, punctuation, evident typos); normalise and verify the consistency of the transcription of spoken responses; check the available human scores and - if possible - merge or map the scores according to more general performance categories (e.g. delivery, language use, topic development) and an acknowledged scale (e.g. from 0 to 4). In particular, the proposal for an international challenge focused on non-native children speech recognition is being submitted where an English subset will be released and the perspective participants are invited to propose and evaluate state-of-art techniques for dealing with the multiple issues related to this challenging ASR scenario (acoustic and language models, non-native lexicon, noisy recordings, etc.). Conclusions and Future Works We have described “TLT-school”, a corpus of both spoken and written answers collected during language evaluation campaigns carried out in schools of northern Italy. The procedure used for data acquisition and for their annotation in terms of proficiency indicators has been also reported. Part of the data has been manually transcribed according to some guidelines: this set of data is going to be made publicly available. With regard to data acquisition, some limitations of the corpus have been observed that might be easily overcome during next campaigns. Special attention should be paid to enhancing the elicitation techniques, starting from adjusting the questions presented to test-takers. Some of the question prompts show some lacks that can be filled in without major difficulty: on the one hand, in the spoken part, questions do not require test-takers to shift tense and some are too suggestive and close-ended; on the other hand, in the written part, some question prompts are presented both in source and target language, thus causing or encouraging code-mixing and negative transfer phenomena. The elicitation techniques in a broader sense will be object of revision (see BIBREF34 and specifically on children speech BIBREF35) in order to maximise the quality of the corpus. As for proficiency indicators, one first step that could be taken in order to increase accuracy in the evaluation phase both for human and automatic scoring would be to divide the second indicator (pronunciation and fluency) into two different indicators, since fluent students might not necessarily have good pronunciation skills and vice versa, drawing for example on the IELTS Speaking band descriptors. Also, next campaigns might consider an additional indicator specifically addressed to score prosody (in particular intonation and rhythm), especially for A2 and B1 level test-takers. 
Considering the scope of the evaluation campaign, it is important to be aware of the limitations of the associated data sets: proficiency levels limited to A1, A2 and B1 (CEFR); custom indicators conceived for expert evaluation (not particularly suitable for automated evaluation); and a limited number of responses per speaker. Nevertheless, as already discussed, the fact that the TLT campaign was carried out in 2016 and 2018 in the whole Trentino region makes the corpus a valuable linguistic resource for a number of studies related to second language acquisition and evaluation. In particular, besides the aforementioned proposal for an ASR challenge in 2020, other initiatives for the international community can be envisaged: a study of a fully automated evaluation procedure without the need for expert supervision, and the investigation of end-to-end classifiers that directly take the spoken response as input and produce proficiency scores according to suitable rubrics (an illustrative skeleton of such a classifier is sketched below).

Acknowledgements
This work has been partially funded by IPRASE (http://www.iprase.tn.it) under the project “TLT - Trentino Language Testing 2018”. We thank ISIT (http://www.isit.tn.it) for providing the data and the reference scores.
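Regarding the end-to-end direction mentioned above, the following is a purely illustrative skeleton, not the authors' system: it assumes log-mel filterbank features as input and regresses a single proficiency score, and all layer sizes are arbitrary placeholders.

```python
import torch
import torch.nn as nn

class E2EScorer(nn.Module):
    """Toy end-to-end scorer: log-mel frames in, one proficiency score out."""
    def __init__(self, n_mels: int = 40, hidden: int = 128):
        super().__init__()
        self.encoder = nn.GRU(input_size=n_mels, hidden_size=hidden,
                              num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),             # regression towards the human total score
        )

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, frames, n_mels)
        out, _ = self.encoder(mel)
        pooled = out.mean(dim=1)              # average over time
        return self.head(pooled).squeeze(-1)

# Random features standing in for one spoken response (about 10 s at 100 frames/s):
model = E2EScorer()
dummy = torch.randn(1, 1000, 40)
print(model(dummy).shape)                     # torch.Size([1])
```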
6 indicators:
- lexical richness
- pronunciation and fluency
- syntactical correctness
- fulfillment of delivery
- coherence and cohesion
- communicative, descriptive, narrative skills
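For reference, the paper states that each of these indicators is scored 0, 1 or 2 (bad, medium, good) and that the total score is their sum. The sketch below computes that total and, as a hypothetical example of the 0-4 mapping suggested for the corpus release, rescales it linearly; the exact mapping is an assumption, not defined in the paper.

```python
INDICATORS = [
    "lexical richness",
    "pronunciation and fluency",
    "syntactical correctness",
    "fulfillment of delivery",
    "coherence and cohesion",
    "communicative, descriptive, narrative skills",
]

def total_score(indicator_scores: dict) -> int:
    """Sum the six 0/1/2 indicator scores into the total assigned by the experts."""
    assert set(indicator_scores) == set(INDICATORS)
    assert all(v in (0, 1, 2) for v in indicator_scores.values())
    return sum(indicator_scores.values())        # ranges from 0 to 12

def map_to_0_4(total: int) -> int:
    """Hypothetical linear mapping of the 0-12 total onto a 0-4 scale."""
    return round(4 * total / 12)

example = dict.fromkeys(INDICATORS, 1)           # 'medium' on every indicator
example["pronunciation and fluency"] = 2
print(total_score(example), map_to_0_4(total_score(example)))   # -> 7 2
```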
Q: What accuracy is achieved by the speech recognition system?
Accuracy as such is not reported; WER results are given instead: 42.6 for German and 35.9 for English.
Q: How is the speech recognition system evaluated?
The speech recognition system is evaluated using the WER (word error rate) metric.
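For clarity, WER is obtained by aligning the automatic transcription against the manual reference at the word level; a minimal reference implementation (standard Levenshtein alignment, not the authors' actual scoring tool) is:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate (in %) via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return 100.0 * dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("i am ten years old", "i am a ten year old"))   # -> 40.0
```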
Q: How many of the utterances are transcribed?
Hence, we believe that adding new annotations to the corpus, related to particular aspects of language proficiency, can stimulate research and experimentation in this area. Finally, it is worth mentioning that also written responses of “TLT-school” corpus are characterised by a high level of noise due to: spelling errors, insertion of word fragments, presence of words belonging to multiple languages, presence of off-topic answers (e.g. containing jokes, comments not related to the questions, etc.). This set of text data will allow scientists to investigate both language and behaviour of pupils learning second languages at school. Written data are described in detail in Section SECREF5 Relation to prior work. Scientific literature is rich in approaches for automated assessment of spoken language proficiency. Performance is directly dependent on ASR accuracy which, in turn, depends on the type of input, read or spontaneous, and on the speakers' age, adults or children (see BIBREF1 for an overview of spoken language technology for education). A recent publication reporting an overview of state-of-the-art automated speech scoring technology as it is currently used at Educational Testing Service (ETS) can be found in BIBREF2. In order to address automatic assessment of complex spoken tasks requiring more general communication capabilities from L2 learners, the AZELLA data set BIBREF3, developed by Pearson, has been collected and used as benchmark for some researches BIBREF4, BIBREF3. The corpus contains $1,500$ spoken tests, each double graded by human professionals, from a variety of tasks. A public set of spoken data has been recently distributed in a spoken CALL (Computer Assisted Language Learning) shared task where Swiss students learning English had to answer to both written and spoken prompts. The goal of this challenge is to label students' spoken responses as “accept” or “reject”. Refer to BIBREF5 for details of the challenge and of the associated data sets. Many non-native speech corpora (mostly in English as target language) have been collected during the years. A list, though not recent, as well as a brief description of most of them can be found in BIBREF6. The same paper also gives information on how the data sets are distributed and can be accessed (many of them are available through both LDC and ELDA agencies). Some of the corpora also provide proficiency ratings to be used in CALL applications. Among them, we mention the ISLE corpus BIBREF7, which also contains transcriptions at the phonetic level and was used in the experiments reported in BIBREF0. Note that all corpora mentioned in BIBREF6 come from adult speech while, to our knowledge, the access to publicly available non-native children's speech corpora, as well as of children's speech corpora in general, is still scarce. Specifically concerning non-native children's speech, we believe worth mentioning the following corpora. The PF-STAR corpus (see BIBREF8) contains English utterances read by both Italian and German children, between 6 and 13 years old. The same corpus also contains utterances read by English children. The ChildIt corpus BIBREF9 contains English utterances (both read and imitated) by Italian children. By distributing “TLT-school” corpus, we hope to help researchers to investigate novel approaches and models in the areas of both non-native and children's speech and to build related benchmarks. 
Data Acquisition In Trentino, an autonomous region in northern Italy, there is a series of evaluation campaigns underway for testing L2 linguistic competence of Italian students taking proficiency tests in both English and German. A set of three evaluation campaigns is underway, two having been completed in 2016 and 2018, and a final one scheduled in 2020. Note that the “TLT-school” corpus refers to only the 2018 campaign, that was split in two parts: 2017 try-out data set (involving about 500 pupils) and the actual 2018 data (about 2500 pupils). Each of the three campaigns (i.e. 2016, 2018 and 2020) involves about 3000 students ranging from 9 to 16 years, belonging to four different school grade levels and three proficiency levels (A1, A2, B1). The schools involved in the evaluations are located in most part of the Trentino region, not only in its main towns; Table highlights some information about the pupils that took part to the campaigns. Several tests, aimed at assessing the language learning skills of the students, were carried out by means of multiple-choice questions, which can be evaluated automatically. However, a detailed linguistic evaluation cannot be performed without allowing the students to express themselves in both written sentences and spoken utterances, which typically require the intervention of human experts to be scored. Tables and report some statistics extracted from both the written and spoken data collected so far in all the campaigns. Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively. The list of the indicators used by the experts to score written sentences and spoken utterances in the evaluations, grouped by similarity, is reported in Table . Since every utterance was scored by only one expert, it was not possible to evaluate any kind of agreement among experts. For future evaluations, more experts are expected to provide independent scoring on same data sets, so this kind of evaluation will be possible. Data Acquisition ::: Prompts The speaking part of the proficiency tests in 2017/2018 consists of 47 question prompts provided in written form: 24 in English and 23 in German, divided according to CEFR levels. Apart from A1 level, which differs in the number of questions (11 for English; 10 for German), both English and German A2 and B1 levels have respectively 6 and 7 questions each. As for A1 level, the first four introductory questions are the same (How old are you?, Where do you live?, What are your hobbies?, Wie alt bist du?, Wo wohnst du?, Was sind deine Hobbys?) or slightly different (What's your favourite pet?, Welche Tiere magst du?) in both languages, whereas the second part of the test puts the test-takers in the role of a customer in a pizzeria (English) or in a bar (German). A2 level test is composed of small talk questions which relate to everyday life situations. In this case, questions are more open-ended than the aforementioned ones and allow the test-takers to interact by means of a broader range of answers. Finally, as for B1 level, questions are similar to A2 ones, but they include a role-play activity in the final part, which allows a good amount of freedom and creativity in answering the question. 
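As a small illustration of the scoring scheme introduced above (in 2017/2018 each response receives 6 indicator scores, each taking the value 0, 1 or 2, and the total score is their sum), here is a minimal sketch; the indicator names are placeholders, not the official rubric labels.

# Minimal sketch of the 2017/2018 total-score computation: each indicator
# is scored 0 (bad), 1 (medium) or 2 (good) and the total score is the sum
# over the 6 indicators. Indicator names are illustrative placeholders.

INDICATORS = [
    "lexical_richness",
    "grammatical_correctness",
    "semantic_coherence",
    "pronunciation_and_fluency",
    "task_completion",
    "communicative_effectiveness",
]

def total_score(indicator_scores: dict) -> int:
    """Sum the per-indicator scores (each expected in {0, 1, 2})."""
    for name, value in indicator_scores.items():
        if value not in (0, 1, 2):
            raise ValueError(f"indicator {name!r} has invalid score {value}")
    return sum(indicator_scores.values())

if __name__ == "__main__":
    example = {name: 1 for name in INDICATORS}   # all "medium"
    example["pronunciation_and_fluency"] = 2
    print(total_score(example))                  # -> 7 (out of a maximum of 12)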
Data Acquisition ::: Written Data Table reports some statistics extracted from the written data collected so far. In this table, the number of pupils taking part in the English and German evaluation is reported, along with the number of sentences and tokens, identified as character sequences bounded by spaces. It is worth mentioning that the collected texts contain a large quantity of errors of several types: orthographic, syntactic, code-switched words (i.e. words not in the required language), jokes, etc. Hence, the original written sentences have been processed in order to produce “cleaner” versions, in order to make the data usable for some research purposes (e.g. to train language models, to extract features for proficiency assessment, ...). To do this, we have applied some text processing, that in sequence: $\bullet $ removes strange characters; $\bullet $ performs some text normalisation (lowercase, umlaut, numbers, ...) and tokenisation (punctuation, etc.) $\bullet $ removes / corrects non words (e.g. hallooooooooooo becomes hallo; aaaaaaaaeeeeeeeeiiiiiiii is removed) $\bullet $ identifies the language of each word, choosing among Italian, English, German; $\bullet $ corrects common typing errors (e.g. ai em becomes i am) $\bullet $ replaces unknown words, with respect to a large lexicon, with the label $<$unk$>$. Table reports some samples of written answers. Data Acquisition ::: Spoken Data Table reports some statistics extracted from the acquired spoken data. Speech was recorded in classrooms, whose equipment depended on each school. In general, around 20 students took the test together, at the same time and in the same classrooms, so it is quite common that speech of mates or teachers often overlaps with the speech of the student speaking in her/his microphone. Also, the type of microphone depends on the equipment of the school. On average, the audio signal quality is nearly good, while the main problem is caused by a high percentage of extraneous speech. This is due to the fact that organisers decided to use a fixed duration - which depends on the question - for recording spoken utterances, so that all the recordings for a given question have the same length. However, while it is rare that a speaker has not enough time to answer, it is quite common that, especially after the end of the utterance, some other speech (e.g. comments, jokes with mates, indications from the teachers, etc.) is captured. In addition, background noise is often present due to several sources (doors, steps, keyboard typing, background voices, street noises if the windows are open, etc). Finally, it has to be pointed out that many answers are whispered and difficult to understand. Manual Transcriptions In order to create both an adaptation and an evaluation set for ASR, we manually transcribed part of the 2017 data sets. We defined an initial set of guidelines for the annotation, which were used by 5 researchers to manually transcribe about 20 minutes of audio data. This experience led to a discussion, from which a second set of guidelines originated, aiming at reaching a reasonable trade-off between transcription accuracy and speed. 
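Before listing the transcription rules, a rough sketch of the text-cleaning pipeline applied to the written answers (described in the Written Data subsection above); the word lists, typo table and regular expressions are illustrative stand-ins for the project's actual resources, and the per-word language-identification step is omitted here.

import re

# Toy stand-ins for the real resources: a large lexicon and a table of
# frequent typing errors. Both are illustrative only.
LEXICON = {"i", "am", "ten", "years", "old", "hallo", "ich", "bin"}
COMMON_TYPOS = {"ai em": "i am"}

def clean_written_answer(text: str) -> str:
    # 1) remove strange characters, lowercase, basic tokenisation
    text = re.sub(r"[^a-zäöüß0-9' ]+", " ", text.lower())
    tokens = []
    for tok in text.split():
        # 2) drop non-words made only of vowels ("aaaaeeeeiiii" is removed)
        if re.fullmatch(r"[aeiou]{4,}", tok):
            continue
        # 3) collapse long character repetitions ("hallooooo" -> "hallo")
        tokens.append(re.sub(r"(.)\1{2,}", r"\1", tok))
    text = " ".join(tokens)
    # 4) correct common typing errors
    for wrong, right in COMMON_TYPOS.items():
        text = text.replace(wrong, right)
    # 5) replace words unknown to the lexicon with <unk>
    return " ".join(t if t in LEXICON else "<unk>" for t in text.split())

print(clean_written_answer("Hallooooooooooo!!! ai em ten yers old aaaaaaaaeeeeeeeeiiiiiiii"))
# -> "hallo i am ten <unk> old"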
As a consequence, we decided to apply the following transcription rules: only the main speaker has to be transcribed; presence of other voices (schoolmates, teacher) should be reported only with the label “@voices”, presence of whispered speech was found to be significant, so it should be explicitly marked with the label “()”, badly pronounced words have to be marked by a “#” sign, without trying to phonetically transcribe the pronounced sounds; “#*” marks incomprehensible speech; speech in a different language from the target language has to be reported by means of an explicit marker “I am 10 years old @it(io ho già risposto)”. Next, we concatenated utterances to be transcribed into blocks of about 5 minutes each. We noticed that knowing the question and hearing several answers could be of great help for transcribing some poorly pronounced words or phrases. Therefore, each block contains only answers to the same question, explicitly reported at the beginning of the block. We engaged about 30 students from two Italian linguistic high schools (namely “C” and “S”) to perform manual transcriptions. After a joint training session, we paired students together. Each pair first transcribed, individually, the same block of 5 minutes. Then, they went through a comparison phase, where each pair of students discussed their choices and agreed on a single transcription for the assigned data. Transcriptions made before the comparison phase were retained to evaluate inter-annotator agreement. Apart from this first 5 minute block, each utterance was transcribed by only one transcriber. Inter-annotator agreement for the 5-minute blocks is shown in Table in terms of words (after removing hesitations and other labels related to background voices and noises, etc.). The low level of agreement reflects the difficulty of the task. In order to assure quality of the manual transcriptions, every sentence transcribed by the high school students was automatically processed to find out possible formal errors, and manually validated by researchers in our lab. Speakers were assigned either to training or evaluation sets, with proportions of $\frac{2}{3}$ and $\frac{1}{3}$, respectively; then training and evaluation lists were built, accordingly. Table reports statistics from the spoken data set. The id All identifies the whole data set, while Clean defines the subset in which sentences containing background voices, incomprehensible speech and word fragments were excluded. Usage of the Data From the above description it appears that the corpus can be effectively used in many research directions. Usage of the Data ::: ASR-related Challenges The spoken corpus features non-native speech recordings in real classrooms and, consequently, peculiar phenomena appear and can be investigated. Phonological and cross-language interference requires specific approaches for accurate acoustic modelling. Moreover, for coping with cross-language interference it is important to consider alternative ways to represent specific words (e.g. words of two languages with the same graphemic representation). Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages. 
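Both these WER figures and the word-level inter-annotator agreement mentioned earlier reduce to an edit-distance comparison between two word sequences. A minimal, generic implementation is sketched below; it is not the scoring tool used in the campaign, only an illustration of the metric.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with standard Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# e.g. comparing an ASR hypothesis (or a second transcriber's version)
# against a reference transcription:
print(word_error_rate("i am ten years old", "i am then years"))  # -> 0.4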
Refer to BIBREF10 for comparisons with a different non-native children speech data set and to scientific literature BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 for detailed descriptions of children speech recognition and related issues. Important, although not exhaustive of the topic, references on non-native speech recognition can be found in BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, BIBREF29. As for language models, accurate transcriptions of spoken responses demand for models able to cope with not well-formed expressions (due to students' grammatical errors). Also the presence of code-switched words, words fragments and spontaneous speech phenomena requires specific investigations to reduce their impact on the final performance. We believe that the particular domain and set of data pave the way to investigate into various ASR topics, such as: non-native speech, children speech, spontaneous speech, code-switching, multiple pronunciation, etc. Usage of the Data ::: Data Annotation The corpus has been (partly) annotated using the guidelines presented in Section SECREF3 on the basis of a preliminary analysis of the most common acoustic phenomena appearing in the data sets. Additional annotations could be included to address topics related to other spurious segments, as for example: understandable words pronounced in other languages or by other students, detection of phonological interference, detection of spontaneous speech phenomena, detection of overlapped speech, etc. In order to measure specific proficiency indicators, e.g. related to pronunciation and fluency, suprasegmental annotations can be also inserted in the corpus. Usage of the Data ::: Proficiency Assessment of L2 Learners The corpus is a valuable resource for training and evaluating a scoring classifier based on different approaches. Preliminary results BIBREF0 show that the usage of suitable linguistic features mainly based on statistical language models allow to predict the scores assigned by the human experts. The evaluation campaign has been conceived to verify the expected proficiency level according to class grade; as a result, although the proposed test cannot be used to assign a precise score to a given student, it allows to study typical error patterns according to age and level of the students. Furthermore, the fine-grained annotation, at sentence level, of the indicators described above is particularly suitable for creating a test bed for approaches based on “word embeddings” BIBREF30, BIBREF31, BIBREF32 to automatically estimate the language learner proficiency. Actually, the experiments reported in BIBREF30 demonstrate superior performance of word-embeddings for speech scoring with respect to the well known (feature-based) SpeechRater system BIBREF33, BIBREF2. In this regard, we believe that additional, specific annotations can be developed and included in the “TLT-school” corpus. Usage of the Data ::: Modelling Pronunciation By looking at the manual transcriptions, it is straightforward to detect the most problematic words, i.e. frequently occurring words, which were often marked as mispronounced (preceded by label “#”). This allows to prepare a set of data composed by good pronounced vs. bad pronounced words. 
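A rough sketch of how such a good-vs-bad pronunciation set could be collected from the manual transcriptions, using only the "#" convention introduced in Section 3; the example transcriptions are invented and the handling of the other annotation labels is simplified.

import re
from collections import Counter

def pronunciation_counts(transcriptions):
    """Count, for each word, how often it was marked as mispronounced
    ('#word') and how often it was transcribed normally."""
    good, bad = Counter(), Counter()
    for line in transcriptions:
        # drop annotation labels we do not need here
        line = re.sub(r"@\w+(\([^)]*\))?", " ", line)    # @voices, @it(...)
        line = line.replace("(", " ").replace(")", " ")  # whispered-speech marks
        for token in line.split():
            if token == "#*":          # incomprehensible speech
                continue
            if token.startswith("#"):
                bad[token.lstrip("#").lower()] += 1
            else:
                good[token.lower()] += 1
    return good, bad

# Invented example transcriptions using the annotation conventions above.
good, bad = pronunciation_counts(
    ["i am ten #years old @voices", "my #favourite pet is a dog @it(non lo so)"]
)
print(bad)   # -> Counter({'years': 1, 'favourite': 1})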
A list of words, partly mispronounced, is shown in Table , from which one can try to model typical pronunciation errors (note that other occurrences of the selected words could be easily extracted from the non-annotated data). Finally, as mentioned above, further manual checking and annotation could be introduced to improve modelling of pronunciation errors. Distribution of the Corpus The corpus to be released is still under preparation, given the huge amount of spoken and written data; in particular, some checks are in progress in order to: remove from the data responses with personal or inadequate content (e.g. bad language); normalise the written responses (e.g. upper/lower case, punctuation, evident typos); normalise and verify the consistency of the transcription of spoken responses; check the available human scores and - if possible - merge or map the scores according to more general performance categories (e.g. delivery, language use, topic development) and an acknowledged scale (e.g. from 0 to 4). In particular, the proposal for an international challenge focused on non-native children speech recognition is being submitted where an English subset will be released and the perspective participants are invited to propose and evaluate state-of-art techniques for dealing with the multiple issues related to this challenging ASR scenario (acoustic and language models, non-native lexicon, noisy recordings, etc.). Conclusions and Future Works We have described “TLT-school”, a corpus of both spoken and written answers collected during language evaluation campaigns carried out in schools of northern Italy. The procedure used for data acquisition and for their annotation in terms of proficiency indicators has been also reported. Part of the data has been manually transcribed according to some guidelines: this set of data is going to be made publicly available. With regard to data acquisition, some limitations of the corpus have been observed that might be easily overcome during next campaigns. Special attention should be paid to enhancing the elicitation techniques, starting from adjusting the questions presented to test-takers. Some of the question prompts show some lacks that can be filled in without major difficulty: on the one hand, in the spoken part, questions do not require test-takers to shift tense and some are too suggestive and close-ended; on the other hand, in the written part, some question prompts are presented both in source and target language, thus causing or encouraging code-mixing and negative transfer phenomena. The elicitation techniques in a broader sense will be object of revision (see BIBREF34 and specifically on children speech BIBREF35) in order to maximise the quality of the corpus. As for proficiency indicators, one first step that could be taken in order to increase accuracy in the evaluation phase both for human and automatic scoring would be to divide the second indicator (pronunciation and fluency) into two different indicators, since fluent students might not necessarily have good pronunciation skills and vice versa, drawing for example on the IELTS Speaking band descriptors. Also, next campaigns might consider an additional indicator specifically addressed to score prosody (in particular intonation and rhythm), especially for A2 and B1 level test-takers. 
Considering the scope of the evaluation campaign, it is important to be aware of the limitations of the associated data sets: proficiency levels limited to A1, B1 and B2 (CEFR); custom indicators conceived for expert evaluation (not particularly suitable for automated evaluation); limited amount of responses per speaker. Nevertheless, as already discussed, the fact that the TLT campaign was carried out in 2016 and 2018 in the whole Trentino region makes the corpus a valuable linguistic resource for a number of studies associated to second language acquisition and evaluation. In particular, besides the already introduced proposal for an ASR challenge in 2020, other initiatives for the international community can be envisaged: a study of a fully-automated evaluation procedure without the need of experts' supervision; the investigation of end-to-end classifiers that directly use the spoken response as input and produce proficiency scores according to suitable rubrics. Acknowledgements This work has been partially funded by IPRASE (http://www.iprase.tn.it) under the project “TLT - Trentino Language Testing 2018”. We thank ISIT (http://www.isit.tn.it) for having provided the data and the reference scores.
The total number of transcribed utterances, across the training and evaluation sets and for both English and German, is 5,562 (2,188 in the clean subset).
d7d611f622552142723e064f330d071f985e805c
d7d611f622552142723e064f330d071f985e805c_0
Q: How many utterances are in the corpus? Text: Introduction We have acquired large sets of both written and spoken data during the implementation of campaigns aimed at assessing the proficiency, at school, of Italian pupils learning both German and English. Part of the acquired data has been included in a corpus, named "Trentino Language Testing" in schools (TLT-school), that will be described in the following. All the collected sentences have been annotated by human experts in terms of some predefined “indicators” which, in turn, were used to assign the proficiency level to each student undertaking the assigned test. This level is expressed according to the well-known Common European Framework of Reference for Languages (Council of Europe, 2001) scale. The CEFR defines 6 levels of proficiency: A1 (beginner), A2, B1, B2, C1 and C2. The levels considered in the evaluation campaigns where the data have been collected are: A1, A2 and B1. The indicators measure the linguistic competence of test takers both in relation to the content (e.g. grammatical correctness, lexical richness, semantic coherence, etc.) and to the speaking capabilities (e.g. pronunciation, fluency, etc.). Refer to Section SECREF2 for a description of the adopted indicators. The learners are Italian students, between 9 and 16 years old. They took proficiency tests by answering question prompts provided in written form. The “TLT-school” corpus, that we are going to make publicly available, contains part of the spoken answers (together with the respective manual transcriptions) recorded during some of the above mentioned evaluation campaigns. We will release the written answers in future. Details and critical issues found during the acquisition of the answers of the test takers will be discussed in Section SECREF2. The tasks that can be addressed by using the corpus are very challenging and pose many problems, which have only partially been solved by the interested scientific community. From the ASR perspective, major difficulties are represented by: a) recognition of both child and non-native speech, i.e. Italian pupils speaking both English and German, b) presence of a large number of spontaneous speech phenomena (hesitations, false starts, fragments of words, etc.), c) presence of multiple languages (English, Italian and German words are frequently uttered in response to a single question), d) presence of a significant level of background noise due to the fact that the microphone remains open for a fixed time interval (e.g. 20 seconds - depending on the question), and e) presence of non-collaborative speakers (students often joke, laugh, speak softly, etc.). Refer to Section SECREF6 for a detailed description of the collected spoken data set. Furthermore, since the sets of data from which “TLT-school” was derived were primarily acquired for measuring proficiency of second language (L2) learners, it is quite obvious to exploit the corpus for automatic speech rating. To this purpose, one can try to develop automatic approaches to reliably estimate the above-mentioned indicators used by the human experts who scored the answers of the pupils (such an approach is described in BIBREF0). However, it has to be noticed that scientific literature proposes to use several features and indicators for automatic speech scoring, partly different from those adopted in “TLT-school” corpus (see below for a brief review of the literature). 
Hence, we believe that adding new annotations to the corpus, related to particular aspects of language proficiency, can stimulate research and experimentation in this area. Finally, it is worth mentioning that also written responses of “TLT-school” corpus are characterised by a high level of noise due to: spelling errors, insertion of word fragments, presence of words belonging to multiple languages, presence of off-topic answers (e.g. containing jokes, comments not related to the questions, etc.). This set of text data will allow scientists to investigate both language and behaviour of pupils learning second languages at school. Written data are described in detail in Section SECREF5 Relation to prior work. Scientific literature is rich in approaches for automated assessment of spoken language proficiency. Performance is directly dependent on ASR accuracy which, in turn, depends on the type of input, read or spontaneous, and on the speakers' age, adults or children (see BIBREF1 for an overview of spoken language technology for education). A recent publication reporting an overview of state-of-the-art automated speech scoring technology as it is currently used at Educational Testing Service (ETS) can be found in BIBREF2. In order to address automatic assessment of complex spoken tasks requiring more general communication capabilities from L2 learners, the AZELLA data set BIBREF3, developed by Pearson, has been collected and used as benchmark for some researches BIBREF4, BIBREF3. The corpus contains $1,500$ spoken tests, each double graded by human professionals, from a variety of tasks. A public set of spoken data has been recently distributed in a spoken CALL (Computer Assisted Language Learning) shared task where Swiss students learning English had to answer to both written and spoken prompts. The goal of this challenge is to label students' spoken responses as “accept” or “reject”. Refer to BIBREF5 for details of the challenge and of the associated data sets. Many non-native speech corpora (mostly in English as target language) have been collected during the years. A list, though not recent, as well as a brief description of most of them can be found in BIBREF6. The same paper also gives information on how the data sets are distributed and can be accessed (many of them are available through both LDC and ELDA agencies). Some of the corpora also provide proficiency ratings to be used in CALL applications. Among them, we mention the ISLE corpus BIBREF7, which also contains transcriptions at the phonetic level and was used in the experiments reported in BIBREF0. Note that all corpora mentioned in BIBREF6 come from adult speech while, to our knowledge, the access to publicly available non-native children's speech corpora, as well as of children's speech corpora in general, is still scarce. Specifically concerning non-native children's speech, we believe worth mentioning the following corpora. The PF-STAR corpus (see BIBREF8) contains English utterances read by both Italian and German children, between 6 and 13 years old. The same corpus also contains utterances read by English children. The ChildIt corpus BIBREF9 contains English utterances (both read and imitated) by Italian children. By distributing “TLT-school” corpus, we hope to help researchers to investigate novel approaches and models in the areas of both non-native and children's speech and to build related benchmarks. 
Data Acquisition In Trentino, an autonomous region in northern Italy, there is a series of evaluation campaigns underway for testing L2 linguistic competence of Italian students taking proficiency tests in both English and German. A set of three evaluation campaigns is underway, two having been completed in 2016 and 2018, and a final one scheduled in 2020. Note that the “TLT-school” corpus refers to only the 2018 campaign, that was split in two parts: 2017 try-out data set (involving about 500 pupils) and the actual 2018 data (about 2500 pupils). Each of the three campaigns (i.e. 2016, 2018 and 2020) involves about 3000 students ranging from 9 to 16 years, belonging to four different school grade levels and three proficiency levels (A1, A2, B1). The schools involved in the evaluations are located in most part of the Trentino region, not only in its main towns; Table highlights some information about the pupils that took part to the campaigns. Several tests, aimed at assessing the language learning skills of the students, were carried out by means of multiple-choice questions, which can be evaluated automatically. However, a detailed linguistic evaluation cannot be performed without allowing the students to express themselves in both written sentences and spoken utterances, which typically require the intervention of human experts to be scored. Tables and report some statistics extracted from both the written and spoken data collected so far in all the campaigns. Each written or spoken item received a total score by human experts, computed by summing up the scores related to 6 indicators in 2017/2018 (from 3 to 6 in the 2016 campaign, according to the proficiency levels and the type of test). Each indicator can assume a value 0, 1, 2, corresponding to bad, medium, good, respectively. The list of the indicators used by the experts to score written sentences and spoken utterances in the evaluations, grouped by similarity, is reported in Table . Since every utterance was scored by only one expert, it was not possible to evaluate any kind of agreement among experts. For future evaluations, more experts are expected to provide independent scoring on same data sets, so this kind of evaluation will be possible. Data Acquisition ::: Prompts The speaking part of the proficiency tests in 2017/2018 consists of 47 question prompts provided in written form: 24 in English and 23 in German, divided according to CEFR levels. Apart from A1 level, which differs in the number of questions (11 for English; 10 for German), both English and German A2 and B1 levels have respectively 6 and 7 questions each. As for A1 level, the first four introductory questions are the same (How old are you?, Where do you live?, What are your hobbies?, Wie alt bist du?, Wo wohnst du?, Was sind deine Hobbys?) or slightly different (What's your favourite pet?, Welche Tiere magst du?) in both languages, whereas the second part of the test puts the test-takers in the role of a customer in a pizzeria (English) or in a bar (German). A2 level test is composed of small talk questions which relate to everyday life situations. In this case, questions are more open-ended than the aforementioned ones and allow the test-takers to interact by means of a broader range of answers. Finally, as for B1 level, questions are similar to A2 ones, but they include a role-play activity in the final part, which allows a good amount of freedom and creativity in answering the question. 
Data Acquisition ::: Written Data Table reports some statistics extracted from the written data collected so far. In this table, the number of pupils taking part in the English and German evaluation is reported, along with the number of sentences and tokens, identified as character sequences bounded by spaces. It is worth mentioning that the collected texts contain a large quantity of errors of several types: orthographic, syntactic, code-switched words (i.e. words not in the required language), jokes, etc. Hence, the original written sentences have been processed in order to produce “cleaner” versions, in order to make the data usable for some research purposes (e.g. to train language models, to extract features for proficiency assessment, ...). To do this, we have applied some text processing, that in sequence: $\bullet $ removes strange characters; $\bullet $ performs some text normalisation (lowercase, umlaut, numbers, ...) and tokenisation (punctuation, etc.) $\bullet $ removes / corrects non words (e.g. hallooooooooooo becomes hallo; aaaaaaaaeeeeeeeeiiiiiiii is removed) $\bullet $ identifies the language of each word, choosing among Italian, English, German; $\bullet $ corrects common typing errors (e.g. ai em becomes i am) $\bullet $ replaces unknown words, with respect to a large lexicon, with the label $<$unk$>$. Table reports some samples of written answers. Data Acquisition ::: Spoken Data Table reports some statistics extracted from the acquired spoken data. Speech was recorded in classrooms, whose equipment depended on each school. In general, around 20 students took the test together, at the same time and in the same classrooms, so it is quite common that speech of mates or teachers often overlaps with the speech of the student speaking in her/his microphone. Also, the type of microphone depends on the equipment of the school. On average, the audio signal quality is nearly good, while the main problem is caused by a high percentage of extraneous speech. This is due to the fact that organisers decided to use a fixed duration - which depends on the question - for recording spoken utterances, so that all the recordings for a given question have the same length. However, while it is rare that a speaker has not enough time to answer, it is quite common that, especially after the end of the utterance, some other speech (e.g. comments, jokes with mates, indications from the teachers, etc.) is captured. In addition, background noise is often present due to several sources (doors, steps, keyboard typing, background voices, street noises if the windows are open, etc). Finally, it has to be pointed out that many answers are whispered and difficult to understand. Manual Transcriptions In order to create both an adaptation and an evaluation set for ASR, we manually transcribed part of the 2017 data sets. We defined an initial set of guidelines for the annotation, which were used by 5 researchers to manually transcribe about 20 minutes of audio data. This experience led to a discussion, from which a second set of guidelines originated, aiming at reaching a reasonable trade-off between transcription accuracy and speed. 
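Before turning to the transcription rules that followed, one more illustration of the written-data pipeline: the per-word language identification among Italian, English and German can be approximated with a simple lexicon-based tagger. The word lists below are toy stand-ins for the full lexica presumably used in practice.

# Toy per-language word lists; the real pipeline would rely on full lexica.
LEXICA = {
    "eng": {"i", "am", "ten", "years", "old", "dog"},
    "ger": {"ich", "bin", "zehn", "jahre", "alt", "hund"},
    "ita": {"io", "ho", "dieci", "anni", "cane", "non", "lo", "so"},
}

def tag_languages(sentence: str):
    """Assign each word the language whose lexicon contains it,
    or 'unk' if it appears in none (ties resolved by the order above)."""
    tags = []
    for word in sentence.lower().split():
        lang = next((l for l, lex in LEXICA.items() if word in lex), "unk")
        tags.append((word, lang))
    return tags

print(tag_languages("i am ten years old non lo so"))
# -> [('i', 'eng'), ('am', 'eng'), ('ten', 'eng'), ('years', 'eng'),
#     ('old', 'eng'), ('non', 'ita'), ('lo', 'ita'), ('so', 'ita')]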
As a consequence, we decided to apply the following transcription rules: only the main speaker has to be transcribed; presence of other voices (schoolmates, teacher) should be reported only with the label “@voices”, presence of whispered speech was found to be significant, so it should be explicitly marked with the label “()”, badly pronounced words have to be marked by a “#” sign, without trying to phonetically transcribe the pronounced sounds; “#*” marks incomprehensible speech; speech in a different language from the target language has to be reported by means of an explicit marker “I am 10 years old @it(io ho già risposto)”. Next, we concatenated utterances to be transcribed into blocks of about 5 minutes each. We noticed that knowing the question and hearing several answers could be of great help for transcribing some poorly pronounced words or phrases. Therefore, each block contains only answers to the same question, explicitly reported at the beginning of the block. We engaged about 30 students from two Italian linguistic high schools (namely “C” and “S”) to perform manual transcriptions. After a joint training session, we paired students together. Each pair first transcribed, individually, the same block of 5 minutes. Then, they went through a comparison phase, where each pair of students discussed their choices and agreed on a single transcription for the assigned data. Transcriptions made before the comparison phase were retained to evaluate inter-annotator agreement. Apart from this first 5 minute block, each utterance was transcribed by only one transcriber. Inter-annotator agreement for the 5-minute blocks is shown in Table in terms of words (after removing hesitations and other labels related to background voices and noises, etc.). The low level of agreement reflects the difficulty of the task. In order to assure quality of the manual transcriptions, every sentence transcribed by the high school students was automatically processed to find out possible formal errors, and manually validated by researchers in our lab. Speakers were assigned either to training or evaluation sets, with proportions of $\frac{2}{3}$ and $\frac{1}{3}$, respectively; then training and evaluation lists were built, accordingly. Table reports statistics from the spoken data set. The id All identifies the whole data set, while Clean defines the subset in which sentences containing background voices, incomprehensible speech and word fragments were excluded. Usage of the Data From the above description it appears that the corpus can be effectively used in many research directions. Usage of the Data ::: ASR-related Challenges The spoken corpus features non-native speech recordings in real classrooms and, consequently, peculiar phenomena appear and can be investigated. Phonological and cross-language interference requires specific approaches for accurate acoustic modelling. Moreover, for coping with cross-language interference it is important to consider alternative ways to represent specific words (e.g. words of two languages with the same graphemic representation). Table , extracted from BIBREF0, reports WERs obtained on evaluation data sets with a strongly adapted ASR, demonstrating the difficulty of the related speech recognition task for both languages. 
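In the same spirit, the "Clean" subset mentioned above can be reproduced approximately by filtering utterances on their annotation labels. This is only a sketch: the convention used here for word fragments (tokens ending with "-") is an assumption, and the example sentences are invented.

import re

def is_clean(transcription: str) -> bool:
    """Keep only sentences without background voices ('@voices'),
    incomprehensible speech ('#*') and word fragments (assumed here to be
    tokens ending with '-')."""
    if "@voices" in transcription or "#*" in transcription:
        return False
    if re.search(r"\b\w+-(\s|$)", transcription):
        return False
    return True

utterances = [
    "i am ten years old",
    "i live in tren- @voices",
    "my hobbies are #* football",
]
clean_subset = [u for u in utterances if is_clean(u)]
print(clean_subset)   # -> ['i am ten years old']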
Refer to BIBREF10 for comparisons with a different non-native children speech data set and to scientific literature BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19 for detailed descriptions of children speech recognition and related issues. Important, although not exhaustive of the topic, references on non-native speech recognition can be found in BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, BIBREF29. As for language models, accurate transcriptions of spoken responses demand for models able to cope with not well-formed expressions (due to students' grammatical errors). Also the presence of code-switched words, words fragments and spontaneous speech phenomena requires specific investigations to reduce their impact on the final performance. We believe that the particular domain and set of data pave the way to investigate into various ASR topics, such as: non-native speech, children speech, spontaneous speech, code-switching, multiple pronunciation, etc. Usage of the Data ::: Data Annotation The corpus has been (partly) annotated using the guidelines presented in Section SECREF3 on the basis of a preliminary analysis of the most common acoustic phenomena appearing in the data sets. Additional annotations could be included to address topics related to other spurious segments, as for example: understandable words pronounced in other languages or by other students, detection of phonological interference, detection of spontaneous speech phenomena, detection of overlapped speech, etc. In order to measure specific proficiency indicators, e.g. related to pronunciation and fluency, suprasegmental annotations can be also inserted in the corpus. Usage of the Data ::: Proficiency Assessment of L2 Learners The corpus is a valuable resource for training and evaluating a scoring classifier based on different approaches. Preliminary results BIBREF0 show that the usage of suitable linguistic features mainly based on statistical language models allow to predict the scores assigned by the human experts. The evaluation campaign has been conceived to verify the expected proficiency level according to class grade; as a result, although the proposed test cannot be used to assign a precise score to a given student, it allows to study typical error patterns according to age and level of the students. Furthermore, the fine-grained annotation, at sentence level, of the indicators described above is particularly suitable for creating a test bed for approaches based on “word embeddings” BIBREF30, BIBREF31, BIBREF32 to automatically estimate the language learner proficiency. Actually, the experiments reported in BIBREF30 demonstrate superior performance of word-embeddings for speech scoring with respect to the well known (feature-based) SpeechRater system BIBREF33, BIBREF2. In this regard, we believe that additional, specific annotations can be developed and included in the “TLT-school” corpus. Usage of the Data ::: Modelling Pronunciation By looking at the manual transcriptions, it is straightforward to detect the most problematic words, i.e. frequently occurring words, which were often marked as mispronounced (preceded by label “#”). This allows to prepare a set of data composed by good pronounced vs. bad pronounced words. 
A list of words, partly mispronounced, is shown in Table , from which one can try to model typical pronunciation errors (note that other occurrences of the selected words could be easily extracted from the non-annotated data). Finally, as mentioned above, further manual checking and annotation could be introduced to improve modelling of pronunciation errors. Distribution of the Corpus The corpus to be released is still under preparation, given the huge amount of spoken and written data; in particular, some checks are in progress in order to: remove from the data responses with personal or inadequate content (e.g. bad language); normalise the written responses (e.g. upper/lower case, punctuation, evident typos); normalise and verify the consistency of the transcription of spoken responses; check the available human scores and - if possible - merge or map the scores according to more general performance categories (e.g. delivery, language use, topic development) and an acknowledged scale (e.g. from 0 to 4). In particular, the proposal for an international challenge focused on non-native children speech recognition is being submitted where an English subset will be released and the perspective participants are invited to propose and evaluate state-of-art techniques for dealing with the multiple issues related to this challenging ASR scenario (acoustic and language models, non-native lexicon, noisy recordings, etc.). Conclusions and Future Works We have described “TLT-school”, a corpus of both spoken and written answers collected during language evaluation campaigns carried out in schools of northern Italy. The procedure used for data acquisition and for their annotation in terms of proficiency indicators has been also reported. Part of the data has been manually transcribed according to some guidelines: this set of data is going to be made publicly available. With regard to data acquisition, some limitations of the corpus have been observed that might be easily overcome during next campaigns. Special attention should be paid to enhancing the elicitation techniques, starting from adjusting the questions presented to test-takers. Some of the question prompts show some lacks that can be filled in without major difficulty: on the one hand, in the spoken part, questions do not require test-takers to shift tense and some are too suggestive and close-ended; on the other hand, in the written part, some question prompts are presented both in source and target language, thus causing or encouraging code-mixing and negative transfer phenomena. The elicitation techniques in a broader sense will be object of revision (see BIBREF34 and specifically on children speech BIBREF35) in order to maximise the quality of the corpus. As for proficiency indicators, one first step that could be taken in order to increase accuracy in the evaluation phase both for human and automatic scoring would be to divide the second indicator (pronunciation and fluency) into two different indicators, since fluent students might not necessarily have good pronunciation skills and vice versa, drawing for example on the IELTS Speaking band descriptors. Also, next campaigns might consider an additional indicator specifically addressed to score prosody (in particular intonation and rhythm), especially for A2 and B1 level test-takers. 
Considering the scope of the evaluation campaign, it is important to be aware of the limitations of the associated data sets: proficiency levels limited to A1, B1 and B2 (CEFR); custom indicators conceived for expert evaluation (not particularly suitable for automated evaluation); limited amount of responses per speaker. Nevertheless, as already discussed, the fact that the TLT campaign was carried out in 2016 and 2018 in the whole Trentino region makes the corpus a valuable linguistic resource for a number of studies associated to second language acquisition and evaluation. In particular, besides the already introduced proposal for an ASR challenge in 2020, other initiatives for the international community can be envisaged: a study of a fully-automated evaluation procedure without the need of experts' supervision; the investigation of end-to-end classifiers that directly use the spoken response as input and produce proficiency scores according to suitable rubrics. Acknowledgements This work has been partially funded by IPRASE (http://www.iprase.tn.it) under the project “TLT - Trentino Language Testing 2018”. We thank ISIT (http://www.isit.tn.it) for having provided the data and the reference scores.
The total number of utterances available is 70,607 (37,344 English + 33,263 German).
9555aa8de322396a16a07a5423e6a79dcd76816a
9555aa8de322396a16a07a5423e6a79dcd76816a_0
Q: By how much does their model outperform both the state-of-the-art systems? Text: Introduction Encoder-decoder models have been widely used in sequence to sequence tasks such as machine translation ( BIBREF0 , BIBREF1 ). They consist of an encoder which represents the whole input sequence with a single feature vector. The decoder then takes this representation and generates the desired output sequence. The most successful models are LSTM and GRU as they are much easier to train than vanilla RNNs. In this paper we are interested in summarization where the input sequence is a sentence/paragraph and the output is a summary of the text. Several encoding-decoding approaches have been proposed ( BIBREF2 , BIBREF3 , BIBREF4 ). Despite their success, it is commonly believed that the intermediate feature vectors are limited as they are created by only looking at previous words. This is particularly detrimental when dealing with large input sequences. Bi-directorial RNNs ( BIBREF5 , BIBREF6 ) try to address this problem by computing two different representations resulting of reading the input sequence left-to-right and right-to-left. The final vectors are computed by concatenating the two representations. However, the word representations are computed with limited scope. The decoder employed in all these methods outputs at each time step a distribution over a fixed vocabulary. In practice, this introduces problems with rare words (e.g., proper nouns) which are out of vocabulary. To alleviate this problem, one could potentially increase the size of the decoder vocabulary, but decoding becomes computationally much harder, as one has to compute the soft-max over all possible words. BIBREF7 , BIBREF8 and BIBREF9 proposed to use a copy mechanism that dynamically copy the words from the input sequence while decoding. However, they lack the ability to extract proper embeddings of out-of-vocabulary words from the input context. BIBREF6 proposed to use an attention mechanism to emphasize specific parts of the input sentence when generating each word. However the encoder problem still remains in this approach. In this work, we propose two simple mechanisms to deal with both encoder and decoder problems. We borrowed intuition from human readers which read the text multiple times before generating summaries. We thus propose a `Read-Again' model that first reads the input sequence before committing to a representation of each word. The first read representation then biases the second read representation and thus allows the intermediate hidden vectors to capture the meaning appropriate for the input text. We show that this idea can be applied to both LSTM and GRU models. Our second contribution is a copy mechanism which allows us to use much smaller decoder vocabulary sizes resulting in much faster decoding. Our copy mechanism also allows us to construct a better representation of out-of-vocabulary words. We demonstrate the effectiveness of our approach in the challenging Gigaword dataset and DUC competition showing state-of-the-art performance. Summarization In the past few years, there has been a lot of work on extractive summarization, where a summary is created by composing words or sentences from the source text. Notable examples are BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 and BIBREF14 . As a consequence of their extractive nature the summary is restricted to words (sentences) in the source text. 
Abstractive summarization, on the contrary, aims at generating consistent summaries based on understanding the input text. Although there has been much less work on abstractive methods, they can in principle produce much richer summaries. Abstractive summarization is standardized by the DUC2003 and DUC2004 competitions ( BIBREF15 ). Some of the prominent approaches on this task includes BIBREF16 , BIBREF17 , BIBREF18 and BIBREF19 . Among them, the TOPIARY system ( BIBREF17 ) performs the best in the competitions amongst non neural net based methods. Very recently, the success of deep neural networks in many natural language processing tasks ( BIBREF20 ) has inspired new work in abstractive summarization . BIBREF2 propose a neural attention model with a convolutional encoder to solve this task. BIBREF3 build a large dataset for Chinese text summarization and propose to feed all hidden states from the encoder into the decoder. More recently, BIBREF4 extended BIBREF2 's work with an RNN decoder, and BIBREF8 proposed an RNN encoder-decoder architecture for summarization. Both techniques are currently the state-of-the-art on the DUC competition. However, the encoders exploited in these methods lack the ability to encode each word condition on the whole text, as an RNN encodes a word into a hidden vector by taking into account only the words up to that time step. In contrast, in this work we propose a `Read-Again' encoder-decoder architecture, which enables the encoder to understand each input word after reading the whole sentence. Our encoder first reads the text, and the results from the first read help represent the text in the second pass over the source text. Our second contribution is a simple copy mechanism that allows us to significantly reduce the decoder vocabulary size resulting in much faster inference times. Furthermore our copy mechanism allows us to handle out-of-vocabulary words in a principled manner. Finally our experiments show state-of-the-art performance on the DUC competition. Neural Machine Translation Our work is also closely related to recent work on neural machine translation, where neural encoder-decoder models have shown promising results ( BIBREF21 , BIBREF0 , BIBREF1 ). BIBREF6 further developed an attention mechanism in the decoder in order to pay attention to a specific part of the input at every generating time-step. Our approach also exploits an attention mechanism during decoding. Out-Of-Vocabulary and Copy Mechanism Dealing with Out-Of-Vocabulary words (OOVs) is an important issue in sequence to sequence approaches as we cannot enumerate all possible words and learn their embeddings since they might not be part of our training set. BIBREF22 address this issue by annotating words on the source, and aligning OOVs in the target with those source words. Recently, BIBREF23 propose Pointer Networks, which calculate a probability distribution over the input sequence instead of predicting a token from a pre-defined dictionary. BIBREF24 develop a neural-based extractive summarization model, which predicts the targets from the input sequences. BIBREF7 , BIBREF8 add a hard gate to allow the model to decide wether to generate a target word from the fixed-size dictionary or from the input sequence. BIBREF9 use a softmax operation instead of the hard gating. This softmax pointer mechanism is similar to our decoder. 
However, our decoder can also extract different OOVs' embedding from the input text instead of using a single INLINEFORM0 UNK INLINEFORM1 embedding to represent all OOVs. This further enhances the model's ability to handle OOVs. The read again model Text summarization can be formulated as a sequence to sequence prediction task, where the input is a longer text and the output is a summary of that text. In this paper we develop an encoder-decoder approach to summarization. The encoder is used to represent the input text with a set of continuous vectors, and the decoder is used to generate a summary word by word. In the following, we first introduce our `Read-Again' model for encoding sentences. The idea behind our approach is very intuitive and is inspired by how humans do this task. When we create summaries, we first read the text and then we do a second read where we pay special attention to the words that are relevant to generate the summary. Our `Read-Again' model implements this idea by reading the input text twice and using the information acquired from the first read to bias the second read. This idea can be seamlessly plugged into LSTM and GRU models. Our second contribution is a copy mechanism used in the decoder. It allows us to reduce the decoder vocabulary size dramatically and can be used to extract a better embedding for OOVs. fig:model:overall gives an overview of our model. Encoder We first review the typical encoder used in machine translation (e.g., BIBREF1 , BIBREF6 ). Let INLINEFORM0 be the input sequence of words. An encoder sequentially reads each word and creates the hidden representation INLINEFORM1 by exploting a recurrent neural network (RNN) DISPLAYFORM0 where INLINEFORM0 is the word embedding of INLINEFORM1 . The hidden vectors INLINEFORM2 are then treated as the feature representations for the whole input sentence and can be used by another RNN to decode and generate a target sentence. Although RNNs have been shown to be useful in modeling sequences, one of the major drawback is that INLINEFORM3 depends only on past information i.e., INLINEFORM4 . However, it is hard (even for humans) to have a proper representation of a word without reading the whole input sentence. Following this intuition, we propose our `Read-Again' model where the encoder reads the input sentence twice. In particular, the first read is used to bias the second more attentive read. We apply this idea to two popular RNN architectures, i.e. GRU and LSTM, resulting in better encodings of the input text. Note that although other alternatives, such as bidirectional RNN exist, the hidden states from the forward RNN lack direct interactions with the backward RNN, and thus forward/backward hidden states still cannot utilize the whole sequence. Besides, although we only use our model in a uni-directional manner, it can also be easily adapted to the bidirectional case. We now describe the two variants of our model. We read the input sentence INLINEFORM0 for the first-time using a standard GRU DISPLAYFORM0 where the function INLINEFORM0 is defined as, DISPLAYFORM0 It consists of two gatings INLINEFORM0 , controlling whether the current hidden state INLINEFORM1 should be directly copied from INLINEFORM2 or should pass through a more complex path INLINEFORM3 . Given the sentence feature vector INLINEFORM0 , we then compute an importance weight vector INLINEFORM1 of each word for the second reading. 
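A minimal sketch of this first reading pass and of the importance-weight computation, written as PyTorch-style Python. Because the equations above are rendered only as placeholders, the sigmoid nonlinearity, the projection dimensions and the use of the last hidden state as the sentence vector are assumptions rather than the authors' exact formulation; how these weights then bias the second reading pass is described next.

import torch
import torch.nn as nn

class ReadAgainWeights(nn.Module):
    """First reading pass with a GRU, followed by per-word importance
    weights computed from the word embedding, its first-read hidden state
    and the whole-sentence vector. Dimensions are illustrative."""

    def __init__(self, emb_dim=128, hid_dim=256):
        super().__init__()
        self.first_read = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.W_e = nn.Linear(emb_dim, emb_dim, bias=False)
        self.U_h = nn.Linear(hid_dim, emb_dim, bias=False)
        self.V_h = nn.Linear(hid_dim, emb_dim, bias=False)

    def forward(self, x):                    # x: (batch, seq_len, emb_dim)
        h1, _ = self.first_read(x)           # h1: (batch, seq_len, hid_dim)
        h1_n = h1[:, -1:, :]                 # whole-sentence vector from read 1
        # one weight per embedding dimension for every input word
        alpha = torch.sigmoid(self.W_e(x) + self.U_h(h1) + self.V_h(h1_n))
        return h1, alpha                     # alpha: (batch, seq_len, emb_dim)

enc = ReadAgainWeights()
x = torch.randn(2, 7, 128)                   # a batch of 2 sentences, 7 words
h1, alpha = enc(x)
print(alpha.shape)                           # torch.Size([2, 7, 128])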
We put the importance weight INLINEFORM2 on the skip-connections as shown in fig:model:gru to bias the two information flows: If the current word INLINEFORM3 has a very small weight INLINEFORM4 , then the second read hidden state INLINEFORM5 will mostly take the information directly from the previous state INLINEFORM6 , ignoring the influence of the current word. If INLINEFORM7 is close to 1 then it will be similar to a standard GRU, which is only influenced from the current word. Thus the second reading has the following update rule DISPLAYFORM0 where INLINEFORM0 means element-wise product. We compute the importance weights by attending INLINEFORM1 with INLINEFORM2 as follows DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are learnable parameters. Note that INLINEFORM3 is a vector representing the importance of each dimension in the word embedding. Empirically, we find that using a vector is better than a single value. We hypothesize that this is because different dimensions represent different semantic meanings, and a single value lacks the ability to model the variances among these dimensions. Combining this with the standard GRU update rule INLINEFORM0 we can simplify the updating rule Eq. ( EQREF15 ) to get DISPLAYFORM0 This equations shows that our `read-again' model on GRU is equivalent to replace the GRU cell with a more general gating mechanism that also depends on the feature representation of the whole sentence computed from the first reading pass. We argue that adding this global information could help direct the information flow for the forward pass resulting in a better encoder. We now apply the `Read-Again' idea to the LSTM architecture as shown in fig:model:lstm. Our first reading is performed by an INLINEFORM0 defined as DISPLAYFORM0 Different from the GRU architecture, LSTM calculates the hidden state by applying a non-linear activation function to the cell state INLINEFORM0 , instead of a linear combination of two paths used in the GRU. Thus for our second read, instead of using skip-connections, we make the gating functions explicitly depend on the whole sentence vector computed from the first reading pass. We argue that this helps the encoding of the second reading INLINEFORM1 , as all gating and updating increments are also conditioned on the whole sequence feature vector INLINEFORM2 , INLINEFORM3 . Thus DISPLAYFORM0 In this section we extend our `Read-Again' model to the case where the input sequence has more than one sentence. Towards this goal, we propose to use a hierarchical representation, where each sentence has its own feature vector from the first reading pass. We then combine them into a single vector to bias the second reading pass. We illustrate this in the context of two input sentences, but it is easy to generalize to more sentences. Let INLINEFORM0 and INLINEFORM1 be the two input sentences. The first RNN reads these two sentences independently to get two sentence feature vectors INLINEFORM2 and INLINEFORM3 respectively. Here we investigate two different ways to handle multiple sentences. Our first option is to simply concatenate the two feature vectors to bias our second reading pass: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are initial zero vectors. Feeding INLINEFORM2 into the second RNN provides more global information explicitly and helps acquire long term dependencies. The second option we explored is shown in fig:modelhierarchy. 
In particular, we use a non-linear transformation to get a single feature vector INLINEFORM0 from both sentence feature vectors: DISPLAYFORM0 The second reading pass is then DISPLAYFORM0 Note that this is more easily scalable to more sentences. In our experiments both approaches perform similarly. Decoder with copy mechanism In this paper we argue that only a small number of common words are needed for generating a summary in addition to the words that are present in the source text. We can consider this as a hybrid approach which combines extractive and abstractive summarization. This has two benefits: first it allow us to use a very small vocabulary size, speeding up inference. Furthermore, we can create summaries which contain OOVs if they are present in the source text. Our decoder reads the vector representations of the input text using an attention mechanism, and generates the target summary word by word. We use an LSTM as our decoder, with a fixed-size vocabulary dictionary INLINEFORM0 and learnable word embeddings INLINEFORM1 . At time-step INLINEFORM2 the LSTM generates a summary word INLINEFORM3 by first computing the current hidden state INLINEFORM4 from the previous hidden state INLINEFORM5 , previous summary word INLINEFORM6 and current context vector INLINEFORM7 DISPLAYFORM0 where the context vector INLINEFORM0 is computed with an attention mechanism on the encoder hidden states: DISPLAYFORM0 The attention score INLINEFORM0 at time-step INLINEFORM1 on the INLINEFORM2 -th word is computed via a soft-max over INLINEFORM3 , where DISPLAYFORM0 with INLINEFORM0 , INLINEFORM1 , INLINEFORM2 learnable parameters. A typical way to treat OOVs is to encode them with a single shared embedding. However, different OOVs can have very different meanings, and thus using a single embedding for all OOVs will confuse the model. This is particularly detrimental when using small vocabulary sizes. Here we address this issue by deriving the representations of OOVs from their corresponding context in the input text. Towards this goal, we change the update rule of INLINEFORM0 . In particular, if INLINEFORM1 belongs to a word that is in our decoder vocabulary we take its representation from the word embedding, otherwise if it appears in the input sentence as INLINEFORM2 we use DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are learnable parameters. Since INLINEFORM2 encodes useful context information of the source word INLINEFORM3 , INLINEFORM4 can be interpreted as the semantics of this word extracted from the input sentence. Furthermore, if INLINEFORM5 does not appear in the input text, nor in INLINEFORM6 , then we represent INLINEFORM7 using the INLINEFORM8 UNK INLINEFORM9 embedding. Given the current decoder's hidden state INLINEFORM0 , we can generate the target summary word INLINEFORM1 . As shown in fig:model:decoder, at each time step during decoding, the decoder outputs a distribution over generating words from INLINEFORM2 , as well as over copying a specific word INLINEFORM3 from the source sentence. Learning We jointly learn our encoder and decoder by maximizing the likelihood of decoding the correct word at each time step. We refer the reader to the experimental evaluation for more details. Experimental Evalaluation In this section, we show results of abstractive summarization on Gigaword ( BIBREF25 , BIBREF26 ) and DUC2004 ( BIBREF15 ) datasets. 
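Before the quantitative results, a minimal sketch of the copy-augmented output layer described in the decoder section: at each step the model scores both the words of a small fixed vocabulary and the positions of the source sentence, and a single soft-max is taken over their concatenation. The bilinear copy score and all dimensions are illustrative assumptions, not the authors' exact parameterisation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyOutputLayer(nn.Module):
    """At every decoding step, produce one distribution over |vocab|
    generate-actions plus src_len copy-actions (one per source word)."""

    def __init__(self, hid_dim=256, vocab_size=2000):
        super().__init__()
        self.gen_scores = nn.Linear(hid_dim, vocab_size)        # generate from vocab
        self.copy_proj = nn.Linear(hid_dim, hid_dim, bias=False)

    def forward(self, dec_state, enc_states):
        # dec_state: (batch, hid_dim); enc_states: (batch, src_len, hid_dim)
        gen = self.gen_scores(dec_state)                                        # (batch, vocab)
        copy = torch.bmm(enc_states, self.copy_proj(dec_state).unsqueeze(2)).squeeze(2)
        # single soft-max over "generate word w" and "copy source position j"
        return F.softmax(torch.cat([gen, copy], dim=1), dim=1)

layer = CopyOutputLayer()
probs = layer(torch.randn(2, 256), torch.randn(2, 12, 256))
print(probs.shape)        # torch.Size([2, 2012]): 2000 vocabulary words + 12 source positions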
Our model can learn a meaningful re-reading weight distribution for each word in the input text, putting more emphasis on important verbs and nouns, while ignoring common words such as prepositions. As for the decoder, we demonstrate that our copy mechanism can successfully reduce the typical vocabulary size by a factor of 5 while achieving much better performance than the state-of-the-art, and by a factor of 30 while maintaining the same level of performance. In addition, we provide an analysis and examples of which words are copied during decoding. Quantitative Evaluation Results on Gigaword: We compare the performances of different architectures and report ROUGE scores in Tab. TABREF32 . Our baselines include the ABS model of BIBREF2 with its proposed vocabulary size, as well as an attention encoder-decoder model with a uni-directional GRU encoder. We allow the decoder to generate variable-length summaries. As shown in Tab. TABREF32 , our Read-Again models outperform the baselines on all ROUGE scores, when using both 15K and 69K sized vocabularies. We also observe that adding the copy mechanism further helps to improve performance: Even though the decoder vocabulary size of our approach with copy (15K) is much smaller than ABS (69K) and GRU (69K), it achieves a higher ROUGE score. Besides, our Multiple-Sentences model achieves the best performance. Evaluation on DUC2004: DUC 2004 ( BIBREF15 ) is a commonly used benchmark for the summarization task, consisting of 500 news articles. Each article is paired with 4 different human-generated reference summaries, capped at 75 characters. This dataset is evaluation-only. Similar to BIBREF2 , we train our neural model on the Gigaword training set, and show the models' performances on DUC2004. Following the convention, we also use ROUGE limited-length recall as our evaluation metric, and set the capping length to 75 characters. We generate summaries with 15 words using a beam size of 10. As shown in Table TABREF35 , our method outperforms all previous methods on Rouge-1 and Rouge-L, and is comparable on Rouge-2. Furthermore, our model only uses a 15k decoder vocabulary, while previous methods use 69k or 200k. Importance Weight Visualization: As we described in the section before, INLINEFORM0 is a high-dimensional vector representing the importance of each word INLINEFORM1 . While the importance of a word is different over each dimension, by averaging we can still look at general trends of which words are more relevant. fig:weightvisual depicts sample sentences with the importance weight INLINEFORM2 over input words. Words such as the, a, 's, have small INLINEFORM3 , while words such as aeronautics, resettled, impediments, which carry more information, have higher values. This shows that our read-again technique indeed extracts useful information from the first reading to help bias the second reading results. Decoder Vocabulary Size Table TABREF42 shows the effect on our model of decreasing the decoder vocabulary size. We can see that when using the copy mechanism, we are able to reduce the decoder vocabulary size from 69K to 2K, with only a 2-3 point drop in ROUGE score. This contrasts with the models that do not use the copy mechanism. This is possibly due to two reasons. First, when faced with OOVs during decoding time, our model can extract their meanings from the input text. Second, equipped with a copy mechanism, our model can generate OOVs as summary words, maintaining its expressive ability even with a small decoder vocabulary size. Tab.
TABREF43 shows the decoding time as a function of vocabulary size. As computing the soft-max is usually the bottleneck for decoding, reducing vocabulary size dramatically reduces the decoding time from 0.38 second per sentence to 0.08 second. Tab. TABREF44 provides some examples of visualization of the copy mechanism. Note that we are able to copy key words from source sentences to improve the summary. From these examples we can see that our model is able to copy different types of rare words, such as special entities' names in case 1 and 2, rare nouns in case 3 and 4, adjectives in case 5 and 6, and even rare verbs in the last example. Note that in the third example, when the copy model's decoder uses the embedding of headmaster as its first input, which is extracted from the source sentence, it generates the same following sentence as the no-copy model. This probably means that the extracted embedding of headmaster is closely related to the learned embedding of teacher. Conclusion In this paper we have proposed two simple mechanisms to alleviate the problems of current encoder-decoder models. Our first contribution is a `Read-Again' model which does not form a representation of the input word until the whole sentence is read. Our second contribution is a copy mechanism that can handle out-of-vocabulary words in a principled manner allowing us to reduce the decoder vocabulary size and significantly speed up inference. We have demonstrated the effectiveness of our approach in the context of summarization and shown state-of-the-art performance. In the future, we plan to tackle summarization problems with large input text. We also plan to exploit our findings in other tasks such as machine translation.
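As a rough, hedged illustration of the decoding-time observation reported above (the per-step output projection and soft-max dominate the cost), one can time that single operation for different vocabulary sizes; the absolute numbers depend on hardware and are not the paper's measurements.

```python
# Rough timing of the per-step output projection + soft-max for different decoder
# vocabulary sizes (illustrative only; absolute numbers depend on hardware).
import time
import numpy as np

def time_softmax_step(hidden_dim=512, vocab_size=69000, n_steps=100):
    rng = np.random.default_rng(0)
    W = rng.standard_normal((hidden_dim, vocab_size)).astype(np.float32)
    h = rng.standard_normal((1, hidden_dim)).astype(np.float32)
    start = time.perf_counter()
    for _ in range(n_steps):
        logits = h @ W
        logits -= logits.max()          # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum()
    return (time.perf_counter() - start) / n_steps

for v in (2000, 15000, 69000):
    print(f"vocab={v:6d}  ~{time_softmax_step(vocab_size=v) * 1e3:.2f} ms per step")
```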
w.r.t Rouge-1 their model outperforms by 0.98% and w.r.t Rouge-L their model outperforms by 0.45%
81e8d42dad08a58fe27eea838f060ec8f314465e
81e8d42dad08a58fe27eea838f060ec8f314465e_0
Q: What is the state-of-the art? Text: Introduction Encoder-decoder models have been widely used in sequence-to-sequence tasks such as machine translation ( BIBREF0 , BIBREF1 ). They consist of an encoder which represents the whole input sequence with a single feature vector. The decoder then takes this representation and generates the desired output sequence. The most successful models are LSTM and GRU, as they are much easier to train than vanilla RNNs. In this paper we are interested in summarization, where the input sequence is a sentence/paragraph and the output is a summary of the text. Several encoding-decoding approaches have been proposed ( BIBREF2 , BIBREF3 , BIBREF4 ). Despite their success, it is commonly believed that the intermediate feature vectors are limited as they are created by only looking at previous words. This is particularly detrimental when dealing with large input sequences. Bi-directional RNNs ( BIBREF5 , BIBREF6 ) try to address this problem by computing two different representations resulting from reading the input sequence left-to-right and right-to-left. The final vectors are computed by concatenating the two representations. However, the word representations are computed with limited scope. The decoder employed in all these methods outputs at each time step a distribution over a fixed vocabulary. In practice, this introduces problems with rare words (e.g., proper nouns) which are out of vocabulary. To alleviate this problem, one could potentially increase the size of the decoder vocabulary, but decoding becomes computationally much harder, as one has to compute the soft-max over all possible words. BIBREF7 , BIBREF8 and BIBREF9 proposed to use a copy mechanism that dynamically copies words from the input sequence while decoding. However, they lack the ability to extract proper embeddings of out-of-vocabulary words from the input context. BIBREF6 proposed to use an attention mechanism to emphasize specific parts of the input sentence when generating each word. However, the encoder problem still remains in this approach. In this work, we propose two simple mechanisms to deal with both encoder and decoder problems. We borrow intuition from human readers, who read the text multiple times before generating summaries. We thus propose a `Read-Again' model that first reads the input sequence before committing to a representation of each word. The first read representation then biases the second read representation and thus allows the intermediate hidden vectors to capture the meaning appropriate for the input text. We show that this idea can be applied to both LSTM and GRU models. Our second contribution is a copy mechanism which allows us to use much smaller decoder vocabulary sizes, resulting in much faster decoding. Our copy mechanism also allows us to construct a better representation of out-of-vocabulary words. We demonstrate the effectiveness of our approach on the challenging Gigaword dataset and the DUC competition, showing state-of-the-art performance. Summarization In the past few years, there has been a lot of work on extractive summarization, where a summary is created by composing words or sentences from the source text. Notable examples are BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 and BIBREF14 . As a consequence of their extractive nature, the summary is restricted to words (sentences) in the source text. Abstractive summarization, on the contrary, aims at generating consistent summaries based on understanding the input text.
Although there has been much less work on abstractive methods, they can in principle produce much richer summaries. Abstractive summarization is standardized by the DUC2003 and DUC2004 competitions ( BIBREF15 ). Some of the prominent approaches to this task include BIBREF16 , BIBREF17 , BIBREF18 and BIBREF19 . Among them, the TOPIARY system ( BIBREF17 ) performs the best in the competitions amongst non-neural-network-based methods. Very recently, the success of deep neural networks in many natural language processing tasks ( BIBREF20 ) has inspired new work in abstractive summarization. BIBREF2 propose a neural attention model with a convolutional encoder to solve this task. BIBREF3 build a large dataset for Chinese text summarization and propose to feed all hidden states from the encoder into the decoder. More recently, BIBREF4 extended BIBREF2 's work with an RNN decoder, and BIBREF8 proposed an RNN encoder-decoder architecture for summarization. Both techniques are currently the state-of-the-art on the DUC competition. However, the encoders exploited in these methods lack the ability to encode each word conditioned on the whole text, as an RNN encodes a word into a hidden vector by taking into account only the words up to that time step. In contrast, in this work we propose a `Read-Again' encoder-decoder architecture, which enables the encoder to understand each input word after reading the whole sentence. Our encoder first reads the text, and the results from the first read help represent the text in the second pass over the source text. Our second contribution is a simple copy mechanism that allows us to significantly reduce the decoder vocabulary size, resulting in much faster inference times. Furthermore, our copy mechanism allows us to handle out-of-vocabulary words in a principled manner. Finally, our experiments show state-of-the-art performance on the DUC competition. Neural Machine Translation Our work is also closely related to recent work on neural machine translation, where neural encoder-decoder models have shown promising results ( BIBREF21 , BIBREF0 , BIBREF1 ). BIBREF6 further developed an attention mechanism in the decoder in order to pay attention to a specific part of the input at every generation time-step. Our approach also exploits an attention mechanism during decoding. Out-Of-Vocabulary and Copy Mechanism Dealing with Out-Of-Vocabulary words (OOVs) is an important issue in sequence-to-sequence approaches, as we cannot enumerate all possible words and learn their embeddings since they might not be part of our training set. BIBREF22 address this issue by annotating words on the source, and aligning OOVs in the target with those source words. Recently, BIBREF23 propose Pointer Networks, which calculate a probability distribution over the input sequence instead of predicting a token from a pre-defined dictionary. BIBREF24 develop a neural-based extractive summarization model, which predicts the targets from the input sequences. BIBREF7 , BIBREF8 add a hard gate to allow the model to decide whether to generate a target word from the fixed-size dictionary or from the input sequence. BIBREF9 use a softmax operation instead of the hard gating. This softmax pointer mechanism is similar to our decoder. However, our decoder can also extract different OOVs' embeddings from the input text instead of using a single INLINEFORM0 UNK INLINEFORM1 embedding to represent all OOVs. This further enhances the model's ability to handle OOVs.
The Read-Again Model Text summarization can be formulated as a sequence-to-sequence prediction task, where the input is a longer text and the output is a summary of that text. In this paper we develop an encoder-decoder approach to summarization. The encoder is used to represent the input text with a set of continuous vectors, and the decoder is used to generate a summary word by word. In the following, we first introduce our `Read-Again' model for encoding sentences. The idea behind our approach is very intuitive and is inspired by how humans do this task. When we create summaries, we first read the text and then we do a second read where we pay special attention to the words that are relevant to generating the summary. Our `Read-Again' model implements this idea by reading the input text twice and using the information acquired from the first read to bias the second read. This idea can be seamlessly plugged into LSTM and GRU models. Our second contribution is a copy mechanism used in the decoder. It allows us to reduce the decoder vocabulary size dramatically and can be used to extract a better embedding for OOVs. fig:model:overall gives an overview of our model. Encoder We first review the typical encoder used in machine translation (e.g., BIBREF1 , BIBREF6 ). Let INLINEFORM0 be the input sequence of words. An encoder sequentially reads each word and creates the hidden representation INLINEFORM1 by exploiting a recurrent neural network (RNN) DISPLAYFORM0 where INLINEFORM0 is the word embedding of INLINEFORM1 . The hidden vectors INLINEFORM2 are then treated as the feature representations for the whole input sentence and can be used by another RNN to decode and generate a target sentence. Although RNNs have been shown to be useful in modeling sequences, one of the major drawbacks is that INLINEFORM3 depends only on past information, i.e., INLINEFORM4 . However, it is hard (even for humans) to have a proper representation of a word without reading the whole input sentence. Following this intuition, we propose our `Read-Again' model, where the encoder reads the input sentence twice. In particular, the first read is used to bias the second, more attentive read. We apply this idea to two popular RNN architectures, i.e., GRU and LSTM, resulting in better encodings of the input text. Note that although other alternatives, such as bidirectional RNNs, exist, the hidden states from the forward RNN lack direct interactions with the backward RNN, and thus forward/backward hidden states still cannot utilize the whole sequence. Besides, although we only use our model in a uni-directional manner, it can also be easily adapted to the bidirectional case. We now describe the two variants of our model. We read the input sentence INLINEFORM0 for the first time using a standard GRU DISPLAYFORM0 where the function INLINEFORM0 is defined as DISPLAYFORM0 It consists of two gates INLINEFORM0 , controlling whether the current hidden state INLINEFORM1 should be directly copied from INLINEFORM2 or should pass through a more complex path INLINEFORM3 . Given the sentence feature vector INLINEFORM0 , we then compute an importance weight vector INLINEFORM1 of each word for the second reading.
We put the importance weight INLINEFORM2 on the skip-connections as shown in fig:model:gru to bias the two information flows: If the current word INLINEFORM3 has a very small weight INLINEFORM4 , then the second read hidden state INLINEFORM5 will mostly take the information directly from the previous state INLINEFORM6 , ignoring the influence of the current word. If INLINEFORM7 is close to 1, then it will be similar to a standard GRU, which is only influenced by the current word. Thus the second reading has the following update rule DISPLAYFORM0 where INLINEFORM0 means element-wise product. We compute the importance weights by attending INLINEFORM1 with INLINEFORM2 as follows DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 are learnable parameters. Note that INLINEFORM3 is a vector representing the importance of each dimension in the word embedding. Empirically, we find that using a vector is better than a single value. We hypothesize that this is because different dimensions represent different semantic meanings, and a single value lacks the ability to model the variances among these dimensions. Combining this with the standard GRU update rule INLINEFORM0 , we can simplify the update rule Eq. ( EQREF15 ) to get DISPLAYFORM0 This equation shows that our `read-again' model on GRU is equivalent to replacing the GRU cell with a more general gating mechanism that also depends on the feature representation of the whole sentence computed from the first reading pass. We argue that adding this global information could help direct the information flow for the forward pass, resulting in a better encoder. We now apply the `Read-Again' idea to the LSTM architecture as shown in fig:model:lstm. Our first reading is performed by an INLINEFORM0 defined as DISPLAYFORM0 Different from the GRU architecture, LSTM calculates the hidden state by applying a non-linear activation function to the cell state INLINEFORM0 , instead of a linear combination of the two paths used in the GRU. Thus for our second read, instead of using skip-connections, we make the gating functions explicitly depend on the whole sentence vector computed from the first reading pass. We argue that this helps the encoding of the second reading INLINEFORM1 , as all gating and updating increments are also conditioned on the whole sequence feature vector INLINEFORM2 , INLINEFORM3 . Thus DISPLAYFORM0 In this section we extend our `Read-Again' model to the case where the input sequence has more than one sentence. Towards this goal, we propose to use a hierarchical representation, where each sentence has its own feature vector from the first reading pass. We then combine them into a single vector to bias the second reading pass. We illustrate this in the context of two input sentences, but it is easy to generalize to more sentences. Let INLINEFORM0 and INLINEFORM1 be the two input sentences. The first RNN reads these two sentences independently to get two sentence feature vectors INLINEFORM2 and INLINEFORM3 respectively. Here we investigate two different ways to handle multiple sentences. Our first option is to simply concatenate the two feature vectors to bias our second reading pass: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are initial zero vectors. Feeding INLINEFORM2 into the second RNN provides more global information explicitly and helps acquire long-term dependencies. The second option we explored is shown in fig:modelhierarchy.
In particular, we use a non-linear transformation to get a single feature vector INLINEFORM0 from both sentence feature vectors: DISPLAYFORM0 The second reading pass is then DISPLAYFORM0 Note that this is more easily scalable to more sentences. In our experiments both approaches perform similarly. Decoder with copy mechanism In this paper we argue that only a small number of common words are needed for generating a summary, in addition to the words that are present in the source text. We can consider this as a hybrid approach which combines extractive and abstractive summarization. This has two benefits: first, it allows us to use a very small vocabulary size, speeding up inference. Furthermore, we can create summaries which contain OOVs if they are present in the source text. Our decoder reads the vector representations of the input text using an attention mechanism, and generates the target summary word by word. We use an LSTM as our decoder, with a fixed-size vocabulary dictionary INLINEFORM0 and learnable word embeddings INLINEFORM1 . At time-step INLINEFORM2 the LSTM generates a summary word INLINEFORM3 by first computing the current hidden state INLINEFORM4 from the previous hidden state INLINEFORM5 , previous summary word INLINEFORM6 and current context vector INLINEFORM7 DISPLAYFORM0 where the context vector INLINEFORM0 is computed with an attention mechanism on the encoder hidden states: DISPLAYFORM0 The attention score INLINEFORM0 at time-step INLINEFORM1 on the INLINEFORM2 -th word is computed via a soft-max over INLINEFORM3 , where DISPLAYFORM0 with INLINEFORM0 , INLINEFORM1 , INLINEFORM2 learnable parameters. A typical way to treat OOVs is to encode them with a single shared embedding. However, different OOVs can have very different meanings, and thus using a single embedding for all OOVs will confuse the model. This is particularly detrimental when using small vocabulary sizes. Here we address this issue by deriving the representations of OOVs from their corresponding context in the input text. Towards this goal, we change the update rule of INLINEFORM0 . In particular, if INLINEFORM1 belongs to a word that is in our decoder vocabulary, we take its representation from the word embedding; otherwise, if it appears in the input sentence as INLINEFORM2 , we use DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are learnable parameters. Since INLINEFORM2 encodes useful context information of the source word INLINEFORM3 , INLINEFORM4 can be interpreted as the semantics of this word extracted from the input sentence. Furthermore, if INLINEFORM5 appears neither in the input text nor in INLINEFORM6 , then we represent INLINEFORM7 using the INLINEFORM8 UNK INLINEFORM9 embedding. Given the current decoder's hidden state INLINEFORM0 , we can generate the target summary word INLINEFORM1 . As shown in fig:model:decoder, at each time step during decoding, the decoder outputs a distribution over generating words from INLINEFORM2 , as well as over copying a specific word INLINEFORM3 from the source sentence. Learning We jointly learn our encoder and decoder by maximizing the likelihood of decoding the correct word at each time step. We refer the reader to the experimental evaluation for more details. Experimental Evaluation In this section, we show results of abstractive summarization on the Gigaword ( BIBREF25 , BIBREF26 ) and DUC2004 ( BIBREF15 ) datasets.
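As a complement to the update rules above, here is a minimal sketch of the read-again second pass on a GRU, where an importance weight gates between keeping the previous second-pass state and taking a fresh GRU update. The exact form of the weights and of the cell inputs is a simplifying assumption rather than the authors' implementation.

```python
# Sketch of the read-again second pass: small importance weights let the state skip over
# uninformative words, while weights near 1 behave like a standard GRU step (simplified).
import torch
import torch.nn as nn

class ReadAgainGRU(nn.Module):
    def __init__(self, emb_dim, hid_dim):
        super().__init__()
        self.first_pass = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.second_cell = nn.GRUCell(emb_dim, hid_dim)
        # importance weights computed from the word, its first-pass state, and the sentence vector
        self.alpha = nn.Linear(emb_dim + 2 * hid_dim, hid_dim)

    def forward(self, x):                                   # x: (B, T, emb_dim)
        h1, _ = self.first_pass(x)                          # first-pass states (B, T, hid)
        sent_vec = h1[:, -1]                                # whole-sentence vector from the first read
        h2 = x.new_zeros(x.size(0), h1.size(-1))
        outputs = []
        for i in range(x.size(1)):
            a_i = torch.sigmoid(self.alpha(torch.cat([x[:, i], h1[:, i], sent_vec], dim=-1)))
            update = self.second_cell(x[:, i], h2)          # a standard GRU step on the current word
            h2 = (1.0 - a_i) * h2 + a_i * update            # small a_i -> keep the previous state
            outputs.append(h2)
        return torch.stack(outputs, dim=1)                  # second-pass representations (B, T, hid)
```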
Our model can learn a meaningful re-reading weight distribution for each word in the input text, putting more emphasis on important verbs and nouns, while ignoring common words such as prepositions. As for the decoder, we demonstrate that our copy mechanism can successfully reduce the typical vocabulary size by a factor of 5 while achieving much better performance than the state-of-the-art, and by a factor of 30 while maintaining the same level of performance. In addition, we provide an analysis and examples of which words are copied during decoding. Quantitative Evaluation Results on Gigaword: We compare the performances of different architectures and report ROUGE scores in Tab. TABREF32 . Our baselines include the ABS model of BIBREF2 with its proposed vocabulary size, as well as an attention encoder-decoder model with a uni-directional GRU encoder. We allow the decoder to generate variable-length summaries. As shown in Tab. TABREF32 , our Read-Again models outperform the baselines on all ROUGE scores, when using both 15K and 69K sized vocabularies. We also observe that adding the copy mechanism further helps to improve performance: Even though the decoder vocabulary size of our approach with copy (15K) is much smaller than ABS (69K) and GRU (69K), it achieves a higher ROUGE score. Besides, our Multiple-Sentences model achieves the best performance. Evaluation on DUC2004: DUC 2004 ( BIBREF15 ) is a commonly used benchmark for the summarization task, consisting of 500 news articles. Each article is paired with 4 different human-generated reference summaries, capped at 75 characters. This dataset is evaluation-only. Similar to BIBREF2 , we train our neural model on the Gigaword training set, and show the models' performances on DUC2004. Following the convention, we also use ROUGE limited-length recall as our evaluation metric, and set the capping length to 75 characters. We generate summaries with 15 words using a beam size of 10. As shown in Table TABREF35 , our method outperforms all previous methods on Rouge-1 and Rouge-L, and is comparable on Rouge-2. Furthermore, our model only uses a 15k decoder vocabulary, while previous methods use 69k or 200k. Importance Weight Visualization: As we described in the section before, INLINEFORM0 is a high-dimensional vector representing the importance of each word INLINEFORM1 . While the importance of a word is different over each dimension, by averaging we can still look at general trends of which words are more relevant. fig:weightvisual depicts sample sentences with the importance weight INLINEFORM2 over input words. Words such as the, a, 's, have small INLINEFORM3 , while words such as aeronautics, resettled, impediments, which carry more information, have higher values. This shows that our read-again technique indeed extracts useful information from the first reading to help bias the second reading results. Decoder Vocabulary Size Table TABREF42 shows the effect on our model of decreasing the decoder vocabulary size. We can see that when using the copy mechanism, we are able to reduce the decoder vocabulary size from 69K to 2K, with only a 2-3 point drop in ROUGE score. This contrasts with the models that do not use the copy mechanism. This is possibly due to two reasons. First, when faced with OOVs during decoding time, our model can extract their meanings from the input text. Second, equipped with a copy mechanism, our model can generate OOVs as summary words, maintaining its expressive ability even with a small decoder vocabulary size. Tab.
TABREF43 shows the decoding time as a function of vocabulary size. As computing the soft-max is usually the bottleneck for decoding, reducing vocabulary size dramatically reduces the decoding time from 0.38 second per sentence to 0.08 second. Tab. TABREF44 provides some examples of visualization of the copy mechanism. Note that we are able to copy key words from source sentences to improve the summary. From these examples we can see that our model is able to copy different types of rare words, such as special entities' names in case 1 and 2, rare nouns in case 3 and 4, adjectives in case 5 and 6, and even rare verbs in the last example. Note that in the third example, when the copy model's decoder uses the embedding of headmaster as its first input, which is extracted from the source sentence, it generates the same following sentence as the no-copy model. This probably means that the extracted embedding of headmaster is closely related to the learned embedding of teacher. Conclusion In this paper we have proposed two simple mechanisms to alleviate the problems of current encoder-decoder models. Our first contribution is a `Read-Again' model which does not form a representation of the input word until the whole sentence is read. Our second contribution is a copy mechanism that can handle out-of-vocabulary words in a principled manner allowing us to reduce the decoder vocabulary size and significantly speed up inference. We have demonstrated the effectiveness of our approach in the context of summarization and shown state-of-the-art performance. In the future, we plan to tackle summarization problems with large input text. We also plan to exploit our findings in other tasks such as machine translation.
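To make the OOV handling in the decoder concrete, below is a toy sketch of deriving an out-of-vocabulary word's input embedding from its encoder hidden state through a learned linear map, as described in the decoder section; the class and variable names are hypothetical.

```python
# Toy sketch: in-vocabulary words use a learned embedding table, while OOV words that occur
# in the source reuse their encoder hidden state through a learned linear map (W_c h_j + b_c).
import torch
import torch.nn as nn

class DecoderInputEmbedding(nn.Module):
    def __init__(self, vocab_size, emb_dim, enc_dim, unk_index=0):
        super().__init__()
        self.table = nn.Embedding(vocab_size, emb_dim)
        self.from_context = nn.Linear(enc_dim, emb_dim)    # maps encoder state to embedding space
        self.unk_index = unk_index

    def forward(self, word_id, in_vocab, source_pos, enc_h):
        # word_id: (B,) id in the small decoder vocab (unk_index for OOVs)
        # in_vocab: (B,) bool; source_pos: (B,) position in the source, -1 if absent
        # enc_h: (B, T, enc_dim) encoder hidden states
        emb = self.table(word_id)                              # default: vocabulary / <unk> embedding
        use_ctx = (~in_vocab) & (source_pos >= 0)              # OOV but present in the source text
        if use_ctx.any():
            idx = source_pos.clamp(min=0)
            batch = torch.arange(enc_h.size(0), device=enc_h.device)
            ctx = enc_h[batch, idx]                            # (B, enc_dim) hidden state of that word
            emb = torch.where(use_ctx.unsqueeze(-1), self.from_context(ctx), emb)
        return emb
```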
neural attention model with a convolutional encoder with an RNN decoder and RNN encoder-decoder
482b4cc7676cf13912e27899c718f4dc5d92846d
482b4cc7676cf13912e27899c718f4dc5d92846d_0
Q: How do they identify abbreviations? Text: Introduction Abbreviations and acronyms appear frequently in the medical domain. Based on a popular online knowledge base, among the 3,096,346 stored abbreviations, 197,787 records are medical abbreviations, ranked first among all ten domains. An abbreviation can have over 100 possible explanations even within the medical domain. Medical record documentation, the authors of which are mainly physicians, other health professionals, and domain experts, is usually written under the pressure of time and high workload, requiring notation to be frequently compressed with shorthand jargon and acronyms. This is even more evident within intensive care medicine, where it is crucial that information is expressed in the most efficient manner possible to provide time-sensitive care to critically ill patients, but can result in code-like messages with poor readability. For example, given a sentence written by a physician with specialty training in critical care medicine, “STAT TTE c/w RVS. AKI - no CTA. .. etc”, it is difficult for non-experts to understand all abbreviations without specific context and/or knowledge. But when a doctor reads this, he/she would know that although “STAT” is widely used as the abbreviation of “statistic”, “statistics” and “statistical” in most domains, in hospital emergency rooms, it is often used to represent “immediately”. Within the arena of medical research, abbreviation expansion using a natural language processing system to automatically analyze clinical notes may enable knowledge discovery (e.g., relations between diseases) and has potential to improve communication and quality of care. In this paper, we study the task of abbreviation expansion in clinical notes. As shown in Figure 1, our goal is to normalize all the abbreviations in the intensive care unit (ICU) documentation to reduce misinterpretation and to make the texts accessible to a wider range of readers. For accurately capturing the semantics of an abbreviation in its context, we adopt word embedding, which can be seen as a distributional semantic representation and has been proven to be effective BIBREF0 to compute the semantic similarity between words based on the context without any labeled data. The intuition of distributional semantics BIBREF1 is that if two words share similar contexts, they should have highly similar semantics. For example, in Figure 1, “RF” and “respiratory failure” have very similar contexts so that their semantics should be similar. If we know “respiratory failure” is a possible candidate expansion of “RF” and its semantics is similar to the “RF” in the intensive care medicine texts, we can determine that it should be the correct expansion of “RF”. Due to the limited resource of intensive care medicine texts where full expansions rarely appear, we exploit abundant and easily-accessible task-oriented resources to enrich our dataset for training embeddings. To the best of our knowledge, we are the first to apply word embeddings to this task. Experimental results show that the embeddings trained on the task-oriented corpus are much more useful than those trained on other corpora. By combining the embeddings with domain-specific knowledge, we achieve 82.27% accuracy, which outperforms baselines and is close to human's performance. Related Work The task of abbreviation disambiguation in biomedical documents has been studied by various researchers using supervised machine learning algorithms BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . 
However, the performance of these supervised methods mainly depends on a large amount of labeled data which is extremely difficult to obtain for our task since intensive care medicine texts are very rare resources in clinical domain due to the high cost of de-identification and annotation. Tengstrand et al. tengstrand2014eacl proposed a distributional semantics-based approach for abbreviation expansion in Swedish but they focused only on expanding single words and cannot handle multi-word phrases. In contrast, we use word embeddings combined with task-oriented resources and knowledge, which can handle multiword expressions. Overview The overview of our approach is shown in Figure FIGREF6 . Within ICU notes (e.g., text example in top-left box in Figure 2), we first identify all abbreviations using regular expressions and then try to find all possible expansions of these abbreviations from domain-specific knowledge base as candidates. We train word embeddings using the clinical notes data with task-oriented resources such as Wikipedia articles of candidates and medical scientific papers and compute the semantic similarity between an abbreviation and its candidate expansions based on their embeddings (vector representations of words). Training embeddings with task oriented resources Given an abbreviation as input, we expect the correct expansion to be the most semantically similar to the abbreviation, which requires the abbreviation and the expansion share similar contexts. For this reason, we exploit rich task-oriented resources such as the Wikipedia articles of all the possible candidates, research papers and books written by the intensive care medicine fellows. Together with our clinical notes data which functions as a corpus, we train word embeddings since the expansions of abbreviations in the clinical notes are likely to appear in these resources and also share the similar contexts to the abbreviation's contexts. Handling MultiWord Phrases In most cases, an abbreviation's expansion is a multi-word phrase. Therefore, we need to obtain the phrase's embedding so that we can compute its semantic similarity to the abbreviation. It is proven that a phrase's embedding can be effectively obtained by summing the embeddings of words contained in the phrase BIBREF0 , BIBREF7 . For computing a phrase's embedding, we formally define a candidate INLINEFORM0 as a list of the words contained in the candidate, for example: one of MICU's candidate expansions is medical intensive care unit=[medical,intensive,care,unit]. Then, INLINEFORM1 's embedding can be computed as follows: DISPLAYFORM0 where INLINEFORM0 is a token in the candidate INLINEFORM1 and INLINEFORM2 denotes the embedding of a word/phrase, which is a vector of real-value entries. Expansion Candidate Ranking Even though embeddings are very helpful to compute the semantic similarity between an abbreviation and a candidate expansion, in some cases, context-independent information is also useful to identify the correct expansion. For example, CHF in the clinical notes usually refers to “congestive heart failure”. By using embedding-based semantic similarity, we can find two possible candidates – “congestive heart failure” (similarity=0.595) and “chronic heart failure”(similarity=0.621). These two candidates have close semantic similarity score but their popularity scores in the medical domain are quite different – the former has a rating score of 50 while the latter only has a rating score of 7. 
Therefore, we can see that the rating score, which can be seen as a kind of domain-specific knowledge, can also contribute to the candidate ranking. We combine semantic similarity with rating information. Formally, given an abbreviation INLINEFORM0 's candidate list INLINEFORM1 , we rank INLINEFORM2 based on the following formula: DISPLAYFORM0 where INLINEFORM0 denotes the rating of this candidate as an expansion of the abbreviation INLINEFORM1 , which reflects this candidate's popularity, and INLINEFORM2 denotes the embedding of a word. The parameter INLINEFORM3 serves to adjust the weights of similarity and popularity. Data and Evaluation Metrics The clinical notes we used for the experiment are provided by domain experts, consisting of 1,160 physician logs of Medical ICU admission requests at a tertiary care center affiliated with Mount Sinai. Prospectively collected over one year, these semi-structured logs contain free-text descriptions of patients' clinical presentations, medical history, and required critical care-level interventions. We identify 818 abbreviations and find 42,506 candidates using domain-specific knowledge (i.e., www.allacronym.com/_medical). The enriched corpus contains 42,506 Wikipedia articles, each of which corresponds to one candidate, 6 research papers and 2 critical care medicine textbooks, besides our raw ICU data. We use word2vec BIBREF0 to train the word embeddings. The dimension of the embeddings is empirically set to 100. Since the goal of our task is to find the correct expansion for an abbreviation, we use accuracy as the metric to evaluate the performance of our approach. For ground truth, we have 100 physician logs which are manually expanded and normalized by one of the authors, Dr. Mathews, a well-trained domain expert, and thus we use these 100 physician logs as the test set to evaluate our approach's performance. Baseline Models For our task, it is difficult to re-implement the supervised methods from the previous work mentioned above, since we do not have sufficient training data. A direct comparison is also impossible because all previous work used different datasets, which are not publicly available. Alternatively, we use the following baselines to compare with our approach. Rating: This baseline model chooses the highest-rated candidate expansion in the domain-specific knowledge base. Raw Input embeddings: We train word embeddings only from the 1,160 raw ICU texts and choose the most semantically related candidate as the answer. General embeddings: Different from the Raw Input embeddings baseline, we use embeddings trained from a large biomedical data collection that includes knowledge bases like PubMed and PMC and a Wikipedia dump of biomedical-related articles BIBREF8 for semantic similarity computation. Results Table 1 shows the performance of abbreviation expansion. Our approach significantly outperforms the baseline methods and achieves 82.27% accuracy. Figure FIGREF21 shows how our approach improves the performance of a rating-based approach. By using embeddings, we can learn that the meaning of “OD” used in our test cases should be “overdose” rather than “out-of-date”, and this semantic information largely benefits the abbreviation expansion model. Compared with our approach, embeddings trained only from the ICU texts do not significantly contribute to the performance over the rating baseline.
The reason is that the size of the data for training the embeddings is so small that many candidate expansions of abbreviations do not appear in the corpus, which results in poor performance. It is notable that general embeddings trained from large biomedical data are not effective for this task because many abbreviations within critical care medicine appear in the biomedical corpus with different senses. For example, “OD” in intensive care medicine texts refers to “overdose”, while in the PubMed corpus it usually refers to “optical density”, as shown in Figure FIGREF24 . Therefore, the embeddings trained from the PubMed corpus do not benefit the expansion of abbreviations in the ICU texts. Moreover, we estimated human performance for this task, shown in Table TABREF26 . Note that the performance is estimated by one of the authors, Dr. Mathews, who is a board-certified pulmonologist and critical care medicine specialist, based on her experience, and the human performance estimated in Table TABREF26 is under the condition that the participants cannot use any external resources. We can see that our approach can achieve a performance close to that of domain experts, and thus it is promising for tackling this challenge. Error Analysis The distribution of errors is shown in Table TABREF28 . There are mainly three reasons that cause incorrect expansions. First, in some cases, certain abbreviations do not exist in the knowledge base; in this case we are not able to populate the corresponding candidate list. Second, in many cases, although we have the correct expansion in the candidate list, it is not ranked at the top due to low semantic similarity, because there are not enough samples in the training data. Among all the incorrect expansions in our test set, such errors account for about 54%. One possible solution may be adding more effective data to the embedding training, which means discovering more task-oriented resources. Third, in a few cases, we fail to identify some abbreviations because of their complicated representations. For example, we have the following sentence in the patient's notes: “ No n/v/f/c.” and the correct expansion should be “No nausea/vomiting/fever/chills.” Such abbreviations are by far the most difficult to expand in our task because they do not exist in any knowledge base and usually only occur once in the training data. Conclusions and Future Work This paper proposes a simple but novel approach for the automatic expansion of abbreviations. It achieves very good performance without any manually labeled data. Experiments demonstrate that using task-oriented resources to train word embeddings is much more effective than using a general or arbitrary corpus. In the future, we plan to collectively expand semantically related abbreviations co-occurring in a sentence. In addition, we expect to integrate our work into a natural language processing system for processing clinical notes and discovering knowledge, which will largely benefit medical research. Acknowledgements This work is supported by RPI's Tetherless World Constellation, IARPA FUSE Numbers D11PC20154 and J71493 and DARPA DEFT No. FA8750-13-2-0041. Dr. Mathews' effort is supported by Award #1K12HL109005-01 from the National Heart, Lung, and Blood Institute (NHLBI). The content is solely the responsibility of the authors and does not necessarily represent the official views of NHLBI, the National Institutes of Health, IARPA, or DARPA.
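To make the ranking step concrete, the snippet below scores candidate expansions by combining cosine similarity between summed word vectors with the candidate's rating, roughly following the formula above; the linear weighting, the default weight value and the gensim-style vector lookup are assumptions rather than the authors' code.

```python
# Hedged sketch of candidate ranking: cosine similarity between the abbreviation's embedding
# and the candidate phrase embedding (sum of word vectors), combined with a popularity rating.
# `wv` is assumed to behave like gensim KeyedVectors; lambda_rating is a tunable weight.
import numpy as np

def phrase_vector(wv, phrase):
    vecs = [wv[w] for w in phrase.lower().split() if w in wv]
    return np.sum(vecs, axis=0) if vecs else None

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def rank_candidates(wv, abbrev, candidates, lambda_rating=0.01):
    """candidates: list of (expansion_string, rating). Returns expansions best-first."""
    abbrev_vec = wv[abbrev.lower()] if abbrev.lower() in wv else None
    scored = []
    for expansion, rating in candidates:
        cand_vec = phrase_vector(wv, expansion)
        sim = cosine(abbrev_vec, cand_vec) if (abbrev_vec is not None and cand_vec is not None) else 0.0
        scored.append((sim + lambda_rating * rating, expansion))
    return [e for _, e in sorted(scored, reverse=True)]

# e.g. rank_candidates(wv, "CHF", [("congestive heart failure", 50), ("chronic heart failure", 7)])
```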
identify all abbreviations using regular expressions
0c09ffb337be0feb25e2fd14164b35a0969d7b4c
0c09ffb337be0feb25e2fd14164b35a0969d7b4c_0
Q: What kind of model do they build to expand abbreviations? Text: Introduction Abbreviations and acronyms appear frequently in the medical domain. Based on a popular online knowledge base, among the 3,096,346 stored abbreviations, 197,787 records are medical abbreviations, ranked first among all ten domains. An abbreviation can have over 100 possible explanations even within the medical domain. Medical record documentation, the authors of which are mainly physicians, other health professionals, and domain experts, is usually written under the pressure of time and high workload, requiring notation to be frequently compressed with shorthand jargon and acronyms. This is even more evident within intensive care medicine, where it is crucial that information is expressed in the most efficient manner possible to provide time-sensitive care to critically ill patients, but can result in code-like messages with poor readability. For example, given a sentence written by a physician with specialty training in critical care medicine, “STAT TTE c/w RVS. AKI - no CTA. .. etc”, it is difficult for non-experts to understand all abbreviations without specific context and/or knowledge. But when a doctor reads this, he/she would know that although “STAT” is widely used as the abbreviation of “statistic”, “statistics” and “statistical” in most domains, in hospital emergency rooms, it is often used to represent “immediately”. Within the arena of medical research, abbreviation expansion using a natural language processing system to automatically analyze clinical notes may enable knowledge discovery (e.g., relations between diseases) and has potential to improve communication and quality of care. In this paper, we study the task of abbreviation expansion in clinical notes. As shown in Figure 1, our goal is to normalize all the abbreviations in the intensive care unit (ICU) documentation to reduce misinterpretation and to make the texts accessible to a wider range of readers. For accurately capturing the semantics of an abbreviation in its context, we adopt word embedding, which can be seen as a distributional semantic representation and has been proven to be effective BIBREF0 to compute the semantic similarity between words based on the context without any labeled data. The intuition of distributional semantics BIBREF1 is that if two words share similar contexts, they should have highly similar semantics. For example, in Figure 1, “RF” and “respiratory failure” have very similar contexts so that their semantics should be similar. If we know “respiratory failure” is a possible candidate expansion of “RF” and its semantics is similar to the “RF” in the intensive care medicine texts, we can determine that it should be the correct expansion of “RF”. Due to the limited resource of intensive care medicine texts where full expansions rarely appear, we exploit abundant and easily-accessible task-oriented resources to enrich our dataset for training embeddings. To the best of our knowledge, we are the first to apply word embeddings to this task. Experimental results show that the embeddings trained on the task-oriented corpus are much more useful than those trained on other corpora. By combining the embeddings with domain-specific knowledge, we achieve 82.27% accuracy, which outperforms baselines and is close to human's performance. 
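As a concrete, hedged illustration of the embedding idea sketched in the introduction, the snippet below trains word2vec-style embeddings on a toy stand-in for the task-oriented corpus and compares an abbreviation with a candidate expansion; the parameter names assume gensim 4.x, and the toy corpus is a placeholder for the real ICU notes plus task-oriented resources.

```python
# Illustrative only: train embeddings on a (toy) combined corpus and compare "rf" with
# "respiratory failure" by summing word vectors and taking cosine similarity.
import numpy as np
from gensim.models import Word2Vec

corpus_sentences = [
    ["pt", "with", "rf", "on", "mechanical", "ventilation"],
    ["respiratory", "failure", "requiring", "mechanical", "ventilation"],
    ["acute", "respiratory", "failure", "rf", "intubated"],
] * 50   # tiny toy corpus; in practice this would be the notes plus task-oriented resources

model = Word2Vec(
    sentences=corpus_sentences,
    vector_size=100,              # the paper sets the embedding dimension to 100
    window=5,
    min_count=2,
    workers=4,
)

def phrase_vec(words):
    return np.sum([model.wv[w] for w in words if w in model.wv], axis=0)

abbrev = model.wv["rf"]
expansion = phrase_vec(["respiratory", "failure"])
cos = np.dot(abbrev, expansion) / (np.linalg.norm(abbrev) * np.linalg.norm(expansion))
print(f"similarity(rf, respiratory failure) = {cos:.3f}")
```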
Related Work The task of abbreviation disambiguation in biomedical documents has been studied by various researchers using supervised machine learning algorithms BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . However, the performance of these supervised methods mainly depends on a large amount of labeled data which is extremely difficult to obtain for our task since intensive care medicine texts are very rare resources in clinical domain due to the high cost of de-identification and annotation. Tengstrand et al. tengstrand2014eacl proposed a distributional semantics-based approach for abbreviation expansion in Swedish but they focused only on expanding single words and cannot handle multi-word phrases. In contrast, we use word embeddings combined with task-oriented resources and knowledge, which can handle multiword expressions. Overview The overview of our approach is shown in Figure FIGREF6 . Within ICU notes (e.g., text example in top-left box in Figure 2), we first identify all abbreviations using regular expressions and then try to find all possible expansions of these abbreviations from domain-specific knowledge base as candidates. We train word embeddings using the clinical notes data with task-oriented resources such as Wikipedia articles of candidates and medical scientific papers and compute the semantic similarity between an abbreviation and its candidate expansions based on their embeddings (vector representations of words). Training embeddings with task oriented resources Given an abbreviation as input, we expect the correct expansion to be the most semantically similar to the abbreviation, which requires the abbreviation and the expansion share similar contexts. For this reason, we exploit rich task-oriented resources such as the Wikipedia articles of all the possible candidates, research papers and books written by the intensive care medicine fellows. Together with our clinical notes data which functions as a corpus, we train word embeddings since the expansions of abbreviations in the clinical notes are likely to appear in these resources and also share the similar contexts to the abbreviation's contexts. Handling MultiWord Phrases In most cases, an abbreviation's expansion is a multi-word phrase. Therefore, we need to obtain the phrase's embedding so that we can compute its semantic similarity to the abbreviation. It is proven that a phrase's embedding can be effectively obtained by summing the embeddings of words contained in the phrase BIBREF0 , BIBREF7 . For computing a phrase's embedding, we formally define a candidate INLINEFORM0 as a list of the words contained in the candidate, for example: one of MICU's candidate expansions is medical intensive care unit=[medical,intensive,care,unit]. Then, INLINEFORM1 's embedding can be computed as follows: DISPLAYFORM0 where INLINEFORM0 is a token in the candidate INLINEFORM1 and INLINEFORM2 denotes the embedding of a word/phrase, which is a vector of real-value entries. Expansion Candidate Ranking Even though embeddings are very helpful to compute the semantic similarity between an abbreviation and a candidate expansion, in some cases, context-independent information is also useful to identify the correct expansion. For example, CHF in the clinical notes usually refers to “congestive heart failure”. By using embedding-based semantic similarity, we can find two possible candidates – “congestive heart failure” (similarity=0.595) and “chronic heart failure”(similarity=0.621). 
These two candidates have close semantic similarity scores, but their popularity scores in the medical domain are quite different – the former has a rating score of 50 while the latter only has a rating score of 7. Therefore, we can see that the rating score, which can be seen as a kind of domain-specific knowledge, can also contribute to the candidate ranking. We combine semantic similarity with rating information. Formally, given an abbreviation INLINEFORM0 's candidate list INLINEFORM1 , we rank INLINEFORM2 based on the following formula: DISPLAYFORM0 where INLINEFORM0 denotes the rating of this candidate as an expansion of the abbreviation INLINEFORM1 , which reflects this candidate's popularity, and INLINEFORM2 denotes the embedding of a word. The parameter INLINEFORM3 serves to adjust the weights of similarity and popularity. Data and Evaluation Metrics The clinical notes we used for the experiment are provided by domain experts, consisting of 1,160 physician logs of Medical ICU admission requests at a tertiary care center affiliated with Mount Sinai. Prospectively collected over one year, these semi-structured logs contain free-text descriptions of patients' clinical presentations, medical history, and required critical care-level interventions. We identify 818 abbreviations and find 42,506 candidates using domain-specific knowledge (i.e., www.allacronym.com/_medical). The enriched corpus contains 42,506 Wikipedia articles, each of which corresponds to one candidate, 6 research papers and 2 critical care medicine textbooks, besides our raw ICU data. We use word2vec BIBREF0 to train the word embeddings. The dimension of the embeddings is empirically set to 100. Since the goal of our task is to find the correct expansion for an abbreviation, we use accuracy as the metric to evaluate the performance of our approach. For ground truth, we have 100 physician logs which are manually expanded and normalized by one of the authors, Dr. Mathews, a well-trained domain expert, and thus we use these 100 physician logs as the test set to evaluate our approach's performance. Baseline Models For our task, it is difficult to re-implement the supervised methods from the previous work mentioned above, since we do not have sufficient training data. A direct comparison is also impossible because all previous work used different datasets, which are not publicly available. Alternatively, we use the following baselines to compare with our approach. Rating: This baseline model chooses the highest-rated candidate expansion in the domain-specific knowledge base. Raw Input embeddings: We train word embeddings only from the 1,160 raw ICU texts and choose the most semantically related candidate as the answer. General embeddings: Different from the Raw Input embeddings baseline, we use embeddings trained from a large biomedical data collection that includes knowledge bases like PubMed and PMC and a Wikipedia dump of biomedical-related articles BIBREF8 for semantic similarity computation. Results Table 1 shows the performance of abbreviation expansion. Our approach significantly outperforms the baseline methods and achieves 82.27% accuracy. Figure FIGREF21 shows how our approach improves the performance of a rating-based approach. By using embeddings, we can learn that the meaning of “OD” used in our test cases should be “overdose” rather than “out-of-date”, and this semantic information largely benefits the abbreviation expansion model.
Compared with our approach, embeddings trained only from the ICU texts do not significantly contribute to the performance over the rating baseline. The reason is that the size of the data for training the embeddings is so small that many candidate expansions of abbreviations do not appear in the corpus, which results in poor performance. It is notable that general embeddings trained from large biomedical data are not effective for this task because many abbreviations within critical care medicine appear in the biomedical corpus with different senses. For example, “OD” in intensive care medicine texts refers to “overdose”, while in the PubMed corpus it usually refers to “optical density”, as shown in Figure FIGREF24 . Therefore, the embeddings trained from the PubMed corpus do not benefit the expansion of abbreviations in the ICU texts. Moreover, we estimated human performance for this task, shown in Table TABREF26 . Note that the performance is estimated by one of the authors, Dr. Mathews, who is a board-certified pulmonologist and critical care medicine specialist, based on her experience, and the human performance estimated in Table TABREF26 is under the condition that the participants cannot use any external resources. We can see that our approach can achieve a performance close to that of domain experts, and thus it is promising for tackling this challenge. Error Analysis The distribution of errors is shown in Table TABREF28 . There are mainly three reasons that cause incorrect expansions. First, in some cases, certain abbreviations do not exist in the knowledge base; in this case we are not able to populate the corresponding candidate list. Second, in many cases, although we have the correct expansion in the candidate list, it is not ranked at the top due to low semantic similarity, because there are not enough samples in the training data. Among all the incorrect expansions in our test set, such errors account for about 54%. One possible solution may be adding more effective data to the embedding training, which means discovering more task-oriented resources. Third, in a few cases, we fail to identify some abbreviations because of their complicated representations. For example, we have the following sentence in the patient's notes: “ No n/v/f/c.” and the correct expansion should be “No nausea/vomiting/fever/chills.” Such abbreviations are by far the most difficult to expand in our task because they do not exist in any knowledge base and usually only occur once in the training data. Conclusions and Future Work This paper proposes a simple but novel approach for the automatic expansion of abbreviations. It achieves very good performance without any manually labeled data. Experiments demonstrate that using task-oriented resources to train word embeddings is much more effective than using a general or arbitrary corpus. In the future, we plan to collectively expand semantically related abbreviations co-occurring in a sentence. In addition, we expect to integrate our work into a natural language processing system for processing clinical notes and discovering knowledge, which will largely benefit medical research. Acknowledgements This work is supported by RPI's Tetherless World Constellation, IARPA FUSE Numbers D11PC20154 and J71493 and DARPA DEFT No. FA8750-13-2-0041. Dr. Mathews' effort is supported by Award #1K12HL109005-01 from the National Heart, Lung, and Blood Institute (NHLBI).
The content is solely the responsibility of the authors and does not necessarily represent the official views of NHLBI, the National Institutes of Health, IARPA, or DARPA.
word2vec BIBREF0
385dc96604e077611fbd877c7f39d3c17cd63bf2
385dc96604e077611fbd877c7f39d3c17cd63bf2_0
Q: Do they use any knowledge base to expand abbreviations? Text: Introduction Abbreviations and acronyms appear frequently in the medical domain. Based on a popular online knowledge base, among the 3,096,346 stored abbreviations, 197,787 records are medical abbreviations, ranked first among all ten domains. An abbreviation can have over 100 possible explanations even within the medical domain. Medical record documentation, the authors of which are mainly physicians, other health professionals, and domain experts, is usually written under the pressure of time and high workload, requiring notation to be frequently compressed with shorthand jargon and acronyms. This is even more evident within intensive care medicine, where it is crucial that information is expressed in the most efficient manner possible to provide time-sensitive care to critically ill patients, but can result in code-like messages with poor readability. For example, given a sentence written by a physician with specialty training in critical care medicine, “STAT TTE c/w RVS. AKI - no CTA. .. etc”, it is difficult for non-experts to understand all abbreviations without specific context and/or knowledge. But when a doctor reads this, he/she would know that although “STAT” is widely used as the abbreviation of “statistic”, “statistics” and “statistical” in most domains, in hospital emergency rooms, it is often used to represent “immediately”. Within the arena of medical research, abbreviation expansion using a natural language processing system to automatically analyze clinical notes may enable knowledge discovery (e.g., relations between diseases) and has potential to improve communication and quality of care. In this paper, we study the task of abbreviation expansion in clinical notes. As shown in Figure 1, our goal is to normalize all the abbreviations in the intensive care unit (ICU) documentation to reduce misinterpretation and to make the texts accessible to a wider range of readers. For accurately capturing the semantics of an abbreviation in its context, we adopt word embedding, which can be seen as a distributional semantic representation and has been proven to be effective BIBREF0 to compute the semantic similarity between words based on the context without any labeled data. The intuition of distributional semantics BIBREF1 is that if two words share similar contexts, they should have highly similar semantics. For example, in Figure 1, “RF” and “respiratory failure” have very similar contexts so that their semantics should be similar. If we know “respiratory failure” is a possible candidate expansion of “RF” and its semantics is similar to the “RF” in the intensive care medicine texts, we can determine that it should be the correct expansion of “RF”. Due to the limited resource of intensive care medicine texts where full expansions rarely appear, we exploit abundant and easily-accessible task-oriented resources to enrich our dataset for training embeddings. To the best of our knowledge, we are the first to apply word embeddings to this task. Experimental results show that the embeddings trained on the task-oriented corpus are much more useful than those trained on other corpora. By combining the embeddings with domain-specific knowledge, we achieve 82.27% accuracy, which outperforms baselines and is close to human's performance. 
Related Work The task of abbreviation disambiguation in biomedical documents has been studied by various researchers using supervised machine learning algorithms BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 . However, the performance of these supervised methods mainly depends on a large amount of labeled data which is extremely difficult to obtain for our task since intensive care medicine texts are very rare resources in clinical domain due to the high cost of de-identification and annotation. Tengstrand et al. tengstrand2014eacl proposed a distributional semantics-based approach for abbreviation expansion in Swedish but they focused only on expanding single words and cannot handle multi-word phrases. In contrast, we use word embeddings combined with task-oriented resources and knowledge, which can handle multiword expressions. Overview The overview of our approach is shown in Figure FIGREF6 . Within ICU notes (e.g., text example in top-left box in Figure 2), we first identify all abbreviations using regular expressions and then try to find all possible expansions of these abbreviations from domain-specific knowledge base as candidates. We train word embeddings using the clinical notes data with task-oriented resources such as Wikipedia articles of candidates and medical scientific papers and compute the semantic similarity between an abbreviation and its candidate expansions based on their embeddings (vector representations of words). Training embeddings with task oriented resources Given an abbreviation as input, we expect the correct expansion to be the most semantically similar to the abbreviation, which requires the abbreviation and the expansion share similar contexts. For this reason, we exploit rich task-oriented resources such as the Wikipedia articles of all the possible candidates, research papers and books written by the intensive care medicine fellows. Together with our clinical notes data which functions as a corpus, we train word embeddings since the expansions of abbreviations in the clinical notes are likely to appear in these resources and also share the similar contexts to the abbreviation's contexts. Handling MultiWord Phrases In most cases, an abbreviation's expansion is a multi-word phrase. Therefore, we need to obtain the phrase's embedding so that we can compute its semantic similarity to the abbreviation. It is proven that a phrase's embedding can be effectively obtained by summing the embeddings of words contained in the phrase BIBREF0 , BIBREF7 . For computing a phrase's embedding, we formally define a candidate INLINEFORM0 as a list of the words contained in the candidate, for example: one of MICU's candidate expansions is medical intensive care unit=[medical,intensive,care,unit]. Then, INLINEFORM1 's embedding can be computed as follows: DISPLAYFORM0 where INLINEFORM0 is a token in the candidate INLINEFORM1 and INLINEFORM2 denotes the embedding of a word/phrase, which is a vector of real-value entries. Expansion Candidate Ranking Even though embeddings are very helpful to compute the semantic similarity between an abbreviation and a candidate expansion, in some cases, context-independent information is also useful to identify the correct expansion. For example, CHF in the clinical notes usually refers to “congestive heart failure”. By using embedding-based semantic similarity, we can find two possible candidates – “congestive heart failure” (similarity=0.595) and “chronic heart failure”(similarity=0.621). 
These two candidates have close semantic similarity scores, but their popularity scores in the medical domain are quite different – the former has a rating score of 50 while the latter only has a rating score of 7. Therefore, we can see that the rating score, which can be seen as a kind of domain-specific knowledge, can also contribute to the candidate ranking. We combine semantic similarity with rating information. Formally, given an abbreviation INLINEFORM0 's candidate list INLINEFORM1 , we rank INLINEFORM2 based on the following formula: DISPLAYFORM0 where INLINEFORM0 denotes the rating of this candidate as an expansion of the abbreviation INLINEFORM1 , which reflects this candidate's popularity, and INLINEFORM2 denotes the embedding of a word. The parameter INLINEFORM3 serves to adjust the weights of similarity and popularity. Data and Evaluation Metrics The clinical notes we used for the experiment are provided by domain experts, consisting of 1,160 physician logs of Medical ICU admission requests at a tertiary care center affiliated with Mount Sinai. Prospectively collected over one year, these semi-structured logs contain free-text descriptions of patients' clinical presentations, medical history, and required critical care-level interventions. We identify 818 abbreviations and find 42,506 candidates using domain-specific knowledge (i.e., www.allacronym.com/_medical). The enriched corpus contains 42,506 Wikipedia articles, each of which corresponds to one candidate, 6 research papers and 2 critical care medicine textbooks, besides our raw ICU data. We use word2vec BIBREF0 to train the word embeddings. The dimension of the embeddings is empirically set to 100. Since the goal of our task is to find the correct expansion for an abbreviation, we use accuracy as the metric to evaluate the performance of our approach. For ground truth, we have 100 physician logs which are manually expanded and normalized by one of the authors, Dr. Mathews, a well-trained domain expert, and thus we use these 100 physician logs as the test set to evaluate our approach's performance. Baseline Models For our task, it is difficult to re-implement the supervised methods of the previous works mentioned above since we do not have sufficient training data. A direct comparison is also impossible because all previous work used different data sets which are not publicly available. Alternatively, we use the following baselines to compare with our approach. Rating: This baseline model chooses the highest-rated candidate expansion in the domain-specific knowledge base. Raw Input embeddings: We train word embeddings only on the 1,160 raw ICU texts and choose the most semantically related candidate as the answer. General embeddings: Different from the Raw Input embeddings baseline, we use embeddings trained on a large biomedical data collection that includes knowledge bases like PubMed and PMC and a Wikipedia dump of biomedical related articles BIBREF8 for semantic similarity computation. Results Table 1 shows the performance of abbreviation expansion. Our approach significantly outperforms the baseline methods and achieves 82.27% accuracy. Figure FIGREF21 shows how our approach improves the performance of a rating-based approach. By using embeddings, we can learn that the meaning of “OD” used in our test cases should be “overdose” rather than “out-of-date”, and this semantic information largely benefits the abbreviation expansion model.
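The ranking formula itself is lost to the DISPLAYFORM placeholder above, so the exact way similarity and rating are combined is not reproduced here. The sketch below uses a simple weighted sum only to illustrate the stated idea that a parameter trades off embedding similarity against knowledge-base popularity; the normalization and the weight value are assumptions.

import numpy as np

def rank_candidates(abbrev_vec, candidates, wv, ratings, lam=0.5):
    # candidates: list of token lists; ratings: joined candidate string -> knowledge-base rating.
    # lam balances similarity and popularity, echoing the paper's weighting parameter;
    # its value and the exact functional form here are assumptions.
    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    max_rating = max(ratings.values()) or 1.0
    scored = []
    for cand in candidates:
        vecs = [wv[t] for t in cand if t in wv]
        cand_vec = np.sum(vecs, axis=0) if vecs else np.zeros_like(abbrev_vec)
        sim = cosine(abbrev_vec, cand_vec)
        pop = ratings.get(" ".join(cand), 0) / max_rating   # scale ratings (e.g., 50 vs. 7) to [0, 1]
        scored.append((lam * sim + (1.0 - lam) * pop, cand))
    return sorted(scored, key=lambda x: x[0], reverse=True)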
Compared with our approach, embeddings trained only from the ICU texts do not significantly contribute to the performance over the rating baseline. The reason is that the size of data for training the embeddings is so small that many candidate expansions of abbreviations do not appear in the corpus, which results in poor performance. It is notable that general embeddings trained from large biomedical data are not effective for this task because many abbreviations within critical care medicine appear in the biomedical corpus with different senses. For example, “OD” in intensive care medicine texts refers to “overdose” while in the PubMed corpus it usually refers to “optical density”, as shown in Figure FIGREF24 . Therefore, the embeddings trained from the PubMed corpus do not benefit the expansion of abbreviations in the ICU texts. Moreover, we estimated human performance for this task, shown in Table TABREF26 . Note that the performance is estimated by one of the authors Dr. Mathews who is a board-certified pulmonologist and critical care medicine specialist based on her experience and the human's performance estimated in Table TABREF26 is under the condition that the participants can not use any other external resources. We can see that our approach can achieve a performance close to domain experts and thus it is promising to tackle this challenge. Error Analysis The distribution of errors is shown in Table TABREF28 . There are mainly three reasons that cause the incorrect expansion. In some cases, some certain abbreviations do not exist in the knowledge base. In this case we would not be able to populate the corresponding candidate list. Secondly, in many cases although we have the correct expansion in the candidate list, it's not ranked as the top one due to the lower semantic similarity because there are not enough samples in the training data. Among all the incorrect expansions in our test set, such kind of errors accounted for about 54%. One possible solution may be adding more effective data to the embedding training, which means discovering more task-oriented resources. In a few cases, we failed to identify some abbreviations because of their complicated representations. For example, we have the following sentence in the patient's notes: “ No n/v/f/c.” and the correct expansion should be “No nausea/vomiting/fever/chills.” Such abbreviations are by far the most difficult to expand in our task because they do not exist in any knowledge base and usually only occur once in the training data. Conclusions and Future Work This paper proposes a simple but novel approach for automatic expansion of abbreviations. It achieves very good performance without any manually labeled data. Experiments demonstrate that using task-oriented resources to train word embeddings is much more effective than using general or arbitrary corpus. In the future, we plan to collectively expand semantically related abbreviations co-occurring in a sentence. In addition, we expect to integrate our work into a natural language processing system for processing the clinical notes for discovering knowledge, which will largely benefit the medical research. Acknowledgements This work is supported by RPI's Tetherless World Constellation, IARPA FUSE Numbers D11PC20154 and J71493 and DARPA DEFT No. FA8750-13-2-0041. Dr. Mathews' effort is supported by Award #1K12HL109005-01 from the National Heart, Lung, and Blood Institute (NHLBI). 
The content is solely the responsibility of the authors and does not necessarily represent the official views of NHLBI, the National Institutes of Health, IARPA, or DARPA.
Yes
551a17fc1d5b5c3d18bdc4923363cbbda7eb2516
551a17fc1d5b5c3d18bdc4923363cbbda7eb2516_0
Q: In their used dataset, do they study how many abbreviations are ambiguous?
No
62a3dc90ba427c5985789001a02825c9434ce67d
62a3dc90ba427c5985789001a02825c9434ce67d_0
Q: Which dataset do they use to build their model?
1,160 physician logs of Medical ICU admission requests, 42,506 Wikipedia articles, 6 research papers and 2 critical care medicine textbooks
b4f5bf3b7b37e2f22d13b724ca8fe7d0888e04a2
b4f5bf3b7b37e2f22d13b724ca8fe7d0888e04a2_0
Q: What is the domain of their collected corpus? Text: Introduction Spoken Language Understanding (SLU) is a core component in dialogue systems. It typically aims to identify the intent and semantic constituents for a given utterance, which are referred as intent detection and slot filling, respectively. Past years have witnessed rapid developments in diverse deep learning models BIBREF0, BIBREF1 for SLU. To take full advantage of supervised signals of slots and intents, and share knowledge between them, most of existing works apply joint models that mainly based on CNNs BIBREF2, BIBREF3, RNNs BIBREF4, BIBREF5, and asynchronous bi-model BIBREF6. Generally, these joint models encode words convolutionally or sequentially, and then aggregate hidden states into a utterance-level representation for the intent prediction, without interactions between representations of slots and intents. Intuitively, slots and intents from similar fields tend to occur simultaneously, which can be observed from Figure FIGREF2 and Table TABREF3. Therefore, it is beneficial to generate the representations of slots and intents with the guidance from each other. Some works explore enhancing the slot filling task unidirectionally with the guidance from intent representations via gating mechanisms BIBREF7, BIBREF8, while the predictions of intents lack the guidance from slots. Moreover, the capsule network with dynamic routing algorithms BIBREF9 is proposed to perform interactions in both directions. However, there are still two limitations in this model. The one is that the information flows from words to slots, slots to intents and intents to words in a pipeline manner, which is to some extent limited in capturing complicated correlations among words, slots and intents. The other is that the local context information which has been shown highly useful for the slot filling BIBREF10, is not explicitly modeled. In this paper, we try to address these issues, and thus propose a novel $\mathbf {C}$ollaborative $\mathbf {M}$emory $\mathbf {N}$etwork, named CM-Net. The main idea is to directly capture semantic relationships among words, slots and intents, which is conducted simultaneously at each word position in a collaborative manner. Specifically, we alternately perform information exchange among the task-specific features referred from memories, local context representations and global sequential information via the well-designed block, named CM-block, which consists of three computational components: Deliberate Attention: Obtaining slot-specific and intent-specific representations from memories in a collaborative manner. Local Calculation: Updating local context representations with the guidances of the referred slot and intent representations in the previous Deliberate Attention. Global Recurrence: Generating specific (slot and intent) global sequential representations based on local context representations from the previous Local Calculation. Above components in each CM-block are conducted consecutively, which are responsible for encoding information from different perspectives. Finally, multiple CM-blocks are stacked together, and construct our CM-Net. We firstly conduct experiments on two popular benchmarks, SNIPS BIBREF11 and ATIS BIBREF12, BIBREF13. Experimental results show that the CM-Net achieves the state-of-the-art results in 3 of 4 criteria (e.g., intent detection accuracy on ATIS) on both benchmarks. 
Additionally, trials on our self-collected dataset, named CAIS, demonstrate the effectiveness and generalizability of the CM-Net. Our main contributions are as follows: We propose a novel CM-Net for SLU, which explicitly captures semantic correlations among words, slots and intents in a collaborative manner, and incrementally enriches the specific features, local context representations and global sequential representations through stacked CM-blocks. Our CM-Net achieves the state-of-the-art results on two major SLU benchmarks (ATIS and SNIPS) in most of criteria. We contribute a new corpus CAIS with manual annotations of slot tags and intent labels to the research community. Background In principle, the slot filling is treated as a sequence labeling task, and the intent detection is a classification problem. Formally, given an utterance $X = \lbrace x_1, x_2, \cdots , x_N \rbrace $ with $N$ words and its corresponding slot tags $Y^{slot} = \lbrace y_1, y_2, \cdots , y_N \rbrace $, the slot filling task aims to learn a parameterized mapping function $f_{\theta } : X \rightarrow Y $ from input words to slot tags. For the intent detection, it is designed to predict the intent label $\hat{y}^{int}$ for the entire utterance $X$ from the predefined label set $S^{int}$. Typically, the input utterance is firstly encoded into a sequence of distributed representations $\mathbf {X} = \lbrace \mathbf {x}_1, \mathbf {x}_2, \cdots , \mathbf {x}_N\rbrace $ by character-aware and pre-trained word embeddings. Afterwards, the following bidirectional RNNs are applied to encode the embeddings $\mathbf {X}$ into context-sensitive representations $\mathbf {H} = \lbrace \mathbf {h}_1, \mathbf {h}_2, \cdots , \mathbf {h}_N\rbrace $. An external CRF BIBREF14 layer is widely utilized to calculate conditional probabilities of slot tags: Here $\mathbf {Y}_x$ is the set of all possible sequences of tags, and $F(\cdot )$ is the score function calculated by: where $\mathbf {A}$ is the transition matrix that $\mathbf {A}_{i,j}$ indicates the score of a transition from $i$ to $j$, and $\mathbf {P}$ is the score matrix output by RNNs. $P_{i,j}$ indicates the score of the $j^{th}$ tag of the $i^{th}$ word in a sentence BIBREF15. When testing, the Viterbi algorithm BIBREF16 is used to search the sequence of slot tags with maximum score: As to the prediction of intent, the word-level hidden states $\mathbf {H}$ are firstly summarized into a utterance-level representation $\mathbf {v}^{int}$ via mean pooling (or max pooling or self-attention, etc.): The most probable intent label $\hat{y}^{int}$ is predicted by softmax normalization over the intent label set: Generally, both tasks are trained jointly to minimize the sum of cross entropy from each individual task. Formally, the loss function of the join model is computed as follows: where $y^{int}_i$ and $y^{slot}_{i,j}$ are golden labels, and $\lambda $ is hyperparameter, and $|S^{int}|$ is the size of intent label set, and similarly for $|S^{slot}|$ . CM-Net ::: Overview In this section, we start with a brief overview of our CM-Net and then proceed to introduce each module. As shown in Figure FIGREF16, the input utterance is firstly encoded with the Embedding Layer, and then is transformed by multiple CM-blocks with the assistance of slot and intent memories, and finally make predictions in the Inference Layer. 
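The Background passage above trains both tasks jointly by summing their cross-entropy losses under a weighting hyperparameter $\lambda $. A minimal PyTorch sketch of that combination; the CRF term is replaced by plain token-level cross-entropy purely to keep the illustration short, and the shapes and the placement of $\lambda $ are assumptions since the full equation is elided above.

import torch
import torch.nn.functional as F

def joint_loss(slot_logits, slot_targets, intent_logits, intent_target, lam=0.5):
    # slot_logits: (seq_len, num_slot_tags); slot_targets: (seq_len,) gold tag ids
    # intent_logits: (num_intents,); intent_target: 0-d tensor holding the gold intent id
    slot_loss = F.cross_entropy(slot_logits, slot_targets)        # stand-in for the CRF objective
    intent_loss = F.cross_entropy(intent_logits.unsqueeze(0), intent_target.unsqueeze(0))
    return intent_loss + lam * slot_loss                          # weighting follows the description loosely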
CM-Net ::: Embedding Layers ::: Pre-trained Word Embedding Pre-trained word embeddings have become a de-facto standard of neural network architectures for various NLP tasks. We adopt the cased, 300d Glove BIBREF17 to initialize word embeddings, and keep them frozen. CM-Net ::: Embedding Layers ::: Character-aware Word Embedding It has been demonstrated that character-level information (e.g. capitalization and prefix) BIBREF18 is crucial for sequence labeling. We use one layer of CNN followed by max pooling to generate character-aware word embeddings. CM-Net ::: CM-block The CM-block is the core module of our CM-Net, which is designed with three computational components: Deliberate Attention, Local Calculation and Global Recurrence. CM-Net ::: CM-block ::: Deliberate Attention To fully model semantic relations between slots and intents, we build the slot memory $\mathbf {M^{slot}} $ and intent memory $\mathbf {M^{int}}$, and further devise a collaborative retrieval approach. The slot memory keeps $|S^{slot}|$ slot cells which are randomly initialized and updated as model parameters; similarly for the intent memory. At each word position, we take the hidden state $\mathbf {h}_t$ as the query, and obtain the slot feature $\mathbf {h}_t^{slot}$ and intent feature $\mathbf {h}_t^{int}$ from both memories by the deliberate attention mechanism, which will be illustrated in the following. Specifically for the slot feature $\mathbf {h}_t^{slot}$, we first get a rough intent representation $\widetilde{\mathbf {h}}_t^{int}$ by the word-aware attention with hidden state $\mathbf {h}_t$ over the intent memory $\mathbf {M^{int}}$, and then obtain the final slot feature $\mathbf {h}_t^{slot}$ by the intent-aware attention over the slot memory $\mathbf {M^{slot}}$ with the intent-enhanced representation $[\mathbf {h}_t;\widetilde{\mathbf {h}}_t^{int}]$. Formally, the above-mentioned procedures are computed as follows: where $ATT(\cdot )$ is the query function calculated by the weighted sum of all cells $\mathbf {m}_i^{x}$ in memory $\mathbf {M}^{x}$ ($x \in \lbrace slot, int\rbrace $) : Here $\mathbf {u}$ and $\mathbf {W}$ are model parameters. We name the above calculation of two-round attention (Equation DISPLAY_FORM23) “deliberate attention”. The intent representation $\mathbf {h}_t^{int}$ is computed by the deliberate attention as well: These two deliberate attentions are conducted simultaneously at each word position in such a collaborative manner, which guarantees adequate knowledge diffusion between slots and intents. The retrieved slot features $\mathbf {H}_t^{slot}$ and intent features $\mathbf {H}_t^{int}$ are utilized to provide guidance for the next local calculation layer. CM-Net ::: CM-block ::: Local Calculation Local context information is highly useful for sequence modeling BIBREF19, BIBREF20. BIBREF21 propose the S-LSTM to encode both local and sentence-level information simultaneously, and it has been shown to be more powerful for text representation than conventional BiLSTMs. We extend the S-LSTM with the slot-specific features $\mathbf {H}_t^{slot}$ and intent-specific features $\mathbf {H}_t^{int}$ retrieved from the memories. Specifically, at each input position $t$, we take the local window context $\mathbf {\xi }_t$, word embedding $\mathbf {x}_t$, slot feature $\mathbf {h}_t^{slot}$ and intent feature $\mathbf {h}_t^{int}$ as inputs to conduct the combinatorial calculation simultaneously.
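A compact sketch of the two-round "deliberate attention" described above: the hidden state first queries the intent memory for a rough intent representation, and the intent-enhanced query then attends over the slot memory (the intent feature is obtained symmetrically). Since only the parameters u and W are named and the scoring equation is elided, the additive scoring form and all dimensions below are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class MemoryAttention(nn.Module):
    # Assumed additive scoring over memory cells: score_i = u^T tanh(W [query; m_i]).
    def __init__(self, query_dim, mem_dim, hidden_dim=128):
        super().__init__()
        self.W = nn.Linear(query_dim + mem_dim, hidden_dim, bias=False)
        self.u = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, query, memory):
        # query: (query_dim,); memory: (num_cells, mem_dim)
        expanded = query.unsqueeze(0).expand(memory.size(0), -1)
        scores = self.u(torch.tanh(self.W(torch.cat([expanded, memory], dim=-1)))).squeeze(-1)
        return torch.softmax(scores, dim=0) @ memory              # weighted sum of memory cells

def slot_feature(h_t, slot_mem, intent_mem, rough_intent_att, slot_att):
    # Round 1: word-aware attention over the intent memory gives a rough intent representation.
    rough_int = rough_intent_att(h_t, intent_mem)
    # Round 2: the intent-enhanced query [h_t; rough_int] attends over the slot memory.
    # The intent feature h_t^int mirrors this with the roles of the two memories swapped.
    return slot_att(torch.cat([h_t, rough_int]), slot_mem)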
Formally, in the $l^{th}$ layer, the hidden state $\mathbf {h_t}$ is updated as follows: where $\mathbf { \xi } _ { t } ^ { l }$ is the concatenation of hidden states in a local window, and $\mathbf {i}_t^l$, $\mathbf {f}_t^l$, $\mathbf {o}_t^l$, $\mathbf {l}_t^l$ and $\mathbf {r}_t^l$ are gates to control information flows, and $\mathbf {W}_n^x$ $(x \in \lbrace i, o, f, l, r, u\rbrace , n \in \lbrace 1, 2, 3, 4\rbrace )$ are model parameters. More details about the state transition can be referred in BIBREF21. In the first CM-block, the hidden state $\mathbf {h}_t$ is initialized with the corresponding word embedding. In other CM-blocks, the $\mathbf {h}_t$ is inherited from the output of the adjacent lower CM-block. At each word position of above procedures, the hidden state is updated with abundant information from different perspectives, namely word embeddings, local contexts, slots and intents representations. The local calculation layer in each CM-block has been shown highly useful for both tasks, and especially for the slot filling task, which will be validated in our experiments in Section SECREF46. CM-Net ::: CM-block ::: Global Recurrence Bi-directional RNNs, especially the BiLSTMs BIBREF22 are regarded to encode both past and future information of a sentence, which have become a dominant method in various sequence modeling tasks BIBREF23, BIBREF24. The inherent nature of BiLSTMs is able to supplement global sequential information, which is insufficiently modeled in the previous local calculation layer. Thus we apply an additional BiLSTMs layer upon the local calculation layer in each CM-block. By taking the slot- and intent-specific local context representations as inputs, we can obtain more specific global sequential representations. Formally, it takes the hidden state $\mathbf {h}_t^{l-1}$ inherited from the local calculation layer as input, and conduct recurrent steps as follows: The output “states" of the BiLSTMs are taken as “states" input of the local calculation in next CM-block. The global sequential information encoded by the BiLSTMs is shown necessary and effective for both tasks in our experiments in Section SECREF46. CM-Net ::: Inference Layer After multiple rounds of interactions among local context representations, global sequential information, slot and intent features, we conduct predictions upon the final CM-block. For the predictions of slots, we take the hidden states $\mathbf {H}$ along with the retrieved slot $\mathbf {H}^{slot}$ representations (both are from the final CM-block) as input features, and then conduct predictions of slots similarly with the Equation (DISPLAY_FORM12) in Section SECREF2: For the prediction of intent label, we firstly aggregate the hidden state $\mathbf {h}_t$ and the retrieved intent representation $\mathbf {h}_t^{int}$ at each word position (from the final CM-block as well) via mean pooling: and then take the summarized vector $\mathbf {v}^{int}$ as input feature to conduct prediction of intent consistently with the Equation (DISPLAY_FORM14) in Section SECREF2. Experiments ::: Datasets and Metrics We evaluate our proposed CM-Net on three real-word datasets, and statistics are listed in Table TABREF32. Experiments ::: Datasets and Metrics ::: ATIS The Airline Travel Information Systems (ATIS) corpus BIBREF12 is the most widely used benchmark for the SLU research. Please note that, there are extra named entity features in the ATIS, which almost determine slot tags. 
These hand-crafted features are not generally available in open domains BIBREF25, BIBREF29; therefore, we train our model purely on the training set without additional hand-crafted features. Experiments ::: Datasets and Metrics ::: SNIPS The SNIPS Natural Language Understanding benchmark BIBREF11 is collected in a crowdsourced fashion by Snips. The intents of this dataset are more balanced when compared with the ATIS. We split off another 700 utterances as the validation set following previous works BIBREF7, BIBREF9. Experiments ::: Datasets and Metrics ::: CAIS We collect utterances from the $\mathbf {C}$hinese $\mathbf {A}$rtificial $\mathbf {I}$ntelligence $\mathbf {S}$peakers (CAIS), and annotate them with slot tags and intent labels. The training, validation and test sets are split by the distribution of intents, and detailed statistics are provided in the supplementary material. Since the utterances are collected from speaker systems in the real world, intent labels are skewed toward the PlayMusic option. We adopt the BIOES tagging scheme for slots instead of the BIO2 used in the ATIS, since previous studies have highlighted meaningful improvements with this scheme BIBREF30 in the sequence labeling field. Experiments ::: Datasets and Metrics ::: Metrics Slot filling is typically treated as a sequence labeling problem, and thus we use the conlleval script to compute the token-level $F_1$ score. The intent detection is evaluated with classification accuracy. Notably, several utterances in the ATIS are tagged with more than one label. Following previous works BIBREF13, BIBREF25, we count an utterance as a correct classification if any ground truth label is predicted. Experiments ::: Implementation Details All trainable parameters in our model are initialized by the Xavier method described in BIBREF31. We apply dropout BIBREF32 to the embedding layer and hidden states with a rate of 0.5. All models are optimized by the Adam optimizer BIBREF33 with gradient clipping of 3 BIBREF34. The initial learning rate $\alpha $ is set to 0.001, and decreases as training proceeds. We monitor the training process on the validation set and report the final result on the test set. A one-layer CNN with a filter of size 3 and max pooling is utilized to generate 100d word embeddings. The cased 300d Glove is adopted to initialize word embeddings, and kept fixed during training. In auxiliary experiments, the output hidden states of BERT are taken as additional word embeddings and kept fixed as well. We share the parameters of both memories with the parameter matrices in the corresponding softmax layers, which can be seen as introducing supervised signals into the memories to some extent. We conduct hyper-parameter tuning for the layer size (finally set to 3) and loss weight $\lambda $ (finally set to 0.5), and empirically set other parameters to the values listed in the supplementary material. Experiments ::: Main Results Main results of our CM-Net on the SNIPS and ATIS are shown in Table TABREF33. Our CM-Net achieves state-of-the-art results on both datasets in terms of slot filling $F_1$ score and intent detection accuracy, except for the $F_1$ score on the ATIS. We conjecture that the named entity feature in the ATIS has a great impact on the slot filling result, as illustrated in Section SECREF34. Since the SNIPS is collected from multiple domains with more balanced labels when compared with the ATIS, the slot filling $F_1$ score on the SNIPS is able to demonstrate the superiority of our CM-Net.
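The evaluation rule stated in the Metrics paragraph above — an ATIS utterance with several gold intent labels counts as correct if the predicted intent matches any of them — is small enough to show directly; the data structures are assumptions.

def intent_accuracy(predictions, gold_label_sets):
    # predictions: one predicted intent label per utterance
    # gold_label_sets: a set of gold labels per utterance (ATIS allows more than one)
    correct = sum(1 for pred, gold in zip(predictions, gold_label_sets) if pred in gold)
    return correct / len(predictions) if predictions else 0.0

# Slot filling is scored separately with the conlleval token-level F1, as stated above.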
It is noteworthy that the CM-Net achieves comparable results when compared with models that exploit additional language models BIBREF27, BIBREF28. We conduct auxiliary experiments by leveraging the well-known BERT BIBREF35 as an external resource for a relatively fair comparison with those models, and report details in Section SECREF48. Analysis Since the SNIPS corpus is collected from multiple domains and its label distributions are more balanced when compared with the ATIS, we choose the SNIPS to elucidate the properties of our CM-Net and conduct several additional experiments. Analysis ::: Whether Memories Promote Each Other? In the CM-Net, the deliberate attention mechanism is proposed in a collaborative manner to perform information exchange between slots and intents. We conduct experiments to verify whether this kind of knowledge diffusion between the two memories allows them to promote each other. More specifically, we remove one unidirectional diffusion (e.g. from slot to intent) or both in each experimental setup. The results are illustrated in Figure FIGREF43. We can observe obvious drops on both tasks when both directions of knowledge diffusion are removed (CM-Net vs. neither). For the slot filling task (left part in Figure FIGREF43), the $F_1$ scores decrease slightly when the knowledge from slot to intent is blocked (CM-Net vs. “no slot2int”), and a more evident drop occurs when the knowledge from intent to slot is blocked (CM-Net vs. “no int2slot”). Similar observations can be found for the intent detection task (right part in Figure FIGREF43). In conclusion, the bidirectional knowledge diffusion between slots and intents is necessary and effective for the two tasks to promote each other. Analysis ::: Ablation Experiments We conduct ablation experiments to investigate the impacts of various components in our CM-Net. In particular, we remove one component among the slot memory, intent memory, local calculation and global recurrence. Results of different combinations are presented in Table TABREF44. Once the slot memory and its corresponding interactions with other components are removed, scores on both tasks decrease to some extent, and a more obvious decline occurs for the slot filling (row 1 vs. row 0), which is consistent with the conclusion of Section SECREF45. Similar observations can be found for the intent memory (row 2). The local calculation layer is designed to capture better local context representations, which has an evident impact on the slot filling and a slighter effect on the intent detection (row 3 vs. row 0). Opposite observations occur in terms of the global recurrence, which is supposed to model global sequential information and thus has a larger effect on the intent detection (row 4 vs. row 0). Analysis ::: Effects of Pre-trained Language Models Recently, there has been a growing body of work exploring neural language models trained on massive corpora to learn contextual representations (e.g. BERT BIBREF35 and ELMo). Inspired by the effectiveness of language model embeddings, we conduct experiments by leveraging BERT as an additional feature. The results in Table TABREF47 show that we establish new state-of-the-art results on both tasks of the SNIPS. Analysis ::: Evaluation on the CAIS We conduct experiments on our self-collected CAIS to evaluate the generalizability of our model to a different language. We apply two baseline models for comparison: one is the popular BiLSTMs + CRF architecture BIBREF36 for sequence labeling tasks, and the other is the more powerful sentence-state LSTM BIBREF21.
Analysis ::: Evaluation on the CAIS
We conduct experiments on our self-collected CAIS corpus to evaluate the generalizability of the CM-Net to a different language. We apply two baseline models for comparison: the popular BiLSTM + CRF architecture BIBREF36 for sequence labeling, and the more powerful sentence-state LSTM BIBREF21. The results listed in Table TABREF50 demonstrate the generalizability and effectiveness of our CM-Net when handling various domains and different languages.

Related Work ::: Memory Network
The memory network is a general machine learning framework introduced by BIBREF37, which has been shown effective in question answering BIBREF37, BIBREF38, machine translation BIBREF39, BIBREF40, aspect-level sentiment classification BIBREF41, etc. For spoken language understanding, BIBREF42 introduce memory mechanisms to encode historical utterances. In this paper, we propose two memories to explicitly capture the semantic correlations between the slots and the intent in a given utterance, and devise a novel collaborative retrieval approach.

Related Work ::: Interactions between slots and intents
Considering the semantic proximity between slots and intents, some works propose to enhance the slot filling task unidirectionally with the guidance of intent representations via gating mechanisms BIBREF7, BIBREF8. Intuitively, the slot representations are also instructive to the intent detection task, and thus bidirectional interactions between slots and intents are beneficial for each other. BIBREF9 propose a hierarchical capsule network that performs interactions from words to slots, slots to intents and intents to words in a pipeline manner, which is relatively limited in capturing the complicated correlations among them. In our CM-Net, information exchanges are performed simultaneously, with knowledge diffusion in both directions. The experiments demonstrate the superiority of our CM-Net in capturing the semantic correlations between slots and intents.

Related Work ::: Sentence-State LSTM
BIBREF21 propose a novel graph RNN named S-LSTM, which models word states and a global sentence state simultaneously. Inspired by this new perspective on state transitions in the S-LSTM, we further extend it with task-specific (i.e., slot and intent) representations via our collaborative memories. In addition, the global information in the S-LSTM is modeled by aggregating local features with gating mechanisms, which may lose sight of the sequential information of the whole sentence. Therefore, we apply external BiLSTMs to supply global sequential features, which is shown to be highly necessary for both tasks in our experiments.

Conclusion
We propose a novel $\mathbf {C}$ollaborative $\mathbf {M}$emory $\mathbf {N}$etwork (CM-Net) for jointly modeling slot filling and intent detection. The CM-Net is able to explicitly capture the semantic correlations among words, slots and intents in a collaborative manner, and to incrementally enrich the information flows with local context and global sequential information. Experiments on two standard benchmarks and on our CAIS corpus demonstrate the effectiveness and generalizability of the proposed CM-Net. In addition, we contribute the new corpus (CAIS) to the research community.

Acknowledgments
Liu, Chen and Xu are supported by the National Natural Science Foundation of China (Contract 61370130, 61976015, 61473294 and 61876198), the Beijing Municipal Natural Science Foundation (Contract 4172047), and the International Science and Technology Cooperation Program of the Ministry of Science and Technology (K11F100010). We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions.
speaker systems in the real world
fa3312ae4bbed11a5bebd77caf15d651962e0b26
fa3312ae4bbed11a5bebd77caf15d651962e0b26_0
Q: What was the performance on the self-collected corpus? Text: Introduction
Spoken Language Understanding (SLU) is a core component of dialogue systems. It typically aims to identify the intent and the semantic constituents of a given utterance, tasks referred to as intent detection and slot filling, respectively. Past years have witnessed rapid developments of diverse deep learning models BIBREF0, BIBREF1 for SLU. To take full advantage of the supervised signals of slots and intents, and to share knowledge between them, most existing works apply joint models mainly based on CNNs BIBREF2, BIBREF3, RNNs BIBREF4, BIBREF5, or the asynchronous bi-model BIBREF6. Generally, these joint models encode words convolutionally or sequentially and then aggregate the hidden states into an utterance-level representation for intent prediction, without interactions between the representations of slots and intents.
Intuitively, slots and intents from similar fields tend to occur simultaneously, as can be observed from Figure FIGREF2 and Table TABREF3. Therefore, it is beneficial to generate the representations of slots and intents with guidance from each other. Some works explore enhancing the slot filling task unidirectionally with guidance from intent representations via gating mechanisms BIBREF7, BIBREF8, while the predictions of intents lack guidance from slots. Moreover, a capsule network with dynamic routing algorithms BIBREF9 has been proposed to perform interactions in both directions. However, there are still two limitations in this model. The first is that information flows from words to slots, slots to intents and intents to words in a pipeline manner, which is to some extent limited in capturing the complicated correlations among words, slots and intents. The second is that local context information, which has been shown highly useful for slot filling BIBREF10, is not explicitly modeled.
In this paper, we address these issues by proposing a novel $\mathbf {C}$ollaborative $\mathbf {M}$emory $\mathbf {N}$etwork, named CM-Net. The main idea is to directly capture the semantic relationships among words, slots and intents, simultaneously at each word position and in a collaborative manner. Specifically, we alternately perform information exchange among the task-specific features retrieved from memories, the local context representations and the global sequential information via a well-designed block, named CM-block, which consists of three computational components:
Deliberate Attention: obtaining slot-specific and intent-specific representations from the memories in a collaborative manner.
Local Calculation: updating the local context representations with the guidance of the slot and intent representations retrieved in the preceding Deliberate Attention.
Global Recurrence: generating task-specific (slot and intent) global sequential representations based on the local context representations from the preceding Local Calculation.
The above components in each CM-block are executed consecutively and are responsible for encoding information from different perspectives. Finally, multiple CM-blocks are stacked together to construct our CM-Net.
We first conduct experiments on two popular benchmarks, SNIPS BIBREF11 and ATIS BIBREF12, BIBREF13. Experimental results show that the CM-Net achieves state-of-the-art results in 3 of 4 criteria (e.g., intent detection accuracy on ATIS) across both benchmarks.
Additionally, trials on our self-collected dataset, named CAIS, demonstrate the effectiveness and generalizability of the CM-Net. Our main contributions are as follows:
We propose a novel CM-Net for SLU, which explicitly captures the semantic correlations among words, slots and intents in a collaborative manner, and incrementally enriches the task-specific features, local context representations and global sequential representations through stacked CM-blocks.
Our CM-Net achieves state-of-the-art results on two major SLU benchmarks (ATIS and SNIPS) in most criteria.
We contribute a new corpus, CAIS, with manual annotations of slot tags and intent labels to the research community.

Background
In principle, slot filling is treated as a sequence labeling task and intent detection as a classification problem. Formally, given an utterance $X = \lbrace x_1, x_2, \cdots , x_N \rbrace $ with $N$ words and its corresponding slot tags $Y^{slot} = \lbrace y_1, y_2, \cdots , y_N \rbrace $, the slot filling task aims to learn a parameterized mapping function $f_{\theta } : X \rightarrow Y $ from input words to slot tags. Intent detection aims to predict the intent label $\hat{y}^{int}$ for the entire utterance $X$ from the predefined label set $S^{int}$.
Typically, the input utterance is first encoded into a sequence of distributed representations $\mathbf {X} = \lbrace \mathbf {x}_1, \mathbf {x}_2, \cdots , \mathbf {x}_N\rbrace $ by character-aware and pre-trained word embeddings. Afterwards, bidirectional RNNs are applied to encode the embeddings $\mathbf {X}$ into context-sensitive representations $\mathbf {H} = \lbrace \mathbf {h}_1, \mathbf {h}_2, \cdots , \mathbf {h}_N\rbrace $. An external CRF BIBREF14 layer is widely utilized to calculate the conditional probabilities of slot tag sequences: Here $\mathbf {Y}_x$ is the set of all possible tag sequences, and $F(\cdot )$ is the score function calculated by: where $\mathbf {A}$ is the transition matrix in which $\mathbf {A}_{i,j}$ indicates the score of a transition from tag $i$ to tag $j$, and $\mathbf {P}$ is the score matrix output by the RNNs; $P_{i,j}$ indicates the score of the $j^{th}$ tag for the $i^{th}$ word in the sentence BIBREF15. At test time, the Viterbi algorithm BIBREF16 is used to search for the sequence of slot tags with the maximum score:
For the prediction of the intent, the word-level hidden states $\mathbf {H}$ are first summarized into an utterance-level representation $\mathbf {v}^{int}$ via mean pooling (or max pooling, self-attention, etc.): The most probable intent label $\hat{y}^{int}$ is then predicted by softmax normalization over the intent label set:
Generally, both tasks are trained jointly to minimize the sum of the cross-entropy losses of the individual tasks. Formally, the loss function of the joint model is computed as follows: where $y^{int}_i$ and $y^{slot}_{i,j}$ are gold labels, $\lambda $ is a hyperparameter, $|S^{int}|$ is the size of the intent label set, and similarly for $|S^{slot}|$.

CM-Net ::: Overview
In this section, we start with a brief overview of our CM-Net and then introduce each module. As shown in Figure FIGREF16, the input utterance is first encoded with the Embedding Layer, then transformed by multiple CM-blocks with the assistance of the slot and intent memories, and finally used to make predictions in the Inference Layer.
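The display equations of the Background section are referenced above but not reproduced, so as a rough illustration of the joint objective (an intent cross-entropy term plus a $\lambda$-weighted slot term) here is a minimal sketch. It scores slots with a simple token-level cross entropy rather than the CRF described above, and the tensor shapes, the placement of $\lambda$ and the toy label counts are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def joint_loss(intent_logits, intent_gold, slot_logits, slot_gold, lam=0.5):
    """L = CE(intent) + lam * CE(slots); a token-level stand-in for the
    CRF-based slot objective (which term lam scales is not shown above)."""
    # intent_logits: (batch, num_intents), intent_gold: (batch,)
    intent_loss = F.cross_entropy(intent_logits, intent_gold)
    # slot_logits: (batch, seq_len, num_slot_tags), slot_gold: (batch, seq_len)
    slot_loss = F.cross_entropy(slot_logits.transpose(1, 2), slot_gold)
    return intent_loss + lam * slot_loss

# toy shapes: batch of 2 utterances, 5 tokens, 7 intents, 11 slot tags
intent_logits = torch.randn(2, 7, requires_grad=True)
slot_logits = torch.randn(2, 5, 11, requires_grad=True)
loss = joint_loss(intent_logits, torch.tensor([3, 0]),
                  slot_logits, torch.randint(0, 11, (2, 5)))
loss.backward()  # a single backward pass drives the joint update of both tasks
```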
CM-Net ::: Embedding Layers ::: Pre-trained Word Embedding
Pre-trained word embeddings have become a de-facto standard in neural network architectures for various NLP tasks. We adopt the cased 300d GloVe vectors BIBREF17 to initialize the word embeddings and keep them frozen.

CM-Net ::: Embedding Layers ::: Character-aware Word Embedding
It has been demonstrated that character-level information (e.g., capitalization and prefixes) BIBREF18 is crucial for sequence labeling. We use one layer of CNN followed by max pooling to generate character-aware word embeddings.

CM-Net ::: CM-block
The CM-block is the core module of our CM-Net, and it is designed with three computational components: Deliberate Attention, Local Calculation and Global Recurrence.

CM-Net ::: CM-block ::: Deliberate Attention
To fully model the semantic relations between slots and intents, we build a slot memory $\mathbf {M^{slot}} $ and an intent memory $\mathbf {M^{int}}$, and further devise a collaborative retrieval approach. The slot memory keeps $|S^{slot}|$ slot cells, which are randomly initialized and updated as model parameters; the intent memory is built analogously. At each word position, we take the hidden state $\mathbf {h}_t$ as the query and obtain the slot feature $\mathbf {h}_t^{slot}$ and intent feature $\mathbf {h}_t^{int}$ from the two memories by the deliberate attention mechanism, which is illustrated in the following.
Specifically, for the slot feature $\mathbf {h}_t^{slot}$, we first get a rough intent representation $\widetilde{\mathbf {h}}_t^{int}$ by word-aware attention with the hidden state $\mathbf {h}_t$ over the intent memory $\mathbf {M^{int}}$, and then obtain the final slot feature $\mathbf {h}_t^{slot}$ by intent-aware attention over the slot memory $\mathbf {M^{slot}}$ with the intent-enhanced representation $[\mathbf {h}_t;\widetilde{\mathbf {h}}_t^{int}]$. Formally, the above-mentioned procedures are computed as follows: where $ATT(\cdot )$ is the query function calculated as the weighted sum of all cells $\mathbf {m}_i^{x}$ in memory $\mathbf {M}^{x}$ ($x \in \lbrace slot, int\rbrace $): Here $\mathbf {u}$ and $\mathbf {W}$ are model parameters. We refer to the above two-round attention (Equation DISPLAY_FORM23) as “deliberate attention”. The intent representation $\mathbf {h}_t^{int}$ is computed by deliberate attention as well:
These two deliberate attentions are conducted simultaneously at each word position in such a collaborative manner, which guarantees adequate knowledge diffusion between slots and intents. The retrieved slot features $\mathbf {H}_t^{slot}$ and intent features $\mathbf {H}_t^{int}$ are then used to provide guidance for the next local calculation layer.
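The score function of $ATT(\cdot )$ is only referenced above, so the sketch below assumes a standard additive attention form parameterized by $\mathbf {W}$ and $\mathbf {u}$. The module name, the hidden dimensions and the sharing of the first-round attention parameters across the two memories are illustrative choices, not the authors' exact formulation of the two-round (deliberate) attention.

```python
import torch
import torch.nn as nn

class DeliberateAttention(nn.Module):
    """A sketch of two-round (deliberate) attention over slot and intent memories."""
    def __init__(self, hidden_dim, num_slots, num_intents):
        super().__init__()
        # memory cells are randomly initialized and trained as model parameters
        self.slot_memory = nn.Parameter(torch.randn(num_slots, hidden_dim))
        self.intent_memory = nn.Parameter(torch.randn(num_intents, hidden_dim))
        # assumed additive attention: score = u^T tanh(W [query; cell])
        self.W1 = nn.Linear(2 * hidden_dim, hidden_dim)   # round 1: query is h_t
        self.u1 = nn.Linear(hidden_dim, 1, bias=False)
        self.W2 = nn.Linear(3 * hidden_dim, hidden_dim)   # round 2: query is [h_t; rough feature]
        self.u2 = nn.Linear(hidden_dim, 1, bias=False)

    def attend(self, query, memory, W, u):
        # query: (batch, q_dim), memory: (num_cells, hidden_dim)
        cells = memory.unsqueeze(0).expand(query.size(0), -1, -1)   # (batch, cells, hidden)
        q = query.unsqueeze(1).expand(-1, memory.size(0), -1)       # (batch, cells, q_dim)
        scores = u(torch.tanh(W(torch.cat([q, cells], dim=-1))))    # (batch, cells, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * cells).sum(dim=1)                         # weighted sum of memory cells

    def forward(self, h_t):
        # round 1: rough features queried by the hidden state alone
        rough_int = self.attend(h_t, self.intent_memory, self.W1, self.u1)
        rough_slot = self.attend(h_t, self.slot_memory, self.W1, self.u1)
        # round 2: refined features queried by the enhanced representations
        h_slot = self.attend(torch.cat([h_t, rough_int], dim=-1), self.slot_memory, self.W2, self.u2)
        h_int = self.attend(torch.cat([h_t, rough_slot], dim=-1), self.intent_memory, self.W2, self.u2)
        return h_slot, h_int

# toy usage: 4 word positions, hidden size 8, 11 slot cells, 7 intent cells
att = DeliberateAttention(hidden_dim=8, num_slots=11, num_intents=7)
h_slot, h_int = att(torch.randn(4, 8))
print(h_slot.shape, h_int.shape)  # torch.Size([4, 8]) torch.Size([4, 8])
```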
CM-Net ::: CM-block ::: Local Calculation
Local context information is highly useful for sequence modeling BIBREF19, BIBREF20. BIBREF21 propose the S-LSTM to encode local and sentence-level information simultaneously, and it has been shown to be more powerful for text representation than conventional BiLSTMs. We extend the S-LSTM with the slot-specific features $\mathbf {H}_t^{slot}$ and intent-specific features $\mathbf {H}_t^{int}$ retrieved from the memories. Specifically, at each input position $t$, we take the local window context $\mathbf {\xi }_t$, the word embedding $\mathbf {x}_t$, the slot feature $\mathbf {h}_t^{slot}$ and the intent feature $\mathbf {h}_t^{int}$ as inputs and perform the combinatorial calculation simultaneously. Formally, in the $l^{th}$ layer, the hidden state $\mathbf {h_t}$ is updated as follows: where $\mathbf { \xi } _ { t } ^ { l }$ is the concatenation of hidden states in a local window, $\mathbf {i}_t^l$, $\mathbf {f}_t^l$, $\mathbf {o}_t^l$, $\mathbf {l}_t^l$ and $\mathbf {r}_t^l$ are gates that control the information flows, and $\mathbf {W}_n^x$ $(x \in \lbrace i, o, f, l, r, u\rbrace , n \in \lbrace 1, 2, 3, 4\rbrace )$ are model parameters. More details about the state transition can be found in BIBREF21. In the first CM-block, the hidden state $\mathbf {h}_t$ is initialized with the corresponding word embedding; in the other CM-blocks, $\mathbf {h}_t$ is inherited from the output of the adjacent lower CM-block.
At each word position, the above procedure updates the hidden state with abundant information from different perspectives, namely word embeddings, local contexts, and slot and intent representations. The local calculation layer in each CM-block proves highly useful for both tasks, especially for slot filling, as validated in our experiments in Section SECREF46.

CM-Net ::: CM-block ::: Global Recurrence
Bi-directional RNNs, especially BiLSTMs BIBREF22, are able to encode both the past and future information of a sentence and have become a dominant method in various sequence modeling tasks BIBREF23, BIBREF24. BiLSTMs can thus supplement the global sequential information that is insufficiently modeled in the preceding local calculation layer. We therefore apply an additional BiLSTM layer on top of the local calculation layer in each CM-block. By taking the slot- and intent-specific local context representations as inputs, we obtain more specific global sequential representations. Formally, it takes the hidden state $\mathbf {h}_t^{l-1}$ inherited from the local calculation layer as input and conducts recurrent steps as follows: The output “states” of the BiLSTM are taken as the “states” input of the local calculation in the next CM-block. The global sequential information encoded by the BiLSTM is shown to be necessary and effective for both tasks in our experiments in Section SECREF46.

CM-Net ::: Inference Layer
After multiple rounds of interaction among local context representations, global sequential information, and slot and intent features, we make predictions on top of the final CM-block. For the prediction of slots, we take the hidden states $\mathbf {H}$ along with the retrieved slot representations $\mathbf {H}^{slot}$ (both from the final CM-block) as input features, and then predict slot tags in the same way as Equation (DISPLAY_FORM12) in Section SECREF2: For the prediction of the intent label, we first aggregate the hidden state $\mathbf {h}_t$ and the retrieved intent representation $\mathbf {h}_t^{int}$ at each word position (from the final CM-block as well) via mean pooling: and then take the summarized vector $\mathbf {v}^{int}$ as the input feature to predict the intent, consistently with Equation (DISPLAY_FORM14) in Section SECREF2.

Experiments ::: Datasets and Metrics
We evaluate our proposed CM-Net on three real-world datasets, whose statistics are listed in Table TABREF32.

Experiments ::: Datasets and Metrics ::: ATIS
The Airline Travel Information Systems (ATIS) corpus BIBREF12 is the most widely used benchmark in SLU research. Please note that there are extra named entity features in the ATIS which almost determine the slot tags.
F1 scores of 86.16 on slot filling and 94.56 on intent detection
26c290584c97e22b25035f5458625944db181552
26c290584c97e22b25035f5458625944db181552_0
Q: What is the size of their dataset?
10,001 utterances
d71772bfbc27ff1682e552484bc7c71818be50cf
d71772bfbc27ff1682e552484bc7c71818be50cf_0
Q: What is the source of the CAIS dataset?
Additionally, trials on our self-collected dataset, named CAIS, demonstrate the effectiveness and generalizability of the CM-Net. Our main contributions are as follows: We propose a novel CM-Net for SLU, which explicitly captures semantic correlations among words, slots and intents in a collaborative manner, and incrementally enriches the specific features, local context representations and global sequential representations through stacked CM-blocks. Our CM-Net achieves the state-of-the-art results on two major SLU benchmarks (ATIS and SNIPS) in most of criteria. We contribute a new corpus CAIS with manual annotations of slot tags and intent labels to the research community. Background In principle, the slot filling is treated as a sequence labeling task, and the intent detection is a classification problem. Formally, given an utterance $X = \lbrace x_1, x_2, \cdots , x_N \rbrace $ with $N$ words and its corresponding slot tags $Y^{slot} = \lbrace y_1, y_2, \cdots , y_N \rbrace $, the slot filling task aims to learn a parameterized mapping function $f_{\theta } : X \rightarrow Y $ from input words to slot tags. For the intent detection, it is designed to predict the intent label $\hat{y}^{int}$ for the entire utterance $X$ from the predefined label set $S^{int}$. Typically, the input utterance is firstly encoded into a sequence of distributed representations $\mathbf {X} = \lbrace \mathbf {x}_1, \mathbf {x}_2, \cdots , \mathbf {x}_N\rbrace $ by character-aware and pre-trained word embeddings. Afterwards, the following bidirectional RNNs are applied to encode the embeddings $\mathbf {X}$ into context-sensitive representations $\mathbf {H} = \lbrace \mathbf {h}_1, \mathbf {h}_2, \cdots , \mathbf {h}_N\rbrace $. An external CRF BIBREF14 layer is widely utilized to calculate conditional probabilities of slot tags: Here $\mathbf {Y}_x$ is the set of all possible sequences of tags, and $F(\cdot )$ is the score function calculated by: where $\mathbf {A}$ is the transition matrix that $\mathbf {A}_{i,j}$ indicates the score of a transition from $i$ to $j$, and $\mathbf {P}$ is the score matrix output by RNNs. $P_{i,j}$ indicates the score of the $j^{th}$ tag of the $i^{th}$ word in a sentence BIBREF15. When testing, the Viterbi algorithm BIBREF16 is used to search the sequence of slot tags with maximum score: As to the prediction of intent, the word-level hidden states $\mathbf {H}$ are firstly summarized into a utterance-level representation $\mathbf {v}^{int}$ via mean pooling (or max pooling or self-attention, etc.): The most probable intent label $\hat{y}^{int}$ is predicted by softmax normalization over the intent label set: Generally, both tasks are trained jointly to minimize the sum of cross entropy from each individual task. Formally, the loss function of the join model is computed as follows: where $y^{int}_i$ and $y^{slot}_{i,j}$ are golden labels, and $\lambda $ is hyperparameter, and $|S^{int}|$ is the size of intent label set, and similarly for $|S^{slot}|$ . CM-Net ::: Overview In this section, we start with a brief overview of our CM-Net and then proceed to introduce each module. As shown in Figure FIGREF16, the input utterance is firstly encoded with the Embedding Layer, and then is transformed by multiple CM-blocks with the assistance of slot and intent memories, and finally make predictions in the Inference Layer. 
CM-Net ::: Embedding Layers ::: Pre-trained Word Embedding Pre-trained word embeddings have become a de facto standard in neural network architectures for various NLP tasks. We adopt the cased 300d GloVe BIBREF17 to initialize word embeddings, and keep them frozen. CM-Net ::: Embedding Layers ::: Character-aware Word Embedding It has been demonstrated that character-level information (e.g. capitalization and prefixes) BIBREF18 is crucial for sequence labeling. We use one layer of CNN followed by max pooling to generate character-aware word embeddings. CM-Net ::: CM-block The CM-block is the core module of our CM-Net, which is designed with three computational components: Deliberate Attention, Local Calculation and Global Recurrence. CM-Net ::: CM-block ::: Deliberate Attention To fully model semantic relations between slots and intents, we build the slot memory $\mathbf {M^{slot}} $ and the intent memory $\mathbf {M^{int}}$, and further devise a collaborative retrieval approach. The slot memory keeps $|S^{slot}|$ slot cells which are randomly initialized and updated as model parameters; the intent memory is constructed in the same way. At each word position, we take the hidden state $\mathbf {h}_t$ as the query, and obtain the slot feature $\mathbf {h}_t^{slot}$ and the intent feature $\mathbf {h}_t^{int}$ from both memories by the deliberate attention mechanism, which is illustrated in the following. Specifically for the slot feature $\mathbf {h}_t^{slot}$, we first obtain a rough intent representation $\widetilde{\mathbf {h}}_t^{int}$ by word-aware attention with the hidden state $\mathbf {h}_t$ over the intent memory $\mathbf {M^{int}}$, and then obtain the final slot feature $\mathbf {h}_t^{slot}$ by intent-aware attention over the slot memory $\mathbf {M^{slot}}$ with the intent-enhanced representation $[\mathbf {h}_t;\widetilde{\mathbf {h}}_t^{int}]$. Formally, the above-mentioned procedures are computed as follows: where $ATT(\cdot )$ is the query function calculated by the weighted sum of all cells $\mathbf {m}_i^{x}$ in memory $\mathbf {M}^{x}$ ($x \in \lbrace slot, int\rbrace $): Here $\mathbf {u}$ and $\mathbf {W}$ are model parameters. We name the above calculation of two-round attentions (Equation DISPLAY_FORM23) the “deliberate attention". The intent representation $\mathbf {h}_t^{int}$ is computed by the deliberate attention as well: These two deliberate attentions are conducted simultaneously at each word position in such a collaborative manner, which guarantees adequate knowledge diffusion between slots and intents. The retrieved slot features $\mathbf {H}_t^{slot}$ and intent features $\mathbf {H}_t^{int}$ are utilized to provide guidance for the next local calculation layer. CM-Net ::: CM-block ::: Local Calculation Local context information is highly useful for sequence modeling BIBREF19, BIBREF20. BIBREF21 SLSTM2018 propose the S-LSTM to encode both local and sentence-level information simultaneously, and it has been shown to be more powerful for text representation than conventional BiLSTMs. We extend the S-LSTM with the slot-specific features $\mathbf {H}_t^{slot}$ and intent-specific features $\mathbf {H}_t^{int}$ retrieved from the memories. Specifically, at each input position $t$, we take the local window context $\mathbf {\xi }_t$, the word embedding $\mathbf {x}_t$, the slot feature $\mathbf {h}_t^{slot}$ and the intent feature $\mathbf {h}_t^{int}$ as inputs to conduct the combinatorial calculation simultaneously. 
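Before the formal state update of the local calculation layer given in the next paragraph, here is a rough sketch of the two-round deliberate attention described above. The additive scoring function, the parameter names (`W1`, `u1`, `W2`, `u2`), the assumption that memory cells share the hidden dimension, and the sharing of parameters across the two memories are illustrative simplifications, not the authors' exact parameterization.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend(query, memory, W, u):
    """ATT(query, memory): weighted sum of memory cells, with additive scores
    u^T tanh(W [query; m_i]) -- the exact scoring function is an assumption."""
    scores = np.array([u @ np.tanh(W @ np.concatenate([query, m])) for m in memory])
    return softmax(scores) @ memory

def deliberate_attention(h_t, M_slot, M_int, p):
    # Round 1: rough intent (resp. slot) representation from the opposite memory.
    h_int_rough = attend(h_t, M_int, p["W1"], p["u1"])
    h_slot_rough = attend(h_t, M_slot, p["W1"], p["u1"])
    # Round 2: final features retrieved with the enhanced queries [h_t; rough].
    h_slot = attend(np.concatenate([h_t, h_int_rough]), M_slot, p["W2"], p["u2"])
    h_int = attend(np.concatenate([h_t, h_slot_rough]), M_int, p["W2"], p["u2"])
    return h_slot, h_int

# Toy usage: hidden size 8, 5 slot cells, 3 intent cells.
d, k = 8, 16
rng = np.random.default_rng(0)
p = {"W1": rng.normal(size=(k, 2 * d)), "u1": rng.normal(size=k),
     "W2": rng.normal(size=(k, 3 * d)), "u2": rng.normal(size=k)}
h_slot, h_int = deliberate_attention(rng.normal(size=d),
                                     rng.normal(size=(5, d)), rng.normal(size=(3, d)), p)
```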
Formally, in the $l^{th}$ layer, the hidden state $\mathbf {h_t}$ is updated as follows: where $\mathbf { \xi } _ { t } ^ { l }$ is the concatenation of hidden states in a local window, and $\mathbf {i}_t^l$, $\mathbf {f}_t^l$, $\mathbf {o}_t^l$, $\mathbf {l}_t^l$ and $\mathbf {r}_t^l$ are gates to control information flows, and $\mathbf {W}_n^x$ $(x \in \lbrace i, o, f, l, r, u\rbrace , n \in \lbrace 1, 2, 3, 4\rbrace )$ are model parameters. More details about the state transition can be referred in BIBREF21. In the first CM-block, the hidden state $\mathbf {h}_t$ is initialized with the corresponding word embedding. In other CM-blocks, the $\mathbf {h}_t$ is inherited from the output of the adjacent lower CM-block. At each word position of above procedures, the hidden state is updated with abundant information from different perspectives, namely word embeddings, local contexts, slots and intents representations. The local calculation layer in each CM-block has been shown highly useful for both tasks, and especially for the slot filling task, which will be validated in our experiments in Section SECREF46. CM-Net ::: CM-block ::: Global Recurrence Bi-directional RNNs, especially the BiLSTMs BIBREF22 are regarded to encode both past and future information of a sentence, which have become a dominant method in various sequence modeling tasks BIBREF23, BIBREF24. The inherent nature of BiLSTMs is able to supplement global sequential information, which is insufficiently modeled in the previous local calculation layer. Thus we apply an additional BiLSTMs layer upon the local calculation layer in each CM-block. By taking the slot- and intent-specific local context representations as inputs, we can obtain more specific global sequential representations. Formally, it takes the hidden state $\mathbf {h}_t^{l-1}$ inherited from the local calculation layer as input, and conduct recurrent steps as follows: The output “states" of the BiLSTMs are taken as “states" input of the local calculation in next CM-block. The global sequential information encoded by the BiLSTMs is shown necessary and effective for both tasks in our experiments in Section SECREF46. CM-Net ::: Inference Layer After multiple rounds of interactions among local context representations, global sequential information, slot and intent features, we conduct predictions upon the final CM-block. For the predictions of slots, we take the hidden states $\mathbf {H}$ along with the retrieved slot $\mathbf {H}^{slot}$ representations (both are from the final CM-block) as input features, and then conduct predictions of slots similarly with the Equation (DISPLAY_FORM12) in Section SECREF2: For the prediction of intent label, we firstly aggregate the hidden state $\mathbf {h}_t$ and the retrieved intent representation $\mathbf {h}_t^{int}$ at each word position (from the final CM-block as well) via mean pooling: and then take the summarized vector $\mathbf {v}^{int}$ as input feature to conduct prediction of intent consistently with the Equation (DISPLAY_FORM14) in Section SECREF2. Experiments ::: Datasets and Metrics We evaluate our proposed CM-Net on three real-word datasets, and statistics are listed in Table TABREF32. Experiments ::: Datasets and Metrics ::: ATIS The Airline Travel Information Systems (ATIS) corpus BIBREF12 is the most widely used benchmark for the SLU research. Please note that, there are extra named entity features in the ATIS, which almost determine slot tags. 
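Complementing the formal CM-block equations above, the following is a purely structural sketch of how one CM-block chains the local calculation with the global recurrence; a single gate stands in for the five S-LSTM gates and a plain tanh recurrence stands in for the BiLSTM, so this is not the authors' parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def local_calculation(H, X, H_slot, H_int, W_u, W_g, window=1):
    """Simplified local update: each position combines its window context xi_t,
    word embedding, and the retrieved slot/intent features through one gate
    (the real CM-block uses the full set of S-LSTM gates)."""
    T, d = H.shape
    H_new = np.zeros_like(H)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        xi = H[lo:hi].mean(axis=0)                       # local window context
        z = np.concatenate([xi, X[t], H_slot[t], H_int[t]])
        g = sigmoid(W_g @ z)                             # update gate
        H_new[t] = g * np.tanh(W_u @ z) + (1.0 - g) * H[t]
    return H_new

def global_recurrence(H, W_f, W_b):
    """Simplified bidirectional pass supplying global sequential information
    (a stand-in for the BiLSTM layer used in the paper)."""
    T, d = H.shape
    fwd, bwd = np.zeros((T, d)), np.zeros((T, d))
    for t in range(T):
        prev = fwd[t - 1] if t > 0 else np.zeros(d)
        fwd[t] = np.tanh(W_f @ np.concatenate([H[t], prev]))
    for t in reversed(range(T)):
        nxt = bwd[t + 1] if t + 1 < T else np.zeros(d)
        bwd[t] = np.tanh(W_b @ np.concatenate([H[t], nxt]))
    return np.concatenate([fwd, bwd], axis=1)

# One CM-block pass on toy data (T=6 words, d=8).
rng = np.random.default_rng(0)
T, d = 6, 8
H, X = rng.normal(size=(T, d)), rng.normal(size=(T, d))
H_slot, H_int = rng.normal(size=(T, d)), rng.normal(size=(T, d))
H_local = local_calculation(H, X, H_slot, H_int,
                            rng.normal(size=(d, 4 * d)), rng.normal(size=(d, 4 * d)))
H_global = global_recurrence(H_local, rng.normal(size=(d, 2 * d)), rng.normal(size=(d, 2 * d)))
```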
These hand-crafted features are not generally available in open domains BIBREF25, BIBREF29, therefore we train our model purely on the training set without additional hand-crafted features. Experiments ::: Datasets and Metrics ::: SNIPS The SNIPS Natural Language Understanding benchmark BIBREF11 is collected in a crowdsourced fashion by Snips. The intents of this dataset are more balanced when compared with the ATIS. We split off another 700 utterances as the validation set following previous works BIBREF7, BIBREF9. Experiments ::: Datasets and Metrics ::: CAIS We collect utterances from the $\mathbf {C}$hinese $\mathbf {A}$rtificial $\mathbf {I}$ntelligence $\mathbf {S}$peakers (CAIS), and annotate them with slot tags and intent labels. The training, validation and test sets are split by the distribution of intents, where detailed statistics are provided in the supplementary material. Since the utterances are collected from speaker systems in the real world, intent labels are skewed toward the PlayMusic option. We adopt the BIOES tagging scheme for slots instead of the BIO2 scheme used in the ATIS, since previous studies have highlighted meaningful improvements with this scheme BIBREF30 in the sequence labeling field. Experiments ::: Datasets and Metrics ::: Metrics Slot filling is typically treated as a sequence labeling problem, and thus we use the conlleval script to compute the token-level $F_1$ metric. The intent detection is evaluated with the classification accuracy. In particular, several utterances in the ATIS are tagged with more than one label. Following previous works BIBREF13, BIBREF25, we count an utterance as a correct classification if any ground truth label is predicted. Experiments ::: Implementation Details All trainable parameters in our model are initialized with the Xavier method BIBREF31. We apply dropout BIBREF32 to the embedding layer and hidden states with a rate of 0.5. All models are optimized by the Adam optimizer BIBREF33 with gradient clipping of 3 BIBREF34. The initial learning rate $\alpha $ is set to 0.001, and decreases as training proceeds. We monitor the training process on the validation set and report the final result on the test set. A one-layer CNN with a filter of size 3 followed by max pooling is utilized to generate 100d character-aware word embeddings. The cased 300d GloVe is adopted to initialize word embeddings, and kept fixed during training. In auxiliary experiments, the output hidden states of BERT are taken as additional word embeddings and kept fixed as well. We share the parameters of both memories with the parameter matrices in the corresponding softmax layers, which can be taken as introducing supervised signals into the memories to some extent. We tune the hyper-parameters for the number of layers (finally set to 3) and the loss weight $\lambda $ (finally set to 0.5), and empirically set other parameters to the values listed in the supplementary material. Experiments ::: Main Results Main results of our CM-Net on the SNIPS and ATIS are shown in Table TABREF33. Our CM-Net achieves state-of-the-art results on both datasets in terms of slot filling $F_1$ score and intent detection accuracy, except for the $F_1$ score on the ATIS. We conjecture that the named entity feature in the ATIS has a great impact on the slot filling result, as illustrated in Section SECREF34. Since the SNIPS is collected from multiple domains with more balanced labels when compared with the ATIS, the slot filling $F_1$ score on the SNIPS is better able to demonstrate the superiority of our CM-Net. 
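Two of the evaluation details above can be pinned down with small helpers: converting BIO2 slot annotations to the BIOES scheme used for CAIS, and the ATIS convention of counting an utterance as correct if any of its gold intent labels is predicted. These are generic sketches, not the authors' preprocessing or scoring code.

```python
def bio_to_bioes(tags):
    """Convert a BIO2 tag sequence to BIOES: single-token spans become S-,
    the last token of a multi-token span becomes E-."""
    out = []
    for i, tag in enumerate(tags):
        if tag == "O":
            out.append(tag)
            continue
        prefix, label = tag.split("-", 1)
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        span_continues = nxt == "I-" + label
        if prefix == "B":
            out.append(("B-" if span_continues else "S-") + label)
        else:  # prefix == "I"
            out.append(("I-" if span_continues else "E-") + label)
    return out

def atis_intent_correct(predicted_label, gold_labels):
    """ATIS multi-label convention: correct if the prediction hits any gold label."""
    return predicted_label in gold_labels

print(bio_to_bioes(["B-singer", "O", "B-song", "I-song"]))
# -> ['S-singer', 'O', 'B-song', 'E-song']
print(atis_intent_correct("atis_flight", {"atis_flight", "atis_airfare"}))  # True
```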
It is noteworthy that the CM-Net achieves comparable results when compared with models that exploit additional language models BIBREF27, BIBREF28. We conduct auxiliary experiments by leveraging the well-known BERT BIBREF35 as an external resource for a relatively fair comparison with those models, and report details in Section SECREF48. Analysis Since the SNIPS corpus is collected from multiple domains and its label distributions are more balanced when compared with the ATIS, we choose the SNIPS to elucidate properties of our CM-Net and conduct several additional experiments. Analysis ::: Whether Memories Promote Each Other? In the CM-Net, the deliberate attention mechanism is proposed in a collaborative manner to perform information exchange between slots and intents. We conduct experiments to verify whether such kind of knowledge diffusion in both memories can promote each other. More specifically, we remove one unidirectional diffusion (e.g. from slot to intent) or both in each experimental setup. The results are illustrated in Figure FIGREF43. We can observe obvious drops on both tasks when both directional knowledge diffusions are removed (CM-Net vs. neither). For the slot filling task (left part in Figure FIGREF43), the $F_1$ scores decrease slightly when the knowledge from slot to intent is blocked (CM-Net vs. “no slot2int"), and a more evident drop occurs when the knowledge from intent to slot is blocked (CM-Net vs. “no int2slot"). Similar observations can be found for the intent detection task (right part in Figure FIGREF43). In conclusion, the bidirectional knowledge diffusion between slots and intents are necessary and effective to promote each other. Analysis ::: Ablation Experiments We conduct ablation experiments to investigate the impacts of various components in our CM-Net. In particular, we remove one component among slot memory, intent memory, local calculation and global recurrence. Results of different combinations are presented in Table TABREF44. Once the slot memory and its corresponding interactions with other components are removed, scores on both tasks decrease to some extent, and a more obvious decline occurs for the slot filling (row 1 vs. row 0), which is consistent with the conclusion of Section SECREF45. Similar observations can be found for the intent memory (row 2). The local calculation layer is designed to capture better local context representations, which has an evident impact on the slot filling and slighter effect on the intent detection (row 3 vs. row 0). Opposite observations occur in term of global recurrence, which is supposed to model global sequential information and thus has larger effect on the intent detection (row 4 vs. row 0). Analysis ::: Effects of Pre-trained Language Models Recently, there has been a growing body of works exploring neural language models that trained on massive corpora to learn contextual representations (e.g. BERT BERT and EMLo EMLo). Inspired by the effectiveness of language model embeddings, we conduct experiments by leveraging the BERT as an additional feature. The results emerged in Table TABREF47 show that we establish new state-of-the-art results on both tasks of the SNIPS. Analysis ::: Evaluation on the CAIS We conduct experiments on our self-collected CAIS to evaluate the generalizability in different language. We apply two baseline models for comparison, one is the popular BiLSTMs + CRF architecture BIBREF36 for sequence labeling task, and the other one is the more powerful sententce-state LSTM BIBREF21. 
The results listed in Table TABREF50 demonstrate the generalizability and effectiveness of our CM-Net when handling various domains and different languages. Related Work ::: Memory Network Memory network is a general machine learning framework introduced by BIBREF37 memory2014, which have been shown effective in question answering BIBREF37, BIBREF38, machine translation BIBREF39, BIBREF40, aspect level sentiment classification BIBREF41, etc. For spoken language understanding, BIBREF42 memoryslu2016 introduce memory mechanisms to encode historical utterances. In this paper, we propose two memories to explicitly capture the semantic correlations between slots and the intent in a given utterance, and devise a novel collaborative retrieval approach. Related Work ::: Interactions between slots and intents Considering the semantic proximity between slots and intents, some works propose to enhance the slot filling task unidirectionally with the guidance of intent representations via gating mechanisms BIBREF7, BIBREF8. Intuitively, the slot representations are also instructive to the intent detection task and thus bidirectional interactions between slots and intents are benefical for each other. BIBREF9 capsule2018 propose a hierarchical capsule network to perform interactions from words to slots, slots to intents and intents to words in a pipeline manner, which is relatively limited in capturing the complicated correlations among them. In our CM-Net, information exchanges are performed simultaneously with knowledge diffusions in both directions. The experiments demonstrate the superiority of our CM-Net in capturing the semantic correlations between slots and intents. Related Work ::: Sentence-State LSTM BIBREF21 BIBREF21 propose a novel graph RNN named S-LSTM, which models sentence between words simultaneously. Inspired by the new perspective of state transition in the S-LSTM, we further extend it with task-specific (i.e., slots and intents) representations via our collaborative memories. In addition, the global information in S-LSTM is modeled by aggregating the local features with gating mechanisms, which may lose sight of sequential information of the whole sentence. Therefore, We apply external BiLSTMs to supply global sequential features, which is shown highly necessary for both tasks in our experiments. Conclusion We propose a novel $\mathbf {C}$ollaborative $\mathbf {M}$emory $\mathbf {N}$etwork (CM-Net) for jointly modeling slot filling and intent detection. The CM-Net is able to explicitly capture the semantic correlations among words, slots and intents in a collaborative manner, and incrementally enrich the information flows with local context and global sequential information. Experiments on two standard benchmarks and our CAIS corpus demonstrate the effectiveness and generalizability of our proposed CM-Net. In addition, we contribute the new corpus (CAIS) to the research community. Acknowledgments Liu, Chen and Xu are supported by the National Natural Science Foundation of China (Contract 61370130, 61976015, 61473294 and 61876198), and the Beijing Municipal Natural Science Foundation (Contract 4172047), and the International Science and Technology Cooperation Program of the Ministry of Science and Technology (K11F100010). We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions.
the $\mathbf {C}$hinese $\mathbf {A}$rtificial $\mathbf {I}$ntelligence $\mathbf {S}$peakers (CAIS)
b6858c505936d981747962eae755a81489f62858
b6858c505936d981747962eae755a81489f62858_0
Q: What were the baseline models?
BiLSTMs + CRF architecture BIBREF36, sentence-state LSTM BIBREF21
defc17986d3c4aed9eccdbaebda5eb202fbcb6cf
defc17986d3c4aed9eccdbaebda5eb202fbcb6cf_0
Q: Are the document vectors that the authors introduce evaluated in any way other than the new way the authors propose? Text: Introduction A number of real-world problems related to text data have been studied under the framework of natural language processing (NLP). Example of such problems include topic categorization, sentiment analysis, machine translation, structured information extraction, or automatic summarization. Due to the overwhelming amount of text data available on the Internet from various sources such as user-generated content or digitized books, methods to automatically and intelligently process large collections of text documents are in high demand. For several text applications, machine learning (ML) models based on global word statistics like TFIDF BIBREF0 , BIBREF1 or linear classifiers are known to perform remarkably well, e.g. for unsupervised keyword extraction BIBREF2 or document classification BIBREF3 . However more recently, neural network models based on vector space representations of words (like BIBREF4 ) have shown to be of great benefit to a large number of tasks. The trend was initiated by the seminal work of BIBREF5 and BIBREF6 , who introduced word-based neural networks to perform various NLP tasks such as language modeling, chunking, named entity recognition, and semantic role labeling. A number of recent works (e.g. BIBREF6 , BIBREF7 ) also refined the basic neural network architecture by incorporating useful structures such as convolution, pooling, and parse tree hierarchies, leading to further improvements in model predictions. Overall, these ML models have permitted to assign automatically and accurately concepts to entire documents or to sub-document levels like phrases; the assigned information can then be mined on a large scale. In parallel, a set of techniques were developed in the context of image categorization to explain the predictions of convolutional neural networks (a state-of-the-art ML model in this field) or related models. These techniques were able to associate to each prediction of the model a meaningful pattern in the space of input features BIBREF8 , BIBREF9 , BIBREF10 or to perform a decomposition onto the input pixels of the model output BIBREF11 , BIBREF12 , BIBREF13 . In this paper, we will make use of the layer-wise relevance propagation (LRP) technique BIBREF12 , that was already substantially tested on various datasets and ML models BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . In the present work, we propose a method to identify which words in a text document are important to explain the category associated to it. The approach consists of using a ML classifier to predict the categories as accurately as possible, and in a second step, decompose the ML prediction onto the input domain, thus assigning to each word in the document a relevance score. The ML model of study will be a word-embedding based convolutional neural network that we train on a text classification task, namely topic categorization of newsgroup documents. As a second ML model we consider a classical bag-of-words support vector machine (BoW/SVM) classifier. We contribute the following: px (i) The LRP technique BIBREF12 is brought to the NLP domain and its suitability for identifying relevant words in text documents is demonstrated. px (ii) LRP relevances are validated, at the document level, by building document heatmap visualizations, and at the dataset level, by compiling representative words for a text category. 
It is also shown quantitatively that LRP better identifies relevant words than sensitivity analysis. px (iii) A novel way of generating vector-based document representations is introduced and it is verified that these document vectors present semantic regularities within their original feature space akin word vector representations. px (iv) A measure for model explanatory power is proposed and it is shown that two ML models, a neural network and a BoW/SVM classifier, although presenting similar classification performance may largely differ in terms of explainability. The work is organized as follows. In section "Representing Words and Documents" we describe the related work for explaining classifier decisions with respect to input space variables. In section "Predicting Category with a Convolutional Neural Network" we introduce our neural network ML model for document classification, as well as the LRP decomposition procedure associated to its predictions. We describe how LRP relevance scores can be used to identify important words in documents and introduce a novel way of condensing the semantical information of a text document into a single document vector. Likewise in section "Predicting Category with a Convolutional Neural Network" we introduce a baseline ML model for document classification, as well as a gradient-based alternative for assigning relevance scores to words. In section "Quality of Word Relevances and Model Explanatory Power" we define objective criteria for evaluating word relevance scores, as well as for assessing model explanatory power. In section "Results" we introduce the dataset and experimental setup, and present the results. Finally, section "Conclusion" concludes our work. Related Work Explanation of individual classification decisions in terms of input variables has been studied for a variety of machine learning classifiers such as additive classifiers BIBREF18 , kernel-based classifiers BIBREF19 or hierarchical networks BIBREF11 . Model-agnostic methods for explanations relying on random sampling have also been proposed BIBREF20 , BIBREF21 , BIBREF22 . Despite their generality, the latter however incur an additional computational cost due to the need to process the whole sample to provide a single explanation. Other methods are more specific to deep convolutional neural networks used in computer vision: the authors of BIBREF8 proposed a network propagation technique based on deconvolutions to reconstruct input image patterns that are linked to a particular feature map activation or prediction. The work of BIBREF9 aimed at revealing salient structures within images related to a specific class by computing the corresponding prediction score derivative with respect to the input image. The latter method reveals the sensitivity of the classifier decision to some local variation of the input image, and is related to sensitivity analysis BIBREF23 , BIBREF24 . In contrast, the LRP method of BIBREF12 corresponds to a full decomposition of the classifier output for the current input image. It is based on a layer-wise conservation principle and reveals parts of the input space that either support or speak against a specific classification decision. Note that the LRP framework can be applied to various models such as kernel support vector machines and deep neural networks BIBREF12 , BIBREF17 . 
We refer the reader to BIBREF14 for a comparison of the three explanation methods, and to BIBREF13 for a view of particular instances of LRP as a “deep Taylor decomposition” of the decision function. In the context of neural networks for text classification BIBREF25 proposed to extract salient sentences from text documents using loss gradient magnitudes. In order to validate the pertinence of the sentences extracted via the neural network classifier, the latter work proposed to subsequently use these sentences as an input to an external classifier and compare the resulting classification performance to random and heuristic sentence selection. The work by BIBREF26 also employs gradient magnitudes to identify salient words within sentences, analogously to the method proposed in computer vision by BIBREF9 . However their analysis is based on qualitative interpretation of saliency heatmaps for exemplary sentences. In addition to the heatmap visualizations, we provide a classifier-intrinsic quantitative validation of the word-level relevances. We furthermore extend previous work from BIBREF27 by adding a BoW/SVM baseline to the experiments and proposing a new criterion for assessing model explanatory power. Interpretable Text Classification In this section we describe our method for identifying words in a text document, that are relevant with respect to a given category of a classification problem. For this, we assume that we are given a vector-based word representation and a neural network that has already been trained to map accurately documents to their actual category. Our method can be divided in four steps: (1) Compute an input representation of a text document based on word vectors. (2) Forward-propagate the input representation through the convolutional neural network until the output is reached. (3) Backward-propagate the output through the network using the layer-wise relevance propagation (LRP) method, until the input is reached. (4) Pool the relevance scores associated to each input variable of the network onto the words to which they belong. As a result of this four-step procedure, a decomposition of the prediction score for a category onto the words of the documents is obtained. Decomposed terms are called relevance scores. These relevance scores can be viewed as highlighted text or can be used to form a list of top-words in the document. The whole procedure is also described visually in Figure 1 . While we detail in this section the LRP method for a specific network architecture and with predefined choices of layers, the method can in principle be extended to any architecture composed of similar or larger number of layers. At the end of this section we introduce different methods which will serve as baselines for comparison. A baseline for the convolutional neural network model is the BoW/SVM classifier, with the LRP procedure adapted accordingly BIBREF12 . A baseline for the LRP relevance decomposition procedure is gradient-based sensitivity analysis (SA), a technique which assigns sensitivity scores to individual words. In the vector-based document representation experiments, we will also compare LRP to uniform and TFIDF baselines. Representing Words and Documents Prior to training the neural network and using it for prediction and explanation, we first derive a numerical representation of the text documents that will serve as an input to the neural classifier. 
To this end, we map each individual word in the document to a vector embedding, and concatenate these embeddings to form a matrix of size the number of words in the document times the dimension of the word embeddings. A distributed representation of words can be learned from scratch, or fine-tuned simultaneously with the classification task of interest. In the present work, we use only pre-training as it was shown that, even without fine-tuning, this leads to good neural network classification performance for a variety of tasks like e.g. natural language tagging or sentiment analysis BIBREF6 , BIBREF28 . One shallow neural network model for learning word embeddings from unlabeled text sources, is the continuous bag-of-words (CBOW) model of BIBREF29 , which is similar to the log-bilinear language model from BIBREF30 , BIBREF31 but ignores the order of context words. In the CBOW model, the objective is to predict a target middle word from the average of the embeddings of the context words that are surrounding the middle word, by means of direct dot products between word embeddings. During training, a set of word embeddings for context words $v$ and for target words $v^{\prime }$ are learned separately. After training is completed, only the context word embeddings $v$ will be retained for further applications. The CBOW objective has a simple maximum likelihood formulation, where one maximizes over the training data the sum of the logarithm of probabilities of the form: $ P (w_t | w_{t-n:t+n} ) = \frac{\exp \Big ( ( {1 \over {2n}}\cdot \; {\sum _{-n\le j \le n, j \ne 0}{\; v_{w_{t+j}}} )^\top v^{\prime }_{w_t} \Big )} }{\sum _{w \in V} \; \exp \Big ( ( {1 \over {2n}}\cdot \; {\sum _{-n\le j \le n, j \ne 0}{\; v_{w_{t+j}}})^\top v^{\prime }_{w} \Big )}} $ where the softmax normalization runs over all words in the vocabulary $V$ , $2n$ is the number of context words per training text window, $w_t$ represents the target word at the $t^\mathrm {th}$ position in the training data and $w_{t-n:t+n}$ represent the corresponding context words. In the present work, we utilize pre-trained word embeddings obtained with the CBOW architecture and the negative sampling training procedure BIBREF4 . We will refer to these embeddings as word2vec embeddings. Predicting Category with a Convolutional Neural Network Our ML model for classifying text documents, is a word-embedding based convolutional neural network (CNN) model similar to the one proposed in BIBREF28 for sentence classification, which itself is a slight variant of the model introduced in BIBREF6 for semantic role labeling. This architecture is depicted in Figure 1 (left) and is composed of several layers. As previously described, in a first step we map each word in the document to its word2vec vector. Denoting by $D$ the word embedding dimension and by $L$ the document length, our input is a matrix of shape $D \times L$ . We denote by $x_{i,t}$ the value of the $i^\mathrm {th}$ component of the word2vec vector representing the $t^\mathrm {th}$ word in the document. 
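As a small illustration of the input construction just described (a $D \times L$ matrix of concatenated word2vec columns), here is a sketch with a toy random embedding table standing in for pre-trained word2vec vectors; mapping out-of-vocabulary words to the zero vector is an assumption, not necessarily the authors' choice.

```python
import numpy as np

def document_matrix(tokens, embeddings, D):
    """Stack the word2vec vector of each token as one column of a D x L matrix;
    unknown words are mapped to the zero vector here (an illustrative choice)."""
    cols = [embeddings.get(w, np.zeros(D)) for w in tokens]
    return np.stack(cols, axis=1)           # shape (D, L)

# Toy embedding table standing in for pre-trained word2vec vectors.
rng = np.random.default_rng(0)
D = 50
vocab = ["the", "court", "ruled", "on", "appeal"]
embeddings = {w: rng.normal(scale=0.1, size=D) for w in vocab}

X = document_matrix(["the", "court", "ruled", "on", "appeal"], embeddings, D)
print(X.shape)   # (50, 5): X[i, t] is the i-th component of the t-th word's vector
```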
The convolution/detection layer produces a new representation composed of $F$ sequences indexed by $j$, where each element of the sequence is computed as: $ \forall {j,t}:~ x_{j,t} = { \textstyle \max \Big (0, \; \sum _{i,\tau } x_{i,t-\tau } \; w^{(1)}_{i, j ,\tau } + b^{(1)}_j\Big ) = \max \Big (0, \; \sum _{i} \; \big (x_{i} \ast w^{(1)}_{i,j}\big )_t + b^{(1)}_j\Big ) } $ where $t$ indicates a position within the text sequence, $j$ designates a feature map, and $\tau \in \lbrace 0,1,\dots ,H-1\rbrace $ is a delay with range $H$, the filter size of the one-dimensional convolutional operation $\ast $. After the convolutional operation, which yields $F$ feature maps of length $L-H+1$, we apply the ReLU non-linearity element-wise. Note that the trainable parameters $w^{(1)}$ and $b^{(1)}$ do not depend on the position $t$ in the text document, hence the convolutional processing is equivariant with this physical dimension. In Figure 1, we use $j$0. The next layer computes, for each dimension $j$ of the previous representation, the maximum over the entire text sequence of the document: $\forall {j}:~ x_j = {\textstyle \max _t} \; x_{j,t}$. This layer creates invariance to the position of the features in the document. Finally, the $F$ pooled features are fed into an endmost logistic classifier where the unnormalized log-probability of each of the $C$ classes, indexed by the variable $k$, is given by: $$\forall {k}:~ x_k = { \textstyle { \sum _{j}} \; x_j \; w^{(2)}_{jk} + b^{(2)}_k }$$ (Eq. 4) where $w^{(2)}$, $b^{(2)}$ are trainable parameters of size $F \times C$ resp. size $C$, defining a fully-connected linear layer. The outputs $x_k$ can be converted to probabilities through the softmax function $p_k = \exp (x_k) / \sum _{k^{\prime }} \exp (x_{k^{\prime }})$. For the LRP decomposition we take the unnormalized classification scores $x_k$ as a starting point. Explaining Predictions with Layer-wise Relevance Propagation Layer-wise relevance propagation (LRP) BIBREF12, BIBREF32 is a recently introduced technique for estimating which elements of a classifier input are important to achieve a certain classification decision. It can be applied to bag-of-words SVM classifiers as well as to layer-wise structured neural networks. For every input data point and possible target class, LRP delivers one scalar relevance value per input variable, hereby indicating whether the corresponding part of the input contributes for or against a specific classifier decision, or whether this input variable is rather uninvolved and irrelevant to the classification task. The main idea behind LRP is to redistribute, for each possible target class separately, the output prediction score (i.e. a scalar value) that causes the classification, back to the input space via a backward propagation procedure that satisfies a layer-wise conservation principle. Thereby each intermediate classifier layer up to the input layer gets allocated relevance values, and the sum of the relevances per layer is equal to the classifier prediction score for the considered class. Denoting by $x_{i,t}\,, x_{j,t}\,, x_{j}\,, x_{k}$ the neurons of the CNN layers presented in the previous section, we associate to each of them respectively a relevance score $R_{i,t}\,, R_{j,t}\,, R_j\,, R_k$. Accordingly, the layer-wise conservation principle can be written as: $${\textstyle \sum _{i,t} R_{i,t} = \sum _{j,t} R_{j,t} = \sum _j R_j = \sum _k R_k}$$ (Eq. 6) where each sum runs over all neurons of a given layer of the network. 
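Before formalizing the relevance messages in the next paragraph, the following toy forward pass mirrors the architecture above (1-D convolution over the word2vec matrix, ReLU, max-over-time pooling, linear output layer). It uses the cross-correlation indexing $x_{i,t+\tau}$ rather than the $x_{i,t-\tau}$ convention of the equation, loops instead of vectorized convolutions, and toy dimensions; it is a sketch, not the trained model.

```python
import numpy as np

def cnn_forward(X, W1, b1, W2, b2):
    """X: (D, L) input matrix; W1: (F, D, H) filters; b1: (F,); W2: (F, C); b2: (C,).
    Returns the unnormalized class scores x_k and the intermediates reused by LRP."""
    D, L = X.shape
    F, _, H = W1.shape
    conv = np.zeros((F, L - H + 1))
    for j in range(F):
        for t in range(L - H + 1):
            conv[j, t] = np.sum(X[:, t:t + H] * W1[j]) + b1[j]
    conv = np.maximum(0.0, conv)          # ReLU detection layer
    pooled = conv.max(axis=1)             # max over time: one feature per map
    argmax_t = conv.argmax(axis=1)        # remembered for the winner-take-all LRP step
    scores = pooled @ W2 + b2             # x_k, the unnormalized log-probabilities
    return scores, conv, pooled, argmax_t

# Toy dimensions: 50-d embeddings, 20 words, 8 filters of width 3, 4 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
W1, b1 = rng.normal(scale=0.1, size=(8, 50, 3)), np.zeros(8)
W2, b2 = rng.normal(scale=0.1, size=(8, 4)), np.zeros(4)
scores, conv, pooled, argmax_t = cnn_forward(X, W1, b1, W2, b2)
```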
To formalize the redistribution process from one layer to another, we introduce the concept of messages $R_{a \leftarrow b}$ indicating how much relevance circulates from a given neuron $b$ to a neuron $a$ in the next lower layer. We can then express the relevance of neuron $a$ as a sum of incoming messages using: ${ \textstyle R_a = \sum _{b \in {\text{upper}(a)}} R_{a \leftarrow b}}$ where ${\text{upper}(a)}$ denotes the upper-layer neurons connected to $a$ . To bootstrap the propagation algorithm, we set the top-layer relevance vector to $\forall _k: R_k = x_k \cdot \delta _{kc}$ where $\delta $ is the Kronecker delta function, and $c$ is the target class of interest for which we would like to explain the model prediction in isolation from other classes. In the top fully-connected layer, messages are computed following a weighted redistribution formula: $$R_{j \leftarrow k} = \frac{z_{jk}}{\sum _{j} z_{jk}} R_k$$ (Eq. 7) where we define $z_{jk} = x_j w^{(2)}_{jk} + F^{-1} (b^{(2)}_k + \epsilon \cdot (1_{x_k \ge 0} - 1_{x_k < 0}))$ . This formula redistributes relevance onto lower-layer neurons in proportion to $z_{jk}$ , which represents the contribution of each neuron to the upper-layer neuron value in the forward propagation, incremented with a small stabilizing term $\epsilon $ that prevents the denominator from nearing zero and hence avoids too large positive or negative relevance messages. In the limit case where $\epsilon \rightarrow \infty $ , the relevance is redistributed uniformly along the network connections. As a stabilizer value we use $\epsilon = 0.01$ as introduced in BIBREF12 . After computation of the messages according to Equation 7 , the latter can be pooled onto the corresponding neuron by the formula $R_j = \sum _k R_{j \leftarrow k}$ . The relevance scores $R_j$ are then propagated through the max-pooling layer using the formula: $$R_{j,t} = \left\lbrace \begin{array}{ll} R_j & \text{if} \; \; t = \mathrm {arg}\max _{t^{\prime }} \; x_{j,t^{\prime }}\\ 0 & \text{else} \end{array} \right.$$ (Eq. 8) which is a “winner-take-all” redistribution analogous to the rule used during training for backpropagating gradients, i.e. the neuron that had the maximum value in the pool is granted all the relevance from the upper-layer neuron. Finally, for the convolutional layer we use the weighted redistribution formula: $$R_{(i,t-\tau ) \leftarrow (j,t)} = \frac{z_{i, j, \tau }}{ \sum _{i,\tau } z_{i, j, \tau }} \, R_{j,t}$$ (Eq. 9) where $z_{i, j, \tau } = x_{i,t-\tau } w^{(1)}_{i, j, \tau } + (HD)^{-1} (b^{(1)}_j + \epsilon \cdot (1_{x_{j,t} > 0} - 1_{x_{j,t} \le 0}))$ , which is similar to Equation 7 except for the increased notational complexity incurred by the convolutional structure of the layer. Messages can finally be pooled onto the input neurons by computing $R_{i,t} = \sum _{j,\tau } R_{(i,t) \leftarrow (j,t+\tau )}$ . Word Relevance and Vector-Based Document Representation So far, the relevance has been redistributed only onto individual components of the word2vec vector associated to each word, in the form of single input neuron relevances $R_{i,t}$ . To obtain a word-level relevance value, one can pool the relevances over all dimensions of the word2vec vector, that is, compute: $$R_t = {\textstyle \sum _i} R_{i,t}$$ (Eq. 11) and use this value to highlight words in a text document, as shown in Figure 1 (right).
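The relevance backward pass of Equations 7-9, together with the word-level pooling of Eq. 11, can be sketched in a few lines of NumPy, reusing the activations returned by the hypothetical `cnn_forward` helper above. This is a simplified single-document illustration with the $\epsilon$ stabilizer described in the text; it is not the authors' reference implementation.

```python
import numpy as np

def lrp_backward(x, conv, pooled, scores, W1, b1, W2, b2, target, eps=0.01):
    """Redistribute the score of class `target` back onto the input x (Eq. 7-9)."""
    F, D, H = W1.shape
    Rk = scores[target]                             # top layer: R_k = x_k * delta_{kc}
    # Fully-connected layer (Eq. 7): z_{jk} with stabilized bias share
    stab = eps * (1.0 if scores[target] >= 0 else -1.0)
    z = pooled * W2[:, target] + (b2[target] + stab) / F
    Rj = z / z.sum() * Rk
    # Max-pooling layer (Eq. 8): winner-take-all redistribution
    Rjt = np.zeros_like(conv)
    winners = conv.argmax(axis=1)
    Rjt[np.arange(F), winners] = Rj
    # Convolutional layer (Eq. 9): redistribute each R_{j,t} onto the input window
    Rit = np.zeros_like(x)
    for j in range(F):
        t = winners[j]                              # only winning positions carry relevance
        stab_c = eps * (1.0 if conv[j, t] > 0 else -1.0)
        zc = x[:, t:t + H] * W1[j] + (b1[j] + stab_c) / (H * D)
        Rit[:, t:t + H] += zc / zc.sum() * Rjt[j, t]
    return Rit                                      # relevance R_{i,t} per dimension and word

def word_relevances(Rit):
    """Eq. 11: word-level relevance R_t, obtained by summing over embedding dimensions."""
    return Rit.sum(axis=0)
```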
These word-level relevance scores can further be used to condense the semantic information of text documents, by building vectors $d \in \mathbb {R}^D$ representing full documents through linearly combining word2vec vectors: $$\forall _i:~d_i = {\textstyle \sum _t} \; R_{t} \cdot x_{i,t}$$ (Eq. 12) The vector $d$ is a summary that consists of an additive composition of the semantic representation of all relevant words in the document. Note that the resulting document vector lies in the same semantic space as word2vec vectors. A more fine-grained extraction technique does not apply word-level pooling as an intermediate step and extracts only the relevant subspace of each word: $$\forall _i:~d_i = {\textstyle \sum _t} \; R_{i,t} \cdot x_{i,t}$$ (Eq. 13) This last approach is particularly useful to address the problem of word homonymy, and will thus result in even finer semantic extraction from the document. In the remainder we will refer to the semantic extraction defined by Eq. 12 as word-level extraction, and to the one from Eq. 13 as element-wise (ew) extraction. In both cases we call vector $d$ a document summary vector. Baseline Methods In the following we briefly mention methods which will serve as baselines for comparison. Sensitivity Analysis. Sensitivity analysis (SA) BIBREF23 , BIBREF24 , BIBREF19 assigns scores $R_{i,t} = (\partial x_k / \partial x_{i,t})^2$ to input variables representing the steepness of the decision function in the input space. These partial derivatives are straightforward to compute using standard gradient propagation BIBREF33 and are readily available in most neural network implementations. Hereby we note that sensitivity analysis redistributes the quantity $\Vert \nabla x_k \Vert _2^2$ , while LRP redistributes $x_k$ . However, the local steepness information is a relatively weak proxy of the actual function value, which is the real quantity of interest when estimating the contribution of input variables w.r.t. the current classifier decision. We further note that relevance scores obtained with LRP are signed, while those obtained with SA are positive. BoW/SVM. As a baseline to the CNN model, a bag-of-words linear SVM classifier will be used to predict the document categories. In this model each text document is first mapped to a vector $x$ with dimensionality $V$ , the size of the training data vocabulary, where each entry is computed as a term frequency-inverse document frequency (TFIDF) score of the corresponding word. Subsequently these vectors $x$ are normalized to unit euclidean norm. In a second step, using the vector representations $x$ of all documents, $C$ maximum margin separating hyperplanes are learned to separate each of the classes of the classification problem from the other ones. As a result we obtain for each class $c \in C$ a linear prediction score of the form $s_c = w_c^\top x + b_c$ , where $w_c\in \mathbb {R}^{V} $ and $b_c \in \mathbb {R}$ are class-specific weights and bias. In order to obtain a LRP decomposition of the prediction score $s_c$ for class $c$ onto the input variables, we simply compute $R_i = w_{c,i} \, x_i + \frac{b_c}{Z}$ , where $Z$ is the number of non-zero entries of $x$ . Respectively, the sensitivity analysis redistribution of the prediction score squared gradient reduces to $R_i = w_{c,i}^2$ . Note that, the BoW/SVM model being a linear predictor relying directly on word frequency statistics, it lacks expressive power in comparison to the CNN model, which additionally learns intermediate hidden layer representations and convolutional filters.
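For the linear BoW/SVM score $s_c = w_c^\top x + b_c$, the LRP and SA relevances just given have closed forms that can be written down directly. The sketch below assumes the TFIDF vector and the class weights are available as NumPy arrays; assigning the bias share only to the non-zero entries is our reading of the formula, and it keeps the relevances summing to $s_c$.

```python
import numpy as np

def bow_lrp_relevances(x_tfidf, w_c, b_c):
    """LRP relevances R_i = w_{c,i} x_i + b_c / Z for the linear BoW/SVM model."""
    R = w_c * x_tfidf
    nonzero = x_tfidf != 0
    R[nonzero] += b_c / np.count_nonzero(x_tfidf)   # spread the bias over the Z non-zero entries
    return R

def bow_sa_relevances(w_c):
    """Sensitivity analysis relevances R_i = w_{c,i}^2 (independent of the document)."""
    return w_c ** 2
```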
Moreover, the CNN model can take advantage of the semantic similarity encoded in the distributed word2vec representations, while for the BoW/SVM model all words are “equidistant” in the bag-of-words semantic space. As our experiments will show, these limitations lead the BoW/SVM model to sometimes identify spurious words as relevant for the classification task. In analogy to the semantic extraction proposed in section "Word Relevance and Vector-Based Document Representation" for the CNN model, we can build vectors $d$ representing documents by leveraging the word relevances obtained with the BoW/SVM model. To this end, we introduce a binary vector $\tilde{x} \in \mathbb {R}^{V} $ whose entries are equal to one when the corresponding word from the vocabulary is present in the document and zero otherwise (i.e. $\tilde{x}$ is a binary bag-of-words representation of the document). Thereafter, we build the document summary vector $d$ component-wise, so that $d$ is just a vector of word relevances: $$\forall _i:~d_i = R_{i} \cdot {\tilde{x}}_{i}$$ (Eq. 15) Uniform/TFIDF based Document Summary Vector. In place of the word-level relevance $R_t$ resp. $R_i$ in Eq. 12 and Eq. 15 , we can use a uniform weighting. This corresponds to building the document vector $d$ as an average of word2vec word embeddings in the first case, and to taking a binary bag-of-words vector as the document representation $d$ in the second case. Moreover, we can replace $R_t$ in Eq. 12 by an inverse document frequency (IDF) score, and $R_i$ in Eq. 15 by a TFIDF score. Both correspond to TFIDF weighting of either word2vec vectors, or of one-hot vectors representing words. Quality of Word Relevances and Model Explanatory Power In this section we describe how to evaluate and compare the outcomes of algorithms which assign relevance scores to words (such as LRP or SA) through intrinsic validation. Furthermore, we propose a measure of model explanatory power based on an extrinsic validation procedure. The latter will be used to analyze and compare the relevance decompositions or explanations obtained with the neural network and the BoW/SVM classifier. Both types of evaluations will be carried out in section "Results" . Measuring the Quality of Word Relevances through Intrinsic Validation An evaluation of how well a method identifies relevant words in text documents can be performed qualitatively, e.g. at the document level, by inspecting the heatmap visualization of a document, or by reviewing the list of the most (or of the least) relevant words per document. A similar analysis can also be conducted at the dataset level, e.g. by compiling the list of the most relevant words for one category across all documents. The latter allows one to identify words that are representative of a document category, and possibly to detect potential dataset biases or classifier-specific drawbacks. However, in order to quantitatively compare algorithms such as LRP and SA regarding the identification of relevant words, we need an objective measure of the quality of the explanations delivered by relevance decomposition methods. To this end we adopt an idea from BIBREF14 : A word $w$ is considered highly relevant for the classification $f(x)$ of the document $x$ if removing it and classifying the modified document $\tilde{x}$ results in a strong decrease of the classification score $f(\tilde{x})$ . This idea can be extended by sequentially deleting words from the most relevant to the least relevant or the other way round.
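The different document summary vectors introduced above can then be assembled from the word relevances. The sketch below covers the word-level and element-wise variants of Eq. 12 and Eq. 13 for the CNN, and the bag-of-words variant of Eq. 15; replacing the relevances by ones or by (TF)IDF scores yields the uniform and TFIDF baselines. The array names are the hypothetical ones used in the earlier sketches.

```python
import numpy as np

def summary_vector_word2vec(x, Rit, elementwise=True):
    """Summary vector in word2vec space: Eq. 13 if elementwise, Eq. 12 otherwise.

    x   : (D, L) document matrix, Rit : (D, L) input relevances from LRP or SA.
    """
    if elementwise:
        return (Rit * x).sum(axis=1)      # d_i = sum_t R_{i,t} x_{i,t}
    Rt = Rit.sum(axis=0)                  # word-level relevances (Eq. 11)
    return (x * Rt).sum(axis=1)           # d_i = sum_t R_t x_{i,t}

def summary_vector_bow(R, x_tfidf):
    """Summary vector in bag-of-words space (Eq. 15): relevance times word presence."""
    return R * (x_tfidf != 0).astype(float)
```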
The result is a graph of the prediction scores $f(\tilde{x})$ as a function of the number of deleted words. In our experiments, we employ this approach to track the changes in classification performance when successively deleting words according to their relevance value. By comparing the relative impact on the classification performance induced by different relevance decomposition methods, we can estimate how well these methods identify the words that are really important for the classification task at hand. The above-described procedure constitutes an intrinsic validation, as it does not rely on an external classifier. Measuring Model Explanatory Power through Extrinsic Validation Although intrinsic validation can be used to compare relevance decomposition methods for a given ML model, this approach is not suited to compare the explanatory power of different ML models, since the latter requires a common evaluation basis. Furthermore, even if we were to track the classification performance changes induced by different ML models using an external classifier, it would not necessarily increase comparability, because removing words from a document may affect different classifiers very differently, so that their graphs $f(\tilde{x})$ are not comparable. Therefore, we propose a novel measure of model explanatory power which does not depend on a classification performance change, but only on the word relevances. Hereby we consider ML model A as being more explainable than ML model B if its word relevances are more “semantically extractive”, i.e. more helpful for solving a semantically related task such as the classification of document summary vectors. More precisely, in order to quantify the ML model explanatory power we undertake the following steps: (1) Compute document summary vectors for all test set documents using Eq. 12 or 13 for the CNN and Eq. 15 for the BoW/SVM model. Hereby use the ML model's predicted class as target class for the relevance decomposition (i.e. the summary vector generation is unsupervised). (2) Normalize the document summary vectors to unit euclidean norm, and perform a K-nearest-neighbors (KNN) classification of half of these vectors, using the other half of summary vectors as neighbors (hereby use standard KNN classification, i.e. nearest neighbors are identified by euclidean distance and neighbor votes are weighted uniformly). Use different hyperparameters $K$ . (3) Repeat step (2) over 10 random data splits, and average the KNN classification accuracies for each $K$ . Finally, report the maximum (over different $K$ ) KNN accuracy as explanatory power index (EPI). The higher this value, the more explanatory power the ML model and the corresponding document summary vectors will have. In a nutshell, our EPI metric of explanatory power of a given ML model “ $f$ ”, combined with a relevance map “ $R$ ”, can informally be summarized as: $$d(x) &= {\textstyle \sum _t} \; [R (f (x)) \odot x]_t \nonumber \\[2mm] {\text{EPI}}(f,R) \; &= \; \max _{K} \; \; \texttt {KNN\_accuracy} \Big (\lbrace d(x^{(1)}),\dots ,d(x^{(N)})\rbrace ,K\Big )$$ (Eq. 18) where $d(x)$ is the document summary vector for input document $x$ , and subscript $t$ denotes the words in the document. Thereby the sum $\sum _t$ and element-wise multiplication $\odot $ operations stand for the weighted combination specified explicitly in Eq. 12 - 15 . The KNN accuracy is estimated over all test set document summary vectors indexed from 1 to $N$ , and $K$ is the number of neighbors.
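Steps (1)-(3) of the EPI computation map directly onto a standard KNN classifier; the sketch below uses scikit-learn and assumes a matrix `d` of document summary vectors together with the model-predicted labels `y`. It follows the protocol described above (unit-norm vectors, Euclidean distance, uniform vote weighting, 10 random splits), but the grid of $K$ values is a placeholder.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def explanatory_power_index(d, y, ks=(1, 5, 10, 50, 100), n_splits=10, seed=0):
    """EPI: maximum over K of the mean KNN accuracy on document summary vectors."""
    rng = np.random.RandomState(seed)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)   # unit euclidean norm
    mean_acc = {}
    for k in ks:
        accs = []
        for _ in range(n_splits):
            idx = rng.permutation(len(d))
            half = len(d) // 2
            test, neigh = idx[:half], idx[half:]
            knn = KNeighborsClassifier(n_neighbors=k, weights="uniform")
            knn.fit(d[neigh], y[neigh])                # one half serves as neighbors
            accs.append(knn.score(d[test], y[test]))   # classify the other half
        mean_acc[k] = np.mean(accs)
    return max(mean_acc.values())
```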
In the proposed evaluation procedure, the use of KNN as a common external classifier enables us to compare different ML models in an unbiased and objective way, in terms of the density and local neighborhood structure of the semantic information extracted via the summary vectors in input feature space. Indeed we recall that summary vectors constructed via Eq. 12 and 13 lie in the same semantic space as word2vec embeddings, and that summary vectors obtained via Eq. 15 live in the bag-of-words space. Results This section summarizes our experimental results. We first describe the dataset, experimental setup, training procedure and classification accuracy of our ML models. We will consider four ML models: three CNNs with different filter sizes and a BoW/SVM classifier. Then, we demonstrate that LRP can be used to identify relevant words in text documents. We compare heatmaps for the best performing CNN model and the BoW/SVM classifier, and report the most representative words for three exemplary document categories. These results demonstrate qualitatively that the CNN model produces better explanations than the BoW/SVM classifier. After that we move to the evaluation of the document summary vectors, where we show that a 2D PCA projection of the document vectors computed from the LRP scores groups documents according to their topics (without requiring the true labels). Since worse results are obtained when using the SA scores or the uniform or TFIDF weighting, this indicates that the explanations produced by LRP are semantically more meaningful than the latter. Finally, we confirm quantitatively the observations made before, namely that (1) the LRP decomposition method provides better explanations than SA and that (2) the CNN model outperforms the BoW/SVM classifier in terms of explanatory power. Experimental Setup For our experiments we consider a topic categorization task, and employ the freely available 20Newsgroups dataset consisting of newsgroup posts evenly distributed among twenty fine-grained categories. More precisely, we use the 20news-bydate version, which is already partitioned into 11314 training and 7532 test documents corresponding to different periods in time. As a first preprocessing step, we remove the headers from the documents (by splitting at the first blank line) and tokenize the text with NLTK. Then, we filter the tokenized data by retaining only tokens composed of the following four types of characters: alphabetic, hyphen, dot and apostrophe, and containing at least one alphabetic character. Hereby we aim to remove punctuation, numbers or dates, while keeping abbreviations and compound words. We do not apply any further preprocessing, such as stop-word removal or stemming, except for the SVM classifier, where we additionally perform lowercasing, as this is a common setup for bag-of-words models. We truncate the resulting sequence of tokens to a chosen fixed length of 400 in order to simplify neural network training (in practice our CNN can process any arbitrarily sized document). Lastly, we build the neural network input by horizontally concatenating pre-trained word embeddings, according to the sequence of tokens appearing in the preprocessed document. In particular, we take the 300-dimensional freely available word2vec embeddings BIBREF4 . Out-of-vocabulary words are simply initialized to zero vectors. As input normalization, we subtract the mean and divide by the standard deviation obtained over the flattened training data.
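The tokenization and filtering step can be approximated with NLTK and a small regular expression, as sketched below; the exact tokenizer settings of the original experiments are not specified, so this is only an approximation of the described preprocessing.

```python
import re
from nltk.tokenize import word_tokenize

TOKEN_PATTERN = re.compile(r"^[A-Za-z.'\-]+$")   # alphabetic, hyphen, dot, apostrophe

def preprocess(text, max_len=400, lowercase=False):
    """Tokenize and filter a document as described in the experimental setup."""
    body = text.split("\n\n", 1)[-1]             # drop the header at the first blank line
    tokens = [t for t in word_tokenize(body)
              if TOKEN_PATTERN.match(t) and re.search(r"[A-Za-z]", t)]
    if lowercase:                                 # only applied for the BoW/SVM model
        tokens = [t.lower() for t in tokens]
    return tokens[:max_len]
```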
We train the neural network by minimizing the cross-entropy loss via mini-batch stochastic gradient descent, using $l_2$ -norm regularization and dropout. We tune the ML model hyperparameters by 10-fold cross-validation in the case of the SVM, and by employing 1000 random documents as a fixed validation set for the CNN model. However, for the CNN hyperparameters we did not perform an extensive grid search and stopped the tuning once we obtained models with reasonable classification performance for the purpose of our experiments. Table 1 summarizes the performance of our trained models. Herein CNN1, CNN2, CNN3 respectively denote neural networks with convolutional filter size $H$ equal to 1, 2 and 3 (i.e. covering 1, 2 or 3 consecutive words in the document). One can see that the linear SVM performs on par with the neural networks, i.e. the non-linear structure of the CNN models does not yield a considerable advantage in terms of classification accuracy. Similar results have also been reported in previous studies BIBREF34 , where it was observed that for document classification a convolutional neural network model starts to outperform a TFIDF-based linear classifier only on datasets on the order of millions of documents. This can be explained by the fact that for most topic categorization tasks, the different categories can be separated linearly in the very high-dimensional bag-of-words or bag-of-N-grams space thanks to sufficiently disjoint sets of features. Identifying Relevant Words Figure 2 compiles the resulting LRP heatmaps we obtain on an exemplary sci.space test document that is correctly classified by the SVM and the best performing neural network model CNN2. Note that for the SVM model the relevance values are computed per bag-of-words feature, i.e. the same word will have the same relevance irrespective of its context in the document, whereas for the CNN classifier we visualize one relevance value per word position. Hereby we consider as target class for the LRP decomposition the classes sci.space and sci.med. We can observe that the SVM model considers insignificant words like the, is, of as very relevant (either negatively or positively) for the target class sci.med, and at the same time mistakenly estimates words like sickness, mental or distress as negatively contributing to this class (indicated by blue coloring), while on the other hand the CNN2 heatmap is consistently more sparse and concentrated on semantically meaningful words. This sparsity property can be attributed to the max-pooling non-linearity, which for each feature map in the neural network selects the most relevant feature occurring in the document. As can be seen, this significantly improves the interpretability of the results for a human. Another disadvantage of the SVM model is that it relies entirely on local and global word statistics, and thus can only assign relevances proportionally to the TFIDF BoW features (plus a class-dependent bias term), while the neural network model benefits from the knowledge encoded in the word2vec embeddings. For instance, the word weightlessness is not highlighted by the SVM model for the target class sci.space, because this word does not occur in the training data and thus is simply ignored by the SVM classifier. The neural network, however, is able to detect and attribute relevance to unseen words thanks to the semantic information encoded in the pre-trained word2vec embeddings.
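Heatmaps like the ones in Figure 2 can be rendered by mapping each word's relevance to a color intensity; a minimal sketch that emits HTML with red shades for positive and blue shades for negative relevance is shown below. This is only one possible rendering, not necessarily the one used to produce the actual figures.

```python
def relevance_to_html(tokens, relevances):
    """Render a document as HTML, coloring words by their word-level relevance.

    Positive relevance is shown in red, negative in blue; the intensity is
    scaled by the largest absolute relevance in the document.
    """
    scale = max(abs(r) for r in relevances) or 1.0
    spans = []
    for tok, r in zip(tokens, relevances):
        alpha = abs(r) / scale
        color = (255, 0, 0) if r >= 0 else (0, 0, 255)
        spans.append('<span style="background-color: rgba(%d, %d, %d, %.2f)">%s</span>'
                     % (*color, alpha, tok))
    return " ".join(spans)
```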
As a dataset-wide analysis, we determine the words identified through LRP as constituting class representatives. For that purpose we set one class as target class for the relevance decomposition, and conduct LRP over all test set documents (i.e. irrespective of the true or ML model's predicted class). Subsequently, we sort all the words appearing in the test data in decreasing order of the obtained word-level relevance values, and retrieve the thirty most relevant ones. The result is a list of words identified via LRP as being highly supportive for a classifier decision toward the considered class. Figures 2 and 2 list the most relevant words for different LRP target classes, as well as the corresponding word-level relevance values for the CNN2 and the SVM model. Through underlining we indicate words that do not occur in the training data. Interestingly, we observe that some of the most “class-characteristic” words identified via the neural network model correspond to words that do not even appear in the training data. In contrast, such words are simply ignored by the SVM model as they do not occur in the bag-of-words vocabulary. Similarly to the previous heatmap visualizations, the class-specific analysis reveals that the SVM classifier occasionally assigns high relevances to semantically insignificant words such as the pronoun she for the target class sci.med (20th position in left column of Fig. 2 ), or the names pat, henry, nicho for the target class sci.space (resp. 7, 13, 20th position in middle column of Fig. 2 ). In the former case the high relevance is due to a high term frequency of the word (indeed the word she achieves its highest term frequency in one sci.med test document where it occurs 18 times), whereas in the latter case this can be explained by a high inverse document frequency or by a class-biased occurrence of the corresponding word in the training data (pat appears within 16 different training document categories but 54.1% of its occurrences are within the category sci.space alone, 79.1% of the 201 occurrences of henry appear among sci.space training documents, and nicho appears exclusively in nine sci.space training documents). On the contrary, the neural network model seems less affected by word count regularities and systematically attributes the highest relevances to words semantically related to the considered target class. These results demonstrate that, subjectively, the neural network is better suited to identify relevant words in text documents than the BoW/SVM model. Document Summary Vectors The word2vec embeddings are known to exhibit linear regularities representing semantic relationships between words BIBREF29 , BIBREF4 . We explore whether these regularities can be transferred to a new document representation, which we denoted as document summary vector, when building this vector as a weighted combination of word2vec embeddings (see Eq. 12 and Eq. 13 ) or as a combination of one-hot word vectors (see Eq. 15 ). We compare the weighting scheme based on the LRP relevances to the following baselines: SA relevance, TFIDF and uniform weighting (see section "Baseline Methods" ). The two-dimensional PCA projection of the summary vectors obtained via the CNN2 resp. the SVM model, as well as the corresponding TFIDF/uniform weighting baselines, are shown in Figure 3 .
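The visualization in Figure 3 corresponds to a standard two-dimensional PCA of the summary vectors; a minimal sketch of how such a projection can be produced is given below, using scikit-learn and matplotlib, with `d` the matrix of summary vectors and `groups` an integer array of top-level category indices. The exact plotting details of the figure are of course not reproduced here.

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_summary_vectors(d, groups, group_names):
    """Project document summary vectors to 2D with PCA and color them by category group.

    d      : (N, dim) array of summary vectors
    groups : (N,) numpy array of group indices, group_names : list of labels
    """
    coords = PCA(n_components=2).fit_transform(d)        # (N, 2) projection
    for g, name in enumerate(group_names):
        mask = groups == g
        plt.scatter(coords[mask, 0], coords[mask, 1], s=5, label=name)
    plt.legend()
    plt.show()
```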
In these visualizations we group the 20Newsgroups test documents into six top-level categories (the grouping is performed according to the dataset website), and we color each document according to its true category (note however that, as mentioned earlier, the relevance decomposition is always performed in an unsupervised way, i.e., with the ML model's predicted class). For the CNN2 model, we observe that the two-dimensional PCA projection reveals a clear-cut clustered structure when using the element-wise LRP weighting for semantic extraction, while no such regularity is observed with uniform or TFIDF weighting. The word-level LRP or SA weightings, as well as the element-wise SA weighting, also present a form of bundled layout, but not as dense and well-separated as in the case of element-wise LRP. For the SVM model, the two-dimensional visualization of the summary vectors partly exhibits a cross-shaped layout for LRP and SA weighting, while again no particular structure is observed for TFIDF or uniform semantic extraction. This analysis confirms the observations made in the last section, namely that the neural network outperforms the BoW/SVM classifier in terms of explainability. Figure 3 furthermore suggests that LRP provides a semantically more meaningful extraction than the baseline methods. In the next section we will confirm these observations quantitatively. Quantitative Evaluation In order to quantitatively validate the hypothesis that LRP is able to identify words that either support or inhibit a specific classifier decision, we conduct several word-deletion experiments on the CNN models using LRP scores as relevance indicator. More specifically, in accordance with the word-level relevances we delete a sequence of words from each document, re-classify the documents with “missing words”, and report the classification accuracy as a function of the number of deleted words. Hereby the word-level relevances are computed on the original documents (with no words deleted). For the deletion experiments, we consider only 20Newsgroups test documents that have a length greater than or equal to 100 tokens (after preprocessing); this amounts to 4963 test documents, from which we delete up to 50 words. To delete a word, we simply set the corresponding word embedding to zero in the CNN input. Moreover, in order to assess the pertinence of the LRP decomposition method as opposed to alternative relevance models, we additionally perform word deletions according to SA word relevances, as well as random deletion. In the latter case we sample a random sequence of 50 words per document, and delete the corresponding words successively from each document. We repeat the random sampling 10 times, and report the average results (the standard deviation of the accuracy is less than 0.0141 in all our experiments). We additionally perform a biased random deletion, where we sample only among words contained in the word2vec vocabulary (this way we avoid deleting words we have already initialized as zero vectors because they are out of the word2vec vocabulary; however, as our results show, this biased deletion is almost equivalent to strict random selection). As a first deletion experiment, we start with the subset of test documents that are initially correctly classified by the CNN models, and successively delete words in decreasing order of their LRP/SA word-level relevance.
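The word-deletion evaluation itself can be sketched as follows: a word is "deleted" by zeroing its embedding column, and the document is re-classified after each deletion, with the deletion order fixed by the relevances computed on the intact document. The helper assumes the hypothetical `cnn_forward` sketch introduced earlier; averaging the returned indicators over all documents gives the accuracy curves discussed next.

```python
import numpy as np

def deletion_curve(x, relevance_order, params, true_label, max_delete=50):
    """Track correctness of the prediction while deleting words in a given order.

    x               : (D, L) document matrix (not modified in place)
    relevance_order : word positions sorted by relevance (descending or ascending)
    params          : (W1, b1, W2, b2) of the trained CNN
    """
    x = x.copy()
    correct = []
    for pos in relevance_order[:max_delete]:
        x[:, pos] = 0.0                              # delete word: zero its embedding
        scores, _, _ = cnn_forward(x, *params)
        correct.append(int(scores.argmax() == true_label))
    return correct
```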
In this first deletion experiment, the LRP/SA relevances are computed with the true document class as target class for the relevance decomposition. In a second experiment, we perform the opposite evaluation. Here we start with the subset of initially falsely classified documents, and successively delete words in increasing order of their relevance, likewise considering the true document class as target class for the relevance computation. In the third experiment, we start again with the set of initially falsely classified documents, but now delete words in decreasing order of their relevance, considering the classifier's initially predicted class as target class for the relevance decomposition. Figure 4 summarizes the resulting accuracies when deleting words resp. from the CNN1, CNN2 and CNN3 input documents (each row in the figure corresponds to one of the three deletion experiments). Note that we do not report results for the BoW/SVM model, as our focus here is the comparison between LRP and SA and not between different ML models. Through successive deletion of either “positive-relevant” words in decreasing order of their LRP relevance, or of “negative-relevant” words in increasing order of their LRP relevance, we confirm that both extremal LRP relevance values capture pertinent information with respect to the classification problem. Indeed in all deletion experiments, we observe the most pronounced decrease resp. increase of the classification accuracy when using LRP as relevance model. We additionally note that SA, in contrast to LRP, is largely unable to provide suitable information about words that speak against a specific classification decision. Instead it appears that the lowest SA relevances (which mainly correspond to zero-valued relevances) are more likely to identify words that have no impact on the classifier decision at all, as this deletion scheme has even less impact on the classification performance than random deletion when deleting words in increasing order of their relevance, as shown by the second deletion experiment. When comparing the different CNN models, we observe that the CNN2 and CNN3 models, as opposed to CNN1, produce a steeper decrease of the classification performance when deleting the most relevant words from the initially correctly classified documents, both when considering LRP as well as SA as relevance model, as shown by the first deletion experiment. This indicates that the networks with greater filter sizes are more sensitive to single word deletions, presumably because during these deletions the meaning of the surrounding words becomes less obvious to the classifier. This also provides some weak evidence that, while CNN2 and CNN3 behave similarly (which suggests that a convolutional filter size of two is already enough for the considered classification problem), the learned filters in CNN2 and CNN3 do not focus only on isolated words but additionally consider bigrams or trigrams of words, as their results differ considerably from the CNN1 model in the first deletion experiment. In order to quantitatively evaluate and compare the ML models in combination with a relevance decomposition or explanation technique, we apply the evaluation method described in section "Measuring Model Explanatory Power through Extrinsic Validation" . That is, we compute the accuracy of an external classifier (here KNN) on the classification of document summary vectors (obtained with the ML model's predicted class).
For these experiments we remove test documents which are empty or contain only one word after preprocessing (this amounts to removing 25 documents from the 20Newsgroups test set). The maximum KNN mean accuracy obtained when varying the number of neighbors $K$ (corresponding to our EPI metric of explanatory power) is reported for several models and explanation techniques in Table 2 . When comparing pairwise the best CNN-based weighting schemes with the corresponding TFIDF baseline results from Table 2 , we find that all LRP element-wise weighted combinations of word2vec vectors are statistically significantly better than the TFIDF weighting of word embeddings at a significance level of 0.05 (using a corrected resampled t-test BIBREF35 ). Similarly, in the bag-of-words space, the LRP combination of one-hot word vectors is significantly better than the corresponding TFIDF document representation at a significance level of 0.05. Lastly, the best CNN2 explanatory power index is significantly higher than the best SVM-based explanation at a significance level of 0.10. In Figure 5 we plot the mean accuracy of KNN (averaged over ten random test data splits) as a function of the number of neighbors $K$ , for the CNN2 resp. the SVM model, as well as the corresponding TFIDF/uniform weighting baselines (for CNN1 and CNN3 we obtained a similar layout as for CNN2). One can further see from Figure 5 that (1) (element-wise) LRP provides consistently better semantic extraction than all baseline methods and that (2) the CNN2 model has a higher explanatory power than the BoW/SVM classifier since it produces semantically more meaningful summary vectors for KNN classification. Altogether, the good performance, both qualitative and quantitative, of the element-wise combination of word2vec embeddings according to the LRP relevance illustrates the usefulness of LRP for extracting a new vector-based document representation presenting semantic neighborhood regularities in feature space, and suggests further potential applications of relevance information, e.g. for aggregating word representations into sub-document representations like phrases, sentences or paragraphs. Conclusion We have demonstrated qualitatively and quantitatively that LRP constitutes a useful tool, both for fine-grained analysis at the document level and for dataset-wide introspection across documents, to identify words that are important to a classifier's decision. This knowledge enables one to broaden the scope of applications of standard machine learning classifiers like support vector machines or neural networks, by extending the primary classification result with additional information linking the classifier's decision back to components of the input, in our case words in a document. Furthermore, based on LRP relevance, we have introduced a new way of condensing the semantic information contained in word embeddings (such as word2vec) into a document vector representation that can be used for nearest neighbors classification, and that leads to better performance than standard TFIDF weighting of word embeddings. The resulting document vector is the basis of a new measure of model explanatory power which was proposed in this work, and its semantic properties could furthermore find applications in various visualization and search tasks, where the document similarity is expressed as a dot product between vectors.
Our work is a first step toward applying the LRP decomposition to the NLP domain, and we expect this technique to also be suitable for various types of applications that are based on other neural network architectures, such as character-based or recurrent network classifiers, or on other types of classification problems (e.g. sentiment analysis). More generally, LRP could contribute to the design of more accurate and efficient classifiers, not only by inspecting and leveraging the input space relevances, but also through the analysis of intermediate relevance values at classifier “hidden” layers. Acknowledgments This work was supported by the German Ministry for Education and Research as the Berlin Big Data Center BBDC, funding mark 01IS14013A, and by the DFG. KRM acknowledges partial funding by the National Research Foundation of Korea, funded by the Ministry of Education, Science, and Technology in the BK21 program. Correspondence should be addressed to KRM and WS. Contributions Conceived the theoretical framework: LA, GM, KRM, WS. Conceived and designed the experiments: LA, FH, GM, KRM, WS. Performed the experiments: LA. Wrote the manuscript: LA, FH, GM, KRM, WS. Revised the manuscript: LA, FH, GM, KRM, WS. Figure design: LA, GM, WS. Final drafting: all equally.
Yes
03895bc75e4d01c359cd269a9eb3b6ea57039817
03895bc75e4d01c359cd269a9eb3b6ea57039817_0
Q: According to the authors, why does the CNN model exhibit a higher level of explainability? Text: Introduction A number of real-world problems related to text data have been studied under the framework of natural language processing (NLP). Example of such problems include topic categorization, sentiment analysis, machine translation, structured information extraction, or automatic summarization. Due to the overwhelming amount of text data available on the Internet from various sources such as user-generated content or digitized books, methods to automatically and intelligently process large collections of text documents are in high demand. For several text applications, machine learning (ML) models based on global word statistics like TFIDF BIBREF0 , BIBREF1 or linear classifiers are known to perform remarkably well, e.g. for unsupervised keyword extraction BIBREF2 or document classification BIBREF3 . However more recently, neural network models based on vector space representations of words (like BIBREF4 ) have shown to be of great benefit to a large number of tasks. The trend was initiated by the seminal work of BIBREF5 and BIBREF6 , who introduced word-based neural networks to perform various NLP tasks such as language modeling, chunking, named entity recognition, and semantic role labeling. A number of recent works (e.g. BIBREF6 , BIBREF7 ) also refined the basic neural network architecture by incorporating useful structures such as convolution, pooling, and parse tree hierarchies, leading to further improvements in model predictions. Overall, these ML models have permitted to assign automatically and accurately concepts to entire documents or to sub-document levels like phrases; the assigned information can then be mined on a large scale. In parallel, a set of techniques were developed in the context of image categorization to explain the predictions of convolutional neural networks (a state-of-the-art ML model in this field) or related models. These techniques were able to associate to each prediction of the model a meaningful pattern in the space of input features BIBREF8 , BIBREF9 , BIBREF10 or to perform a decomposition onto the input pixels of the model output BIBREF11 , BIBREF12 , BIBREF13 . In this paper, we will make use of the layer-wise relevance propagation (LRP) technique BIBREF12 , that was already substantially tested on various datasets and ML models BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . In the present work, we propose a method to identify which words in a text document are important to explain the category associated to it. The approach consists of using a ML classifier to predict the categories as accurately as possible, and in a second step, decompose the ML prediction onto the input domain, thus assigning to each word in the document a relevance score. The ML model of study will be a word-embedding based convolutional neural network that we train on a text classification task, namely topic categorization of newsgroup documents. As a second ML model we consider a classical bag-of-words support vector machine (BoW/SVM) classifier. We contribute the following: px (i) The LRP technique BIBREF12 is brought to the NLP domain and its suitability for identifying relevant words in text documents is demonstrated. px (ii) LRP relevances are validated, at the document level, by building document heatmap visualizations, and at the dataset level, by compiling representative words for a text category. 
It is also shown quantitatively that LRP better identifies relevant words than sensitivity analysis. px (iii) A novel way of generating vector-based document representations is introduced and it is verified that these document vectors present semantic regularities within their original feature space akin word vector representations. px (iv) A measure for model explanatory power is proposed and it is shown that two ML models, a neural network and a BoW/SVM classifier, although presenting similar classification performance may largely differ in terms of explainability. The work is organized as follows. In section "Representing Words and Documents" we describe the related work for explaining classifier decisions with respect to input space variables. In section "Predicting Category with a Convolutional Neural Network" we introduce our neural network ML model for document classification, as well as the LRP decomposition procedure associated to its predictions. We describe how LRP relevance scores can be used to identify important words in documents and introduce a novel way of condensing the semantical information of a text document into a single document vector. Likewise in section "Predicting Category with a Convolutional Neural Network" we introduce a baseline ML model for document classification, as well as a gradient-based alternative for assigning relevance scores to words. In section "Quality of Word Relevances and Model Explanatory Power" we define objective criteria for evaluating word relevance scores, as well as for assessing model explanatory power. In section "Results" we introduce the dataset and experimental setup, and present the results. Finally, section "Conclusion" concludes our work. Related Work Explanation of individual classification decisions in terms of input variables has been studied for a variety of machine learning classifiers such as additive classifiers BIBREF18 , kernel-based classifiers BIBREF19 or hierarchical networks BIBREF11 . Model-agnostic methods for explanations relying on random sampling have also been proposed BIBREF20 , BIBREF21 , BIBREF22 . Despite their generality, the latter however incur an additional computational cost due to the need to process the whole sample to provide a single explanation. Other methods are more specific to deep convolutional neural networks used in computer vision: the authors of BIBREF8 proposed a network propagation technique based on deconvolutions to reconstruct input image patterns that are linked to a particular feature map activation or prediction. The work of BIBREF9 aimed at revealing salient structures within images related to a specific class by computing the corresponding prediction score derivative with respect to the input image. The latter method reveals the sensitivity of the classifier decision to some local variation of the input image, and is related to sensitivity analysis BIBREF23 , BIBREF24 . In contrast, the LRP method of BIBREF12 corresponds to a full decomposition of the classifier output for the current input image. It is based on a layer-wise conservation principle and reveals parts of the input space that either support or speak against a specific classification decision. Note that the LRP framework can be applied to various models such as kernel support vector machines and deep neural networks BIBREF12 , BIBREF17 . 
We refer the reader to BIBREF14 for a comparison of the three explanation methods, and to BIBREF13 for a view of particular instances of LRP as a “deep Taylor decomposition” of the decision function. In the context of neural networks for text classification BIBREF25 proposed to extract salient sentences from text documents using loss gradient magnitudes. In order to validate the pertinence of the sentences extracted via the neural network classifier, the latter work proposed to subsequently use these sentences as an input to an external classifier and compare the resulting classification performance to random and heuristic sentence selection. The work by BIBREF26 also employs gradient magnitudes to identify salient words within sentences, analogously to the method proposed in computer vision by BIBREF9 . However their analysis is based on qualitative interpretation of saliency heatmaps for exemplary sentences. In addition to the heatmap visualizations, we provide a classifier-intrinsic quantitative validation of the word-level relevances. We furthermore extend previous work from BIBREF27 by adding a BoW/SVM baseline to the experiments and proposing a new criterion for assessing model explanatory power. Interpretable Text Classification In this section we describe our method for identifying words in a text document, that are relevant with respect to a given category of a classification problem. For this, we assume that we are given a vector-based word representation and a neural network that has already been trained to map accurately documents to their actual category. Our method can be divided in four steps: (1) Compute an input representation of a text document based on word vectors. (2) Forward-propagate the input representation through the convolutional neural network until the output is reached. (3) Backward-propagate the output through the network using the layer-wise relevance propagation (LRP) method, until the input is reached. (4) Pool the relevance scores associated to each input variable of the network onto the words to which they belong. As a result of this four-step procedure, a decomposition of the prediction score for a category onto the words of the documents is obtained. Decomposed terms are called relevance scores. These relevance scores can be viewed as highlighted text or can be used to form a list of top-words in the document. The whole procedure is also described visually in Figure 1 . While we detail in this section the LRP method for a specific network architecture and with predefined choices of layers, the method can in principle be extended to any architecture composed of similar or larger number of layers. At the end of this section we introduce different methods which will serve as baselines for comparison. A baseline for the convolutional neural network model is the BoW/SVM classifier, with the LRP procedure adapted accordingly BIBREF12 . A baseline for the LRP relevance decomposition procedure is gradient-based sensitivity analysis (SA), a technique which assigns sensitivity scores to individual words. In the vector-based document representation experiments, we will also compare LRP to uniform and TFIDF baselines. Representing Words and Documents Prior to training the neural network and using it for prediction and explanation, we first derive a numerical representation of the text documents that will serve as an input to the neural classifier. 
To this end, we map each individual word in the document to a vector embedding, and concatenate these embeddings to form a matrix of size the number of words in the document times the dimension of the word embeddings. A distributed representation of words can be learned from scratch, or fine-tuned simultaneously with the classification task of interest. In the present work, we use only pre-training as it was shown that, even without fine-tuning, this leads to good neural network classification performance for a variety of tasks like e.g. natural language tagging or sentiment analysis BIBREF6 , BIBREF28 . One shallow neural network model for learning word embeddings from unlabeled text sources, is the continuous bag-of-words (CBOW) model of BIBREF29 , which is similar to the log-bilinear language model from BIBREF30 , BIBREF31 but ignores the order of context words. In the CBOW model, the objective is to predict a target middle word from the average of the embeddings of the context words that are surrounding the middle word, by means of direct dot products between word embeddings. During training, a set of word embeddings for context words $v$ and for target words $v^{\prime }$ are learned separately. After training is completed, only the context word embeddings $v$ will be retained for further applications. The CBOW objective has a simple maximum likelihood formulation, where one maximizes over the training data the sum of the logarithm of probabilities of the form: $ P (w_t | w_{t-n:t+n} ) = \frac{\exp \Big ( ( {1 \over {2n}}\cdot \; {\sum _{-n\le j \le n, j \ne 0}{\; v_{w_{t+j}}} )^\top v^{\prime }_{w_t} \Big )} }{\sum _{w \in V} \; \exp \Big ( ( {1 \over {2n}}\cdot \; {\sum _{-n\le j \le n, j \ne 0}{\; v_{w_{t+j}}})^\top v^{\prime }_{w} \Big )}} $ where the softmax normalization runs over all words in the vocabulary $V$ , $2n$ is the number of context words per training text window, $w_t$ represents the target word at the $t^\mathrm {th}$ position in the training data and $w_{t-n:t+n}$ represent the corresponding context words. In the present work, we utilize pre-trained word embeddings obtained with the CBOW architecture and the negative sampling training procedure BIBREF4 . We will refer to these embeddings as word2vec embeddings. Predicting Category with a Convolutional Neural Network Our ML model for classifying text documents, is a word-embedding based convolutional neural network (CNN) model similar to the one proposed in BIBREF28 for sentence classification, which itself is a slight variant of the model introduced in BIBREF6 for semantic role labeling. This architecture is depicted in Figure 1 (left) and is composed of several layers. As previously described, in a first step we map each word in the document to its word2vec vector. Denoting by $D$ the word embedding dimension and by $L$ the document length, our input is a matrix of shape $D \times L$ . We denote by $x_{i,t}$ the value of the $i^\mathrm {th}$ component of the word2vec vector representing the $t^\mathrm {th}$ word in the document. 
The convolution/detection layer produces a new representation composed of $F$ sequences indexed by $j$ , where each element of the sequence is computed as: $ \forall {j,t}:~ x_{j,t} = { \textstyle \max \Big (0, \; \sum _{i,\tau } x_{i,t-\tau } \; w^{(1)}_{i, j ,\tau } + b^{(1)}_j\Big ) = \max \Big (0, \; \sum _{i} \; \big (x_{i} \ast w^{(1)}_{i,j}\big )_t + b^{(1)}_j\Big ) } $ where $t$ indicates a position within the text sequence, $j$ designates a feature map, and $\tau \in \lbrace 0,1,\dots ,H-1\rbrace $ is a delay with range $H$ the filter size of the one-dimensional convolutional operation $\ast $ . After the convolutional operation, which yields $F$ features maps of length $L-H+1$ , we apply the ReLU non-linearity element-wise. Note that the trainable parameters $w^{(1)}$ and $b^{(1)}$ do not depend on the position $t$ in the text document, hence the convolutional processing is equivariant with this physical dimension. In Figure 1 , we use $j$0 . The next layer computes, for each dimension $j$1 of the previous representation, the maximum over the entire text sequence of the document: $j$2 This layer creates invariance to the position of the features in the document. Finally, the $F$ pooled features are fed into an endmost logistic classifier where the unnormalized log-probability of each of the $C$ classes, indexed by the variable $k$ are given by: $$\forall {k}:~ x_k = { \textstyle { \sum _{j}} \; x_j \; w^{(2)}_{jk} + b^{(2)}_k }$$ (Eq. 4) where $w^{(2)}$ , $b^{(2)}$ are trainable parameters of size $F \times C$ resp. size $C$ defining a fully-connected linear layer. The outputs $x_k$ can be converted to probabilities through the softmax function $p_k = \exp (x_k) / \sum _{k^{\prime }} \exp (x_{k^{\prime }})$ . For the LRP decomposition we take the unnormalized classification scores $x_k$ as a starting point. Explaining Predictions with Layer-wise Relevance Propagation Layer-wise relevance propagation (LRP) BIBREF12 , BIBREF32 is a recently introduced technique for estimating which elements of a classifier input are important to achieve a certain classification decision. It can be applied to bag-of-words SVM classifiers as well as to layer-wise structured neural networks. For every input data point and possible target class, LRP delivers one scalar relevance value per input variable, hereby indicating whether the corresponding part of the input is contributing for or against a specific classifier decision, or if this input variable is rather uninvolved and irrelevant to the classification task at all. The main idea behind LRP is to redistribute, for each possible target class separately, the output prediction score (i.e. a scalar value) that causes the classification, back to the input space via a backward propagation procedure that satisfies a layer-wise conservation principle. Thereby each intermediate classifier layer up to the input layer gets allocated relevance values, and the sum of the relevances per layer is equal to the classifier prediction score for the considered class. Denoting by $x_{i,t}\,, x_{j,t}\,, x_{j}\,, x_{k}$ the neurons of the CNN layers presented in the previous section, we associate to each of them respectively a relevance score $R_{i,t}\,, R_{j,t}\,, R_j\,, R_k$ . Accordingly the layer-wise conservation principle can be written as: $${\textstyle \sum _{i,t} R_{i,t} = \sum _{j,t} R_{j,t} = \sum _j R_j = \sum _k R_k}$$ (Eq. 6) where each sum runs over all neurons of a given layer of the network. 
To formalize the redistribution process from one layer to another, we introduce the concept of messages $R_{a \leftarrow b}$ indicating how much relevance circulates from a given neuron $b$ to a neuron $a$ in the next lower-layer. We can then express the relevance of neuron $a$ as a sum of incoming messages using: ${ \textstyle R_a = \sum _{b \in {\text{upper}(a)}} R_{a \leftarrow b}}$ where ${\text{upper}(a)}$ denotes the upper-layer neurons connected to $a$ . To bootstrap the propagation algorithm, we set the top-layer relevance vector to $\forall _k: R_k = x_k \cdot \delta _{kc}$ where $\delta $ is the Kronecker delta function, and $c$ is the target class of interest for which we would like to explain the model prediction in isolation from other classes. In the top fully-connected layer, messages are computed following a weighted redistribution formula: $$R_{j \leftarrow k} = \frac{z_{jk}}{\sum _{j} z_{jk}} R_k$$ (Eq. 7) where we define $z_{jk} = x_j w^{(2)}_{jk} + F^{-1} (b^{(2)}_k + \epsilon \cdot (1_{x_k \ge 0} - 1_{x_k < 0}))$ . This formula redistributes relevance onto lower-layer neurons in proportions to $z_{jk}$ representing the contribution of each neuron to the upper-layer neuron value in the forward propagation, incremented with a small stabilizing term $\epsilon $ that prevents the denominator from nearing zero, and hence avoids too large positive or negative relevance messages. In the limit case where $\epsilon \rightarrow \infty $ , the relevance is redistributed uniformly along the network connections. As a stabilizer value we use $\epsilon = 0.01$ as introduced in BIBREF12 . After computation of the messages according to Equation 7 , the latter can be pooled onto the corresponding neuron by the formula $R_j = \sum _k R_{j \leftarrow k}$ . The relevance scores $R_j$ are then propagated through the max-pooling layer using the formula: $$R_{j,t} = \left\lbrace \begin{array}{ll} R_j & \text{if} \; \; t = \mathrm {arg}\max _{t^{\prime }} \; x_{j,t^{\prime }}\\ 0 & \text{else} \end{array} \right.$$ (Eq. 8) which is a “winner-take-all” redistribution analogous to the rule used during training for backpropagating gradients, i.e. the neuron that had the maximum value in the pool is granted all the relevance from the upper-layer neuron. Finally, for the convolutional layer we use the weighted redistribution formula: $$R_{(i,t-\tau ) \leftarrow (j,t)} = \frac{z_{i, j, \tau }}{ \sum _{i,\tau } z_{i, j, \tau }}$$ (Eq. 9) where $z_{i, j, \tau } = x_{i,t-\tau } w^{(1)}_{i, j, \tau } + (HD)^{-1} (b^{(1)}_j + \epsilon \cdot (1_{x_{j,t} > 0} - 1_{x_{j,t} \le 0}))$ , which is similar to Equation 7 except for the increased notational complexity incurred by the convolutional structure of the layer. Messages can finally be pooled onto the input neurons by computing $R_{i,t} = \sum _{j,\tau } R_{(i,t) \leftarrow (j,t+\tau )}$ . Word Relevance and Vector-Based Document Representation So far, the relevance has been redistributed only onto individual components of the word2vec vector associated to each word, in the form of single input neuron relevances $R_{i,t}$ . To obtain a word-level relevance value, one can pool the relevances over all dimensions of the word2vec vector, that is compute: $$R_t = {\textstyle \sum _i} R_{i,t}$$ (Eq. 11) and use this value to highlight words in a text document, as shown in Figure 1 (right). 
These word-level relevance scores can further be used to condense the semantic information of text documents, by building vectors $d \in \mathbb {R}^D$ representing full documents through linearly combining word2vec vectors: $$\forall _i:~d_i = {\textstyle \sum _t} \; R_{t} \cdot x_{i,t}$$ (Eq. 12) The vector $d$ is a summary that consists of an additive composition of the semantic representation of all relevant words in the document. Note that the resulting document vector lies in the same semantic space as word2vec vectors. A more fined-grained extraction technique does not apply word-level pooling as an intermediate step and extracts only the relevant subspace of each word: $$\forall _i:~d_i = {\textstyle \sum _t} \; R_{i,t} \cdot x_{i,t}$$ (Eq. 13) This last approach is particularly useful to address the problem of word homonymy, and will thus result in even finer semantic extraction from the document. In the remaining we will refer to the semantic extraction defined by Eq. 12 as word-level extraction, and to the one from Eq. 13 as element-wise (ew) extraction. In both cases we call vector $d$ a document summary vector. Baseline Methods In the following we briefly mention methods which will serve as baselines for comparison. Sensitivity Analysis. Sensitivity analysis (SA) BIBREF23 , BIBREF24 , BIBREF19 assigns scores $R_{i,t} = (\partial x_k / \partial x_{i,t})^2$ to input variables representing the steepness of the decision function in the input space. These partial derivatives are straightforward to compute using standard gradient propagation BIBREF33 and are readily available in most neural network implementations. Hereby we note that sensitivity analysis redistributes the quantity $\Vert \nabla x_k\Vert {_2^2}$ , while LRP redistributes $x_k$ . However, the local steepness information is a relatively weak proxy of the actual function value, which is the real quantity of interest when estimating the contribution of input variables w.r.t. to a current classifier's decision. We further note that relevance scores obtained with LRP are signed, while those obtained with SA are positive. BoW/SVM. As a baseline to the CNN model, a bag-of-words linear SVM classifier will be used to predict the document categories. In this model each text document is first mapped to a vector $x$ with dimensionality $V$ the size of the training data vocabulary, where each entry is computed as a term frequency - inverse document frequency (TFIDF) score of the corresponding word. Subsequently these vectors $x$ are normalized to unit euclidean norm. In a second step, using the vector representations $x$ of all documents, $C$ maximum margin separating hyperplanes are learned to separate each of the classes of the classification problem from the other ones. As a result we obtain for each class $c \in C$ a linear prediction score of the form $s_c = w_c^\top x + b_c$ , where $w_c\in \mathbb {R}^{V} $ and $b_c \in \mathbb {R}$ are class specific weights and bias. In order to obtain a LRP decomposition of the prediction score $s_c$ for class $V$0 onto the input variables, we simply compute $V$1 , where $V$2 is the number of non-zero entries of $V$3 . Respectively, the sensitivity analysis redistribution of the prediction score squared gradient reduces to $V$4 . Note that the BoW/SVM model being a linear predictor relying directly on word frequency statistics, it lacks expressive power in comparison to the CNN model which additionally learns intermediate hidden layer representations and convolutional filters. 
Moreover, the CNN model can take advantage of the semantic similarity encoded in the distributed word2vec representations, while for the BoW/SVM model all words are “equidistant” in the bag-of-words semantic space. As our experiments will show, these limitations lead the BoW/SVM model to sometimes identify spurious words as relevant for the classification task. In analogy to the semantic extraction proposed in section "Word Relevance and Vector-Based Document Representation" for the CNN model, we can build vectors $d$ representing documents by leveraging the word relevances obtained with the BoW/SVM model. To this end, we introduce a binary vector $\tilde{x} \in \mathbb {R}^{V} $ whose entries are equal to one when the corresponding word from the vocabulary is present in the document and zero otherwise (i.e. $\tilde{x}$ is a binary bag-of-words representation of the document). Thereafter, we build the document summary vector $d$ component-wise, so that $d$ is just a vector of word relevances: $$\forall _i:~d_i = R_{i} \cdot {\tilde{x}}_{i}$$ (Eq. 15)

Uniform/TFIDF based Document Summary Vector. In place of the word-level relevance $R_t$ resp. $R_i$ in Eq. 12 and Eq. 15, we can use a uniform weighting. This corresponds to building the document vector $d$ as an average of word2vec word embeddings in the first case, and to taking a binary bag-of-words vector as the document representation $d$ in the second case. Moreover, we can replace $R_t$ in Eq. 12 by an inverse document frequency (IDF) score, and $R_i$ in Eq. 15 by a TFIDF score. Both correspond to TFIDF weighting of either word2vec vectors or of one-hot vectors representing words.

Quality of Word Relevances and Model Explanatory Power
In this section we describe how to evaluate and compare the outcomes of algorithms which assign relevance scores to words (such as LRP or SA) through intrinsic validation. Furthermore, we propose a measure of model explanatory power based on an extrinsic validation procedure. The latter will be used to analyze and compare the relevance decompositions or explanations obtained with the neural network and the BoW/SVM classifier. Both types of evaluations will be carried out in section "Results".

Measuring the Quality of Word Relevances through Intrinsic Validation
An evaluation of how well a method identifies relevant words in text documents can be performed qualitatively, e.g. at the document level, by inspecting the heatmap visualization of a document, or by reviewing the list of the most (or of the least) relevant words per document. A similar analysis can also be conducted at the dataset level, e.g. by compiling the list of the most relevant words for one category across all documents. The latter allows one to identify words that are representative of a document category, and possibly to detect dataset biases or classifier-specific drawbacks. However, in order to quantitatively compare algorithms such as LRP and SA regarding the identification of relevant words, we need an objective measure of the quality of the explanations delivered by relevance decomposition methods. To this end we adopt an idea from BIBREF14: A word $w$ is considered highly relevant for the classification $f(x)$ of the document $x$ if removing it and classifying the modified document $\tilde{x}$ results in a strong decrease of the classification score $f(\tilde{x})$. This idea can be extended by sequentially deleting words from the most relevant to the least relevant or the other way round.
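A minimal sketch of this deletion-based validation might look as follows. Here `predict_score` stands in for the trained classifier's prediction function $f$ and `word_relevance` for the word-level relevances (LRP or SA); following the experimental setup used later for the CNN, deleting a word is simulated by zeroing out its embedding column. Names and default values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def deletion_curve(x, predict_score, word_relevance, n_delete=50,
                   most_relevant_first=True):
    """Track the prediction score f(x~) while deleting words in order of relevance.
    x: (D, L) word2vec document matrix; predict_score: callable returning the
    target-class score f(x); word_relevance: (L,) word-level relevances (LRP or SA).
    Deleting a word is simulated by zeroing its embedding column."""
    order = np.argsort(word_relevance)           # least relevant first
    if most_relevant_first:
        order = order[::-1]
    x_mod = x.copy()
    scores = [predict_score(x_mod)]
    for t in order[:n_delete]:
        x_mod[:, t] = 0.0                        # "remove" the word at position t
        scores.append(predict_score(x_mod))
    return np.asarray(scores)                    # scores vs. number of deleted words
```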
The result is a graph of the prediction scores $f(\tilde{x})$ as a function of the number of deleted words. In our experiments, we employ this approach to track the changes in classification performance when successively deleting words according to their relevance value. By comparing the relative impact on the classification performance induced by different relevance decomposition methods, we can estimate how appropriate these methods are at identifying words that are really important for the classification task at hand. The above-described procedure constitutes an intrinsic validation, as it does not rely on an external classifier.

Measuring Model Explanatory Power through Extrinsic Validation
Although intrinsic validation can be used to compare relevance decomposition methods for a given ML model, this approach is not suited to comparing the explanatory power of different ML models, since the latter requires a common evaluation basis. Furthermore, even if we were to track the classification performance changes induced by different ML models using an external classifier, it would not necessarily increase comparability, because removing words from a document may affect different classifiers very differently, so that their graphs $f(\tilde{x})$ are not comparable. Therefore, we propose a novel measure of model explanatory power which does not depend on a classification performance change, but only on the word relevances. Hereby we consider ML model A as being more explainable than ML model B if its word relevances are more “semantic extractive”, i.e. more helpful for solving a semantics-related task such as the classification of document summary vectors. More precisely, in order to quantify the ML model explanatory power we undertake the following steps:

(1) Compute document summary vectors for all test set documents, using Eq. 12 or 13 for the CNN and Eq. 15 for the BoW/SVM model. Hereby use the ML model's predicted class as target class for the relevance decomposition (i.e. the summary vector generation is unsupervised).

(2) Normalize the document summary vectors to unit euclidean norm, and perform a K-nearest-neighbors (KNN) classification of half of these vectors, using the other half of summary vectors as neighbors (hereby use standard KNN classification, i.e. nearest neighbors are identified by euclidean distance and neighbor votes are weighted uniformly). Use different hyperparameters $K$.

(3) Repeat step (2) over 10 random data splits, and average the KNN classification accuracies for each $K$. Finally, report the maximum (over different $K$) KNN accuracy as the explanatory power index (EPI). The higher this value, the more explanatory power the ML model and the corresponding document summary vectors have.

In a nutshell, our EPI metric of explanatory power of a given ML model “$f$”, combined with a relevance map “$R$”, can informally be summarized as: $$d(x) &= {\textstyle \sum _t} \; [R (f (x)) \odot x]_t \nonumber \\[2mm] {\text{EPI}}(f,R) \; &= \; \max _{K} \; \; \texttt {KNN\_accuracy} \Big (\lbrace d(x^{(1)}),\dots ,d(x^{(N)})\rbrace ,K\Big )$$ (Eq. 18) where $d(x)$ is the document summary vector for input document $x$, and subscript $t$ denotes the words in the document. Thereby the sum $\sum _t$ and element-wise multiplication $\odot $ operations stand for the weighted combination specified explicitly in Eq. 12-15. The KNN accuracy is estimated over all test set document summary vectors indexed from 1 to $N$, and $K$ is the number of neighbors.
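The EPI computation can be sketched compactly with scikit-learn; the particular grid of $K$ values, the random seed and the equal half/half splitting below are illustrative assumptions rather than the exact experimental settings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def explanatory_power_index(D, y, ks=(1, 3, 5, 10, 20), n_splits=10, seed=0):
    """EPI: maximum over K of the mean KNN accuracy on document summary vectors.
    D: (N, dim) summary vectors (computed with the model's predicted class);
    y: (N,) true document categories, used only to score the KNN classifier."""
    D = D / np.maximum(np.linalg.norm(D, axis=1, keepdims=True), 1e-12)  # unit norm
    rng = np.random.RandomState(seed)
    acc = np.zeros((n_splits, len(ks)))
    for s in range(n_splits):
        perm = rng.permutation(len(D))
        half = len(D) // 2
        test, neigh = perm[:half], perm[half:]    # half classified, half as neighbors
        for i, k in enumerate(ks):
            knn = KNeighborsClassifier(n_neighbors=k)   # euclidean, uniform votes
            knn.fit(D[neigh], y[neigh])
            acc[s, i] = knn.score(D[test], y[test])
    return acc.mean(axis=0).max()                 # max over K of the mean accuracy
```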
In the proposed evaluation procedure, the use of KNN as a common external classifier enables us to unbiasedly and objectively compare different ML models, in terms of the density and local neighborhood structure of the semantic information extracted via the summary vectors in input feature space. Indeed we recall that summary vectors constructed via Eq. 12 and 13 lie in the same semantic space as word2vec embeddings, and that summary vectors obtained via Eq. 15 live in the bag-of-words space. Results This section summarizes our experimental results. We first describe the dataset, experimental setup, training procedure and classification accuracy of our ML models. We will consider four ML models: three CNNs with different filter sizes and a BoW/SVM classifier. Then, we demonstrate that LRP can be used to identify relevant words in text documents. We compare heatmaps for the best performing CNN model and the BoW/SVM classifier, and report the most representative words for three exemplary document categories. These results demonstrate qualitatively that the CNN model produces better explanations than the BoW/SVM classifier. After that we move to the evaluation of the document summary vectors, where we show that a 2D PCA projection of the document vectors computed from the LRP scores groups documents according to their topics (without requiring the true labels). Since worse results are obtained when using the SA scores or the uniform or TFIDF weighting, this indicates that the explanations produced by LRP are semantically more meaningful than the latter. Finally, we confirm quantitatively the observations made before, namely that (1) the LRP decomposition method provides better explanations than SA and that (2) the CNN model outperforms the BoW/SVM classifier in terms of explanatory power. Experimental Setup For our experiments we consider a topic categorization task, and employ the freely available 20Newsgroups dataset consisting of newsgroup posts evenly distributed among twenty fine-grained categories. More precisely we use the 20news-bydate version, which is already partitioned into 11314 training and 7532 test documents corresponding to different periods in time. As a first preprocessing step, we remove the headers from the documents (by splitting at the first blank line) and tokenize the text with NLTK. Then, we filter the tokenized data by retaining only tokens composed of the following four types of characters: alphabetic, hyphen, dot and apostrophe, and containing at least one alphabetic character. Hereby we aim to remove punctuation, numbers or dates, while keeping abbreviations and compound words. We do not apply any further preprocessing, as for instance stop-word removal or stemming, except for the SVM classifier where we additionally perform lowercasing, as this is a common setup for bag-of-words models. We truncate the resulting sequence of tokens to a chosen fixed length of 400 in order to simplify neural network training (in practice our CNN can process any arbitrary sized document). Lastly, we build the neural network input by horizontally concatenating pre-trained word embeddings, according to the sequence of tokens appearing in the preprocessed document. In particular, we take the 300-dimensional freely available word2vec embeddings BIBREF4 . Out-of-vocabulary words are simply initialized to zero vectors. As input normalization, we subtract the mean and divide by the standard deviation obtained over the flattened training data. 
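A rough sketch of this preprocessing pipeline is given below. The tokenizer choice (`nltk.word_tokenize`), the regular expression implementing the character filter, the blank-line split used to drop headers and the dict-like embedding lookup are all our own assumptions about the described setup; the global mean/standard-deviation input normalization is omitted for brevity.

```python
import re
import numpy as np
from nltk.tokenize import word_tokenize

TOKEN_RE = re.compile(r"^[A-Za-z.'-]+$")         # alphabetic, hyphen, dot, apostrophe

def preprocess(text, embeddings, dim=300, max_len=400):
    """Tokenize, filter and truncate a newsgroup post, then build the (dim, max_len)
    CNN input by concatenating word2vec vectors (unknown words stay zero vectors)."""
    body = text.split("\n\n", 1)[-1]             # drop the header block
    tokens = [t for t in word_tokenize(body)
              if TOKEN_RE.match(t) and re.search(r"[A-Za-z]", t)]
    tokens = tokens[:max_len]
    X = np.zeros((dim, max_len))
    for i, tok in enumerate(tokens):
        if tok in embeddings:                    # dict-like word2vec lookup
            X[:, i] = embeddings[tok]
    return X, tokens
```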
We train the neural network by minimizing the cross-entropy loss via mini-batch stochastic gradient descent using $l_2$ -norm and dropout as regularization. We tune the ML model hyperparameters by 10-fold cross-validation in case of the SVM, and by employing 1000 random documents as fixed validation set for the CNN model. However, for the CNN hyperparameters we did not perform an extensive grid search and stopped the tuning once we obtained models with reasonable classification performance for the purpose of our experiments. Table 1 summarizes the performance of our trained models. Herein CNN1, CNN2, CNN3 respectively denote neural networks with convolutional filter size $H$ equal to 1, 2 and 3 (i.e. covering 1, 2 or 3 consecutive words in the document). One can see that the linear SVM performs on par with the neural networks, i.e. the non-linear structure of the CNN models does not yield a considerable advantage toward classification accuracy. Similar results have also been reported in previous studies BIBREF34 , where it was observed that for document classification a convolutional neural network model starts to outperform a TFIDF-based linear classifier only on datasets in the order of millions of documents. This can be explained by the fact that for most topic categorization tasks, the different categories can be separated linearly in the very high-dimensional bag-of-words or bag-of-N-grams space thanks to sufficiently disjoint sets of features. Identifying Relevant Words Figure 2 compiles the resulting LRP heatmaps we obtain on an exemplary sci.space test document that is correctly classified by the SVM and the best performing neural network model CNN2. Note that for the SVM model the relevance values are computed per bag-of-words feature, i.e., same words will have same relevance irrespectively of their context in the document, whereas for the CNN classifier we visualize one relevance value per word position. Hereby we consider as target class for the LRP decomposition the classes sci.space and sci.med. We can observe that the SVM model considers insignificant words like the, is, of as very relevant (either negatively or positively) for the target class sci.med, and at the same time mistakenly estimates words like sickness, mental or distress as negatively contributing to this class (indicated by blue coloring), while on the other hand the CNN2 heatmap is consistently more sparse and concentrated on semantically meaningful words. This sparsity property can be attributed to the max-pooling non-linearity which for each feature map in the neural network selects the first most relevant feature that occurs in the document. As can be seen, it significantly simplifies the interpretability of the results by a human. Another disadvantage of the SVM model is that it relies entirely on local and global word statistics, thus can only assign relevances proportionally to the TFIDF BoW features (plus a class-dependent bias term), while the neural network model benefits from the knowledge encoded in the word2vec embeddings. For instance, the word weightlessness is not highlighted by the SVM model for the target class sci.space, because this word does not occur in the training data and thus is simply ignored by the SVM classifier. The neural network however is able to detect and attribute relevance to unseen words thanks to the semantical information encoded in the pre-trained word2vec embeddings. 
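Heatmaps such as those in Figure 2 can be produced by mapping each word-level relevance to a color intensity. The helper below is a purely illustrative stand-in (red for positive, blue for negative relevance, opacity proportional to magnitude); it is not the visualization code used for the figures.

```python
import html

def heatmap_html(tokens, relevances):
    """Render word-level relevances as colored HTML: red for positive, blue for
    negative relevance, with opacity proportional to the relevance magnitude."""
    scale = max(abs(r) for r in relevances) or 1.0
    spans = []
    for tok, r in zip(tokens, relevances):
        rgb = (255, 0, 0) if r > 0 else (0, 0, 255)
        spans.append('<span style="background-color: rgba(%d,%d,%d,%.2f)">%s</span>'
                     % (*rgb, abs(r) / scale, html.escape(tok)))
    return " ".join(spans)
```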
As a dataset-wide analysis, we determine the words identified through LRP as constituting class representatives. For that purpose we set one class as target class for the relevance decomposition, and conduct LRP over all test set documents (i.e. irrespectively of the true or ML model's predicted class). Subsequently, we sort all the words appearing in the test data in decreasing order of the obtained word-level relevance values, and retrieve the thirty most relevant ones. The result is a list of words identified via LRP as being highly supportive for a classifier decision toward the considered class. Figures 2 and 2 list the most relevant words for different LRP target classes, as well as the corresponding word-level relevance values for the CNN2 and the SVM model. Through underlining we indicate words that do not occur in the training data. Interestingly, we observe that some of the most “class-characteristical” words identified via the neural network model correspond to words that do not even appear in the training data. In contrast, such words are simply ignored by the SVM model as they do not occur in the bag-of-words vocabulary. Similarly to the previous heatmap visualizations, the class-specific analysis reveals that the SVM classifier occasionally assigns high relevances to semantically insignificant words like for example the pronoun she for the target class sci.med (20th position in left column of Fig. 2 ), or to the names pat, henry, nicho for the target the class sci.space (resp. 7, 13, 20th position in middle column of Fig. 2 ). In the former case the high relevance is due to a high term frequency of the word (indeed the word she achieves its highest term frequency in one sci.med test document where it occurs 18 times), whereas in the latter case this can be explained by a high inverse document frequency or by a class-biased occurrence of the corresponding word in the training data (pat appears within 16 different training document categories but 54.1% of its occurrences are within the category sci.space alone, 79.1% of the 201 occurrences of henry appear among sci.space training documents, and nicho appears exclusively in nine sci.space training documents). On the contrary, the neural network model seems less affected by word counts regularities and systematically attributes the highest relevances to words semantically related to the considered target class. These results demonstrate that, subjectively, the neural network is better suited to identify relevant words in text documents than the BoW/SVM model. Document Summary Vectors The word2vec embeddings are known to exhibit linear regularities representing semantical relationships between words BIBREF29 , BIBREF4 . We explore whether these regularities can be transferred to a new document representation, which we denoted as document summary vector, when building this vector as a weighted combination of word2vec embeddings (see Eq. 12 and Eq. 13 ) or as a combination of one-hot word vectors (see Eq. 15 ). We compare the weighting scheme based on the LRP relevances to the following baselines: SA relevance, TFIDF and uniform weighting (see section "Baseline Methods" ). The two-dimensional PCA projection of the summary vectors obtained via the CNN2 resp. the SVM model, as well as the corresponding TFIDF/uniform weighting baselines are shown in Figure 3 . 
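For reference, the different weighting schemes and the projection can be sketched as follows; `summary_vector` covers the uniform, word-level (Eq. 12) and element-wise (Eq. 13) cases depending on the relevance argument passed in (LRP, SA or IDF/TFIDF scores), and the unit normalization before PCA is an assumption on our part.

```python
import numpy as np
from sklearn.decomposition import PCA

def summary_vector(X, R=None, elementwise=False):
    """Document summary vector from a (D, L) word2vec document matrix X.
    R=None: uniform weighting (average of embeddings); R of shape (L,): word-level
    weighting as in Eq. 12 (LRP, SA or IDF scores); R of shape (D, L) together with
    elementwise=True: element-wise extraction as in Eq. 13."""
    if R is None:
        return X.mean(axis=1)
    if elementwise:
        return (R * X).sum(axis=1)
    return X @ R

def project_2d(doc_vectors):
    """Two-dimensional PCA projection of unit-normalized summary vectors."""
    V = np.asarray(doc_vectors, dtype=float)
    V = V / np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1e-12)
    return PCA(n_components=2).fit_transform(V)
```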
In these visualizations we group the 20Newsgroups test documents into six top-level categories (the grouping is performed according to the dataset website), and we color each document according to its true category (note however that, as mentioned earlier, the relevance decomposition is always performed in an unsupervised way, i.e., with the ML model's predicted class). For the CNN2 model, we observe that the two-dimensional PCA projection reveals a clear-cut clustered structure when using the element-wise LRP weighting for semantic extraction, while no such regularity is observed with uniform or TFIDF weighting. The word-level LRP or SA weightings, as well as the element-wise SA weighting, also present a form of bundled layout, but not as dense and well-separated as in the case of element-wise LRP. For the SVM model, the two-dimensional visualization of the summary vectors partly exhibits a cross-shaped layout for LRP and SA weighting, while again no particular structure is observed for TFIDF or uniform semantic extraction. This analysis confirms the observations made in the last section, namely that the neural network outperforms the BoW/SVM classifier in terms of explainability. Figure 3 furthermore suggests that LRP provides a semantically more meaningful extraction than the baseline methods. In the next section we will confirm these observations quantitatively.

Quantitative Evaluation
In order to quantitatively validate the hypothesis that LRP is able to identify words that either support or inhibit a specific classifier decision, we conduct several word-deleting experiments on the CNN models using LRP scores as relevance indicator. More specifically, in accordance with the word-level relevances we delete a sequence of words from each document, re-classify the documents with “missing words”, and report the classification accuracy as a function of the number of deleted words. Hereby the word-level relevances are computed on the original documents (with no words deleted). For the deleting experiments, we consider only 20Newsgroups test documents that have a length greater than or equal to 100 tokens (after preprocessing); this amounts to 4963 test documents, from which we delete up to 50 words. For deleting a word we simply set the corresponding word embedding to zero in the CNN input. Moreover, in order to assess the pertinence of the LRP decomposition method as opposed to alternative relevance models, we additionally perform word deletions according to SA word relevances, as well as random deletion. In the latter case we sample a random sequence of 50 words per document, and delete the corresponding words successively from each document. We repeat the random sampling 10 times, and report the average results (the standard deviation of the accuracy is less than 0.0141 in all our experiments). We additionally perform a biased random deletion, where we sample only among words contained in the word2vec vocabulary (this way we avoid deleting words that were already initialized as zero vectors because they are out of the word2vec vocabulary; however, as our results show, this biased deletion is almost equivalent to strict random selection). As a first deletion experiment, we start with the subset of test documents that are initially correctly classified by the CNN models, and successively delete words in decreasing order of their LRP/SA word-level relevance.
In this first deletion experiment, the LRP/SA relevances are computed with the true document class as target class for the relevance decomposition. In a second experiment, we perform the opposite evaluation. Here we start with the subset of initially falsely classified documents, and successively delete words in increasing order of their relevance, likewise considering the true document class as target class for the relevance computation. In the third experiment, we start again with the set of initially falsely classified documents, but now delete words in decreasing order of their relevance, considering the classifier's initially predicted class as target class for the relevance decomposition. Figure 4 summarizes the resulting accuracies when deleting words resp. from the CNN1, CNN2 and CNN3 input documents (each row in the figure corresponds to one of the three deletion experiments). Note that we do not report results for the BoW/SVM model, as our focus here is the comparison between LRP and SA and not between different ML models. Through successive deletion of either “positive-relevant” words in decreasing order of their LRP relevance, or of “negative-relevant” words in increasing order of their LRP relevance, we confirm that both extremal LRP relevance values capture pertinent information with respect to the classification problem. Indeed, in all deletion experiments we observe the most pronounced decrease resp. increase of the classification accuracy when using LRP as the relevance model. We additionally note that SA, in contrast to LRP, is largely unable to provide suitable information linking to words that speak against a specific classification decision. Instead it appears that the lowest SA relevances (which mainly correspond to zero-valued relevances) are more likely to identify words that have no impact on the classifier decision at all, as this deletion scheme has even less impact on the classification performance than random deletion when deleting words in increasing order of their relevance, as shown by the second deletion experiment. When comparing the different CNN models, we observe that the CNN2 and CNN3 models, as opposed to CNN1, produce a steeper decrease of the classification performance when deleting the most relevant words from the initially correctly classified documents, both when considering LRP and SA as relevance model, as shown by the first deletion experiment. This indicates that the networks with greater filter sizes are more sensitive to single word deletions, most likely because during these deletions the meaning of the surrounding words becomes less obvious to the classifier. This also provides some weak evidence that, while CNN2 and CNN3 behave similarly (which suggests that a convolutional filter size of two is already enough for the considered classification problem), the learned filters in CNN2 and CNN3 focus not only on isolated words but additionally consider bigrams or trigrams of words, as their results differ considerably from the CNN1 model in the first deletion experiment. In order to quantitatively evaluate and compare the ML models in combination with a relevance decomposition or explanation technique, we apply the evaluation method described in section "Measuring Model Explanatory Power through Extrinsic Validation". That is, we compute the accuracy of an external classifier (here KNN) on the classification of document summary vectors (obtained with the ML model's predicted class).
For these experiments we remove test documents which are empty or contain only one word after preprocessing (this amounts to removing 25 documents from the 20Newsgroups test set). The maximum KNN mean accuracy obtained when varying the number of neighbors $K$ (corresponding to our EPI metric of explanatory power) is reported for several models and explanation techniques in Table 2. When pairwise comparing the best CNN based weighting schemes with the corresponding TFIDF baseline result from Table 2, we find that all LRP element-wise weighted combinations of word2vec vectors are statistically significantly better than the TFIDF weighting of word embeddings at a significance level of 0.05 (using a corrected resampled t-test BIBREF35). Similarly, in the bag-of-words space, the LRP combination of one-hot word vectors is significantly better than the corresponding TFIDF document representation at a significance level of 0.05. Lastly, the best CNN2 explanatory power index is significantly higher than the best SVM-based explanation at a significance level of 0.10. In Figure 5 we plot the mean accuracy of KNN (averaged over ten random test data splits) as a function of the number of neighbors $K$, for the CNN2 resp. the SVM model, as well as the corresponding TFIDF/uniform weighting baselines (for CNN1 and CNN3 we obtained a similar layout as for CNN2). One can further see from Figure 5 that (1) (element-wise) LRP provides consistently better semantic extraction than all baseline methods and that (2) the CNN2 model has a higher explanatory power than the BoW/SVM classifier since it produces semantically more meaningful summary vectors for KNN classification. Altogether, the good performance, both qualitative and quantitative, of the element-wise combination of word2vec embeddings according to the LRP relevance illustrates the usefulness of LRP for extracting a new vector-based document representation that presents semantic neighborhood regularities in feature space, and suggests further potential applications of relevance information, e.g. for aggregating word representations into sub-document representations like phrases, sentences or paragraphs.

Conclusion
We have demonstrated qualitatively and quantitatively that LRP constitutes a useful tool, both for fine-grained analysis at the document level and for dataset-wide introspection across documents, to identify words that are important to a classifier's decision. This knowledge makes it possible to broaden the scope of applications of standard machine learning classifiers like support vector machines or neural networks, by extending the primary classification result with additional information linking the classifier's decision back to components of the input, in our case words in a document. Furthermore, based on LRP relevance, we have introduced a new way of condensing the semantic information contained in word embeddings (such as word2vec) into a document vector representation that can be used for nearest neighbors classification, and that leads to better performance than standard TFIDF weighting of word embeddings. The resulting document vector is the basis of the new measure of model explanatory power proposed in this work, and its semantic properties could moreover find applications in various visualization and search tasks, where the document similarity is expressed as a dot product between vectors.
Our work is a first step toward applying the LRP decomposition to the NLP domain, and we expect this technique to also be suitable for various types of applications that are based on other neural network architectures, such as character-based or recurrent network classifiers, or on other types of classification problems (e.g. sentiment analysis). More generally, LRP could contribute to the design of more accurate and efficient classifiers, not only by inspecting and leveraging the input space relevances, but also through the analysis of intermediate relevance values at classifier “hidden” layers.

Acknowledgments
This work was supported by the German Ministry for Education and Research as Berlin Big Data Center BBDC (funding mark 01IS14013A) and by the DFG. KRM acknowledges partial funding by the National Research Foundation of Korea, funded by the Ministry of Education, Science, and Technology in the BK21 program. Correspondence should be addressed to KRM and WS.

Contributions
Conceived the theoretical framework: LA, GM, KRM, WS. Conceived and designed the experiments: LA, FH, GM, KRM, WS. Performed the experiments: LA. Wrote the manuscript: LA, FH, GM, KRM, WS. Revised the manuscript: LA, FH, GM, KRM, WS. Figure design: LA, GM, WS. Final drafting: all equally.
To formalize the redistribution process from one layer to another, we introduce the concept of messages $R_{a \leftarrow b}$ indicating how much relevance circulates from a given neuron $b$ to a neuron $a$ in the next lower-layer. We can then express the relevance of neuron $a$ as a sum of incoming messages using: ${ \textstyle R_a = \sum _{b \in {\text{upper}(a)}} R_{a \leftarrow b}}$ where ${\text{upper}(a)}$ denotes the upper-layer neurons connected to $a$ . To bootstrap the propagation algorithm, we set the top-layer relevance vector to $\forall _k: R_k = x_k \cdot \delta _{kc}$ where $\delta $ is the Kronecker delta function, and $c$ is the target class of interest for which we would like to explain the model prediction in isolation from other classes. In the top fully-connected layer, messages are computed following a weighted redistribution formula: $$R_{j \leftarrow k} = \frac{z_{jk}}{\sum _{j} z_{jk}} R_k$$ (Eq. 7) where we define $z_{jk} = x_j w^{(2)}_{jk} + F^{-1} (b^{(2)}_k + \epsilon \cdot (1_{x_k \ge 0} - 1_{x_k < 0}))$ . This formula redistributes relevance onto lower-layer neurons in proportions to $z_{jk}$ representing the contribution of each neuron to the upper-layer neuron value in the forward propagation, incremented with a small stabilizing term $\epsilon $ that prevents the denominator from nearing zero, and hence avoids too large positive or negative relevance messages. In the limit case where $\epsilon \rightarrow \infty $ , the relevance is redistributed uniformly along the network connections. As a stabilizer value we use $\epsilon = 0.01$ as introduced in BIBREF12 . After computation of the messages according to Equation 7 , the latter can be pooled onto the corresponding neuron by the formula $R_j = \sum _k R_{j \leftarrow k}$ . The relevance scores $R_j$ are then propagated through the max-pooling layer using the formula: $$R_{j,t} = \left\lbrace \begin{array}{ll} R_j & \text{if} \; \; t = \mathrm {arg}\max _{t^{\prime }} \; x_{j,t^{\prime }}\\ 0 & \text{else} \end{array} \right.$$ (Eq. 8) which is a “winner-take-all” redistribution analogous to the rule used during training for backpropagating gradients, i.e. the neuron that had the maximum value in the pool is granted all the relevance from the upper-layer neuron. Finally, for the convolutional layer we use the weighted redistribution formula: $$R_{(i,t-\tau ) \leftarrow (j,t)} = \frac{z_{i, j, \tau }}{ \sum _{i,\tau } z_{i, j, \tau }}$$ (Eq. 9) where $z_{i, j, \tau } = x_{i,t-\tau } w^{(1)}_{i, j, \tau } + (HD)^{-1} (b^{(1)}_j + \epsilon \cdot (1_{x_{j,t} > 0} - 1_{x_{j,t} \le 0}))$ , which is similar to Equation 7 except for the increased notational complexity incurred by the convolutional structure of the layer. Messages can finally be pooled onto the input neurons by computing $R_{i,t} = \sum _{j,\tau } R_{(i,t) \leftarrow (j,t+\tau )}$ . Word Relevance and Vector-Based Document Representation So far, the relevance has been redistributed only onto individual components of the word2vec vector associated to each word, in the form of single input neuron relevances $R_{i,t}$ . To obtain a word-level relevance value, one can pool the relevances over all dimensions of the word2vec vector, that is compute: $$R_t = {\textstyle \sum _i} R_{i,t}$$ (Eq. 11) and use this value to highlight words in a text document, as shown in Figure 1 (right). 
These word-level relevance scores can further be used to condense the semantic information of text documents, by building vectors $d \in \mathbb {R}^D$ representing full documents through linearly combining word2vec vectors: $$\forall _i:~d_i = {\textstyle \sum _t} \; R_{t} \cdot x_{i,t}$$ (Eq. 12) The vector $d$ is a summary that consists of an additive composition of the semantic representation of all relevant words in the document. Note that the resulting document vector lies in the same semantic space as word2vec vectors. A more fined-grained extraction technique does not apply word-level pooling as an intermediate step and extracts only the relevant subspace of each word: $$\forall _i:~d_i = {\textstyle \sum _t} \; R_{i,t} \cdot x_{i,t}$$ (Eq. 13) This last approach is particularly useful to address the problem of word homonymy, and will thus result in even finer semantic extraction from the document. In the remaining we will refer to the semantic extraction defined by Eq. 12 as word-level extraction, and to the one from Eq. 13 as element-wise (ew) extraction. In both cases we call vector $d$ a document summary vector. Baseline Methods In the following we briefly mention methods which will serve as baselines for comparison. Sensitivity Analysis. Sensitivity analysis (SA) BIBREF23 , BIBREF24 , BIBREF19 assigns scores $R_{i,t} = (\partial x_k / \partial x_{i,t})^2$ to input variables representing the steepness of the decision function in the input space. These partial derivatives are straightforward to compute using standard gradient propagation BIBREF33 and are readily available in most neural network implementations. Hereby we note that sensitivity analysis redistributes the quantity $\Vert \nabla x_k\Vert {_2^2}$ , while LRP redistributes $x_k$ . However, the local steepness information is a relatively weak proxy of the actual function value, which is the real quantity of interest when estimating the contribution of input variables w.r.t. to a current classifier's decision. We further note that relevance scores obtained with LRP are signed, while those obtained with SA are positive. BoW/SVM. As a baseline to the CNN model, a bag-of-words linear SVM classifier will be used to predict the document categories. In this model each text document is first mapped to a vector $x$ with dimensionality $V$ the size of the training data vocabulary, where each entry is computed as a term frequency - inverse document frequency (TFIDF) score of the corresponding word. Subsequently these vectors $x$ are normalized to unit euclidean norm. In a second step, using the vector representations $x$ of all documents, $C$ maximum margin separating hyperplanes are learned to separate each of the classes of the classification problem from the other ones. As a result we obtain for each class $c \in C$ a linear prediction score of the form $s_c = w_c^\top x + b_c$ , where $w_c\in \mathbb {R}^{V} $ and $b_c \in \mathbb {R}$ are class specific weights and bias. In order to obtain a LRP decomposition of the prediction score $s_c$ for class $V$0 onto the input variables, we simply compute $V$1 , where $V$2 is the number of non-zero entries of $V$3 . Respectively, the sensitivity analysis redistribution of the prediction score squared gradient reduces to $V$4 . Note that the BoW/SVM model being a linear predictor relying directly on word frequency statistics, it lacks expressive power in comparison to the CNN model which additionally learns intermediate hidden layer representations and convolutional filters. 
Moreover the CNN model can take advantage of the semantic similarity encoded in the distributed word2vec representations, while for the BoW/SVM model all words are “equidistant” in the bag-of-words semantic space. As our experiments will show, these limitations lead the BoW/SVM model to sometimes identify spurious words as relevant for the classification task. In analogy to the semantic extraction proposed in section "Word Relevance and Vector-Based Document Representation" for the CNN model, we can build vectors $d$ representing documents by leveraging the word relevances obtained with the BoW/SVM model. To this end, we introduce a binary vector $\tilde{x} \in \mathbb {R}^{V} $ whose entries are equal to one when the corresponding word from the vocabulary is present in the document and zero otherwise (i.e. $\tilde{x}$ is a binary bag-of-words representation of the document). Thereafter, we build the document summary vector $d$ component-wise, so that $d$ is just a vector of word relevances: $$\forall _i:~d_i = R_{i} \cdot {\tilde{x}}_{i}$$ (Eq. 15) Uniform/TFIDF based Document Summary Vector. In place of the word-level relevance $R_t$ resp. $R_i$ in Eq. 12 and Eq. 15 , we can use a uniform weighting. This corresponds to build the document vector $d$ as an average of word2vec word embeddings in the first case, and to take as a document representation $d$ a binary bag-of-words vector in the second case. Moreover, we can replace $R_t$ in Eq. 12 by an inverse document frequency (IDF) score, and $R_i$ in Eq. 15 by a TFIDF score. Both correspond to TFIDF weighting of either word2vec vectors, or of one-hot vectors representing words. Quality of Word Relevances and Model Explanatory Power In this section we describe how to evaluate and compare the outcomes of algorithms which assign relevance scores to words (such as LRP or SA) through intrinsic validation. Furthermore, we propose a measure of model explanatory power based on an extrinsic validation procedure. The latter will be used to analyze and compare the relevance decompositions or explanations obtained with the neural network and the BoW/SVM classifier. Both types of evaluations will be carried out in section "Results" . Measuring the Quality of Word Relevances through Intrinsic Validation An evaluation of how good a method identifies relevant words in text documents can be performed qualitatively, e.g. at the document level, by inspecting the heatmap visualization of a document, or by reviewing the list of the most (or of the least) relevant words per document. A similar analysis can also be conducted at the dataset level, e.g. by compiling the list of the most relevant words for one category across all documents. The latter allows one to identify words that are representatives for a document category, and eventually to detect potential dataset biases or classifier specific drawbacks. However, in order to quantitatively compare algorithms such as LRP and SA regarding the identification of relevant words, we need an objective measure of the quality of the explanations delivered by relevance decomposition methods. To this end we adopt an idea from BIBREF14 : A word $w$ is considered highly relevant for the classification $f(x)$ of the document $x$ if removing it and classifying the modified document $\tilde{x}$ results in a strong decrease of the classification score $f(\tilde{x})$ . This idea can be extended by sequentially deleting words from the most relevant to the least relevant or the other way round. 
The result is a graph of the prediction scores $f(\tilde{x})$ as a function of the number of deleted words. In our experiments, we employ this approach to track the changes in classification performance when successively deleting words according to their relevance value. By comparing the relative impact on the classification performance induced by different relevance decomposition methods, we can estimate how appropriate these methods are at identifying words that are really important for the classification task at hand. The above described procedure constitutes an intrinsic validation, as it does not rely on an external classifier. Measuring Model Explanatory Power through Extrinsic Validation Although intrinsic validation can be used to compare relevance decomposition methods for a given ML model, this approach is not suited to compare the explanatory power of different ML models, since the latter requires a common evaluation basis. Furthermore, even if we would track the classification performance changes induced by different ML models using an external classifier, it would not necessarily increase comparability, because removing words from a document may affect different classifiers very differently, so that their graphs $f(\tilde{x})$ are not comparable. Therefore, we propose a novel measure of model explanatory power which does not depend on a classification performance change, but only on the word relevances. Hereby we consider ML model A as being more explainable than ML model B if its word relevances are more “semantic extractive”, i.e. more helpful for solving a semantic related task such as the classification of document summary vectors. More precisely, in order to quantify the ML model explanatory power we undertake the following steps: px (1) Compute document summary vectors for all test set documents using Eq. 12 or 13 for the CNN and Eq. 15 for the BoW/SVM model. Hereby use the ML model's predicted class as target class for the relevance decomposition (i.e. the summary vector generation is unsupervised). px (2) Normalize the document summary vectors to unit euclidean norm, and perform a K-nearest-neighbors (KNN) classification of half of these vectors, using the other half of summary vectors as neighbors (hereby use standard KNN classification, i.e. nearest neighbors are identified by euclidean distance and neighbor votes are weighted uniformly). Use different hyperparameters $K$ . px (3) Repeat step (2) over 10 random data splits, and average the KNN classification accuracies for each $K$ . Finally, report the maximum (over different $K$ ) KNN accuracy as explanatory power index (EPI). The higher this value, the more explanatory power the ML model and the corresponding document summary vectors, will have. In a nutshell, our EPI metric of explanatory power of a given ML model “ $f$ ”, combined with a relevance map “ $R$ ”, can informally be summarized as: $$d(x) &= {\textstyle \sum _t} \; [R (f (x)) \odot x]_t \nonumber \\[2mm] {\text{EPI}}(f,R) \; &= \; \max _{K} \; \; \texttt {KNN\_accuracy} \Big (\lbrace d(x^{(1)}),\dots ,d(x^{(N)})\rbrace ,K\Big )$$ (Eq. 18) where $d(x)$ is the document summary vector for input document $x$ , and subscript $t$ denotes the words in the document. Thereby the sum $\sum _t$ and element-wise multiplication $\odot $ operations stand for the weighted combination specified explicitly in Eq. 12 - 15 . The KNN accuracy is estimated over all test set document summary vectors indexed from 1 to $N$ , and $K$ is the number of neighbors. 
In the proposed evaluation procedure, the use of KNN as a common external classifier enables us to unbiasedly and objectively compare different ML models, in terms of the density and local neighborhood structure of the semantic information extracted via the summary vectors in input feature space. Indeed we recall that summary vectors constructed via Eq. 12 and 13 lie in the same semantic space as word2vec embeddings, and that summary vectors obtained via Eq. 15 live in the bag-of-words space. Results This section summarizes our experimental results. We first describe the dataset, experimental setup, training procedure and classification accuracy of our ML models. We will consider four ML models: three CNNs with different filter sizes and a BoW/SVM classifier. Then, we demonstrate that LRP can be used to identify relevant words in text documents. We compare heatmaps for the best performing CNN model and the BoW/SVM classifier, and report the most representative words for three exemplary document categories. These results demonstrate qualitatively that the CNN model produces better explanations than the BoW/SVM classifier. After that we move to the evaluation of the document summary vectors, where we show that a 2D PCA projection of the document vectors computed from the LRP scores groups documents according to their topics (without requiring the true labels). Since worse results are obtained when using the SA scores or the uniform or TFIDF weighting, this indicates that the explanations produced by LRP are semantically more meaningful than the latter. Finally, we confirm quantitatively the observations made before, namely that (1) the LRP decomposition method provides better explanations than SA and that (2) the CNN model outperforms the BoW/SVM classifier in terms of explanatory power. Experimental Setup For our experiments we consider a topic categorization task, and employ the freely available 20Newsgroups dataset consisting of newsgroup posts evenly distributed among twenty fine-grained categories. More precisely we use the 20news-bydate version, which is already partitioned into 11314 training and 7532 test documents corresponding to different periods in time. As a first preprocessing step, we remove the headers from the documents (by splitting at the first blank line) and tokenize the text with NLTK. Then, we filter the tokenized data by retaining only tokens composed of the following four types of characters: alphabetic, hyphen, dot and apostrophe, and containing at least one alphabetic character. Hereby we aim to remove punctuation, numbers or dates, while keeping abbreviations and compound words. We do not apply any further preprocessing, as for instance stop-word removal or stemming, except for the SVM classifier where we additionally perform lowercasing, as this is a common setup for bag-of-words models. We truncate the resulting sequence of tokens to a chosen fixed length of 400 in order to simplify neural network training (in practice our CNN can process any arbitrary sized document). Lastly, we build the neural network input by horizontally concatenating pre-trained word embeddings, according to the sequence of tokens appearing in the preprocessed document. In particular, we take the 300-dimensional freely available word2vec embeddings BIBREF4 . Out-of-vocabulary words are simply initialized to zero vectors. As input normalization, we subtract the mean and divide by the standard deviation obtained over the flattened training data. 
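A sketch of this preprocessing and input construction is given below; it assumes NLTK (with the punkt tokenizer data) and gensim are available, and the word2vec file path is a placeholder.

```python
import re
import numpy as np
from nltk.tokenize import word_tokenize          # requires nltk.download("punkt")
from gensim.models import KeyedVectors

SEQ_LEN, EMB_DIM = 400, 300
TOKEN_RE = re.compile(r"^[A-Za-z.'\-]+$")        # alphabetic, hyphen, dot, apostrophe

w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin",
                                        binary=True)

def preprocess(text):
    tokens = [t for t in word_tokenize(text)
              if TOKEN_RE.match(t) and any(c.isalpha() for c in t)]
    return tokens[:SEQ_LEN]                       # truncate to a fixed length

def to_input_matrix(tokens):
    mat = np.zeros((SEQ_LEN, EMB_DIM), dtype=np.float32)   # out-of-vocabulary -> zeros
    for i, tok in enumerate(tokens):
        if tok in w2v:
            mat[i] = w2v[tok]
    return mat

# normalization statistics are computed over the flattened training data, e.g.:
# train = np.stack([to_input_matrix(preprocess(doc)) for doc in train_docs])
# mean, std = train.mean(), train.std()
# train = (train - mean) / std
```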
We train the neural network by minimizing the cross-entropy loss via mini-batch stochastic gradient descent using $l_2$ -norm and dropout as regularization. We tune the ML model hyperparameters by 10-fold cross-validation in case of the SVM, and by employing 1000 random documents as fixed validation set for the CNN model. However, for the CNN hyperparameters we did not perform an extensive grid search and stopped the tuning once we obtained models with reasonable classification performance for the purpose of our experiments. Table 1 summarizes the performance of our trained models. Herein CNN1, CNN2, CNN3 respectively denote neural networks with convolutional filter size $H$ equal to 1, 2 and 3 (i.e. covering 1, 2 or 3 consecutive words in the document). One can see that the linear SVM performs on par with the neural networks, i.e. the non-linear structure of the CNN models does not yield a considerable advantage toward classification accuracy. Similar results have also been reported in previous studies BIBREF34 , where it was observed that for document classification a convolutional neural network model starts to outperform a TFIDF-based linear classifier only on datasets in the order of millions of documents. This can be explained by the fact that for most topic categorization tasks, the different categories can be separated linearly in the very high-dimensional bag-of-words or bag-of-N-grams space thanks to sufficiently disjoint sets of features. Identifying Relevant Words Figure 2 compiles the resulting LRP heatmaps we obtain on an exemplary sci.space test document that is correctly classified by the SVM and the best performing neural network model CNN2. Note that for the SVM model the relevance values are computed per bag-of-words feature, i.e., same words will have same relevance irrespectively of their context in the document, whereas for the CNN classifier we visualize one relevance value per word position. Hereby we consider as target class for the LRP decomposition the classes sci.space and sci.med. We can observe that the SVM model considers insignificant words like the, is, of as very relevant (either negatively or positively) for the target class sci.med, and at the same time mistakenly estimates words like sickness, mental or distress as negatively contributing to this class (indicated by blue coloring), while on the other hand the CNN2 heatmap is consistently more sparse and concentrated on semantically meaningful words. This sparsity property can be attributed to the max-pooling non-linearity which for each feature map in the neural network selects the first most relevant feature that occurs in the document. As can be seen, it significantly simplifies the interpretability of the results by a human. Another disadvantage of the SVM model is that it relies entirely on local and global word statistics, thus can only assign relevances proportionally to the TFIDF BoW features (plus a class-dependent bias term), while the neural network model benefits from the knowledge encoded in the word2vec embeddings. For instance, the word weightlessness is not highlighted by the SVM model for the target class sci.space, because this word does not occur in the training data and thus is simply ignored by the SVM classifier. The neural network however is able to detect and attribute relevance to unseen words thanks to the semantical information encoded in the pre-trained word2vec embeddings. 
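For readers who want to reproduce this kind of qualitative inspection, a small helper along the following lines renders per-word relevances as a colored heatmap in HTML (red for positive, blue for negative relevance); it is a generic sketch, not the exact rendering used for the figures.

```python
def relevance_heatmap_html(tokens, relevances):
    """Render one colored HTML span per (word, relevance) pair."""
    max_abs = max(abs(r) for r in relevances) or 1.0
    spans = []
    for tok, r in zip(tokens, relevances):
        alpha = abs(r) / max_abs                   # color intensity in [0, 1]
        fade = int(255 * (1 - alpha))
        red, blue = (255, fade) if r >= 0 else (fade, 255)
        spans.append(f'<span style="background-color: rgb({red},{fade},{blue})">{tok}</span>')
    return " ".join(spans)

# with open("heatmap.html", "w") as f:
#     f.write(relevance_heatmap_html(tokens, word_relevances))   # illustrative usage
```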
As a dataset-wide analysis, we determine the words identified through LRP as constituting class representatives. For that purpose we set one class as target class for the relevance decomposition, and conduct LRP over all test set documents (i.e. irrespectively of the true or ML model's predicted class). Subsequently, we sort all the words appearing in the test data in decreasing order of the obtained word-level relevance values, and retrieve the thirty most relevant ones. The result is a list of words identified via LRP as being highly supportive for a classifier decision toward the considered class. Figures 2 and 2 list the most relevant words for different LRP target classes, as well as the corresponding word-level relevance values for the CNN2 and the SVM model. Through underlining we indicate words that do not occur in the training data. Interestingly, we observe that some of the most “class-characteristical” words identified via the neural network model correspond to words that do not even appear in the training data. In contrast, such words are simply ignored by the SVM model as they do not occur in the bag-of-words vocabulary. Similarly to the previous heatmap visualizations, the class-specific analysis reveals that the SVM classifier occasionally assigns high relevances to semantically insignificant words like for example the pronoun she for the target class sci.med (20th position in left column of Fig. 2 ), or to the names pat, henry, nicho for the target the class sci.space (resp. 7, 13, 20th position in middle column of Fig. 2 ). In the former case the high relevance is due to a high term frequency of the word (indeed the word she achieves its highest term frequency in one sci.med test document where it occurs 18 times), whereas in the latter case this can be explained by a high inverse document frequency or by a class-biased occurrence of the corresponding word in the training data (pat appears within 16 different training document categories but 54.1% of its occurrences are within the category sci.space alone, 79.1% of the 201 occurrences of henry appear among sci.space training documents, and nicho appears exclusively in nine sci.space training documents). On the contrary, the neural network model seems less affected by word counts regularities and systematically attributes the highest relevances to words semantically related to the considered target class. These results demonstrate that, subjectively, the neural network is better suited to identify relevant words in text documents than the BoW/SVM model. Document Summary Vectors The word2vec embeddings are known to exhibit linear regularities representing semantical relationships between words BIBREF29 , BIBREF4 . We explore whether these regularities can be transferred to a new document representation, which we denoted as document summary vector, when building this vector as a weighted combination of word2vec embeddings (see Eq. 12 and Eq. 13 ) or as a combination of one-hot word vectors (see Eq. 15 ). We compare the weighting scheme based on the LRP relevances to the following baselines: SA relevance, TFIDF and uniform weighting (see section "Baseline Methods" ). The two-dimensional PCA projection of the summary vectors obtained via the CNN2 resp. the SVM model, as well as the corresponding TFIDF/uniform weighting baselines are shown in Figure 3 . 
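Such a projection can be produced with standard tooling as sketched below, where `summary_vecs` stands for the document summary vectors obtained with any of the weighting schemes; the function is illustrative rather than the exact plotting code behind the figure.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_summary_vectors(summary_vecs, category_ids, category_names, out_file="pca.png"):
    """Project document summary vectors to 2D with PCA and color by category."""
    coords = PCA(n_components=2).fit_transform(np.asarray(summary_vecs))
    fig, ax = plt.subplots(figsize=(6, 5))
    for cid, name in enumerate(category_names):
        mask = np.asarray(category_ids) == cid
        ax.scatter(coords[mask, 0], coords[mask, 1], s=8, alpha=0.6, label=name)
    ax.legend(markerscale=2, fontsize="small")
    ax.set_xlabel("PC 1")
    ax.set_ylabel("PC 2")
    fig.savefig(out_file, dpi=150)
```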
In these visualizations we group the 20Newsgroups test documents into six top-level categories (the grouping is performed according to the dataset website), and we color each document according to its true category (note however that, as mentioned earlier, the relevance decomposition is always performed in an unsupervised way, i.e., with the ML model's predicted class). For the CNN2 model, we observe that the two-dimensional PCA projection reveals a clear-cut clustered structure when using the element-wise LRP weighting for semantic extraction, while no such regularity is observed with uniform or TFIDF weighting. The word-level LRP or SA weightings, as well as the element-wise SA weighting present also a form of bundled layout, but not as dense and well-separated as in the case of element-wise LRP. For the SVM model, the two-dimensional visualization of the summary vectors exhibits partly a cross-shaped layout for LRP and SA weighting, while again no particular structure is observed for TFIDF or uniform semantic extraction. This analysis confirms the observations made in the last section, namely that the neural network outperforms the BoW/SVM classifier in terms of explainability. Figure 3 furthermore suggests that LRP provides semantically more meaningful semantic extraction than the baseline methods. In the next section we will confirm these observations quantitatively. Quantitative Evaluation In order to quantitatively validate the hypothesis that LRP is able to identify words that either support or inhibit a specific classifier decision, we conduct several word-deleting experiments on the CNN models using LRP scores as relevance indicator. More specifically, in accordance to the word-level relevances we delete a sequence of words from each document, re-classify the documents with “missing words”, and report the classification accuracy as a function of the number of deleted words. Hereby the word-level relevances are computed on the original documents (with no words deleted). For the deleting experiments, we consider only 20Newsgroups test documents that have a length greater or equal to 100 tokens (after prepocessing), this amounts to 4963 test documents, from which we delete up to 50 words. For deleting a word we simply set the corresponding word embedding to zero in the CNN input. Moreover, in order to assess the pertinence of the LRP decomposition method as opposed to alternative relevance models, we additionally perform word deletions according to SA word relevances, as well as random deletion. In the latter case we sample a random sequence of 50 words per document, and delete the corresponding words successively from each document. We repeat the random sampling 10 times, and report the average results (the standard deviation of the accuracy is less than 0.0141 in all our experiments). We additionally perform a biased random deletion, where we sample only among words comprised in the word2vec vocabulary (this way we avoid to delete words we have already initialized as zero-vectors as there are out of the word2vec vocabulary, however as our results show this biased deletion is almost equivalent to strict random selection). As a first deletion experiment, we start with the subset of test documents that are initially correctly classified by the CNN models, and successively delete words in decreasing order of their LRP/SA word-level relevance. 
In this first deletion experiment, the LRP/SA relevances are computed with the true document class as target class for the relevance decomposition. In a second experiment, we perform the opposite evaluation. Here we start with the subset of initially falsely classified documents, and delete successively words in increasing order of their relevance, while considering likewise the true document class as target class for the relevance computation. In the third experiment, we start again with the set of initially falsely classified documents, but now delete words in decreasing order of their relevance, considering the classifier's initially predicted class as target class for the relevance decomposition. Figure 4 summarizes the resulting accuracies when deleting words resp. from the CNN1, CNN2 and CNN3 input documents (each row in the figure corresponds to one of the three deletion experiments). Note that we do not report results for the BoW/SVM model, as our focus here is the comparison between LRP and SA and not between different ML models. Through successive deleting of either “positive-relevant” words in decreasing order of their LRP relevance, or of “negative-relevant” words in increasing order of their LRP relevance, we confirm that both extremal LRP relevance values capture pertinent information with respect to the classification problem. Indeed in all deletion experiments, we observe the most pregnant decrease resp. increase of the classification accuracy when using LRP as relevance model. We additionally note that SA, in contrast to LRP, is largely unable to provide suitable information linking to words that speak against a specific classification decision. Instead it appears that the lowest SA relevances (which mainly correspond to zero-valued relevances) are more likely to identify words that have no impact on the classifier decision at all, as this deletion scheme has even less impact on the classification performance than random deletion when deleting words in increasing order of their relevance, as shown by the second deletion experiment. When confronting the different CNN models, we observe that the CNN2 and CNN3 models, as opposed to CNN1, produce a steeper decrease of the classification performance when deleting the most relevant words from the initially correctly classified documents, both when considering LRP as well as SA as relevance model, as shown by the first deletion experiment. This indicates that the networks with greater filter sizes are more sensitive to single word deletions, most presumably because during these deletions the meaning of the surrounding words becomes less obvious to the classifier. This also provides some weak evidence that, while CNN2 and CNN3 behave similarly (which suggests that a convolutional filter size of two is already enough for the considered classification problem), the learned filters in CNN2 and CNN3 do not only focus on isolated words but additionally consider bigrams or trigrams of words, as their results differ a lot from the CNN1 model in the first deletion experiment. In order to quantitatively evaluate and compare the ML models in combination with a relevance decomposition or explanation technique, we apply the evaluation method described in section "Measuring Model Explanatory Power through Extrinsic Validation" . That is, we compute the accuracy of an external classifier (here KNN) on the classification of document summary vectors (obtained with the ML model's predicted class). 
For these experiments we remove test documents which are empty or contain only one word after preprocessing (this amounts to remove 25 documents from the 20Newsgroups test set). The maximum KNN mean accuracy obtained when varying the number of neighbors $K$ (corresponding to our EPI metric of explanatory power) is reported for several models and explanation techniques in Table 2 . When pairwise comparing the best CNN based weighting schemes with the corresponding TFIDF baseline result from Table 2 , we find that all LRP element-wise weighted combinations of word2vec vectors are statistical significantly better than the TFIDF weighting of word embeddings at a significance level of 0.05 (using a corrected resampled t-test BIBREF35 ). Similarly, in the bag-of-words space, the LRP combination of one-hot word vectors is significantly better than the corresponding TFIDF document representation with a significance level of 0.05. Lastly, the best CNN2 explanatory power index is significantly higher than the best SVM based explanation at a significance level of 0.10. In Figure 5 we plot the mean accuracy of KNN (averaged over ten random test data splits) as a function of the number of neighbors $K$ , for the CNN2 resp. the SVM model, as well as the corresponding TFIDF/uniform weighting baselines (for CNN1 and CNN3 we obtained a similar layout as for CNN2). One can further see from Figure 5 that (1) (element-wise) LRP provides consistently better semantic extraction than all baseline methods and that (2) the CNN2 model has a higher explanatory power than the BoW/SVM classifier since it produces semantically more meaningful summary vectors for KNN classification. Altogether the good performance, both qualitatively as well as quantitatively, of the element-wise combination of word2vec embeddings according to the LRP relevance illustrates the usefulness of LRP for extracting a new vector-based document representation presenting semantic neighborhood regularities in feature space, and let us presume other potential applications of relevance information, e.g. for aggregating word representations into sub-document representations like phrases, sentences or paragraphs. Conclusion We have demonstrated qualitatively and quantitatively that LRP constitutes a useful tool, both for fine-grained analysis at the document level or as a dataset-wide introspection across documents, to identify words that are important to a classifier's decision. This knowledge enables to broaden the scope of applications of standard machine learning classifiers like support vector machines or neural networks, by extending the primary classification result with additional information linking the classifier's decision back to components of the input, in our case words in a document. Furthermore, based on LRP relevance, we have introduced a new way of condensing the semantic information contained in word embeddings (such as word2vec) into a document vector representation that can be used for nearest neighbors classification, and that leads to better performance than standard TFIDF weighting of word embeddings. The resulting document vector is the basis of a new measure of model explanatory power which was proposed in this work, and its semantic properties could beyond find applications in various visualization and search tasks, where the document similarity is expressed as a dot product between vectors. 
Our work is a first step toward applying the LRP decomposition to the NLP domain, and we expect this technique to be also suitable for various types of applications that are based on other neural network architectures such as character-based or recurrent network classifiers, or on other types of classification problems (e.g. sentiment analysis). More generally, LRP could contribute to the design of more accurate and efficient classifiers, not only by inspecting and leveraging the input space relevances, but also through the analysis of intermediate relevance values at classifier “hidden” layers. Acknowledgments This work was supported by the German Ministry for Education and Research as Berlin Big Data Center BBDC, funding mark 01IS14013A and by DFG. KRM thanks for partial funding by the National Research Foundation of Korea funded by the Ministry of Education, Science, and Technology in the BK21 program. Correspondence should be addressed to KRM and WS. Contributions Conceived the theoretical framework: LA, GM, KRM, WS. Conceived and designed the experiments: LA, FH, GM, KRM, WS. Performed the experiments: LA. Wrote the manuscript: LA, FH, GM, KRM, WS. Revised the manuscript: LA, FH, GM, KRM, WS. Figure design: LA, GM, WS. Final drafting: all equally.
Yes
Q: did they use other pretrained language models besides bert? Text: Introduction Understanding what facts can be inferred to be true or false from text is an essential part of natural language understanding. In many cases, these inferences can go well beyond what is immediately stated in the text. For example, a simple sentence like “Hanna Huyskova won the gold medal for Belarus in freestyle skiing." implies that (1) Belarus is a country, (2) Hanna Huyskova is an athlete, (3) Belarus won at least one Olympic event, (4) the USA did not win the freestyle skiing event, and so on. Work completed while interning at Google. Also affiliated with Columbia University, work done at Google. To test a model's ability to make these kinds of inferences, previous work in natural language inference (NLI) proposed the task of labeling candidate statements as being entailed or contradicted by a given passage. However, in practice, generating candidate statements that test for complex inferential abilities is challenging. For instance, evidence suggests BIBREF0 , BIBREF1 , BIBREF2 that simply asking human annotators to write candidate statements will result in examples that typically only require surface-level reasoning. In this paper we propose an alternative: we test models on their ability to answer naturally occurring yes/no questions. That is, questions that were authored by people who were not prompted to write particular kinds of questions, including even being required to write yes/no questions, and who did not know the answer to the question they were asking. Figure contains some examples from our dataset. We find such questions often query for non-factoid information, and that human annotators need to apply a wide range of inferential abilities when answering them. As a result, they can be used to construct highly inferential reading comprehension datasets that have the added benefit of being directly related to the practical end-task of answering user yes/no questions. Yes/No questions do appear as a subset of some existing datasets BIBREF3 , BIBREF4 , BIBREF5 . However, these datasets are primarily intended to test other aspects of question answering (QA), such as conversational QA or multi-step reasoning, and do not contain naturally occurring questions. We follow the data collection method used by Natural Questions (NQ) BIBREF6 to gather 16,000 naturally occurring yes/no questions into a dataset we call BoolQ (for Boolean Questions). Each question is paired with a paragraph from Wikipedia that an independent annotator has marked as containing the answer. The task is then to take a question and passage as input, and to return “yes" or “no" as output. Figure contains some examples, and Appendix SECREF17 contains additional randomly selected examples. Following recent work BIBREF7 , we focus on using transfer learning to establish baselines for our dataset. Yes/No QA is closely related to many other NLP tasks, including other forms of question answering, entailment, and paraphrasing. Therefore, it is not clear what the best data sources to transfer from are, or if it will be sufficient to just transfer from powerful pre-trained language models such as BERT BIBREF8 or ELMo BIBREF9 . We experiment with state-of-the-art unsupervised approaches, using existing entailment datasets, three methods of leveraging extractive QA data, and using a few other supervised datasets. We found that transferring from MultiNLI, and the unsupervised pre-training in BERT, gave us the best results. 
Notably, we found these approaches are surprisingly complementary and can be combined to achieve a large gain in performance. Overall, our best model reaches 80.43% accuracy, compared to 62.31% for the majority baseline and 90% human accuracy. In light of the fact BERT on its own has achieved human-like performance on several NLP tasks, this demonstrates the high degree of difficulty of our dataset. We present our data and code at https://goo.gl/boolq. Related Work Yes/No questions make up a subset of the reading comprehension datasets CoQA BIBREF3 , QuAC BIBREF4 , and HotPotQA BIBREF5 , and are present in the ShARC BIBREF10 dataset. These datasets were built to challenge models to understand conversational QA (for CoQA, ShARC and QuAC) or multi-step reasoning (for HotPotQA), which complicates our goal of using yes/no questions to test inferential abilities. Of the four, QuAC is the only one where the question authors were not allowed to view the text being used to answer their questions, making it the best candidate to contain naturally occurring questions. However, QuAC still heavily prompts users, including limiting their questions to be about pre-selected Wikipedia articles, and is highly class imbalanced with 80% “yes" answers. The MS Marco dataset BIBREF11 , which contains questions with free-form text answers, also includes some yes/no questions. We experiment with heuristically identifying them in Section SECREF4 , but this process can be noisy and the quality of the resulting annotations is unknown. We also found the resulting dataset is class imbalanced, with 80% “yes" answers. Yes/No QA has been used in other contexts, such as the templated bAbI stories BIBREF12 or some Visual QA datasets BIBREF13 , BIBREF14 . We focus on answering yes/no questions using natural language text. Question answering for reading comprehension in general has seen a great deal of recent work BIBREF15 , BIBREF16 , and there have been many recent attempts to construct QA datasets that require advanced reasoning abilities BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . However, these attempts typically involve engineering data to be more difficult by, for example, explicitly prompting users to write multi-step questions BIBREF5 , BIBREF18 , or filtering out easy questions BIBREF19 . This risks resulting in models that do not have obvious end-use applications since they are optimized to perform in an artificial setting. In this paper, we show that yes/no questions have the benefit of being very challenging even when they are gathered from natural sources. Natural language inference is also a well studied area of research, particularly on the MultiNLI BIBREF21 and SNLI BIBREF22 datasets. Other sources of entailment data include the PASCAL RTE challenges BIBREF23 , BIBREF24 or SciTail BIBREF25 . We note that, although SciTail, RTE-6 and RTE-7 did not use crowd workers to generate candidate statements, they still use sources (multiple choices questions or document summaries) that were written by humans with knowledge of the premise text. Using naturally occurring yes/no questions ensures even greater independence between the questions and premise text, and ties our dataset to a clear end-task. BoolQ also requires detecting entailment in paragraphs instead of sentence pairs. Transfer learning for entailment has been studied in GLUE BIBREF7 and SentEval BIBREF26 . 
Unsupervised pre-training in general has recently shown excellent results on many datasets, including entailment data BIBREF9 , BIBREF8 , BIBREF27 . Converting short-answer or multiple choice questions into entailment examples, as we do when experimenting with transfer learning, has been proposed in several prior works BIBREF28 , BIBREF29 , BIBREF25 . In this paper we found some evidence suggesting that these approaches are less effective than using crowd-sourced entailment examples when it comes to transferring to natural yes/no questions. Contemporaneously with our work, BIBREF30 showed that pre-training on supervised tasks could be beneficial even when using pre-trained language models, especially for a textual entailment task. Our work confirms these results for yes/no question answering. This work builds upon the Natural Questions (NQ) BIBREF6 , which contains some natural yes/no questions. However, there are too few (about 1% of the corpus) to make yes/no QA a very important aspect of that task. In this paper, we gather a large number of additional yes/no questions in order to construct a dedicated yes/no QA dataset. The BoolQ Dataset An example in our dataset consists of a question, a paragraph from a Wikipedia article, the title of the article, and an answer, which is either “yes" or “no". We include the article title since it can potentially help resolve ambiguities (e.g., coreferent phrases) in the passage, although none of the models presented in this paper make use of them. Data Collection We gather data using the pipeline from NQ BIBREF6 , but with an additional filtering step to focus on yes/no questions. We summarize the complete pipeline here, but refer to their paper for a more detailed description. Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Note that, unlike in NQ, we only use questions that were marked as having a yes/no answer, and pair each question with the selected passage instead of the entire document. This helps reduce ambiguity (ex., avoiding cases where the document supplies conflicting answers in different paragraphs), and keeps the input small enough so that existing entailment models can easily be applied to our dataset. We combine 13k questions gathered from this pipeline with an additional 3k questions with yes/no answers from the NQ training set to reach a total of 16k questions. 
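The query-filtering heuristic at the start of this pipeline can be sketched as follows; the starter-word set and the minimum length below are illustrative guesses rather than the actual values used.

```python
# Hypothetical indicator words and length threshold -- the real lists are not
# reproduced here; this only illustrates the shape of the filter.
YESNO_STARTERS = {"is", "are", "was", "were", "do", "does", "did", "can",
                  "could", "will", "would", "has", "have", "had", "should"}
MIN_TOKENS = 4

def looks_like_yesno_query(query: str) -> bool:
    tokens = query.lower().split()
    return len(tokens) >= MIN_TOKENS and tokens[0] in YESNO_STARTERS

queries = ["is france the same timezone as the uk", "when was obama born"]
print([q for q in queries if looks_like_yesno_query(q)])   # keeps only the first query
```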
We split these questions into a 3.2k dev set, 3.2k test set, and 9.4k train set, ensuring questions from NQ are always in the train set. “Yes” answers are slightly more common (62.31% in the train set). The queries are typically short (average length 8.9 tokens) with longer passages (average length 108 tokens). Analysis In the following section we analyze our dataset to better understand the nature of the questions, the annotation quality, and the kinds of reasoning abilities required to answer them. Annotation Quality First, in order to assess annotation quality, three of the authors labelled 110 randomly chosen examples. If there was a disagreement, the authors conferred and selected a single answer by mutual agreement. We call the resulting labels “gold-standard" labels. On the 110 selected examples, the answer annotations reached 90% accuracy compared to the gold-standard labels. Of the cases where the answer annotation differed from the gold-standard, six were ambiguous or debatable cases, and five were errors where the annotator misunderstood the passage. Since the agreement was sufficiently high, we elected to use singly-annotated examples in the training/dev/test sets in order to be able to gather a larger dataset. Question Types Part of the value of this dataset is that it contains questions that people genuinely want to answer. To explore this further, we manually define a set of topics that questions can be about. An author categorized 200 questions into these topics. The results can be found in the upper half of Table . Questions were often about entertainment media (including T.V., movies, and music), along with other popular topics like sports. However, there are still a good portion of questions asking for more general factual knowledge, including ones about historical events or the natural world. We also broke the questions into categories based on what kind of information they were requesting, shown in the lower half of Table . Roughly one-sixth of the questions are about whether anything with a particular property exists (Existence), another sixth are about whether a particular event occurred (Event Occurrence), and another sixth ask whether an object is known by a particular name, or belongs to a particular category (Definitional). The questions that do not fall into these three categories were split between requesting facts about a specific entity, or requesting more general factual information. We do find a correlation between the nature of the question and the likelihood of a “yes" answer. However, this correlation is too weak to help outperform the majority baseline because, even if the topic or type is known, it is never best to guess the minority class. We also found that question-only models perform very poorly on this task (see Section SECREF12 ), which helps confirm that the questions do not contain sufficient information to predict the answer on their own. Types of Inference Finally, we categorize the kinds of inference required to answer the questions in BoolQ. The definitions and results are shown in Table . Less than 40% of the examples can be solved by detecting paraphrases. Instead, many questions require making additional inferences (categories “Factual Reasoning", “By Example", and “Other Inference") to connect what is stated in the passage to the question. There is also a significant class of questions (categories “Implicit" and “Missing Mention") that require a subtler kind of inference based on how the passage is written. 
Discussion Why do natural yes/no questions require inference so often? We hypothesize that there are several factors. First, we notice factoid questions that ask about simple properties of entities, such as “Was Obama born in 1962?", are rare. We suspect this is because people will almost always prefer to phrase such questions as short-answer questions (e.g., “When was Obama born?"). Thus, there is a natural filtering effect where people tend to use yes/no questions exactly when they want more complex kinds of information. Second, both the passages and questions rarely include negation. As a result, detecting a “no" answer typically requires understanding that a positive assertion in the text excludes, or makes unlikely, a positive assertion in the question. This requires reasoning that goes beyond paraphrasing (see the “Other-Inference" or “Implicit" examples). We also think it was important that annotators only had to answer questions, rather than generate them. For example, imagine trying to construct questions that fall into the categories of “Missing Mention" or “Implicit". While possible, it would require a great deal of thought and creativity. On the other hand, detecting when a yes/no question can be answered using these strategies seems much easier and more intuitive. Thus, having annotators answer pre-existing questions opens the door to building datasets that contain more inference and have higher quality labels. A surprising result from our work is that the datasets that more closely resemble the format of BoolQ, meaning they contain questions and multi-sentence passages, such as SQuAD 2.0, RACE, or Y/N MS Marco, were not very useful for transfer. The entailment datasets were stronger despite consisting of sentence pairs. This suggests that adapting from sentence-pair input to question/passage input was not a large obstacle to achieving transfer. Preliminary work found attempting to convert the yes/no questions in BoolQ into declarative statements did not improve transfer from MultiNLI, which supports this hypothesis. The success of MultiNLI might also be surprising given recent concerns about the generalization abilities of models trained on it BIBREF37 , particularly related to “annotation artifacts" caused by using crowd workers to write the hypothesis statements BIBREF0 . We have shown that, despite these weaknesses, it can still be an important starting point for models being used on natural data. We hypothesize that a key advantage of MultiNLI is that it contains examples of contradictions. The other sources of transfer we consider, including the next-sentence-selection objective in BERT, are closer to providing examples of entailed text vs. neutral/unrelated text. Indeed, we found that our two step transfer procedure only reaches 78.43% dev set accuracy if we remove the contradiction class from MultiNLI, regressing its performance close to the level of BERTL when just using unsupervised pre-training. Note that it is possible to pre-train a model on several of the suggested datasets, either in succession or in a multi-task setup. We leave these experiments to future work. Our results also suggest pre-training on MultiNLI would be helpful for other corpora that contain yes/no questions. Training Yes/No QA Models Models on this dataset need to predict an output class given two pieces of input text, which is a well studied paradigm BIBREF7 . We find training models on our train set alone to be relatively ineffective. 
Our best model reaches 69.6% accuracy, only 8% better than the majority baseline. Therefore, we follow the recent trend in NLP of using transfer learning. In particular, we experiment with pre-training models on related tasks that have larger datasets, and then fine-tuning them on our training data. We list the sources we consider for pre-training below. Entailment: We consider two entailment datasets, MultiNLI BIBREF21 and SNLI BIBREF22 . We choose these datasets since they are widely-used and large enough to use for pre-training. We also experiment with ablating classes from MultiNLI. During fine-tuning we use the probability the model assigns to the “entailment" class as the probability of predicting a “yes" answer. Multiple-Choice QA: We use a multiple choice reading comprehension dataset, RACE BIBREF31 , which contains stories or short essays paired with questions built to test the reader's comprehension of the text. Following what was done in SciTail BIBREF25 , we convert questions and answer-options to statements by either substituting the answer-option for the blanks in fill-in-the-blank questions, or appending a separator token and the answer-option to the question. During training, we have models independently assign a score to each statement, and then apply the softmax operator between all statements per each question to get statement probabilities. We use the negative log probability of the correct statement as a loss function. To fine-tune on BoolQ, we apply the sigmoid operator to the score of the question given its passage to get the probability of a “yes" answer. Extractive QA: We consider several methods of leveraging extractive QA datasets, where the model must answer questions by selecting text from a relevant passage. Preliminary experiments found that simply transferring the lower-level weights of extractive QA models was ineffective, so we instead consider three methods of constructing entailment-like data from extractive QA data. First, we use the QNLI task from GLUE BIBREF7 , where the model must determine if a sentence from SQuAD 1.1 BIBREF15 contains the answer to an input question or not. Following previous work BIBREF32 , we also try building entailment-like training data from SQuAD 2.0 BIBREF33 . We concatenate questions with either the correct answer, or with the incorrect “distractor" answer candidate provided by the dataset, and train the model to classify which is which given the question's supporting text. Finally, we also experiment with leveraging the long-answer portion of NQ, where models must select a paragraph containing the answer to a question from a document. Following our method for Multiple-Choice QA, we train a model to assign a score to (question, paragraph) pairs, apply the softmax operator on paragraphs from the same document to get a probability distribution over the paragraphs, and train the model on the negative log probability of selecting an answer-containing paragraph. We only train on questions that were marked as having an answer, and select an answer-containing paragraph and up to 15 randomly chosen non-answer-containing paragraphs for each question. On BoolQ, we compute the probability of a “yes" answer by applying the sigmoid operator to the score the model gives to the input question and passage. Paraphrasing: We use the Quora Question Paraphrasing (QQP) dataset, which consists of pairs of questions labelled as being paraphrases or not. 
Paraphrasing is related to entailment since we expect, at least in some cases, passages will contain a paraphrase of the question. Heuristic Yes/No: We attempt to heuristically construct a corpus of yes/no questions from the MS Marco corpus BIBREF11 . MS Marco has free-form answers paired with snippets of related web documents. We search for answers starting with “yes" or “no", and then pair the corresponding questions with snippets marked as being related to the question. We call this task Y/N MS Marco; in total we gather 38k examples, 80% of which are “yes” answers. Unsupervised: It is well known that unsupervised pre-training using language-modeling objectives BIBREF9 , BIBREF8 , BIBREF27 , can improve performance on many tasks. We experiment with these methods by using the pre-trained models from ELMo, BERT, and OpenAI's Generative Pre-trained Transformer (OpenAI GPT) (see Section SECREF11 ). Shallow Models First, we experiment with using a linear classifier on our task. In general, we found features such as word overlap or TF-IDF statistics were not sufficient to achieve better than the majority-class baseline accuracy (62.17% on the dev set). We did find there was a correlation between the number of times question words occurred in the passage and the answer being “yes", but the correlation was not strong enough to build an effective classifier. “Yes" is the most common answer even among questions with zero shared words between the question and passage (with a 51% majority), and more common in other cases. Neural Models For our experiments that do not use unsupervised pre-training (except the use of pre-trained word vectors), we use a standard recurrent model with attention. Our experiments using unsupervised pre-training use the models provided by the authors. In more detail: Our Recurrent model follows a standard recurrent plus attention architecture for text-pair classification BIBREF7 . It embeds the premise/hypothesis text using fasttext word vectors BIBREF34 and learned character vectors, applies a shared bidirectional LSTM to both parts, applies co-attention BIBREF35 to share information between the two parts, applies another bi-LSTM to both parts, pools the result, and uses the pooled representation to predict the final class. See Appendix SECREF18 for details. Our Recurrent +ELMo model uses the language model from BIBREF9 to provide contextualized embeddings to the baseline model outlined above, as recommended by the authors. Our OpenAI GPT model fine-tunes the 12 layer 768 dimensional uni-directional transformer from BIBREF27 , which has been pre-trained as a language model on the Books corpus BIBREF36 . Our BERTL model fine-tunes the 24 layer 1024 dimensional transformer from BIBREF8 , which has been trained on next-sentence-selection and masked language modelling on the Book Corpus and Wikipedia. We fine-tune the BERTL and the OpenAI GPT models using the optimizers recommended by the authors, but found it important to tune the optimization parameters to achieve the best results. We use a batch size of 24, learning rate of 1e-5, and 5 training epochs for BERT and a learning rate of 6.25e-5, batch size of 6, language model loss of 0.5, and 3 training epochs for OpenAI GPT. Question/Passage Only Results Following the recommendation of BIBREF0 , we first experiment with models that are only allowed to observe the question or the passage. The pre-trained BERTL model reached 64.48% dev set accuracy using just the question and 66.74% using just the passage. 
Given that the majority baseline is 62.17%, this suggests there is little signal in the question by itself, but that some language patterns in the passage correlate with the answer. Possibly, passages that present more straightforward factual information (like Wikipedia introduction paragraphs) correlate with “yes" answers. Transfer Learning Results The results of our transfer learning methods are shown in Table . All results are averaged over five runs. For models pre-trained on supervised datasets, both the pre-training and the fine-tuning stages were repeated. For unsupervised pre-training, we use the pre-trained models provided by the authors, but continue to average over five runs of fine-tuning. QA Results: We were unable to transfer from RACE or SQuAD 2.0. For RACE, the problem might be domain mismatch. In RACE the passages are stories, and the questions often query for passage-specific information such as the author's intent or the state of a particular entity from the passage, instead of general knowledge. We would expect SQuAD 2.0 to be a better match for BoolQ since it is also Wikipedia-based, but its possible detecting the adversarially-constructed distractors used for negative examples does not relate well to yes/no QA. We got better results using QNLI, and even better results using NQ. This shows the task of selecting text relevant to a question is partially transferable to yes/no QA, although we are only able to gain a few points over the baseline. Entailment Results: The MultiNLI dataset out-performed all other supervised methods by a large margin. Remarkably, this approach is only a few points behind BERT despite using orders of magnitude less training data and a much more light-weight model, showing high-quality pre-training data can help compensate for these deficiencies. Our ablation results show that removing the neutral class from MultiNLI hurt transfer slightly, and removing either of the other classes was very harmful, suggesting the neutral examples had limited value. SNLI transferred better than other datasets, but worse than MultiNLI. We suspect this is due to limitations of the photo-caption domain it was constructed from. Other Supervised Results: We obtained a small amount of transfer using QQP and Y/N MS Marco. Although Y/N MS Marco is a yes/no QA dataset, its small size and class imbalance likely contributed to its limited effectiveness. The web snippets it uses as passages also present a large domain shift from the Wikipedia passages in BoolQ. Unsupervised Results: Following results on other datasets BIBREF7 , we found BERTL to be the most effective unsupervised method, surpassing all other methods of pre-training. Multi-Step Transfer Results Our best single-step transfer learning results were from using the pre-trained BERTL model and MultiNLI. We also experiment with combining these approaches using a two-step pre-training regime. In particular, we fine-tune the pre-trained BERTL on MultiNLI, and then fine-tune the resulting model again on the BoolQ train set. We found decreasing the number of training epochs to 3 resulted in a slight improvement when using the model pre-trained on MultiNLI. We show the test set results for this model, and some other pre-training variations, in Table . For these results we train five versions of each model using different training seeds, and show the model that had the best dev-set performance. 
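The two-step recipe can be sketched with the HuggingFace transformers library (v4-style API) roughly as follows; the label mapping between MultiNLI classes and yes/no answers and the bare-bones training loop are simplifications for illustration, not the exact configuration behind the reported numbers.

```python
import torch
from torch.utils.data import DataLoader
from transformers import BertTokenizerFast, BertForSequenceClassification

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = BertTokenizerFast.from_pretrained("bert-large-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-large-uncased", num_labels=3).to(device)   # entailment / neutral / contradiction

def finetune(text_pairs, labels, epochs, lr, batch_size):
    """One fine-tuning stage on (premise, hypothesis)-style text pairs."""
    enc = tok([a for a, _ in text_pairs], [b for _, b in text_pairs],
              truncation=True, padding=True, return_tensors="pt")
    data = list(zip(enc["input_ids"], enc["attention_mask"],
                    enc["token_type_ids"], torch.tensor(labels)))
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for ids, mask, types, y in DataLoader(data, batch_size=batch_size, shuffle=True):
            out = model(input_ids=ids.to(device), attention_mask=mask.to(device),
                        token_type_ids=types.to(device), labels=y.to(device))
            out.loss.backward()
            opt.step()
            opt.zero_grad()

# Step 1: fine-tune on MultiNLI (premise, hypothesis) pairs with 3-way labels.
# finetune(mnli_pairs, mnli_labels, epochs=3, lr=1e-5, batch_size=24)
# Step 2: fine-tune again on BoolQ (passage, question) pairs, e.g. mapping
# "yes" to the entailment label and "no" to the contradiction label.
# finetune(boolq_pairs, boolq_labels, epochs=3, lr=1e-5, batch_size=24)
```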
Given how extensively the BERTL model has been pre-trained, and how successful it has been across many NLP tasks, the additional gain of 3.5 points due to using MultiNLI is remarkable. This suggests MultiNLI contains signal orthogonal to what is found in BERT's unsupervised objectives. Sample Efficiency In Figure 2, we graph model accuracy as more of the training data is used for fine-tuning, both with and without initially pre-training on MultiNLI. Pre-training on MultiNLI gives at least a 5-6 point gain, and nearly a 10 point gain for BERTL when only using 1000 examples. For small numbers of examples, the recurrent model with MultiNLI pre-training actually out-performs BERTL. Conclusion We have introduced BoolQ, a new reading comprehension dataset of naturally occurring yes/no questions. We have shown these questions are challenging and require a wide range of inference abilities to solve. We have also studied how transfer learning performs on this task, and found crowd-sourced entailment datasets can be leveraged to boost performance even on top of language model pre-training. Future work could include building a document-level version of this task, which would increase its difficulty and its correspondence to an end-user application. Randomly Selected Examples We include a number of randomly selected examples from the BoolQ train set in Figure FIGREF19 . For each example we show the question in bold, followed by the answer in parentheses, and then the passage below. Recurrent Model Our recurrent model is a standard model from the text pair classification literature, similar to the one used in the GLUE baseline BIBREF7 and the model from BIBREF38 . Our model has the following stages: Embed: Embed the words using a character CNN following what was done by BIBREF40 , and the fasttext crawl word embeddings BIBREF34 . Then run a BiLSTM over the results to get context-aware word hypothesis embeddings INLINEFORM0 and premise embeddings INLINEFORM1 . Co-Attention: Compute a co-attention matrix, INLINEFORM0 , between the hypothesis and premise where INLINEFORM1 , INLINEFORM2 is elementwise multiplication, and INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 are weights to be learned. Attend: For each row in INLINEFORM0 , apply the softmax operator and use the results to compute a weighed sum of the hypothesis embeddings, resulting in attended vectors INLINEFORM1 . We use the transpose of INLINEFORM2 to compute vectors INLINEFORM3 from the premise embeddings in a similar manner. Pool: Run another BiLSTM over INLINEFORM0 to get embeddings INLINEFORM1 . Then pool these embeddings by computing attention scores INLINEFORM2 , INLINEFORM3 , and then the sum INLINEFORM4 = INLINEFORM5 . Likewise we compute INLINEFORM6 from the premise. Classify: Finally we feed INLINEFORM0 into a fully connected layer, and then through a softmax layer to predict the output class. We apply dropout at a rate of 0.2 between all layers, and train the model using the Adam optimizer BIBREF39 . The learning rate is decayed by 0.999 every 100 steps. We use 200 dimensional LSTMs and a 100 dimensional fully connected layer.
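A compact PyTorch sketch of this architecture is given below. Because the inline formulas above were lost in extraction, the trilinear co-attention and additive pooling used here are standard choices assumed for illustration, and the character-CNN/fasttext embedding stage is omitted: the module takes precomputed word embeddings as input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentCoAttention(nn.Module):
    """Embed -> shared BiLSTM -> co-attention -> BiLSTM -> attentive pooling -> classify."""

    def __init__(self, emb_dim=300, hidden=200, fc_dim=100, n_classes=2, dropout=0.2):
        super().__init__()
        self.drop = nn.Dropout(dropout)
        self.enc1 = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.sim = nn.Linear(6 * hidden, 1, bias=False)   # trilinear similarity (assumed form)
        self.enc2 = nn.LSTM(8 * hidden, hidden, bidirectional=True, batch_first=True)
        self.pool = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(4 * hidden, fc_dim)
        self.out = nn.Linear(fc_dim, n_classes)

    def attend(self, a, b):
        """For each position of a, a similarity-weighted sum over the positions of b."""
        n, m = a.size(1), b.size(1)
        ai = a.unsqueeze(2).expand(-1, -1, m, -1)
        bj = b.unsqueeze(1).expand(-1, n, -1, -1)
        s = self.sim(torch.cat([ai, bj, ai * bj], dim=-1)).squeeze(-1)   # (B, n, m)
        return torch.softmax(s, dim=-1) @ b

    def encode(self, x, attended):
        """Second BiLSTM over the fused sequence, then attentive pooling."""
        fused = torch.cat([x, attended, x * attended, x - attended], dim=-1)
        h = self.enc2(self.drop(fused))[0]
        alpha = torch.softmax(self.pool(h).squeeze(-1), dim=-1)
        return (alpha.unsqueeze(-1) * h).sum(dim=1)

    def forward(self, hyp_emb, prem_emb):
        h = self.enc1(self.drop(hyp_emb))[0]
        p = self.enc1(self.drop(prem_emb))[0]
        h_pool = self.encode(h, self.attend(h, p))
        p_pool = self.encode(p, self.attend(p, h))
        x = self.drop(F.relu(self.fc(torch.cat([h_pool, p_pool], dim=-1))))
        return self.out(x)                                # logits; train with cross-entropy

# logits = RecurrentCoAttention()(question_embeddings, passage_embeddings)
```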
Yes
Q: how was the dataset built? Text: Introduction Understanding what facts can be inferred to be true or false from text is an essential part of natural language understanding. In many cases, these inferences can go well beyond what is immediately stated in the text. For example, a simple sentence like “Hanna Huyskova won the gold medal for Belarus in freestyle skiing." implies that (1) Belarus is a country, (2) Hanna Huyskova is an athlete, (3) Belarus won at least one Olympic event, (4) the USA did not win the freestyle skiing event, and so on. Work completed while interning at Google. Also affiliated with Columbia University, work done at Google. To test a model's ability to make these kinds of inferences, previous work in natural language inference (NLI) proposed the task of labeling candidate statements as being entailed or contradicted by a given passage. However, in practice, generating candidate statements that test for complex inferential abilities is challenging. For instance, evidence suggests BIBREF0 , BIBREF1 , BIBREF2 that simply asking human annotators to write candidate statements will result in examples that typically only require surface-level reasoning. In this paper we propose an alternative: we test models on their ability to answer naturally occurring yes/no questions. That is, questions that were authored by people who were not prompted to write particular kinds of questions, including even being required to write yes/no questions, and who did not know the answer to the question they were asking. Figure contains some examples from our dataset. We find such questions often query for non-factoid information, and that human annotators need to apply a wide range of inferential abilities when answering them. As a result, they can be used to construct highly inferential reading comprehension datasets that have the added benefit of being directly related to the practical end-task of answering user yes/no questions. Yes/No questions do appear as a subset of some existing datasets BIBREF3 , BIBREF4 , BIBREF5 . However, these datasets are primarily intended to test other aspects of question answering (QA), such as conversational QA or multi-step reasoning, and do not contain naturally occurring questions. We follow the data collection method used by Natural Questions (NQ) BIBREF6 to gather 16,000 naturally occurring yes/no questions into a dataset we call BoolQ (for Boolean Questions). Each question is paired with a paragraph from Wikipedia that an independent annotator has marked as containing the answer. The task is then to take a question and passage as input, and to return “yes" or “no" as output. Figure contains some examples, and Appendix SECREF17 contains additional randomly selected examples. Following recent work BIBREF7 , we focus on using transfer learning to establish baselines for our dataset. Yes/No QA is closely related to many other NLP tasks, including other forms of question answering, entailment, and paraphrasing. Therefore, it is not clear what the best data sources to transfer from are, or if it will be sufficient to just transfer from powerful pre-trained language models such as BERT BIBREF8 or ELMo BIBREF9 . We experiment with state-of-the-art unsupervised approaches, using existing entailment datasets, three methods of leveraging extractive QA data, and using a few other supervised datasets. We found that transferring from MultiNLI, and the unsupervised pre-training in BERT, gave us the best results. 
Notably, we found these approaches are surprisingly complementary and can be combined to achieve a large gain in performance. Overall, our best model reaches 80.43% accuracy, compared to 62.31% for the majority baseline and 90% human accuracy. In light of the fact BERT on its own has achieved human-like performance on several NLP tasks, this demonstrates the high degree of difficulty of our dataset. We present our data and code at https://goo.gl/boolq. Related Work Yes/No questions make up a subset of the reading comprehension datasets CoQA BIBREF3 , QuAC BIBREF4 , and HotPotQA BIBREF5 , and are present in the ShARC BIBREF10 dataset. These datasets were built to challenge models to understand conversational QA (for CoQA, ShARC and QuAC) or multi-step reasoning (for HotPotQA), which complicates our goal of using yes/no questions to test inferential abilities. Of the four, QuAC is the only one where the question authors were not allowed to view the text being used to answer their questions, making it the best candidate to contain naturally occurring questions. However, QuAC still heavily prompts users, including limiting their questions to be about pre-selected Wikipedia articles, and is highly class imbalanced with 80% “yes" answers. The MS Marco dataset BIBREF11 , which contains questions with free-form text answers, also includes some yes/no questions. We experiment with heuristically identifying them in Section SECREF4 , but this process can be noisy and the quality of the resulting annotations is unknown. We also found the resulting dataset is class imbalanced, with 80% “yes" answers. Yes/No QA has been used in other contexts, such as the templated bAbI stories BIBREF12 or some Visual QA datasets BIBREF13 , BIBREF14 . We focus on answering yes/no questions using natural language text. Question answering for reading comprehension in general has seen a great deal of recent work BIBREF15 , BIBREF16 , and there have been many recent attempts to construct QA datasets that require advanced reasoning abilities BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . However, these attempts typically involve engineering data to be more difficult by, for example, explicitly prompting users to write multi-step questions BIBREF5 , BIBREF18 , or filtering out easy questions BIBREF19 . This risks resulting in models that do not have obvious end-use applications since they are optimized to perform in an artificial setting. In this paper, we show that yes/no questions have the benefit of being very challenging even when they are gathered from natural sources. Natural language inference is also a well studied area of research, particularly on the MultiNLI BIBREF21 and SNLI BIBREF22 datasets. Other sources of entailment data include the PASCAL RTE challenges BIBREF23 , BIBREF24 or SciTail BIBREF25 . We note that, although SciTail, RTE-6 and RTE-7 did not use crowd workers to generate candidate statements, they still use sources (multiple choices questions or document summaries) that were written by humans with knowledge of the premise text. Using naturally occurring yes/no questions ensures even greater independence between the questions and premise text, and ties our dataset to a clear end-task. BoolQ also requires detecting entailment in paragraphs instead of sentence pairs. Transfer learning for entailment has been studied in GLUE BIBREF7 and SentEval BIBREF26 . 
Unsupervised pre-training in general has recently shown excellent results on many datasets, including entailment data BIBREF9 , BIBREF8 , BIBREF27 . Converting short-answer or multiple choice questions into entailment examples, as we do when experimenting with transfer learning, has been proposed in several prior works BIBREF28 , BIBREF29 , BIBREF25 . In this paper we found some evidence suggesting that these approaches are less effective than using crowd-sourced entailment examples when it comes to transferring to natural yes/no questions. Contemporaneously with our work, BIBREF30 showed that pre-training on supervised tasks could be beneficial even when using pre-trained language models, especially for a textual entailment task. Our work confirms these results for yes/no question answering. This work builds upon the Natural Questions (NQ) BIBREF6 , which contains some natural yes/no questions. However, there are too few (about 1% of the corpus) to make yes/no QA a very important aspect of that task. In this paper, we gather a large number of additional yes/no questions in order to construct a dedicated yes/no QA dataset. The BoolQ Dataset An example in our dataset consists of a question, a paragraph from a Wikipedia article, the title of the article, and an answer, which is either “yes" or “no". We include the article title since it can potentially help resolve ambiguities (e.g., coreferent phrases) in the passage, although none of the models presented in this paper make use of them. Data Collection We gather data using the pipeline from NQ BIBREF6 , but with an additional filtering step to focus on yes/no questions. We summarize the complete pipeline here, but refer to their paper for a more detailed description. Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Note that, unlike in NQ, we only use questions that were marked as having a yes/no answer, and pair each question with the selected passage instead of the entire document. This helps reduce ambiguity (ex., avoiding cases where the document supplies conflicting answers in different paragraphs), and keeps the input small enough so that existing entailment models can easily be applied to our dataset. We combine 13k questions gathered from this pipeline with an additional 3k questions with yes/no answers from the NQ training set to reach a total of 16k questions. 
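The query filter described in the data collection pipeline above boils down to a first-word check against a manually built indicator list plus a length threshold. The following is a minimal sketch of that shape of heuristic; the indicator words, the minimum length, and the function name are illustrative assumptions, not the exact rules used to build BoolQ.

```python
# Hypothetical sketch of the yes/no query filter described above.
# The indicator-word set and length threshold are assumptions, not the
# actual values used in the BoolQ pipeline.

YES_NO_INDICATORS = {
    "is", "are", "was", "were", "do", "does", "did",
    "can", "could", "will", "would", "has", "have", "had",
}
MIN_TOKENS = 4  # assumed minimum query length


def looks_like_yes_no_query(query: str) -> bool:
    """Return True if a search query is likely a yes/no question."""
    tokens = query.lower().split()
    return len(tokens) >= MIN_TOKENS and tokens[0] in YES_NO_INDICATORS


queries = [
    "is windermere the largest lake in england",
    "capital of ireland",
    "does the uk have a written constitution",
]
candidates = [q for q in queries if looks_like_yes_no_query(q)]
print(candidates)
```

In the actual pipeline, queries passing such a filter would then be kept only if a Wikipedia page appears in the top search results and sent on to annotators.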
We split these questions into a 3.2k dev set, 3.2k test set, and 9.4k train set, ensuring questions from NQ are always in the train set. “Yes” answers are slightly more common (62.31% in the train set). The queries are typically short (average length 8.9 tokens) with longer passages (average length 108 tokens). Analysis In the following section we analyze our dataset to better understand the nature of the questions, the annotation quality, and the kinds of reasoning abilities required to answer them. Annotation Quality First, in order to assess annotation quality, three of the authors labelled 110 randomly chosen examples. If there was a disagreement, the authors conferred and selected a single answer by mutual agreement. We call the resulting labels “gold-standard" labels. On the 110 selected examples, the answer annotations reached 90% accuracy compared to the gold-standard labels. Of the cases where the answer annotation differed from the gold-standard, six were ambiguous or debatable cases, and five were errors where the annotator misunderstood the passage. Since the agreement was sufficiently high, we elected to use singly-annotated examples in the training/dev/test sets in order to be able to gather a larger dataset. Question Types Part of the value of this dataset is that it contains questions that people genuinely want to answer. To explore this further, we manually define a set of topics that questions can be about. An author categorized 200 questions into these topics. The results can be found in the upper half of Table . Questions were often about entertainment media (including T.V., movies, and music), along with other popular topics like sports. However, there are still a good portion of questions asking for more general factual knowledge, including ones about historical events or the natural world. We also broke the questions into categories based on what kind of information they were requesting, shown in the lower half of Table . Roughly one-sixth of the questions are about whether anything with a particular property exists (Existence), another sixth are about whether a particular event occurred (Event Occurrence), and another sixth ask whether an object is known by a particular name, or belongs to a particular category (Definitional). The questions that do not fall into these three categories were split between requesting facts about a specific entity, or requesting more general factual information. We do find a correlation between the nature of the question and the likelihood of a “yes" answer. However, this correlation is too weak to help outperform the majority baseline because, even if the topic or type is known, it is never best to guess the minority class. We also found that question-only models perform very poorly on this task (see Section SECREF12 ), which helps confirm that the questions do not contain sufficient information to predict the answer on their own. Types of Inference Finally, we categorize the kinds of inference required to answer the questions in BoolQ. The definitions and results are shown in Table . Less than 40% of the examples can be solved by detecting paraphrases. Instead, many questions require making additional inferences (categories “Factual Reasoning", “By Example", and “Other Inference") to connect what is stated in the passage to the question. There is also a significant class of questions (categories “Implicit" and “Missing Mention") that require a subtler kind of inference based on how the passage is written. 
Discussion Why do natural yes/no questions require inference so often? We hypothesize that there are several factors. First, we notice factoid questions that ask about simple properties of entities, such as “Was Obama born in 1962?", are rare. We suspect this is because people will almost always prefer to phrase such questions as short-answer questions (e.g., “When was Obama born?"). Thus, there is a natural filtering effect where people tend to use yes/no questions exactly when they want more complex kinds of information. Second, both the passages and questions rarely include negation. As a result, detecting a “no" answer typically requires understanding that a positive assertion in the text excludes, or makes unlikely, a positive assertion in the question. This requires reasoning that goes beyond paraphrasing (see the “Other-Inference" or “Implicit" examples). We also think it was important that annotators only had to answer questions, rather than generate them. For example, imagine trying to construct questions that fall into the categories of “Missing Mention" or “Implicit". While possible, it would require a great deal of thought and creativity. On the other hand, detecting when a yes/no question can be answered using these strategies seems much easier and more intuitive. Thus, having annotators answer pre-existing questions opens the door to building datasets that contain more inference and have higher quality labels. A surprising result from our work is that the datasets that more closely resemble the format of BoolQ, meaning they contain questions and multi-sentence passages, such as SQuAD 2.0, RACE, or Y/N MS Marco, were not very useful for transfer. The entailment datasets were stronger despite consisting of sentence pairs. This suggests that adapting from sentence-pair input to question/passage input was not a large obstacle to achieving transfer. Preliminary work found attempting to convert the yes/no questions in BoolQ into declarative statements did not improve transfer from MultiNLI, which supports this hypothesis. The success of MultiNLI might also be surprising given recent concerns about the generalization abilities of models trained on it BIBREF37 , particularly related to “annotation artifacts" caused by using crowd workers to write the hypothesis statements BIBREF0 . We have shown that, despite these weaknesses, it can still be an important starting point for models being used on natural data. We hypothesize that a key advantage of MultiNLI is that it contains examples of contradictions. The other sources of transfer we consider, including the next-sentence-selection objective in BERT, are closer to providing examples of entailed text vs. neutral/unrelated text. Indeed, we found that our two step transfer procedure only reaches 78.43% dev set accuracy if we remove the contradiction class from MultiNLI, regressing its performance close to the level of BERTL when just using unsupervised pre-training. Note that it is possible to pre-train a model on several of the suggested datasets, either in succession or in a multi-task setup. We leave these experiments to future work. Our results also suggest pre-training on MultiNLI would be helpful for other corpora that contain yes/no questions. Training Yes/No QA Models Models on this dataset need to predict an output class given two pieces of input text, which is a well studied paradigm BIBREF7 . We find training models on our train set alone to be relatively ineffective. 
Our best model reaches 69.6% accuracy, only 8% better than the majority baseline. Therefore, we follow the recent trend in NLP of using transfer learning. In particular, we experiment with pre-training models on related tasks that have larger datasets, and then fine-tuning them on our training data. We list the sources we consider for pre-training below. Entailment: We consider two entailment datasets, MultiNLI BIBREF21 and SNLI BIBREF22 . We choose these datasets since they are widely-used and large enough to use for pre-training. We also experiment with ablating classes from MultiNLI. During fine-tuning we use the probability the model assigns to the “entailment" class as the probability of predicting a “yes" answer. Multiple-Choice QA: We use a multiple choice reading comprehension dataset, RACE BIBREF31 , which contains stories or short essays paired with questions built to test the reader's comprehension of the text. Following what was done in SciTail BIBREF25 , we convert questions and answer-options to statements by either substituting the answer-option for the blanks in fill-in-the-blank questions, or appending a separator token and the answer-option to the question. During training, we have models independently assign a score to each statement, and then apply the softmax operator between all statements per each question to get statement probabilities. We use the negative log probability of the correct statement as a loss function. To fine-tune on BoolQ, we apply the sigmoid operator to the score of the question given its passage to get the probability of a “yes" answer. Extractive QA: We consider several methods of leveraging extractive QA datasets, where the model must answer questions by selecting text from a relevant passage. Preliminary experiments found that simply transferring the lower-level weights of extractive QA models was ineffective, so we instead consider three methods of constructing entailment-like data from extractive QA data. First, we use the QNLI task from GLUE BIBREF7 , where the model must determine if a sentence from SQuAD 1.1 BIBREF15 contains the answer to an input question or not. Following previous work BIBREF32 , we also try building entailment-like training data from SQuAD 2.0 BIBREF33 . We concatenate questions with either the correct answer, or with the incorrect “distractor" answer candidate provided by the dataset, and train the model to classify which is which given the question's supporting text. Finally, we also experiment with leveraging the long-answer portion of NQ, where models must select a paragraph containing the answer to a question from a document. Following our method for Multiple-Choice QA, we train a model to assign a score to (question, paragraph) pairs, apply the softmax operator on paragraphs from the same document to get a probability distribution over the paragraphs, and train the model on the negative log probability of selecting an answer-containing paragraph. We only train on questions that were marked as having an answer, and select an answer-containing paragraph and up to 15 randomly chosen non-answer-containing paragraphs for each question. On BoolQ, we compute the probability of a “yes" answer by applying the sigmoid operator to the score the model gives to the input question and passage. Paraphrasing: We use the Quora Question Paraphrasing (QQP) dataset, which consists of pairs of questions labelled as being paraphrases or not. 
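The conversion recipes described above for RACE and for the NQ long-answer task share one scoring pattern: score each candidate statement or paragraph independently, apply a softmax across the candidates for a question during pre-training, and apply a sigmoid to the single question/passage score when fine-tuning on BoolQ. A minimal numpy sketch of that objective follows; the scores are placeholders standing in for model outputs, not values from the paper.

```python
# Sketch of the candidate-scoring objective described above: softmax over
# per-candidate scores with a negative log likelihood loss during
# pre-training, and a sigmoid over a single score at fine-tuning time.
# The scores below are placeholders standing in for model outputs.
import numpy as np


def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()


# Pre-training: one correct candidate (index 0) among distractors.
candidate_scores = np.array([2.1, 0.3, -0.5, 1.0])
probs = softmax(candidate_scores)
loss = -np.log(probs[0])  # negative log probability of the correct candidate
print(round(float(loss), 3))

# Fine-tuning on BoolQ: a single (question, passage) score -> P("yes").
score = 1.4
p_yes = 1.0 / (1.0 + np.exp(-score))
print(round(float(p_yes), 3))
```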
Paraphrasing is related to entailment since we expect, at least in some cases, passages will contain a paraphrase of the question. Heuristic Yes/No: We attempt to heuristically construct a corpus of yes/no questions from the MS Marco corpus BIBREF11 . MS Marco has free-form answers paired with snippets of related web documents. We search for answers starting with “yes" or “no", and then pair the corresponding questions with snippets marked as being related to the question. We call this task Y/N MS Marco; in total we gather 38k examples, 80% of which are “yes” answers. Unsupervised: It is well known that unsupervised pre-training using language-modeling objectives BIBREF9 , BIBREF8 , BIBREF27 , can improve performance on many tasks. We experiment with these methods by using the pre-trained models from ELMo, BERT, and OpenAI's Generative Pre-trained Transformer (OpenAI GPT) (see Section SECREF11 ). Shallow Models First, we experiment with using a linear classifier on our task. In general, we found features such as word overlap or TF-IDF statistics were not sufficient to achieve better than the majority-class baseline accuracy (62.17% on the dev set). We did find there was a correlation between the number of times question words occurred in the passage and the answer being “yes", but the correlation was not strong enough to build an effective classifier. “Yes" is the most common answer even among questions with zero shared words between the question and passage (with a 51% majority), and more common in other cases. Neural Models For our experiments that do not use unsupervised pre-training (except the use of pre-trained word vectors), we use a standard recurrent model with attention. Our experiments using unsupervised pre-training use the models provided by the authors. In more detail: Our Recurrent model follows a standard recurrent plus attention architecture for text-pair classification BIBREF7 . It embeds the premise/hypothesis text using fasttext word vectors BIBREF34 and learned character vectors, applies a shared bidirectional LSTM to both parts, applies co-attention BIBREF35 to share information between the two parts, applies another bi-LSTM to both parts, pools the result, and uses the pooled representation to predict the final class. See Appendix SECREF18 for details. Our Recurrent +ELMo model uses the language model from BIBREF9 to provide contextualized embeddings to the baseline model outlined above, as recommended by the authors. Our OpenAI GPT model fine-tunes the 12 layer 768 dimensional uni-directional transformer from BIBREF27 , which has been pre-trained as a language model on the Books corpus BIBREF36 . Our BERTL model fine-tunes the 24 layer 1024 dimensional transformer from BIBREF8 , which has been trained on next-sentence-selection and masked language modelling on the Book Corpus and Wikipedia. We fine-tune the BERTL and the OpenAI GPT models using the optimizers recommended by the authors, but found it important to tune the optimization parameters to achieve the best results. We use a batch size of 24, learning rate of 1e-5, and 5 training epochs for BERT and a learning rate of 6.25e-5, batch size of 6, language model loss of 0.5, and 3 training epochs for OpenAI GPT. Question/Passage Only Results Following the recommendation of BIBREF0 , we first experiment with models that are only allowed to observe the question or the passage. The pre-trained BERTL model reached 64.48% dev set accuracy using just the question and 66.74% using just the passage. 
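Returning to the shallow baselines described earlier in this section, the sketch below shows the general shape of such a model: a couple of overlap features per question/passage pair fed into a linear classifier. The feature choices and toy examples are assumptions made for illustration; the paper reports that features of this kind were not sufficient to beat the majority-class baseline.

```python
# Sketch of a shallow overlap-feature baseline of the kind discussed in the
# "Shallow Models" paragraph above. Features and toy data are illustrative
# assumptions, not the paper's exact setup.
import numpy as np
from sklearn.linear_model import LogisticRegression


def overlap_features(question: str, passage: str):
    q = set(question.lower().split())
    p = set(passage.lower().split())
    shared = len(q & p)
    return [shared, shared / max(len(q), 1)]


pairs = [
    ("is dublin the capital of ireland", "Dublin is the capital of Ireland.", 1),
    ("is cork the capital of ireland", "Dublin is the capital of Ireland.", 0),
    ("does ireland use the euro", "Ireland adopted the euro in 2002.", 1),
    ("does ireland use the dollar", "Ireland adopted the euro in 2002.", 0),
]
X = np.array([overlap_features(q, p) for q, p, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```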
Given that the majority baseline is 62.17%, this suggests there is little signal in the question by itself, but that some language patterns in the passage correlate with the answer. Possibly, passages that present more straightforward factual information (like Wikipedia introduction paragraphs) correlate with “yes" answers. Transfer Learning Results The results of our transfer learning methods are shown in Table . All results are averaged over five runs. For models pre-trained on supervised datasets, both the pre-training and the fine-tuning stages were repeated. For unsupervised pre-training, we use the pre-trained models provided by the authors, but continue to average over five runs of fine-tuning. QA Results: We were unable to transfer from RACE or SQuAD 2.0. For RACE, the problem might be domain mismatch. In RACE the passages are stories, and the questions often query for passage-specific information such as the author's intent or the state of a particular entity from the passage, instead of general knowledge. We would expect SQuAD 2.0 to be a better match for BoolQ since it is also Wikipedia-based, but it is possible that detecting the adversarially-constructed distractors used for negative examples does not relate well to yes/no QA. We got better results using QNLI, and even better results using NQ. This shows the task of selecting text relevant to a question is partially transferable to yes/no QA, although we are only able to gain a few points over the baseline. Entailment Results: The MultiNLI dataset out-performed all other supervised methods by a large margin. Remarkably, this approach is only a few points behind BERT despite using orders of magnitude less training data and a much more light-weight model, showing high-quality pre-training data can help compensate for these deficiencies. Our ablation results show that removing the neutral class from MultiNLI hurt transfer slightly, and removing either of the other classes was very harmful, suggesting the neutral examples had limited value. SNLI transferred better than other datasets, but worse than MultiNLI. We suspect this is due to limitations of the photo-caption domain it was constructed from. Other Supervised Results: We obtained a small amount of transfer using QQP and Y/N MS Marco. Although Y/N MS Marco is a yes/no QA dataset, its small size and class imbalance likely contributed to its limited effectiveness. The web snippets it uses as passages also present a large domain shift from the Wikipedia passages in BoolQ. Unsupervised Results: Following results on other datasets BIBREF7 , we found BERTL to be the most effective unsupervised method, surpassing all other methods of pre-training. Multi-Step Transfer Results Our best single-step transfer learning results were from using the pre-trained BERTL model and MultiNLI. We also experiment with combining these approaches using a two-step pre-training regime. In particular, we fine-tune the pre-trained BERTL on MultiNLI, and then fine-tune the resulting model again on the BoolQ train set. We found decreasing the number of training epochs to 3 resulted in a slight improvement when using the model pre-trained on MultiNLI. We show the test set results for this model, and some other pre-training variations, in Table . For these results we train five versions of each model using different training seeds, and show the model that had the best dev-set performance.
Given how extensively the BERTL model has been pre-trained, and how successful it has been across many NLP tasks, the additional gain of 3.5 points due to using MultiNLI is remarkable. This suggests MultiNLI contains signal orthogonal to what is found in BERT's unsupervised objectives. Sample Efficiency In Figure 2, we graph model accuracy as more of the training data is used for fine-tuning, both with and without initially pre-training on MultiNLI. Pre-training on MultiNLI gives at least a 5-6 point gain, and nearly a 10 point gain for BERTL when only using 1000 examples. For small numbers of examples, the recurrent model with MultiNLI pre-training actually out-performs BERTL. Conclusion We have introduced BoolQ, a new reading comprehension dataset of naturally occurring yes/no questions. We have shown these questions are challenging and require a wide range of inference abilities to solve. We have also studied how transfer learning performs on this task, and found crowd-sourced entailment datasets can be leveraged to boost performance even on top of language model pre-training. Future work could include building a document-level version of this task, which would increase its difficulty and its correspondence to an end-user application. Randomly Selected Examples We include a number of randomly selected examples from the BoolQ train set in Figure FIGREF19 . For each example we show the question in bold, followed by the answer in parentheses, and then the passage below. Recurrent Model Our recurrent model is a standard model from the text pair classification literature, similar to the one used in the GLUE baseline BIBREF7 and the model from BIBREF38 . Our model has the following stages: Embed: Embed the words using a character CNN following what was done by BIBREF40 , and the fasttext crawl word embeddings BIBREF34 . Then run a BiLSTM over the results to get context-aware word hypothesis embeddings INLINEFORM0 and premise embeddings INLINEFORM1 . Co-Attention: Compute a co-attention matrix, INLINEFORM0 , between the hypothesis and premise where INLINEFORM1 , INLINEFORM2 is elementwise multiplication, and INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 are weights to be learned. Attend: For each row in INLINEFORM0 , apply the softmax operator and use the results to compute a weighted sum of the hypothesis embeddings, resulting in attended vectors INLINEFORM1 . We use the transpose of INLINEFORM2 to compute vectors INLINEFORM3 from the premise embeddings in a similar manner. Pool: Run another BiLSTM over INLINEFORM0 to get embeddings INLINEFORM1 . Then pool these embeddings by computing attention scores INLINEFORM2 , INLINEFORM3 , and then the sum INLINEFORM4 = INLINEFORM5 . Likewise we compute INLINEFORM6 from the premise. Classify: Finally we feed INLINEFORM0 into a fully connected layer, and then through a softmax layer to predict the output class. We apply dropout at a rate of 0.2 between all layers, and train the model using the Adam optimizer BIBREF39 . The learning rate is decayed by 0.999 every 100 steps. We use 200 dimensional LSTMs and a 100 dimensional fully connected layer.
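The Attend step described above reduces to a few matrix operations: a score matrix between premise and hypothesis tokens, a row-wise softmax, and weighted sums in both directions. The numpy sketch below illustrates that pattern; the scoring function is simplified to a plain dot product because the paper's exact parameterisation is lost in the INLINEFORM placeholders, so treat the details as assumptions rather than the authors' implementation.

```python
# Illustrative sketch of the attend step of a co-attention text-pair model.
# The scoring function here is a plain dot product; the paper's exact
# parameterisation (the INLINEFORM terms above) is not reproduced.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(7, 200))   # hypothesis token embeddings (n_h x d)
P = rng.normal(size=(12, 200))  # premise token embeddings   (n_p x d)


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


# Co-attention score matrix: one score per (premise, hypothesis) token pair.
A = P @ H.T                                      # shape (n_p, n_h)

# Each premise token gets a weighted sum of hypothesis embeddings...
premise_attended = softmax(A, axis=1) @ H        # (n_p, d)
# ...and, using the transpose, each hypothesis token attends over the premise.
hypothesis_attended = softmax(A.T, axis=1) @ P   # (n_h, d)

print(premise_attended.shape, hypothesis_attended.shape)
```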
Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no"
10210d5c31dc937e765051ee066b971b6f04d3af
10210d5c31dc937e765051ee066b971b6f04d3af_0
Q: what is the size of BoolQ dataset? Text: Introduction Understanding what facts can be inferred to be true or false from text is an essential part of natural language understanding. In many cases, these inferences can go well beyond what is immediately stated in the text. For example, a simple sentence like “Hanna Huyskova won the gold medal for Belarus in freestyle skiing." implies that (1) Belarus is a country, (2) Hanna Huyskova is an athlete, (3) Belarus won at least one Olympic event, (4) the USA did not win the freestyle skiing event, and so on. To test a model's ability to make these kinds of inferences, previous work in natural language inference (NLI) proposed the task of labeling candidate statements as being entailed or contradicted by a given passage. However, in practice, generating candidate statements that test for complex inferential abilities is challenging. For instance, evidence suggests BIBREF0 , BIBREF1 , BIBREF2 that simply asking human annotators to write candidate statements will result in examples that typically only require surface-level reasoning. In this paper we propose an alternative: we test models on their ability to answer naturally occurring yes/no questions. That is, questions that were authored by people who were not prompted to write particular kinds of questions, including even being required to write yes/no questions, and who did not know the answer to the question they were asking. Figure contains some examples from our dataset. We find such questions often query for non-factoid information, and that human annotators need to apply a wide range of inferential abilities when answering them. As a result, they can be used to construct highly inferential reading comprehension datasets that have the added benefit of being directly related to the practical end-task of answering user yes/no questions. Yes/No questions do appear as a subset of some existing datasets BIBREF3 , BIBREF4 , BIBREF5 . However, these datasets are primarily intended to test other aspects of question answering (QA), such as conversational QA or multi-step reasoning, and do not contain naturally occurring questions. We follow the data collection method used by Natural Questions (NQ) BIBREF6 to gather 16,000 naturally occurring yes/no questions into a dataset we call BoolQ (for Boolean Questions). Each question is paired with a paragraph from Wikipedia that an independent annotator has marked as containing the answer. The task is then to take a question and passage as input, and to return “yes" or “no" as output. Figure contains some examples, and Appendix SECREF17 contains additional randomly selected examples. Following recent work BIBREF7 , we focus on using transfer learning to establish baselines for our dataset. Yes/No QA is closely related to many other NLP tasks, including other forms of question answering, entailment, and paraphrasing. Therefore, it is not clear what the best data sources to transfer from are, or if it will be sufficient to just transfer from powerful pre-trained language models such as BERT BIBREF8 or ELMo BIBREF9 . We experiment with state-of-the-art unsupervised approaches, using existing entailment datasets, three methods of leveraging extractive QA data, and using a few other supervised datasets. We found that transferring from MultiNLI, and the unsupervised pre-training in BERT, gave us the best results.
Notably, we found these approaches are surprisingly complementary and can be combined to achieve a large gain in performance. Overall, our best model reaches 80.43% accuracy, compared to 62.31% for the majority baseline and 90% human accuracy. In light of the fact BERT on its own has achieved human-like performance on several NLP tasks, this demonstrates the high degree of difficulty of our dataset. We present our data and code at https://goo.gl/boolq. Related Work Yes/No questions make up a subset of the reading comprehension datasets CoQA BIBREF3 , QuAC BIBREF4 , and HotPotQA BIBREF5 , and are present in the ShARC BIBREF10 dataset. These datasets were built to challenge models to understand conversational QA (for CoQA, ShARC and QuAC) or multi-step reasoning (for HotPotQA), which complicates our goal of using yes/no questions to test inferential abilities. Of the four, QuAC is the only one where the question authors were not allowed to view the text being used to answer their questions, making it the best candidate to contain naturally occurring questions. However, QuAC still heavily prompts users, including limiting their questions to be about pre-selected Wikipedia articles, and is highly class imbalanced with 80% “yes" answers. The MS Marco dataset BIBREF11 , which contains questions with free-form text answers, also includes some yes/no questions. We experiment with heuristically identifying them in Section SECREF4 , but this process can be noisy and the quality of the resulting annotations is unknown. We also found the resulting dataset is class imbalanced, with 80% “yes" answers. Yes/No QA has been used in other contexts, such as the templated bAbI stories BIBREF12 or some Visual QA datasets BIBREF13 , BIBREF14 . We focus on answering yes/no questions using natural language text. Question answering for reading comprehension in general has seen a great deal of recent work BIBREF15 , BIBREF16 , and there have been many recent attempts to construct QA datasets that require advanced reasoning abilities BIBREF5 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . However, these attempts typically involve engineering data to be more difficult by, for example, explicitly prompting users to write multi-step questions BIBREF5 , BIBREF18 , or filtering out easy questions BIBREF19 . This risks resulting in models that do not have obvious end-use applications since they are optimized to perform in an artificial setting. In this paper, we show that yes/no questions have the benefit of being very challenging even when they are gathered from natural sources. Natural language inference is also a well studied area of research, particularly on the MultiNLI BIBREF21 and SNLI BIBREF22 datasets. Other sources of entailment data include the PASCAL RTE challenges BIBREF23 , BIBREF24 or SciTail BIBREF25 . We note that, although SciTail, RTE-6 and RTE-7 did not use crowd workers to generate candidate statements, they still use sources (multiple choices questions or document summaries) that were written by humans with knowledge of the premise text. Using naturally occurring yes/no questions ensures even greater independence between the questions and premise text, and ties our dataset to a clear end-task. BoolQ also requires detecting entailment in paragraphs instead of sentence pairs. Transfer learning for entailment has been studied in GLUE BIBREF7 and SentEval BIBREF26 . 
Unsupervised pre-training in general has recently shown excellent results on many datasets, including entailment data BIBREF9 , BIBREF8 , BIBREF27 . Converting short-answer or multiple choice questions into entailment examples, as we do when experimenting with transfer learning, has been proposed in several prior works BIBREF28 , BIBREF29 , BIBREF25 . In this paper we found some evidence suggesting that these approaches are less effective than using crowd-sourced entailment examples when it comes to transferring to natural yes/no questions. Contemporaneously with our work, BIBREF30 showed that pre-training on supervised tasks could be beneficial even when using pre-trained language models, especially for a textual entailment task. Our work confirms these results for yes/no question answering. This work builds upon the Natural Questions (NQ) BIBREF6 , which contains some natural yes/no questions. However, there are too few (about 1% of the corpus) to make yes/no QA a very important aspect of that task. In this paper, we gather a large number of additional yes/no questions in order to construct a dedicated yes/no QA dataset. The BoolQ Dataset An example in our dataset consists of a question, a paragraph from a Wikipedia article, the title of the article, and an answer, which is either “yes" or “no". We include the article title since it can potentially help resolve ambiguities (e.g., coreferent phrases) in the passage, although none of the models presented in this paper make use of them. Data Collection We gather data using the pipeline from NQ BIBREF6 , but with an additional filtering step to focus on yes/no questions. We summarize the complete pipeline here, but refer to their paper for a more detailed description. Questions are gathered from anonymized, aggregated queries to the Google search engine. Queries that are likely to be yes/no questions are heuristically identified: we found selecting queries where the first word is in a manually constructed set of indicator words and are of sufficient length, to be effective. Questions are only kept if a Wikipedia page is returned as one of the first five results, in which case the question and Wikipedia page are given to a human annotator for further processing. Annotators label question/article pairs in a three-step process. First, they decide if the question is good, meaning it is comprehensible, unambiguous, and requesting factual information. This judgment is made before the annotator sees the Wikipedia page. Next, for good questions, annotators find a passage within the document that contains enough information to answer the question. Annotators can mark questions as “not answerable" if the Wikipedia article does not contain the requested information. Finally, annotators mark whether the question's answer is “yes" or “no". Annotating data in this manner is quite expensive since annotators need to search entire Wikipedia documents for relevant evidence and read the text carefully. Note that, unlike in NQ, we only use questions that were marked as having a yes/no answer, and pair each question with the selected passage instead of the entire document. This helps reduce ambiguity (ex., avoiding cases where the document supplies conflicting answers in different paragraphs), and keeps the input small enough so that existing entailment models can easily be applied to our dataset. We combine 13k questions gathered from this pipeline with an additional 3k questions with yes/no answers from the NQ training set to reach a total of 16k questions. 
We split these questions into a 3.2k dev set, 3.2k test set, and 9.4k train set, ensuring questions from NQ are always in the train set. “Yes” answers are slightly more common (62.31% in the train set). The queries are typically short (average length 8.9 tokens) with longer passages (average length 108 tokens). Analysis In the following section we analyze our dataset to better understand the nature of the questions, the annotation quality, and the kinds of reasoning abilities required to answer them. Annotation Quality First, in order to assess annotation quality, three of the authors labelled 110 randomly chosen examples. If there was a disagreement, the authors conferred and selected a single answer by mutual agreement. We call the resulting labels “gold-standard" labels. On the 110 selected examples, the answer annotations reached 90% accuracy compared to the gold-standard labels. Of the cases where the answer annotation differed from the gold-standard, six were ambiguous or debatable cases, and five were errors where the annotator misunderstood the passage. Since the agreement was sufficiently high, we elected to use singly-annotated examples in the training/dev/test sets in order to be able to gather a larger dataset. Question Types Part of the value of this dataset is that it contains questions that people genuinely want to answer. To explore this further, we manually define a set of topics that questions can be about. An author categorized 200 questions into these topics. The results can be found in the upper half of Table . Questions were often about entertainment media (including T.V., movies, and music), along with other popular topics like sports. However, there are still a good portion of questions asking for more general factual knowledge, including ones about historical events or the natural world. We also broke the questions into categories based on what kind of information they were requesting, shown in the lower half of Table . Roughly one-sixth of the questions are about whether anything with a particular property exists (Existence), another sixth are about whether a particular event occurred (Event Occurrence), and another sixth ask whether an object is known by a particular name, or belongs to a particular category (Definitional). The questions that do not fall into these three categories were split between requesting facts about a specific entity, or requesting more general factual information. We do find a correlation between the nature of the question and the likelihood of a “yes" answer. However, this correlation is too weak to help outperform the majority baseline because, even if the topic or type is known, it is never best to guess the minority class. We also found that question-only models perform very poorly on this task (see Section SECREF12 ), which helps confirm that the questions do not contain sufficient information to predict the answer on their own. Types of Inference Finally, we categorize the kinds of inference required to answer the questions in BoolQ. The definitions and results are shown in Table . Less than 40% of the examples can be solved by detecting paraphrases. Instead, many questions require making additional inferences (categories “Factual Reasoning", “By Example", and “Other Inference") to connect what is stated in the passage to the question. There is also a significant class of questions (categories “Implicit" and “Missing Mention") that require a subtler kind of inference based on how the passage is written. 
Discussion Why do natural yes/no questions require inference so often? We hypothesize that there are several factors. First, we notice factoid questions that ask about simple properties of entities, such as “Was Obama born in 1962?", are rare. We suspect this is because people will almost always prefer to phrase such questions as short-answer questions (e.g., “When was Obama born?"). Thus, there is a natural filtering effect where people tend to use yes/no questions exactly when they want more complex kinds of information. Second, both the passages and questions rarely include negation. As a result, detecting a “no" answer typically requires understanding that a positive assertion in the text excludes, or makes unlikely, a positive assertion in the question. This requires reasoning that goes beyond paraphrasing (see the “Other-Inference" or “Implicit" examples). We also think it was important that annotators only had to answer questions, rather than generate them. For example, imagine trying to construct questions that fall into the categories of “Missing Mention" or “Implicit". While possible, it would require a great deal of thought and creativity. On the other hand, detecting when a yes/no question can be answered using these strategies seems much easier and more intuitive. Thus, having annotators answer pre-existing questions opens the door to building datasets that contain more inference and have higher quality labels. A surprising result from our work is that the datasets that more closely resemble the format of BoolQ, meaning they contain questions and multi-sentence passages, such as SQuAD 2.0, RACE, or Y/N MS Marco, were not very useful for transfer. The entailment datasets were stronger despite consisting of sentence pairs. This suggests that adapting from sentence-pair input to question/passage input was not a large obstacle to achieving transfer. Preliminary work found attempting to convert the yes/no questions in BoolQ into declarative statements did not improve transfer from MultiNLI, which supports this hypothesis. The success of MultiNLI might also be surprising given recent concerns about the generalization abilities of models trained on it BIBREF37 , particularly related to “annotation artifacts" caused by using crowd workers to write the hypothesis statements BIBREF0 . We have shown that, despite these weaknesses, it can still be an important starting point for models being used on natural data. We hypothesize that a key advantage of MultiNLI is that it contains examples of contradictions. The other sources of transfer we consider, including the next-sentence-selection objective in BERT, are closer to providing examples of entailed text vs. neutral/unrelated text. Indeed, we found that our two step transfer procedure only reaches 78.43% dev set accuracy if we remove the contradiction class from MultiNLI, regressing its performance close to the level of BERTL when just using unsupervised pre-training. Note that it is possible to pre-train a model on several of the suggested datasets, either in succession or in a multi-task setup. We leave these experiments to future work. Our results also suggest pre-training on MultiNLI would be helpful for other corpora that contain yes/no questions. Training Yes/No QA Models Models on this dataset need to predict an output class given two pieces of input text, which is a well studied paradigm BIBREF7 . We find training models on our train set alone to be relatively ineffective. 
Our best model reaches 69.6% accuracy, only 8% better than the majority baseline. Therefore, we follow the recent trend in NLP of using transfer learning. In particular, we experiment with pre-training models on related tasks that have larger datasets, and then fine-tuning them on our training data. We list the sources we consider for pre-training below. Entailment: We consider two entailment datasets, MultiNLI BIBREF21 and SNLI BIBREF22 . We choose these datasets since they are widely-used and large enough to use for pre-training. We also experiment with ablating classes from MultiNLI. During fine-tuning we use the probability the model assigns to the “entailment" class as the probability of predicting a “yes" answer. Multiple-Choice QA: We use a multiple choice reading comprehension dataset, RACE BIBREF31 , which contains stories or short essays paired with questions built to test the reader's comprehension of the text. Following what was done in SciTail BIBREF25 , we convert questions and answer-options to statements by either substituting the answer-option for the blanks in fill-in-the-blank questions, or appending a separator token and the answer-option to the question. During training, we have models independently assign a score to each statement, and then apply the softmax operator between all statements per each question to get statement probabilities. We use the negative log probability of the correct statement as a loss function. To fine-tune on BoolQ, we apply the sigmoid operator to the score of the question given its passage to get the probability of a “yes" answer. Extractive QA: We consider several methods of leveraging extractive QA datasets, where the model must answer questions by selecting text from a relevant passage. Preliminary experiments found that simply transferring the lower-level weights of extractive QA models was ineffective, so we instead consider three methods of constructing entailment-like data from extractive QA data. First, we use the QNLI task from GLUE BIBREF7 , where the model must determine if a sentence from SQuAD 1.1 BIBREF15 contains the answer to an input question or not. Following previous work BIBREF32 , we also try building entailment-like training data from SQuAD 2.0 BIBREF33 . We concatenate questions with either the correct answer, or with the incorrect “distractor" answer candidate provided by the dataset, and train the model to classify which is which given the question's supporting text. Finally, we also experiment with leveraging the long-answer portion of NQ, where models must select a paragraph containing the answer to a question from a document. Following our method for Multiple-Choice QA, we train a model to assign a score to (question, paragraph) pairs, apply the softmax operator on paragraphs from the same document to get a probability distribution over the paragraphs, and train the model on the negative log probability of selecting an answer-containing paragraph. We only train on questions that were marked as having an answer, and select an answer-containing paragraph and up to 15 randomly chosen non-answer-containing paragraphs for each question. On BoolQ, we compute the probability of a “yes" answer by applying the sigmoid operator to the score the model gives to the input question and passage. Paraphrasing: We use the Quora Question Paraphrasing (QQP) dataset, which consists of pairs of questions labelled as being paraphrases or not. 
Paraphrasing is related to entailment since we expect, at least in some cases, passages will contain a paraphrase of the question. Heuristic Yes/No: We attempt to heuristically construct a corpus of yes/no questions from the MS Marco corpus BIBREF11 . MS Marco has free-form answers paired with snippets of related web documents. We search for answers starting with “yes" or “no", and then pair the corresponding questions with snippets marked as being related to the question. We call this task Y/N MS Marco; in total we gather 38k examples, 80% of which are “yes” answers. Unsupervised: It is well known that unsupervised pre-training using language-modeling objectives BIBREF9 , BIBREF8 , BIBREF27 , can improve performance on many tasks. We experiment with these methods by using the pre-trained models from ELMo, BERT, and OpenAI's Generative Pre-trained Transformer (OpenAI GPT) (see Section SECREF11 ). Shallow Models First, we experiment with using a linear classifier on our task. In general, we found features such as word overlap or TF-IDF statistics were not sufficient to achieve better than the majority-class baseline accuracy (62.17% on the dev set). We did find there was a correlation between the number of times question words occurred in the passage and the answer being “yes", but the correlation was not strong enough to build an effective classifier. “Yes" is the most common answer even among questions with zero shared words between the question and passage (with a 51% majority), and more common in other cases. Neural Models For our experiments that do not use unsupervised pre-training (except the use of pre-trained word vectors), we use a standard recurrent model with attention. Our experiments using unsupervised pre-training use the models provided by the authors. In more detail: Our Recurrent model follows a standard recurrent plus attention architecture for text-pair classification BIBREF7 . It embeds the premise/hypothesis text using fasttext word vectors BIBREF34 and learned character vectors, applies a shared bidirectional LSTM to both parts, applies co-attention BIBREF35 to share information between the two parts, applies another bi-LSTM to both parts, pools the result, and uses the pooled representation to predict the final class. See Appendix SECREF18 for details. Our Recurrent +ELMo model uses the language model from BIBREF9 to provide contextualized embeddings to the baseline model outlined above, as recommended by the authors. Our OpenAI GPT model fine-tunes the 12 layer 768 dimensional uni-directional transformer from BIBREF27 , which has been pre-trained as a language model on the Books corpus BIBREF36 . Our BERTL model fine-tunes the 24 layer 1024 dimensional transformer from BIBREF8 , which has been trained on next-sentence-selection and masked language modelling on the Book Corpus and Wikipedia. We fine-tune the BERTL and the OpenAI GPT models using the optimizers recommended by the authors, but found it important to tune the optimization parameters to achieve the best results. We use a batch size of 24, learning rate of 1e-5, and 5 training epochs for BERT and a learning rate of 6.25e-5, batch size of 6, language model loss of 0.5, and 3 training epochs for OpenAI GPT. Question/Passage Only Results Following the recommendation of BIBREF0 , we first experiment with models that are only allowed to observe the question or the passage. The pre-trained BERTL model reached 64.48% dev set accuracy using just the question and 66.74% using just the passage. 
Given that the majority baseline is 62.17%, this suggests there is little signal in the question by itself, but that some language patterns in the passage correlate with the answer. Possibly, passages that present more straightforward factual information (like Wikipedia introduction paragraphs) correlate with “yes" answers. Transfer Learning Results The results of our transfer learning methods are shown in Table . All results are averaged over five runs. For models pre-trained on supervised datasets, both the pre-training and the fine-tuning stages were repeated. For unsupervised pre-training, we use the pre-trained models provided by the authors, but continue to average over five runs of fine-tuning. QA Results: We were unable to transfer from RACE or SQuAD 2.0. For RACE, the problem might be domain mismatch. In RACE the passages are stories, and the questions often query for passage-specific information such as the author's intent or the state of a particular entity from the passage, instead of general knowledge. We would expect SQuAD 2.0 to be a better match for BoolQ since it is also Wikipedia-based, but it is possible that detecting the adversarially-constructed distractors used for negative examples does not relate well to yes/no QA. We got better results using QNLI, and even better results using NQ. This shows the task of selecting text relevant to a question is partially transferable to yes/no QA, although we are only able to gain a few points over the baseline. Entailment Results: The MultiNLI dataset out-performed all other supervised methods by a large margin. Remarkably, this approach is only a few points behind BERT despite using orders of magnitude less training data and a much more light-weight model, showing high-quality pre-training data can help compensate for these deficiencies. Our ablation results show that removing the neutral class from MultiNLI hurt transfer slightly, and removing either of the other classes was very harmful, suggesting the neutral examples had limited value. SNLI transferred better than other datasets, but worse than MultiNLI. We suspect this is due to limitations of the photo-caption domain it was constructed from. Other Supervised Results: We obtained a small amount of transfer using QQP and Y/N MS Marco. Although Y/N MS Marco is a yes/no QA dataset, its small size and class imbalance likely contributed to its limited effectiveness. The web snippets it uses as passages also present a large domain shift from the Wikipedia passages in BoolQ. Unsupervised Results: Following results on other datasets BIBREF7 , we found BERTL to be the most effective unsupervised method, surpassing all other methods of pre-training. Multi-Step Transfer Results Our best single-step transfer learning results were from using the pre-trained BERTL model and MultiNLI. We also experiment with combining these approaches using a two-step pre-training regime. In particular, we fine-tune the pre-trained BERTL on MultiNLI, and then fine-tune the resulting model again on the BoolQ train set. We found decreasing the number of training epochs to 3 resulted in a slight improvement when using the model pre-trained on MultiNLI. We show the test set results for this model, and some other pre-training variations, in Table . For these results we train five versions of each model using different training seeds, and show the model that had the best dev-set performance.
Given how extensively the BERTL model has been pre-trained, and how successful it has been across many NLP tasks, the additional gain of 3.5 points due to using MultiNLI is remarkable. This suggests MultiNLI contains signal orthogonal to what is found in BERT's unsupervised objectives. Sample Efficiency In Figure 2, we graph model accuracy as more of the training data is used for fine-tuning, both with and without initially pre-training on MultiNLI. Pre-training on MultiNLI gives at least a 5-6 point gain, and nearly a 10 point gain for BERTL when only using 1000 examples. For small numbers of examples, the recurrent model with MultiNLI pre-training actually out-performs BERTL. Conclusion We have introduced BoolQ, a new reading comprehension dataset of naturally occurring yes/no questions. We have shown these questions are challenging and require a wide range of inference abilities to solve. We have also studied how transfer learning performs on this task, and found crowd-sourced entailment datasets can be leveraged to boost performance even on top of language model pre-training. Future work could include building a document-level version of this task, which would increase its difficulty and its correspondence to an end-user application. Randomly Selected Examples We include a number of randomly selected examples from the BoolQ train set in Figure FIGREF19 . For each example we show the question in bold, followed by the answer in parentheses, and then the passage below. Recurrent Model Our recurrent model is a standard model from the text pair classification literature, similar to the one used in the GLUE baseline BIBREF7 and the model from BIBREF38 . Our model has the following stages: Embed: Embed the words using a character CNN following what was done by BIBREF40 , and the fasttext crawl word embeddings BIBREF34 . Then run a BiLSTM over the results to get context-aware word hypothesis embeddings INLINEFORM0 and premise embeddings INLINEFORM1 . Co-Attention: Compute a co-attention matrix, INLINEFORM0 , between the hypothesis and premise where INLINEFORM1 , INLINEFORM2 is elementwise multiplication, and INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 are weights to be learned. Attend: For each row in INLINEFORM0 , apply the softmax operator and use the results to compute a weighted sum of the hypothesis embeddings, resulting in attended vectors INLINEFORM1 . We use the transpose of INLINEFORM2 to compute vectors INLINEFORM3 from the premise embeddings in a similar manner. Pool: Run another BiLSTM over INLINEFORM0 to get embeddings INLINEFORM1 . Then pool these embeddings by computing attention scores INLINEFORM2 , INLINEFORM3 , and then the sum INLINEFORM4 = INLINEFORM5 . Likewise we compute INLINEFORM6 from the premise. Classify: Finally we feed INLINEFORM0 into a fully connected layer, and then through a softmax layer to predict the output class. We apply dropout at a rate of 0.2 between all layers, and train the model using the Adam optimizer BIBREF39 . The learning rate is decayed by 0.999 every 100 steps. We use 200 dimensional LSTMs and a 100 dimensional fully connected layer.
16k questions
5d9b088bb066750b60debfb0b9439049b5a5c0ce
5d9b088bb066750b60debfb0b9439049b5a5c0ce_0
Q: what processing was done on the speeches before being parsed? Text: Introduction Almost all political decisions and political opinions are, in one way or another, expressed in written or spoken texts. Great leaders in history become famous for their ability to motivate the masses with their speeches; parties publish policy programmes before elections in order to provide information about their policy objectives; parliamentary decisions are discussed and deliberated on the floor in order to exchange opinions; members of the executive in most political systems are legally obliged to provide written or verbal answers to questions from legislators; and citizens express their opinions about political events on internet blogs or in public online chats. Political texts and speeches appear wherever people express their political opinions and preferences. Only recently have social scientists discovered the potential of analyzing political texts to test theories of political behavior. One reason is that systematically processing large quantities of textual data to retrieve information is technically challenging. Computational advances in natural language processing have greatly facilitated this task. The adaptation of such techniques in social science – for example, Wordscore BIBREF0 , BIBREF1 or Wordfish BIBREF2 – now enables researchers to systematically compare documents with one another and extract relevant information from them. Applied to party manifestos, for which most of these techniques have been developed, these methods can be used to evaluate the similarity or dissimilarity between manifestos, which can then be used to derive estimates about parties' policy preferences and their ideological distance to each other. One area of research that increasingly makes use of quantitative text methods is the study of legislative behavior and parliaments BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . Only a few parliaments in the world use roll-call votes (the recording of each legislator's decision in a floor vote) that allow for the monitoring of individual members' behavior. In all other cases, contributions to debates are the only outcome that can be observed from individual members. Using such debates for social science research, however, is often limited by data availability. Although most parliaments keep written records of parliamentary debates and often make such records electronically available, they are never published in formats that facilitate social science research. A significant amount of labor is usually required to collect, clean and organize parliamentary records before they can be used for analytical purposes, often requiring technical skills that many social scientists lack. The purpose of this paper is to present a new database of parliamentary debates to overcome precisely this barrier. Our database contains all debates as well as questions and answers in Dáil Éireann, covering almost a century of political discourse from 1919 to 2013. These debates are organized in a way that allows users to search by date, topic or speaker. More importantly, and unlike the official records of parliamentary debates, we have identified all speakers and linked their debate contributions to the information on party affiliation and constituencies from the official members database. This enables researchers to retrieve member-specific speeches on particular topics or within a particular timeframe.
Furthermore, all data can be retrieved and stored in formats that can be accessed using commonly used statistical software packages. In addition to documenting this database, we also present three applications in which we make use of the new data (Section SECREF3 ). In the first study, we analyze budget speeches delivered by all finance ministers from 1922 to 2008 (Section SECREF11 ) and show how the policy agenda and ministers' policy preferences have changed over time (Section SECREF16 ). In the second application we compare contributions that were made on one particular topic: the 2008 budget debate (Section SECREF20 ). Here we demonstrate how text analytics can be used to estimate members' policy preferences on a dimension that represents pro- versus anti-government attitudes. Finally, we estimate all contributions from members of the 26th government that formed as a coalition between Fianna Fáil and the Progressive Democrats in 2002. Here we estimate the policy positions of all cabinet ministers on a pro- versus anti-spending dimension and show that positions on this dimension are highly correlated with the actual spending levels of each ministerial department (Section SECREF25 ). Overview of Database Content Parliamentary debates in Dáil Éireann are collected by the Oireachtas' Debates Office and published as the Official Record. The Debates Office records and transcribes all debates and then publishes them both in printed as well as in digital form. All debates are then published on Oireachtas' website as single HTML files. At the time of writing, the official debates website contains 549,292 HTML files. The content of all these HTML files forms the data source for our database. It is obviously impossible to hand-code that much information. We therefore wrote a computer script that automated the processing of all files. This script is able to find all debate contributions and the names of all speakers in each file. In addition, it retrieves the date as well as the topic of each debate. As already explained above, the official online version of the Official Records does not provide information about speakers besides their name. Each speaker's name is “hard coded” into the HTML files and not linked to the information in the official members database. In addition, speaker names are not coded consistently, hence making it difficult to collect speeches from a particular deputy. Our goal was to identify every single speaker name that appears in the Official Record and integrate parliamentary speeches with information about deputies' party affiliation, constituency, age and profession from the official members database into a single database. We therefore used an automated record-linkage procedure to identify every single speaker. The final database contains all debates and written answers from the first meeting of the Dáil on 21 January 1919 through to 28 March 2013, covering every Dáil session that has met during this period. In total, the database contains 4,443,713 individual contributions by 1,178 TDs. The data is organized in a way that facilitates analysis for substantive questions of interest to social scientists. Every row in the data set is one contribution with columns containing information on the following variables: Analyzing the Content of Parliamentary Debates In the previous section, we have explained the structure of the database. In the following three sections we demonstrate how the data can be used for social science research. 
We do this by demonstrating three different applications. In the first application, we analyze the budget speeches of all finance ministers from 1922 to 2008. Budget speeches are delivered by Finance Ministers once a year, with the exception of emergency budgets. Analyzing this data, we show how policy agendas and ministers' fiscal preferences have changed over time. In the second application, we construct a data set that resembles a cross-sectional analysis as we retrieve all speeches from one particular year and on one particular topic from our database: the 2008 budget debate. This data structure enables us to estimate the policy positions of all speakers who contributed to the budget debate and to compare how similar or dissimilar their preferences were. We find that policy positions are clustered into two groups: the government and the opposition; but we also find considerable variation within each group. Finally, we take all contributions made during the term of one government and use the data to estimate the policy positions of all cabinet members on a dimension representing pro- versus anti-spending. We demonstrate the validity of estimated policy positions by comparing them against actual spending levels of each cabinet ministers' department and show that the two measures are almost perfectly correlated with each other. The Content of Budget Speeches in Historical Perspective The quantitative analysis of text is primarily based on the proposition that preference profiles of speakers can be constructed from their word frequencies BIBREF15 , BIBREF16 . This makes word frequencies the most important data input to almost all existing methods of text analysis. Word frequencies can be easily visualized as word clouds. These word clouds show the most frequently used words in a text with font size being proportional to frequency of appearance. Despite their simplicity, word clouds can be used as a first descriptive view of the data. Here we look at word clouds for the speeches made by Irish Ministers for Finance. We have extracted the budget speeches of all finance ministers from our database, the first being Cosgrave's speech in April 1923, and the latest being Lenihan's speech in October 2008. In total, there are 90 speeches given by 23 different finance ministers for whom we have generated word clouds as shown in Figure FIGREF12 . One way to look at Figure FIGREF12 is to consider that each individual word cloud panel presents a snapshot into the preference profiles of individual ministers. With taxation being the key instrument of fiscal policy it is unsurprising that the word “tax” is on average the most frequently used word across all Ministers for Finance. We can also discern that frequency of references to “government” has been uneven over time with relatively high usage in the 1960s to 1980s and then subsequent decline (apart from Quinn's tenure) until the later speeches of Cowen and particularly Lenihan. What is more clearly evident is the change in the number of unique words used by different ministers. This reflects the fact that some budget speeches were very short, while others were long and covered many distinct topics. The easiest example is to compare speeches by two consecutive ministers: Cowen and Lenihan. Word clouds reflect the sheer multitude of problems facing the country that needed to be addressed by Lenihan compared to the relatively “quieter” (on average) three budgets delivered by Cowen. 
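Since a word cloud is just a rendering of a term-frequency table, the computation behind Figure FIGREF12 boils down to counting words after light cleaning. The sketch below shows that step on a placeholder snippet of speech text; the stop-word list is illustrative and this is not the authors' actual plotting pipeline (a package such as wordcloud can render the resulting frequencies with font size proportional to count).

```python
# Illustrative sketch: term frequencies behind a word cloud for one budget speech.
# The speech text and stop-word list are placeholders, not the authors' data or pipeline.
import re
from collections import Counter

speech = """The tax measures announced today will support employment and stabilise
the public finances while protecting essential government services."""
stop_words = {"the", "and", "will", "while", "of", "to", "a", "in", "today"}

tokens = re.findall(r"[a-z']+", speech.lower())        # crude tokenisation
tokens = [t for t in tokens if t not in stop_words]    # drop stop words

freqs = Counter(tokens)
for word, count in freqs.most_common(5):               # the biggest words in the cloud
    print(f"{word}: {count}")
```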
Overall, catchy word clouds can only be used as easy first-cut visualizations of the data, rather than as methods for any meaningful analysis. One thing that becomes readily apparent from Figure FIGREF12 is that word clouds do not facilitate systematic comparison of documents and their content with one another. Next, we show how our data facilitates the application of relatively simple text analysis techniques to answer more complex empirical questions without the ambiguity in interpretation that is inherent in word clouds. Estimation of Finance Ministers' Policy Positions Wordfish BIBREF2 is a method that combines Item Response Theory BIBREF17 with text classification. Wordfish assumes that there is a latent policy dimension and that each author has a position on this dimension. Words are assumed to be distributed over this dimension such that $y_{ijt} \sim \text{Poisson}(\lambda _{ijt})$, where $y_{ijt}$ is the count of word $j$ in document $i$ at time $t$. The functional form of the model is assumed to be $\lambda _{ijt} = \exp (\alpha _{it} + \psi _j + \omega _{it} \times \beta _j)$, where $\alpha _{it}$ are fixed effects to control for differences in the length of speeches and $\psi _j$ are fixed effects to control for the fact that some words are used more often than others in all documents. $\omega _{it}$ are the estimates of authors' positions on the latent dimension and $\beta _j$ are estimates of word-weights that are determined by how important specific words are in discriminating documents from each other. In this model each document is treated as a separate actor's position and all positions are estimated simultaneously. If a minister maintains a similar position from one budget speech to the next, this means that words with similar frequencies were used over time. At the same time, any movement detected by the model towards a position held by, for example, his predecessor means that the minister's word choice is now much closer to his predecessor's than to his own word usage in the previous budget speech. The identification strategy for the model also sets the mean of all positions to 0 and the standard deviation to 1, thus allowing positions to change over time relative to the mean while the total variance of all positions over time is held fixed BIBREF2 . Effectively this standardizes the results and allows for the comparison of positions over time on a comparable scale. Before including documents in the analysis, we removed all numbers, punctuation marks, and stop words. In addition, we follow the advice in BIBREF18 and delete words that appear in fewer than 20% of all speeches. We do this in order to prevent words that are specific to a small time period (and hence only appear in a few speeches) from having a large impact on discriminating speeches from each other. Figure FIGREF17 shows the results of estimation, with an overlaid regression line. The results in Figure FIGREF17 indicate a concept drift – the gradual change over time of the underlying concept behind the text categorization class BIBREF19 . In the political science text scaling literature, this issue is known as agenda shift BIBREF18 . In supervised learning models like Wordscore, this problem has typically been dealt with by estimating text models separately for each time period BIBREF20 , BIBREF21 , where the definition of the dimensions remains stable through the choice of training documents.
However, this approach is not easily transferable to inductive techniques like Wordfish, where there may be substantively different policy dimensions at different time periods, rendering comparison of positions over time challenging, if not impossible. The clear presence of the concept drift issue in Wordfish estimation should be a cautionary note for using the approach with time series data, even though the original method was specifically designed to deal with time-series data, as indicated in the title of the paper BIBREF2 . Looking at Figure FIGREF17 we can also observe that some ministers have similar preference profiles while others differ significantly. For example, Ahern and Reynolds are very similar in their profile but differ from a group consisting of Quinn, McCreevy, Cowen, and Lenihan, who are very close to each other. There also appears to be a dramatic shift in agenda between the tenures of Lynch and Haughey (and also during Taoiseach Lynch's delivery of the budget speech for the Minister for Finance Charles Haughey in 1970). Overall, it appears that topics covered in budget speeches develop in waves, with clear bands formed by, for example, Lenihan, Cowen, McCreevy and Quinn; Ahern and Reynolds; MacSharry, Dukes, Bruton, Fitzgerald, O'Kennedy and Colley; R. Ryan, Colley, Lynch (for Haughey); MacEntee, McGilligan and Aiken; Blythe and MacEntee. One intuitive interpretation of our Wordfish results is that budget speeches by finance ministers are related to underlying macroeconomic dynamics in the country. We consider the relationship between estimated policy positions of Ministers and three core economic indicators: unemployment, inflation, and per capita GDP growth rates. Figure FIGREF18 shows the three economic indicators, inflation (1923–2008), GDP growth (annual %; 1961–2008) and unemployment rate (1956–2008), over time. Figure FIGREF19 shows Ministers' estimated positions plotted against the three indicators. As expected, the results presented here show that the policy positions of some Ministers can be partly explained by the contemporaneous economic situation in the country. However, the fact that some of the Ministers are clear outliers highlights the effect of individual characteristics on policy-making. One of the avenues for research that arises from this exercise is to analyze the determinants of these individual idiosyncrasies, possibly looking at education, class, and previous ministerial career. Such questions can now be easily investigated by researchers using our database. Speakers' Policy Position in the 2008 Budget Debate In the previous section, we used budget speeches from each year and compared them over time. In this section, we restrict the analysis to a single year but take multiple speeches made on the same topic. More specifically, we estimate the preferences of all speakers who participated in the debate over the 2008 budget. We extract these speeches from the database by selecting all contributions to the topic “Financial Resolution” in the year 2007. This leaves us with a total of 22 speakers from all five parties. Table TABREF22 shows the speeches included in the analysis. To estimate speakers' positions we use Wordscore BIBREF1 – a version of the Naive Bayes classifier that is deployed for text categorization problems BIBREF22 . In a similar application, BIBREF1 have already demonstrated that Wordscore can be effectively used to derive estimates of TDs' policy positions.
As in the example above, we pre-process documents by removing all numbers and interjections. Wordscore uses two documents with well-known positions as reference texts (training set). The positions of all other documents are then estimated by comparing them to these reference documents. The underlying idea is that a document that, in terms of word frequencies, is similar to a reference document was produced by an author with similar preferences. The selection of reference documents furthermore determines the (assumed) underlying dimension for which documents' positions are estimated. For example, using two opposing documents on climate change would scale documents on the underlying dimension “climate politics”. It has also been shown that under certain assumptions the Wordscore algorithm is related to the Wordfish algorithm used in the previous section BIBREF23 . We assume that contributions in budget debates have the underlying dimension of being either pro or contra the current government. Our interpretation from reading the speeches is that, apart from the budget speech itself, all other speeches largely either attack or defend the incumbent government and to a lesser extent debate the issues of the next budget. We can therefore use contributions during the budget debate as an indicator for how much a speaker is supporting or opposing the current government, here consisting of Fianna Fáil and the Green Party. As our reference texts we therefore chose the speeches of Bertie Ahern (Taoiseach) and Enda Kenny (FG party leader). The former should obviously be strongly supportive of the government while the latter, as party leader of the largest opposition party, should strongly oppose it. Figure FIGREF24 shows estimated positions for all speakers grouped by party affiliation. The estimated positions are clustered into two groups, one representing the government and one the opposition. Within the government cluster, Deputy Batt O'Keeffe (Minister of State at the Department of Environment, Heritage and Local Government) is estimated to be the most supportive speaker for the government, while Deputy Pat Carey (Minister of State at the Department of Community, Rural and Gaeltacht Affairs) and Deputy Sean Ardagh are estimated to be relatively closer to the opposition. Deputy John Gormley, leader of the Green party and Minister for the Environment, Heritage and Local Government in the FF-Green coalition, is estimated to be in the centre of the government cluster. Among all positions in the opposition cluster, the speech of Róisín Shortall is the closest to the government side, with Neville being the farthest out. Ministers' policy position in the 26th government The government cabinet in parliamentary democracies is at the core of political decision making, yet it is difficult to model intra-cabinet bargaining as the preferences of most cabinet members are unknown. Cabinet decisions are usually made behind closed doors and the doctrine of joint cabinet responsibility prevents ministers from publicly opposing decisions, even if they disagree with them. Using ministers' speeches and their responses during question times offer a unique opportunity to infer their preferences on policy dimensions of interest. In our final application we estimate policy positions for all cabinet members in the 26th government. The dimension on which positions are estimated represents pro- versus contra-government spending (or spending left-right). 
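Both this application and the previous one rely on the same Wordscore step: words are scored from the two reference texts with assumed positions, and each remaining speech is then scored as the frequency-weighted average of its word scores. The sketch below implements that classic Wordscore scoring rule on toy texts; the reference speeches, positions and tokenization are placeholders, and the analyses reported here use the full speech texts rather than this simplified re-implementation.

```python
# Simplified Wordscore sketch (Laver, Benoit & Garry style scoring rule).
# Reference texts, positions and tokenisation are toy placeholders, not the paper's data.
from collections import Counter

def rel_freqs(text):
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Two reference texts with assumed positions on the pro-/anti-government dimension.
references = {
    "government_ref": {"text": "budget delivers growth jobs stability prudent management", "position": +1.0},
    "opposition_ref": {"text": "budget failure waste broken promises reckless spending", "position": -1.0},
}

ref_freqs = {name: rel_freqs(r["text"]) for name, r in references.items()}
vocab = set().union(*ref_freqs.values())

# Word scores: expected reference position given that a word is observed.
word_scores = {}
for w in vocab:
    probs = {name: ref_freqs[name].get(w, 0.0) for name in references}
    total = sum(probs.values())
    word_scores[w] = sum((probs[name] / total) * references[name]["position"] for name in references)

def score_document(text):
    """Frequency-weighted mean of word scores over the words that can be scored."""
    freqs = rel_freqs(text)
    scorable = {w: f for w, f in freqs.items() if w in word_scores}
    norm = sum(scorable.values())
    return sum(f * word_scores[w] for w, f in scorable.items()) / norm

print(round(score_document("prudent budget delivers stability and jobs"), 2))  # > 0: closer to the government reference
print(round(score_document("reckless waste and broken promises"), 2))          # < 0: closer to the opposition reference
```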
We show that estimated positions are highly correlated with departments' actual spending, which means that estimated positions are not only meaningful but can also be used to predict actual policy-making. The 26th government was formed as a coalition between Fianna Fáil and the Progressive Democrats after the election for the 29th Dáil in 2002. The cabinet was reshuffled on 29 September 2004 and we only include ministers' speeches until that date. Table TABREF26 lists all cabinet members (and their portfolios) included in our analysis. To estimate ministers' policy positions, we retrieve the complete record of each minister's contributions in parliament from the first meeting on 6 June 2002 until the date of the reshuffle. On average, each minister made 3,643 contributions with an average of 587,077 words. Table TABREF27 provides summary statistics for all ministers, sorted by total word count. We again use Wordscore BIBREF0 , BIBREF1 to estimate positions as it allows us to define the underlying policy dimension by choosing appropriate reference texts. We estimate positions on a social-economic left-right dimension that reflects pro- versus contra-government spending. We therefore use contributions by Mary Coughlan (Minister for Social and Family Affairs) and Charlie McCreevy (Minister for Finance) as reference texts, assuming that the former is more in favor of spending than the latter. Figure FIGREF28 shows the results of estimation grouped by the two parties. As expected, we find that the two PD members, Mary Harney and Michael McDowell, are at the right side of the dimension. We estimate the most left-wing members to be Éamon Ó Cuív (Minister for Community, Rural and Gaeltacht Affairs), Noel Dempsey (Minister for Education and Science), and Micheál Martin (Minister for Health and Children). The most right-wing members are John O'Donoghue (Minister for Arts, Sport and Tourism), Charlie McCreevy (whose contributions we used as the right-wing reference text), and Michael Smith (Minister for Defense). How valid are these estimated positions? In order to have substantive meaning, our estimates should be able to predict political decisions on the same policy dimension. We therefore use ministers' estimated positions to predict their departmental spending levels BIBREF3 . Our outcome variable is each department's spending as a share of the total budget in 2004, modeled as a function of estimated policy positions. We conjecture that more left-wing ministers should have higher spending levels than right-wing ministers, which we test by estimating $\text{spending}_i = \beta _0 + \beta _1 \, \text{position}_i + \epsilon _i$ (equation EQREF29 ) via ordinary least-squares regression. Figure FIGREF31 shows the two variables plotted against each other together with the estimated regression line from equation EQREF29 . In the first analysis we include all cabinet members. In the second, we exclude non-spending departments with small budgets, such as the office of the Taoiseach or the Department of Foreign Affairs. Figure FIGREF31 reveals that there is a negative, albeit weak, relationship between estimated positions and spending, with more left-wing cabinet members having higher spending levels than right-wing members. The correlation between the two variables is -0.53 ( INLINEFORM0 ), which is not significant at the 0.05 level. However, if we only take members from high-spending departments into account (second panel in Figure FIGREF31 ) we find a significant linear relationship between the two variables, with a correlation coefficient of -0.95 ( INLINEFORM1 ).
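The validation exercise just described is a bivariate regression of departmental spending shares on estimated positions. A minimal sketch with invented numbers is shown below purely to illustrate the computation; the actual finding discussed next is based on the real estimates and 2004 budget shares, not these toy values.

```python
# Toy illustration of the validation regression: spending share on estimated position.
# The numbers below are invented for illustration; see the paper's figures for real values.
import numpy as np

position = np.array([-1.2, -0.8, -0.3, 0.1, 0.6, 1.1])     # estimated positions (pro-spending = more negative)
spend_sh = np.array([0.22, 0.18, 0.15, 0.11, 0.08, 0.05])  # department's share of the total 2004 budget

slope, intercept = np.polyfit(position, spend_sh, deg=1)   # OLS fit: spend_sh = intercept + slope * position
r = np.corrcoef(position, spend_sh)[0, 1]                  # Pearson correlation

print(f"slope={slope:.3f}, intercept={intercept:.3f}, correlation={r:.2f}")
# A strongly negative slope/correlation mirrors the finding that more left-wing
# (pro-spending) ministers head higher-spending departments.
```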
This result provides some level of validation for our data and analysis. These results also open up an intriguing question about the endogeneity of observable policy preferences of ministers. Do higher spending portfolios receive more pro-spending ministers or do ministers adapt their policy preferences after appointment and literally grow into the job? This and related questions are outside the scope of this paper and can be pursued by researchers with the help of our database of parliamentary speeches. Conclusion Policy preferences of individual politicians (ministers or TDs in general), are inherently unobservable. However, we have abundant data on speeches made by political actors. The latest developments in automated text analysis techniques allow us to estimate the policy positions of individual actors from these speeches. In relation to Irish political actors such estimation has been hindered by the structure of the available data. While all speeches made in Dáil Éireann are dutifully recorded, the architecture of the data set, where digitized versions of speeches are stored, makes it impossible to apply any of the existing text analysis software. Speeches are currently stored by Dáil Éireann in more than half a million separate HTML files with entries that are not related to one another. In this paper we present a new database of speeches that was created with the purpose of allowing the estimation of policy preferences of individual politicians. For that reason we created a relational database where speeches are related to the members database and structured in terms of dates, topics of debates, and names of speakers, their constituency and party affiliation. This gives the necessary flexibility to use available text scaling methods in order to estimate the policy positions of actors. We also present several examples for which this data can be used. We show how to estimate the policy positions of all Irish Ministers for Finance, and highlight how this can lead to interesting research questions in estimating the determinants of their positions. We show that for some ministers the position can be explained by the country's economic performance, while the preferences of other ministers seem to be idiosyncratic. In another example we estimate positions of individual TDs in a budget debate, followed by the estimation of policy positions of cabinet members of the 26th Government. With the introduction of our database, we aim to make text analysis an easy and accessible tool for social scientists engaged in empirical research on policy-making that requires estimation of policy preferences of political actors.
Remove numbers and interjections
4c1847f0f3e6f9cc6ac3dfbac9e135d34641a854
4c1847f0f3e6f9cc6ac3dfbac9e135d34641a854_0
Q: What programming language is the tool written in? Text: Introduction Entity extraction is one of the most major NLP components. Most NLP tools (e.g., NLTK, Stanford CoreNLP, etc.), including commercial services (e.g., Google Cloud API, Alchemy API, etc.), provide entity extraction functions to recognize named entities (e.g., PERSON, LOCATION, ORGANIZATION, etc.) from texts. Some studies have defined fine-grained entity types and developed extraction methods BIBREF0 based on these types. However, these methods cannot comprehensively cover domain-specific entities. For instance, a real estate search engine needs housing equipment names to index these terms for providing fine-grained search conditions. There is a significant demand for constructing user-specific entity dictionaries, such as the case of cuisine and ingredient names for restaurant services. A straightforward solution is to prepare a set of these entity names as a domain-specific dictionary. Therefore, this paper focuses on the entity population task, which is a task of collecting entities that belong to an entity type required by a user. We develop LUWAK, a lightweight tool for effective interactive entity population. The key features are four-fold: We think these features are key components for effective interactive entity population. We choose an interactive user feedback strategy for entity population for LUWAK. A major approach to entity population is bootstrapping, which uses several entities that have been prepared as a seed set for finding new entities. Then, these new entities are integrated into the initial seed set to create a new seed set. The bootstrapping approach usually repeats the procedure until it has collected a sufficient number of entities. The framework cannot prevent the incorporation of incorrect entities that do not belong to the entity type unless user interaction between iterations. The problem is commonly called semantic drift BIBREF1 . Therefore, we consider user interaction, in which feedback is given to expanded candidates, as essential to maintaining the quality of an entity set. LUWAK implements fundamental functions for entity population, including (a) importing an initial entity set, (b) generating entity candidates, (c) obtaining user feedback, and (d) publishing populated entity dictionary. We aim to reduce the user’s total workload as a key metric of an entity population tool. That is, an entity population tool should provide the easiest and fastest solution to collecting entities of a particular entity type. User interaction cost is a dominant factor in the entire workload of an interactive tool. Thus, we carefully design the user interface for users to give feedbacks to the tool intuitively. Furthermore, we also consider the end-to-end user cost reduction. We adhere to the concept of developing installation-free software to distribute the tool among a wide variety of users, including nontechnical clusters. This lightweight design of LUWAK might speed up the procedure of the whole interactive entity population workflow. Furthermore, this advantage might be beneficial to continuously improve the whole pipeline of interactive entity population system. LUWAK: A lightweight tool for interactive entity population Our framework adopts the interactive entity expansion approach. This approach organizes the collaboration of a human worker and entity expansion algorithms to generate a user-specific entity dictionary efficiently. We show the basic workflow of LUWAK in Figure 1 . 
(Step 1) LUWAK assumes that a user prepares an initial seed set manually. The seed set is shown in the Entity table. (Step 2) A user can send entities in the Entity table to an Expansion API for obtaining entity candidates. (Step 3) LUWAK shows the entity candidates in the Candidate table for user interaction. Then, the user checks accept/reject buttons to update the Entity table. After submitting the judgments, LUWAK shows the Entity table again. The user can directly add, edit, or delete entities in the table at any time. (Step 4) the user can also easily see how these entities stored in the Entity table appear in a document. (Step 5) After repeating the same procedure (Steps 2–4) for a sufficient time, the user can publish the Entity table as an output. Implementation LUWAK is implemented in pure JavaScript code, and it uses the LocalStorage of a web browser. A user does not have to install any packages except for a web browser. The only thing the user must do is to download the LUWAK software and put it in a local directory. We believe the cost of installing the tool will keep away a good amount of potential users. This philosophy follows our empirical feeling of the curse of installing required packages/libraries. Moreover, LUWAK does not need users to consider the usage and maintenance of these additional packages. That is why we are deeply committed to making LUWAK a pure client-side tool in the off-the-shelf style. LUWAK Dashboard LUWAK has a dashboard for quickly viewing an entity dictionary in progress. The dashboard consists of two tables: the Entity table and the Feedback table. The Entity table provides efficient ways to construct and modify an entity dictionary. Figure UID11 shows the screenshot of the Entity table. The table shows entities in the current entity set. Each row corresponds to an entity entry. Each entry has a label, which denotes whether the predefined entity type is a positive or a negative example, an original entity, which was used to find the entity, and the score, which denotes the confidence score. A user can directly edit the table by adding, renaming, and deleting entities. Moreover, the entity inactivation function allows a user to manually inactivate entities, so that entity expansion algorithms do not use the inactivated entities. The table implements a page switching function, a search function, and a sorting function to ensure visibility even when there is a large number of entities in the table. Entity Candidate Generation We design the entity candidate generation module as an external API (Expansion API). The Expansion API receives a set of entities with positive labels. The Expansion API returns top- $k$ entity candidates. As an initial implementation, we used GloVe BIBREF2 as word embedding models for implementing an Expansion API. This API calculates the cosine similarity between a set of positive entities and entities candidates to generate a ranked list. We prepared models trained based on the CommonCrawl corpus and the Twitter corpus. Note that the specification of the expansion algorithm is not limited to the algorithm described in this paper, as LUWAK considers the Expansion API as an external function. Moreover, we also utilize the category-based expansion module, in which we used is-a relationship between the ontological category and each entity and expanded seeds via category-level. 
For example, if most of the entities already inserted in the dictionary share the same category, such as Programming Languages, the system suggests that "Programming Language" entities should be inserted in the dictionary when we develop a job skill name dictionary. Category-based entity expansion is helpful to avoid the candidate entity one by one. We used Yago BIBREF3 as an existing knowledge base. External API. In our design of LUWAK, Expansion APIs are placed as an external function outside LUWAK. There are three reasons why we adopt this design. First, we want LUWAK to remain a corpus-free tool. Users do not have to download any corpora or models to start using LUWAK, and it takes too much time to launch an Expansion API server. Second, LUWAK’s design allows external contributors to build their own expansion APIs that are compatible with LUWAK’s interface. We developed the initial version of the LUWAK package to contain an entity Expansion API so users can launch their expansion APIs internally. Third, the separation between LUWAK and the Expansion APIs enables Expansion APIs to use predetermined options for algorithms, including non-embedding-based methods (e.g., pattern-based methods). We can use more than one entity expansion model to find related entities. For instance, general embedding models, such as those built on Wikipedia, might be a good choice in early iterations, whereas more domain-specific models trained on domain-specific corpora might be helpful in later iterations. LUWAK is flexible to change and use more than one Expansion API. This design encourages us to continuously refine the entity expansion module easily. Example: Housing Equipment Entity Population We show an example of populating house equipment entities using LUWAK for improving a real estate search engine. The preliminary step is to prepare seed entities that belong to the real estate house equipment entity type (e.g., kitchen, bath). In this case, a user is supposed to provide several entities ( $\sim $ 10) as an initial set of the category. LUWAK first asks the user to upload an initial seed set. The user can add, rename, and delete entities on the Entity table as he or she wants. The user can also choose a set of entity expansion models at any time. Figure 2 shows the entity dashboard in this example. When the user submits the current entity set by clicking the Expand Seed Set button (Figure UID11 ), LUWAK sends a request to the external Expansion APIs that are selected to obtain expanded entities. The returned values will be stored in the Feedback table, as Figure UID12 shows. The Feedback table provides a function to capture user feedback intuitively. The user can click the + or - buttons to assign positive or negative labels to the entity candidates. The score column stores the similarity score, which is calculated by the Expansion API as reference information for users. The user can also see how these entities are generated by looking at the original entities in the original column. The original entity information can be used to detect semantic drift. For instance, if the user finds the original entity of some entity candidates has negative labels, the user might consider inactivating the entity to prevent semantic drift. In the next step, the user reflects the feedback by clicking the Submit Feedback button. Then, the user will see the entity dashboard with the newly added entities as shown in Figure UID13 . The user can inactivate the entity by clicking the inactivate button. 
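The similarity scores shown in the Feedback table come from the embedding-based Expansion API described earlier, which ranks candidate entities by cosine similarity to the positive seeds. The sketch below shows one simple variant of that ranking step (similarity to the centroid of the seed vectors) with hand-made toy vectors; it is not LUWAK's actual Expansion API code, which loads real GloVe models and runs as a separate external service.

```python
# Minimal sketch of embedding-based seed expansion: rank candidate entities by
# cosine similarity to the centroid of the positive seeds. Vectors are toy values,
# not real GloVe embeddings, and this is not LUWAK's actual Expansion API code.
import numpy as np

embeddings = {                      # entity -> vector (toy 3-d stand-ins for GloVe vectors)
    "kitchen":  np.array([0.9, 0.1, 0.0]),
    "bath":     np.array([0.8, 0.2, 0.1]),
    "balcony":  np.array([0.7, 0.3, 0.0]),
    "elevator": np.array([0.6, 0.4, 0.1]),
    "mortgage": np.array([0.1, 0.2, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def expand(seeds, k=3):
    """Return the top-k candidates most similar to the mean seed vector."""
    centroid = np.mean([embeddings[s] for s in seeds], axis=0)
    candidates = [(e, cosine(centroid, vec)) for e, vec in embeddings.items() if e not in seeds]
    return sorted(candidates, key=lambda x: x[1], reverse=True)[:k]

print(expand(["kitchen", "bath"]))  # balcony and elevator should outrank mortgage
```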
The user can sort rows by column values to take a brief look at the current entity set. Also, the entity dashboard provides a search function to find an entity for action. The user can also check how entities appear in a test document. As shown in Figure UID14 , LUWAK highlights these entities in the current entity set. After the user is satisfied with the amount of the current entity set in the table, the Export button allows the user to download the entire table, including inactivated entities. Related Work and Discussion Entity population is one of the important practical problems in NLP. Generated entity dictionaries can be used in various applications, including search engines, named entity extraction, and entity linking. Iterative seed expansion is known to be an efficient approach to construct user-specific entity dictionaries. Previous studies have aimed to construct a high-quality entity dictionary from a small number of seed entities BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 . As we stated in "Entity Candidate Generation" , LUWAK is flexible with the types of algorithms used for entity population. A user can select any combinations of different methods once the Expansion API of the methods are available. Stanford Pattern-based Information Extraction and Diagnostics (SPIED) BIBREF8 is a pattern-based entity population system. SPIED requires not only an initial seed set but also document collection because it uses the pattern-based approach. After a user inputs initial seed entities, SPIED generates regular expression patterns to find entity candidates from a given document collection. This approach incurs a huge computational cost for calculating the scores of every regular expression pattern and every entity candidate in each iteration. Furthermore, SPIED adopts a bootstrapping approach, which does not involve user feedback for each iteration. This approach can easily result in semantic drift. Interactive Knowledge Extraction BIBREF9 (IKE) is an interactive bootstrapping tool for collecting relation-extraction patterns. IKE also provides a search-based entity extraction function and an embedding-based entity expansion function for entity population. A user can interactively add entity candidates generated by an embedding-based algorithm to an entity dictionary. LUWAK is a more lightweight tool than IKE, which only focuses on the entity population task. LUWAK has numerous features, such as the multiple entity expansion model choices, that are not implemented in IKE. Moreover, LUWAK is a corpus-free tool that does not require a document collection for entity population. Thus, we differentiate LUWAK from IKE, considering it a more lightweight entity population tool. Summary This paper has presented LUWAK, a lightweight front-end tool for interactive entity population. LUWAK provides a set of basic functions such as entity expansion and user feedback assignment. We have implemented LUWAK in pure JavaScript with LocalStorage to make it an installation-free tool. We believe that LUWAK plays an important role in delivering the values of existing entity expansion techniques to potential users including nontechnical people without supposing a large amount of human cost. Moreover, we believe that this design makes it easy to compare performances between interactive entity population pipelines and develop more sophisticated ones.
JavaScript
7f9bc06cfa81a4e3f7df4c69a1afef146ed5a1cf
7f9bc06cfa81a4e3f7df4c69a1afef146ed5a1cf_0
Q: What is the performance change of the textual semantic similarity task when no error and maximum errors (noise) are present? Text: Introduction In recent times, pre-trained contextual language models have led to significant improvement in the performance for many NLP tasks. Among the family of these models, the most popular one is BERT BIBREF0, which is also the focus of this work. The strength of the BERT model FIGREF2 stems from its transformerBIBREF1 based encoder architectureFIGREF1. While it is still not very clear as to why BERT along with its embedding works so well for downstream tasks when it is fine tuned, there has been some work in this direction that that gives some important cluesBIBREF2, BIBREF3. At a high level, BERT’s pipelines looks as follows: given a input sentence, BERT tokenizes it using wordPiece tokenizerBIBREF4. The tokens are then fed as input to the BERT model and it learns contextualized embeddings for each of those tokens. It does so via pre-training on two tasks - Masked Language Model (MLM)BIBREF0 and Next Sentence Prediction (NSP)BIBREF0. The focus of this work is to understand the issues that a practitioner can run into while trying to use BERT for building NLP applications in industrial settings. It is a well known fact that NLP applications in industrial settings often have to deal with the noisy data. There are different kinds of possible noise namely non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages to name a few. Such noisy data is a hallmark of user generated text content and commonly found on social media, chats, online reviews, web forums to name a few. Owing to this noise a common issue that NLP models have to deal with is Out Of Vocabulary (OOV) words. These are words that are found in test and production data but not part of training data. In this work we highlight how BERT fails to handle Out Of Vocabulary(OOV) words, given its limited vocabulary. We show that this negatively impacts the performance of BERT when working with user generated text data and evaluate the same. This evaluation is motivated from the business use case we are solving where we are building a dialogue system to screen candidates for blue collar jobs. Our candidate user base, coming from underprivileged backgrounds, are often high school graduates. This coupled with ‘fat finger’ problem over a mobile keypad leads to a lot of typos and spelling mistakes in the responses sent to the dialogue system. Hence, for this work we focus on spelling mistakes as the noise in the data. While this work is motivated from our business use case, our findings are applicable across various use cases in industry - be it be sentiment classification on twitter data or topic detection of a web forum. To simulate noise in the data, we begin with a clean dataset and introduce spelling errors in a fraction of words present in it. These words are chosen randomly. We will explain this process in detail later. Spelling mistakes introduced mimic the typographical errors in the text introduced by our users. We then use the BERT model for tasks using both clean and noisy datasets and compare the results. 
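The exact perturbation procedure is not spelled out above, so the sketch below shows just one plausible way to inject character-level typos into a randomly chosen fraction of words; it is meant to illustrate the noising setup, not to reproduce the authors' actual script.

```python
# Illustrative noising sketch: introduce a typo into a given fraction of randomly
# chosen words. This is an assumed procedure, not the authors' exact noising code.
import random
import string

def add_typo(word):
    """Replace one random character with a random lowercase letter."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word))
    return word[:i] + random.choice(string.ascii_lowercase) + word[i + 1:]

def noise_sentence(sentence, error_rate=0.10, seed=0):
    random.seed(seed)
    words = sentence.split()
    n_noisy = int(round(error_rate * len(words)))
    for idx in random.sample(range(len(words)), n_noisy):
        words[idx] = add_typo(words[idx])
    return " ".join(words)

clean = "that loves its characters and communicates something rather beautiful about human nature"
print(noise_sentence(clean, error_rate=0.20))
```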
We show that the introduction of noise leads to a significant drop in the performance of the BERT model for the task at hand as compared to the clean dataset. We further show that as we increase the amount of noise in the data, the performance degrades sharply. Related Work In recent years pre-trained language models (e.g. ELMoBIBREF5, BERTBIBREF0) have made breakthroughs in several natural language tasks. These models are trained over large corpora that are not human annotated and are easily available. Chief among these models is BERTBIBREF0. The popularity of BERT stems from its ability to be fine-tuned for a variety of downstream NLP tasks such as text classification, regression, named-entity recognition, question answeringBIBREF0, machine translationBIBREF6, etc. BERT has been able to establish state-of-the-art (SOTA) results for many of these tasks. People have been able to show how one can leverage BERT to improve searchBIBREF7. Owing to its success, researchers have started to focus on uncovering drawbacks in BERT, if any. BIBREF8 introduce TEXTFOOLER, a system to generate adversarial text. They apply it to the NLP tasks of text classification and textual entailment to attack the BERT model. BIBREF9 evaluate three models - RoBERTa, XLNet, and BERT - in Natural Language Inference (NLI) and Question Answering (QA) tasks for robustness. They show that while RoBERTa, XLNet and BERT are more robust than recurrent neural network models to stress tests for both NLI and QA tasks, these models are still very fragile and show many unexpected behaviors. BIBREF10 discuss length-based and sentence-based misclassification attacks for the Fake News Detection task trained using a context-aware BERT model, and they show 78% and 39% attack accuracy respectively. Our contribution in this paper is to answer whether we can use large language models like BERT directly on user generated data. Experiment For our experiments, we use the pre-trained BERT implementation as given by the huggingface transformers library. We use the BERTBase uncased model. We work with three datasets, namely IMDB movie reviewsBIBREF11, Stanford Sentiment Treebank (SST-2) BIBREF12 and Semantic Textual Similarity (STS-B) BIBREF13. The IMDB dataset is a popular dataset for sentiment analysis tasks, which is a binary classification problem with an equal number of positive and negative examples. Both STS-B and SST-2 datasets are a part of the GLUE benchmark tasks. In SST-2 too, we predict positive and negative sentiments. In STS-B we predict the textual semantic similarity between two sentences. It is a regression problem where the similarity score varies between 0 and 5. To evaluate the performance of BERT we use the standard metrics of F1-score for IMDB and SST-2, and Pearson-Spearman correlation for STS-B. In Table TABREF5, we give the statistics for each of the datasets. We take the original datasets and add varying degrees of noise (i.e. spelling errors in words) to create datasets for our experiments. From each dataset, we create 4 additional datasets, each with varying percentage levels of noise in them. For example, from IMDB we create 4 variants, each having 5%, 10%, 15% and 20% noise in them. Here, the number denotes the percentage of words in the original dataset that have spelling mistakes. Thus, we have one dataset with no noise and 4 variant datasets with increasing levels of noise. Likewise, we do the same for SST-2 and STS-B. All the parameters of the BERTBase model remain the same for all 5 experiments on the IMDB dataset and its 4 variants.
This also remains the same across other 2 datasets and their variants. For all the experiments, the learning rate is set to 4e-5, for optimization we use Adam optimizer with epsilon value 1e-8. We ran each of the experiments for 10 and 50 epochs. Results Let us discuss the results from the above mentioned experiments. We show the plots of accuracy vs noise for each of the tasks. For IMDB, we fine tune the model for the sentiment analysis task. We plot F1 score vs % of error, as shown in Figure FIGREF6. Figure FIGREF6imdba shows the performance after fine tuning for 10 epochs, while Figure FIGREF6imdbb shows the performance after fine tuning for 50 epochs. Similarly, Figure FIGREF9ssta and Figure FIGREF9sstb) shows F1 score vs % of error for Sentiment analysis on SST-2 dataset after fine tuning for 10 and 50 epochs respectively. Figure FIGREF12stsa and FIGREF12stsb shows Pearson-Spearman correlation vs % of error for textual semantic similarity on STS-B dataset after fine tuning for 10 and 50 epochs respectively. Results ::: Key Findings It is clear from the above plots that as we increase the percentage of error, for each of the three tasks, we see a significant drop in BERT’s performance. Also, from the plots it is evident that the reason for this drop in performance is introduction of noise (spelling mistakes). After all we get very good numbers, for each of the three tasks, when there is no error (0.0 % error). To understand the reason behind the drop in performance, first we need to understand how BERT processes input text data. BERT uses WordPiece tokenizer to tokenize the text. WordPiece tokenizer utterances based on the longest prefix matching algorithm to generate tokens . The tokens thus obtained are fed as input of the BERT model. When it comes to tokenizing noisy data, we see a very interesting behaviour from WordPiece tokenizer. Owing to the spelling mistakes, these words are not directly found in BERT’s dictionary. Hence WordPiece tokenizer tokenizes noisy words into subwords. However, it ends up breaking them into subwords whose meaning can be very different from the meaning of the original word. Often, this changes the meaning of the sentence completely, therefore leading to substantial dip in the performance. To understand this better, let us look into two examples, one each from the IMDB and STS-B datasets respectively, as shown below. Here, (a) is the sentence as it appears in the dataset ( before adding noise) while (b) is the corresponding sentence after adding noise. The mistakes are highlighted with italics. The sentences are followed by the corresponding output of the WordPiece tokenizer on these sentences: In the output ‘##’ is WordPiece tokenizer’s way of distinguishing subwords from words. ‘##’ signifies subwords as opposed to words. 
Example 1 (imdb example): “that loves its characters and communicates something rather beautiful about human nature” (0% error) “that loves 8ts characters abd communicates something rathee beautiful about human natuee” (5% error) Output of wordPiece tokenizer: ['that', 'loves', 'its', 'characters', 'and', 'communicate', '##s', 'something', 'rather', 'beautiful', 'about', 'human','nature'] (0% error IMDB example) ['that', 'loves', '8', '##ts', 'characters', 'abd', 'communicate','##s', 'something','rat', '##hee', 'beautiful', 'about', 'human','nat', '##ue', '##e'] (5% error IMDB example) Example 2(STS example): “poor ben bratt could n't find stardom if mapquest emailed himpoint-to-point driving directions.” (0% error) “poor ben bratt could n't find stardom if mapquest emailed him point-to-point drivibg dirsctioge.” (5% error) Output of wordPiece tokenizer: ['poor', 'ben', 'brat', '##t', 'could', 'n', "'", 't', 'find','star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him','point', '-', 'to', '-', 'point', 'driving', 'directions', '.'] (0% error STS example) ['poor', 'ben', 'brat', '##t', 'could', 'n', "'", 't', 'find','star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him', 'point', '-', 'to', '-', 'point', 'dr', '##iv', '##ib','##g','dir','##sc', '##ti', '##oge', '.'] (5% error STS example) In example 1, the tokenizer splits communicates into [‘communicate’, ‘##s’] based on longest prefix matching because there is no exact match for “communicates” in BERT vocabulary. The longest prefix in this case is “communicate” and left over is “s” both of which are present in the vocabulary of BERT. We have contextual embeddings for both “communicate” and “##s”. By using these two embeddings, one can get an approximate embedding for “communicates”. However, this approach goes for a complete toss when the word is misspelled. In example 1(b) the word natuee (‘nature’ is misspelled) is split into ['nat', '##ue', '##e'] based on the longest prefix match. Combining the three embeddings one cannot approximate the embedding of nature. This is because the word nat has a very different meaning (it means ‘a person who advocates political independence for a particular country’). This misrepresentation in turn impacts the performance of downstream subcomponents of BERT bringing down the overall performance of BERT model. Hence, as we systematically introduce more errors, the quality of output of the tokenizer degrades further, resulting in the overall performance drop. Our results and analysis shows that one cannot apply BERT blindly to solve NLP problems especially in industrial settings. If the application you are developing gets data from channels that are known to introduce noise in the text, then BERT will perform badly. Examples of such scenarios are applications working with twitter data, mobile based chat system, user comments on platforms like youtube, reddit to name a few. The reason for the introduction of noise could vary - while for twitter, reddit it's often deliberate because that is how users prefer to write, while for mobile based chat it often suffers from ‘fat finger’ typing error problem. Depending on the amount of noise in the data, BERT can perform well below expectations. We further conducted experiments with different tokenizers other than WordPiece tokenizer. For this we used stanfordNLP WhiteSpace BIBREF14 and Character N-gram BIBREF15 tokenizers. WhiteSpace tokenizer splits text into tokens based on white space. 
Character N-gram tokenizer splits words that have more than n characters in them. Thus, each token has at most n characters in them. The resultant tokens from the respective tokenizer are fed to BERT as inputs. For our case, we work with n = 6. Results of these experiments are presented in Table TABREF25. Even though wordPiece tokenizer has the issues stated earlier, it is still performing better than whitespace and character n-gram tokenizer. This is primarily because of the vocabulary overlap between STS-B dataset and BERT vocabulary. Conclusion and Future Work In this work we systematically studied the effect of noise (spelling mistakes) in user generated text data on the performance of BERT. We demonstrated that as the noise increases, BERT’s performance drops drastically. We further investigated the BERT system to understand the reason for this drop in performance. We show that the problem lies with how misspelt words are tokenized to create a representation of the original word. There are 2 ways to address the problem - either (i) preprocess the data to correct spelling mistakes or (ii) incorporate ways in BERT architecture to make it robust to noise. The problem with (i) is that in most industrial settings this becomes a separate project in itself. We leave (ii) as a future work to fix the issues.
10 Epochs: Pearson-Spearman correlation drops 60 points when the error increases by 20%. 50 Epochs: Pearson-Spearman correlation drops 55 points when the error increases by 20%.
58a340c338e41002c8555202ef9adbf51ddbb7a1
58a340c338e41002c8555202ef9adbf51ddbb7a1_0
Q: Which sentiment analysis data set has a larger performance drop when a 10% error is introduced? Text: Introduction In recent times, pre-trained contextual language models have led to significant improvement in the performance for many NLP tasks. Among the family of these models, the most popular one is BERT BIBREF0, which is also the focus of this work. The strength of the BERT model FIGREF2 stems from its transformerBIBREF1 based encoder architectureFIGREF1. While it is still not very clear as to why BERT along with its embedding works so well for downstream tasks when it is fine tuned, there has been some work in this direction that that gives some important cluesBIBREF2, BIBREF3. At a high level, BERT’s pipelines looks as follows: given a input sentence, BERT tokenizes it using wordPiece tokenizerBIBREF4. The tokens are then fed as input to the BERT model and it learns contextualized embeddings for each of those tokens. It does so via pre-training on two tasks - Masked Language Model (MLM)BIBREF0 and Next Sentence Prediction (NSP)BIBREF0. The focus of this work is to understand the issues that a practitioner can run into while trying to use BERT for building NLP applications in industrial settings. It is a well known fact that NLP applications in industrial settings often have to deal with the noisy data. There are different kinds of possible noise namely non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages to name a few. Such noisy data is a hallmark of user generated text content and commonly found on social media, chats, online reviews, web forums to name a few. Owing to this noise a common issue that NLP models have to deal with is Out Of Vocabulary (OOV) words. These are words that are found in test and production data but not part of training data. In this work we highlight how BERT fails to handle Out Of Vocabulary(OOV) words, given its limited vocabulary. We show that this negatively impacts the performance of BERT when working with user generated text data and evaluate the same. This evaluation is motivated from the business use case we are solving where we are building a dialogue system to screen candidates for blue collar jobs. Our candidate user base, coming from underprivileged backgrounds, are often high school graduates. This coupled with ‘fat finger’ problem over a mobile keypad leads to a lot of typos and spelling mistakes in the responses sent to the dialogue system. Hence, for this work we focus on spelling mistakes as the noise in the data. While this work is motivated from our business use case, our findings are applicable across various use cases in industry - be it be sentiment classification on twitter data or topic detection of a web forum. To simulate noise in the data, we begin with a clean dataset and introduce spelling errors in a fraction of words present in it. These words are chosen randomly. We will explain this process in detail later. Spelling mistakes introduced mimic the typographical errors in the text introduced by our users. We then use the BERT model for tasks using both clean and noisy datasets and compare the results. We show that the introduction of noise leads to a significant drop in performance of the BERT model for the task at hand as compared to clean dataset. 
We further show that as we increase the amount of noise in the data, the performance degrades sharply. Related Work In recent years pre-trained language models (e.g. ELMoBIBREF5, BERTBIBREF0) have made breakthroughs in several natural language tasks. These models are trained over large corpora that are not human annotated and are easily available. Chief among these models is BERTBIBREF0. The popularity of BERT stems from its ability to be fine-tuned for a variety of downstream NLP tasks such as text classification, regression, named-entity recognition, question answeringBIBREF0, machine translationBIBREF6, etc. BERT has been able to establish state-of-the-art (SOTA) results for many of these tasks. People have been able to show how one can leverage BERT to improve searchBIBREF7. Owing to its success, researchers have started to focus on uncovering drawbacks in BERT, if any. BIBREF8 introduce TEXTFOOLER, a system to generate adversarial text. They apply it to the NLP tasks of text classification and textual entailment to attack the BERT model. BIBREF9 evaluate three models - RoBERTa, XLNet, and BERT - in Natural Language Inference (NLI) and Question Answering (QA) tasks for robustness. They show that while RoBERTa, XLNet and BERT are more robust than recurrent neural network models to stress tests for both NLI and QA tasks, these models are still very fragile and show many unexpected behaviors. BIBREF10 discuss length-based and sentence-based misclassification attacks for the Fake News Detection task trained using a context-aware BERT model, and they show 78% and 39% attack accuracy respectively. Our contribution in this paper is to answer whether we can use large language models like BERT directly on user generated data. Experiment For our experiments, we use the pre-trained BERT implementation as given by the huggingface transformers library. We use the BERTBase uncased model. We work with three datasets, namely IMDB movie reviewsBIBREF11, Stanford Sentiment Treebank (SST-2) BIBREF12 and Semantic Textual Similarity (STS-B) BIBREF13. The IMDB dataset is a popular dataset for sentiment analysis tasks, which is a binary classification problem with an equal number of positive and negative examples. Both STS-B and SST-2 datasets are a part of the GLUE benchmark tasks. In SST-2 too, we predict positive and negative sentiments. In STS-B we predict the textual semantic similarity between two sentences. It is a regression problem where the similarity score varies between 0 and 5. To evaluate the performance of BERT we use the standard metrics of F1-score for IMDB and SST-2, and Pearson-Spearman correlation for STS-B. In Table TABREF5, we give the statistics for each of the datasets. We take the original datasets and add varying degrees of noise (i.e. spelling errors in words) to create datasets for our experiments. From each dataset, we create 4 additional datasets, each with varying percentage levels of noise in them. For example, from IMDB we create 4 variants, each having 5%, 10%, 15% and 20% noise in them. Here, the number denotes the percentage of words in the original dataset that have spelling mistakes. Thus, we have one dataset with no noise and 4 variant datasets with increasing levels of noise. Likewise, we do the same for SST-2 and STS-B. All the parameters of the BERTBase model remain the same for all 5 experiments on the IMDB dataset and its 4 variants. This also remains the same across the other 2 datasets and their variants.
For all the experiments, the learning rate is set to 4e-5, for optimization we use Adam optimizer with epsilon value 1e-8. We ran each of the experiments for 10 and 50 epochs. Results Let us discuss the results from the above mentioned experiments. We show the plots of accuracy vs noise for each of the tasks. For IMDB, we fine tune the model for the sentiment analysis task. We plot F1 score vs % of error, as shown in Figure FIGREF6. Figure FIGREF6imdba shows the performance after fine tuning for 10 epochs, while Figure FIGREF6imdbb shows the performance after fine tuning for 50 epochs. Similarly, Figure FIGREF9ssta and Figure FIGREF9sstb) shows F1 score vs % of error for Sentiment analysis on SST-2 dataset after fine tuning for 10 and 50 epochs respectively. Figure FIGREF12stsa and FIGREF12stsb shows Pearson-Spearman correlation vs % of error for textual semantic similarity on STS-B dataset after fine tuning for 10 and 50 epochs respectively. Results ::: Key Findings It is clear from the above plots that as we increase the percentage of error, for each of the three tasks, we see a significant drop in BERT’s performance. Also, from the plots it is evident that the reason for this drop in performance is introduction of noise (spelling mistakes). After all we get very good numbers, for each of the three tasks, when there is no error (0.0 % error). To understand the reason behind the drop in performance, first we need to understand how BERT processes input text data. BERT uses WordPiece tokenizer to tokenize the text. WordPiece tokenizer utterances based on the longest prefix matching algorithm to generate tokens . The tokens thus obtained are fed as input of the BERT model. When it comes to tokenizing noisy data, we see a very interesting behaviour from WordPiece tokenizer. Owing to the spelling mistakes, these words are not directly found in BERT’s dictionary. Hence WordPiece tokenizer tokenizes noisy words into subwords. However, it ends up breaking them into subwords whose meaning can be very different from the meaning of the original word. Often, this changes the meaning of the sentence completely, therefore leading to substantial dip in the performance. To understand this better, let us look into two examples, one each from the IMDB and STS-B datasets respectively, as shown below. Here, (a) is the sentence as it appears in the dataset ( before adding noise) while (b) is the corresponding sentence after adding noise. The mistakes are highlighted with italics. The sentences are followed by the corresponding output of the WordPiece tokenizer on these sentences: In the output ‘##’ is WordPiece tokenizer’s way of distinguishing subwords from words. ‘##’ signifies subwords as opposed to words. 
Example 1 (imdb example): “that loves its characters and communicates something rather beautiful about human nature” (0% error) “that loves 8ts characters abd communicates something rathee beautiful about human natuee” (5% error) Output of wordPiece tokenizer: ['that', 'loves', 'its', 'characters', 'and', 'communicate', '##s', 'something', 'rather', 'beautiful', 'about', 'human','nature'] (0% error IMDB example) ['that', 'loves', '8', '##ts', 'characters', 'abd', 'communicate','##s', 'something','rat', '##hee', 'beautiful', 'about', 'human','nat', '##ue', '##e'] (5% error IMDB example) Example 2(STS example): “poor ben bratt could n't find stardom if mapquest emailed himpoint-to-point driving directions.” (0% error) “poor ben bratt could n't find stardom if mapquest emailed him point-to-point drivibg dirsctioge.” (5% error) Output of wordPiece tokenizer: ['poor', 'ben', 'brat', '##t', 'could', 'n', "'", 't', 'find','star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him','point', '-', 'to', '-', 'point', 'driving', 'directions', '.'] (0% error STS example) ['poor', 'ben', 'brat', '##t', 'could', 'n', "'", 't', 'find','star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him', 'point', '-', 'to', '-', 'point', 'dr', '##iv', '##ib','##g','dir','##sc', '##ti', '##oge', '.'] (5% error STS example) In example 1, the tokenizer splits communicates into [‘communicate’, ‘##s’] based on longest prefix matching because there is no exact match for “communicates” in BERT vocabulary. The longest prefix in this case is “communicate” and left over is “s” both of which are present in the vocabulary of BERT. We have contextual embeddings for both “communicate” and “##s”. By using these two embeddings, one can get an approximate embedding for “communicates”. However, this approach goes for a complete toss when the word is misspelled. In example 1(b) the word natuee (‘nature’ is misspelled) is split into ['nat', '##ue', '##e'] based on the longest prefix match. Combining the three embeddings one cannot approximate the embedding of nature. This is because the word nat has a very different meaning (it means ‘a person who advocates political independence for a particular country’). This misrepresentation in turn impacts the performance of downstream subcomponents of BERT bringing down the overall performance of BERT model. Hence, as we systematically introduce more errors, the quality of output of the tokenizer degrades further, resulting in the overall performance drop. Our results and analysis shows that one cannot apply BERT blindly to solve NLP problems especially in industrial settings. If the application you are developing gets data from channels that are known to introduce noise in the text, then BERT will perform badly. Examples of such scenarios are applications working with twitter data, mobile based chat system, user comments on platforms like youtube, reddit to name a few. The reason for the introduction of noise could vary - while for twitter, reddit it's often deliberate because that is how users prefer to write, while for mobile based chat it often suffers from ‘fat finger’ typing error problem. Depending on the amount of noise in the data, BERT can perform well below expectations. We further conducted experiments with different tokenizers other than WordPiece tokenizer. For this we used stanfordNLP WhiteSpace BIBREF14 and Character N-gram BIBREF15 tokenizers. WhiteSpace tokenizer splits text into tokens based on white space. 
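The subword splits shown in the two examples can be reproduced with the HuggingFace `bert-base-uncased` tokenizer and contrasted with plain whitespace splitting (the first of the two alternative tokenizers just mentioned). This is only an illustrative sketch; exact splits may differ slightly across tokenizer and vocabulary versions.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

clean = "that loves its characters and communicates something rather beautiful about human nature"
noisy = "that loves 8ts characters abd communicates something rathee beautiful about human natuee"

print(tokenizer.tokenize(clean))   # ... 'communicate', '##s', ... 'nature'
print(tokenizer.tokenize(noisy))   # ... 'nat', '##ue', '##e' -- the misspelling
                                   # is shattered into unrelated subwords
print(noisy.split())               # whitespace splitting keeps 'natuee' whole; looked up
                                   # in BERT's vocabulary it would fall back to [UNK]
```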
Character N-gram tokenizer splits words that have more than n characters in them. Thus, each token has at most n characters in them. The resultant tokens from the respective tokenizer are fed to BERT as inputs. For our case, we work with n = 6. Results of these experiments are presented in Table TABREF25. Even though wordPiece tokenizer has the issues stated earlier, it is still performing better than whitespace and character n-gram tokenizer. This is primarily because of the vocabulary overlap between STS-B dataset and BERT vocabulary. Conclusion and Future Work In this work we systematically studied the effect of noise (spelling mistakes) in user generated text data on the performance of BERT. We demonstrated that as the noise increases, BERT’s performance drops drastically. We further investigated the BERT system to understand the reason for this drop in performance. We show that the problem lies with how misspelt words are tokenized to create a representation of the original word. There are 2 ways to address the problem - either (i) preprocess the data to correct spelling mistakes or (ii) incorporate ways in BERT architecture to make it robust to noise. The problem with (i) is that in most industrial settings this becomes a separate project in itself. We leave (ii) as a future work to fix the issues.
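For reference, the character n-gram splitting used in the tokenizer comparison above (each word broken into chunks of at most n = 6 characters) can be sketched as a standalone function. This only mirrors the behaviour described in the text, not the cited library.

```python
def char_ngram_tokenize(sentence, n=6):
    """Split each whitespace-separated word into chunks of at most n characters."""
    tokens = []
    for word in sentence.split():
        tokens.extend(word[i:i + n] for i in range(0, len(word), n))
    return tokens

print(char_ngram_tokenize("communicates something beautiful"))
# ['commun', 'icates', 'someth', 'ing', 'beauti', 'ful']
```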
SST-2 dataset
0ca02893bda50007f7a76e7c8804101718fbb01c
0ca02893bda50007f7a76e7c8804101718fbb01c_0
Q: What kind is noise is present in typical industrial data? Text: Introduction In recent times, pre-trained contextual language models have led to significant improvement in the performance for many NLP tasks. Among the family of these models, the most popular one is BERT BIBREF0, which is also the focus of this work. The strength of the BERT model FIGREF2 stems from its transformerBIBREF1 based encoder architectureFIGREF1. While it is still not very clear as to why BERT along with its embedding works so well for downstream tasks when it is fine tuned, there has been some work in this direction that that gives some important cluesBIBREF2, BIBREF3. At a high level, BERT’s pipelines looks as follows: given a input sentence, BERT tokenizes it using wordPiece tokenizerBIBREF4. The tokens are then fed as input to the BERT model and it learns contextualized embeddings for each of those tokens. It does so via pre-training on two tasks - Masked Language Model (MLM)BIBREF0 and Next Sentence Prediction (NSP)BIBREF0. The focus of this work is to understand the issues that a practitioner can run into while trying to use BERT for building NLP applications in industrial settings. It is a well known fact that NLP applications in industrial settings often have to deal with the noisy data. There are different kinds of possible noise namely non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages to name a few. Such noisy data is a hallmark of user generated text content and commonly found on social media, chats, online reviews, web forums to name a few. Owing to this noise a common issue that NLP models have to deal with is Out Of Vocabulary (OOV) words. These are words that are found in test and production data but not part of training data. In this work we highlight how BERT fails to handle Out Of Vocabulary(OOV) words, given its limited vocabulary. We show that this negatively impacts the performance of BERT when working with user generated text data and evaluate the same. This evaluation is motivated from the business use case we are solving where we are building a dialogue system to screen candidates for blue collar jobs. Our candidate user base, coming from underprivileged backgrounds, are often high school graduates. This coupled with ‘fat finger’ problem over a mobile keypad leads to a lot of typos and spelling mistakes in the responses sent to the dialogue system. Hence, for this work we focus on spelling mistakes as the noise in the data. While this work is motivated from our business use case, our findings are applicable across various use cases in industry - be it be sentiment classification on twitter data or topic detection of a web forum. To simulate noise in the data, we begin with a clean dataset and introduce spelling errors in a fraction of words present in it. These words are chosen randomly. We will explain this process in detail later. Spelling mistakes introduced mimic the typographical errors in the text introduced by our users. We then use the BERT model for tasks using both clean and noisy datasets and compare the results. We show that the introduction of noise leads to a significant drop in performance of the BERT model for the task at hand as compared to clean dataset. 
We further show that as we increase the amount of noise in the data, the performance degrades sharply. Related Work In recent years pre-trained language models ((e.g. ELMoBIBREF5, BERTBIBREF0) have made breakthroughs in several natural language tasks. These models are trained over large corpora that are not human annotated and are easily available. Chief among these models is BERTBIBREF0. The popularity of BERT stems from its ability to be fine-tuned for a variety of downstream NLP tasks such as text classification, regression, named-entity recognition, question answeringBIBREF0, machine translationBIBREF6 etc. BERT has been able to establish State-of-the-art (SOTA) results for many of these tasks. People have been able to show how one can leverage BERT to improve searchBIBREF7. Owing to its success, researchers have started to focus on uncovering drawbacks in BERT, if any. BIBREF8 introduce TEXTFOOLER, a system to generate adversarial text. They apply it to NLP tasks of text classification and textual entailment to attack the BERT model. BIBREF9 evaluate three models - RoBERTa, XLNet, and BERT in Natural Language Inference (NLI) and Question Answering (QA) tasks for robustness. They show that while RoBERTa, XLNet and BERT are more robust than recurrent neural network models to stress tests for both NLI and QA tasks; these models are still very fragile and show many unexpected behaviors. BIBREF10 discuss length-based and sentence-based misclassification attacks for the Fake News Detection task trained using a context-aware BERT model and they show 78% and 39% attack accuracy respectively. Our contribution in this paper is to answer that can we use large language models like BERT directly over user generated data. Experiment For our experiments, we use pre-trained BERT implementation as given by huggingface transformer library. We use the BERTBase uncased model. We work with three datasets namely - IMDB movie reviewsBIBREF11, Stanford Sentiment Treebank (SST-2) BIBREF12 and Semantic Textual Similarity (STS-B) BIBREF13. IMDB dataset is a popular dataset for sentiment analysis tasks, which is a binary classification problem with equal number of positive and negative examples. Both STS-B and SST-2 datasets are a part of GLUE benchmark[2] tasks . In STS-B too, we predict positive and negative sentiments. In SST-2 we predict textual semantic similarity between two sentences. It is a regression problem where the similarity score varies between 0 to 5. To evaluate the performance of BERT we use standard metrics of F1-score for imdb and STS-B, and Pearson-Spearman correlation for SST-2. In Table TABREF5, we give the statistics for each of the datasets. We take the original datasets and add varying degrees of noise (i.e. spelling errors to word utterances) to create datasets for our experiments. From each dataset, we create 4 additional datasets each with varying percentage levels of noise in them. For example from IMDB, we create 4 variants, each having 5%, 10%, 15% and 20% noise in them. Here, the number denotes the percentage of words in the original dataset that have spelling mistakes. Thus, we have one dataset with no noise and 4 variants datasets with increasing levels of noise. Likewise, we do the same for SST-2 and STS-B. All the parameters of the BERTBase model remain the same for all 5 experiments on the IMDB dataset and its 4 variants. This also remains the same across other 2 datasets and their variants. 
For all the experiments, the learning rate is set to 4e-5, for optimization we use Adam optimizer with epsilon value 1e-8. We ran each of the experiments for 10 and 50 epochs. Results Let us discuss the results from the above mentioned experiments. We show the plots of accuracy vs noise for each of the tasks. For IMDB, we fine tune the model for the sentiment analysis task. We plot F1 score vs % of error, as shown in Figure FIGREF6. Figure FIGREF6imdba shows the performance after fine tuning for 10 epochs, while Figure FIGREF6imdbb shows the performance after fine tuning for 50 epochs. Similarly, Figure FIGREF9ssta and Figure FIGREF9sstb) shows F1 score vs % of error for Sentiment analysis on SST-2 dataset after fine tuning for 10 and 50 epochs respectively. Figure FIGREF12stsa and FIGREF12stsb shows Pearson-Spearman correlation vs % of error for textual semantic similarity on STS-B dataset after fine tuning for 10 and 50 epochs respectively. Results ::: Key Findings It is clear from the above plots that as we increase the percentage of error, for each of the three tasks, we see a significant drop in BERT’s performance. Also, from the plots it is evident that the reason for this drop in performance is introduction of noise (spelling mistakes). After all we get very good numbers, for each of the three tasks, when there is no error (0.0 % error). To understand the reason behind the drop in performance, first we need to understand how BERT processes input text data. BERT uses WordPiece tokenizer to tokenize the text. WordPiece tokenizer utterances based on the longest prefix matching algorithm to generate tokens . The tokens thus obtained are fed as input of the BERT model. When it comes to tokenizing noisy data, we see a very interesting behaviour from WordPiece tokenizer. Owing to the spelling mistakes, these words are not directly found in BERT’s dictionary. Hence WordPiece tokenizer tokenizes noisy words into subwords. However, it ends up breaking them into subwords whose meaning can be very different from the meaning of the original word. Often, this changes the meaning of the sentence completely, therefore leading to substantial dip in the performance. To understand this better, let us look into two examples, one each from the IMDB and STS-B datasets respectively, as shown below. Here, (a) is the sentence as it appears in the dataset ( before adding noise) while (b) is the corresponding sentence after adding noise. The mistakes are highlighted with italics. The sentences are followed by the corresponding output of the WordPiece tokenizer on these sentences: In the output ‘##’ is WordPiece tokenizer’s way of distinguishing subwords from words. ‘##’ signifies subwords as opposed to words. 
Example 1 (imdb example): “that loves its characters and communicates something rather beautiful about human nature” (0% error) “that loves 8ts characters abd communicates something rathee beautiful about human natuee” (5% error) Output of wordPiece tokenizer: ['that', 'loves', 'its', 'characters', 'and', 'communicate', '##s', 'something', 'rather', 'beautiful', 'about', 'human','nature'] (0% error IMDB example) ['that', 'loves', '8', '##ts', 'characters', 'abd', 'communicate','##s', 'something','rat', '##hee', 'beautiful', 'about', 'human','nat', '##ue', '##e'] (5% error IMDB example) Example 2(STS example): “poor ben bratt could n't find stardom if mapquest emailed himpoint-to-point driving directions.” (0% error) “poor ben bratt could n't find stardom if mapquest emailed him point-to-point drivibg dirsctioge.” (5% error) Output of wordPiece tokenizer: ['poor', 'ben', 'brat', '##t', 'could', 'n', "'", 't', 'find','star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him','point', '-', 'to', '-', 'point', 'driving', 'directions', '.'] (0% error STS example) ['poor', 'ben', 'brat', '##t', 'could', 'n', "'", 't', 'find','star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him', 'point', '-', 'to', '-', 'point', 'dr', '##iv', '##ib','##g','dir','##sc', '##ti', '##oge', '.'] (5% error STS example) In example 1, the tokenizer splits communicates into [‘communicate’, ‘##s’] based on longest prefix matching because there is no exact match for “communicates” in BERT vocabulary. The longest prefix in this case is “communicate” and left over is “s” both of which are present in the vocabulary of BERT. We have contextual embeddings for both “communicate” and “##s”. By using these two embeddings, one can get an approximate embedding for “communicates”. However, this approach goes for a complete toss when the word is misspelled. In example 1(b) the word natuee (‘nature’ is misspelled) is split into ['nat', '##ue', '##e'] based on the longest prefix match. Combining the three embeddings one cannot approximate the embedding of nature. This is because the word nat has a very different meaning (it means ‘a person who advocates political independence for a particular country’). This misrepresentation in turn impacts the performance of downstream subcomponents of BERT bringing down the overall performance of BERT model. Hence, as we systematically introduce more errors, the quality of output of the tokenizer degrades further, resulting in the overall performance drop. Our results and analysis shows that one cannot apply BERT blindly to solve NLP problems especially in industrial settings. If the application you are developing gets data from channels that are known to introduce noise in the text, then BERT will perform badly. Examples of such scenarios are applications working with twitter data, mobile based chat system, user comments on platforms like youtube, reddit to name a few. The reason for the introduction of noise could vary - while for twitter, reddit it's often deliberate because that is how users prefer to write, while for mobile based chat it often suffers from ‘fat finger’ typing error problem. Depending on the amount of noise in the data, BERT can perform well below expectations. We further conducted experiments with different tokenizers other than WordPiece tokenizer. For this we used stanfordNLP WhiteSpace BIBREF14 and Character N-gram BIBREF15 tokenizers. WhiteSpace tokenizer splits text into tokens based on white space. 
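As a quick way to see the OOV behaviour behind these splits, one can check candidate words directly against BERT's WordPiece vocabulary before moving on to the alternative tokenizers. This is a sketch using the HuggingFace API; vocabulary contents depend on the checkpoint used.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
vocab = tokenizer.get_vocab()  # token -> id mapping

for word in ["communicate", "communicates", "nature", "natuee", "nat"]:
    print(f"{word!r:16} in vocab: {word in vocab}")
# Misspellings such as 'natuee' are not in the vocabulary, so WordPiece falls
# back to longest-prefix subwords ('nat', '##ue', '##e') as shown above.
```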
Character N-gram tokenizer splits words that have more than n characters in them. Thus, each token has at most n characters in them. The resultant tokens from the respective tokenizer are fed to BERT as inputs. For our case, we work with n = 6. Results of these experiments are presented in Table TABREF25. Even though wordPiece tokenizer has the issues stated earlier, it is still performing better than whitespace and character n-gram tokenizer. This is primarily because of the vocabulary overlap between STS-B dataset and BERT vocabulary. Conclusion and Future Work In this work we systematically studied the effect of noise (spelling mistakes) in user generated text data on the performance of BERT. We demonstrated that as the noise increases, BERT’s performance drops drastically. We further investigated the BERT system to understand the reason for this drop in performance. We show that the problem lies with how misspelt words are tokenized to create a representation of the original word. There are 2 ways to address the problem - either (i) preprocess the data to correct spelling mistakes or (ii) incorporate ways in BERT architecture to make it robust to noise. The problem with (i) is that in most industrial settings this becomes a separate project in itself. We leave (ii) as a future work to fix the issues.
non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages
751aa2b1531a17496536887288699cc8d5c3cec9
751aa2b1531a17496536887288699cc8d5c3cec9_0
Q: What is the reason behind the drop in performance using BERT for some popular task? Text: Introduction In recent times, pre-trained contextual language models have led to significant improvement in the performance for many NLP tasks. Among the family of these models, the most popular one is BERT BIBREF0, which is also the focus of this work. The strength of the BERT model FIGREF2 stems from its transformerBIBREF1 based encoder architectureFIGREF1. While it is still not very clear as to why BERT along with its embedding works so well for downstream tasks when it is fine tuned, there has been some work in this direction that that gives some important cluesBIBREF2, BIBREF3. At a high level, BERT’s pipelines looks as follows: given a input sentence, BERT tokenizes it using wordPiece tokenizerBIBREF4. The tokens are then fed as input to the BERT model and it learns contextualized embeddings for each of those tokens. It does so via pre-training on two tasks - Masked Language Model (MLM)BIBREF0 and Next Sentence Prediction (NSP)BIBREF0. The focus of this work is to understand the issues that a practitioner can run into while trying to use BERT for building NLP applications in industrial settings. It is a well known fact that NLP applications in industrial settings often have to deal with the noisy data. There are different kinds of possible noise namely non-canonical text such as spelling mistakes, typographic errors, colloquialisms, abbreviations, slang, internet jargon, emojis, embedded metadata (such as hashtags, URLs, mentions), non standard syntactic constructions and spelling variations, grammatically incorrect text, mixture of two or more languages to name a few. Such noisy data is a hallmark of user generated text content and commonly found on social media, chats, online reviews, web forums to name a few. Owing to this noise a common issue that NLP models have to deal with is Out Of Vocabulary (OOV) words. These are words that are found in test and production data but not part of training data. In this work we highlight how BERT fails to handle Out Of Vocabulary(OOV) words, given its limited vocabulary. We show that this negatively impacts the performance of BERT when working with user generated text data and evaluate the same. This evaluation is motivated from the business use case we are solving where we are building a dialogue system to screen candidates for blue collar jobs. Our candidate user base, coming from underprivileged backgrounds, are often high school graduates. This coupled with ‘fat finger’ problem over a mobile keypad leads to a lot of typos and spelling mistakes in the responses sent to the dialogue system. Hence, for this work we focus on spelling mistakes as the noise in the data. While this work is motivated from our business use case, our findings are applicable across various use cases in industry - be it be sentiment classification on twitter data or topic detection of a web forum. To simulate noise in the data, we begin with a clean dataset and introduce spelling errors in a fraction of words present in it. These words are chosen randomly. We will explain this process in detail later. Spelling mistakes introduced mimic the typographical errors in the text introduced by our users. We then use the BERT model for tasks using both clean and noisy datasets and compare the results. We show that the introduction of noise leads to a significant drop in performance of the BERT model for the task at hand as compared to clean dataset. 
We further show that as we increase the amount of noise in the data, the performance degrades sharply. Related Work In recent years pre-trained language models ((e.g. ELMoBIBREF5, BERTBIBREF0) have made breakthroughs in several natural language tasks. These models are trained over large corpora that are not human annotated and are easily available. Chief among these models is BERTBIBREF0. The popularity of BERT stems from its ability to be fine-tuned for a variety of downstream NLP tasks such as text classification, regression, named-entity recognition, question answeringBIBREF0, machine translationBIBREF6 etc. BERT has been able to establish State-of-the-art (SOTA) results for many of these tasks. People have been able to show how one can leverage BERT to improve searchBIBREF7. Owing to its success, researchers have started to focus on uncovering drawbacks in BERT, if any. BIBREF8 introduce TEXTFOOLER, a system to generate adversarial text. They apply it to NLP tasks of text classification and textual entailment to attack the BERT model. BIBREF9 evaluate three models - RoBERTa, XLNet, and BERT in Natural Language Inference (NLI) and Question Answering (QA) tasks for robustness. They show that while RoBERTa, XLNet and BERT are more robust than recurrent neural network models to stress tests for both NLI and QA tasks; these models are still very fragile and show many unexpected behaviors. BIBREF10 discuss length-based and sentence-based misclassification attacks for the Fake News Detection task trained using a context-aware BERT model and they show 78% and 39% attack accuracy respectively. Our contribution in this paper is to answer that can we use large language models like BERT directly over user generated data. Experiment For our experiments, we use pre-trained BERT implementation as given by huggingface transformer library. We use the BERTBase uncased model. We work with three datasets namely - IMDB movie reviewsBIBREF11, Stanford Sentiment Treebank (SST-2) BIBREF12 and Semantic Textual Similarity (STS-B) BIBREF13. IMDB dataset is a popular dataset for sentiment analysis tasks, which is a binary classification problem with equal number of positive and negative examples. Both STS-B and SST-2 datasets are a part of GLUE benchmark[2] tasks . In STS-B too, we predict positive and negative sentiments. In SST-2 we predict textual semantic similarity between two sentences. It is a regression problem where the similarity score varies between 0 to 5. To evaluate the performance of BERT we use standard metrics of F1-score for imdb and STS-B, and Pearson-Spearman correlation for SST-2. In Table TABREF5, we give the statistics for each of the datasets. We take the original datasets and add varying degrees of noise (i.e. spelling errors to word utterances) to create datasets for our experiments. From each dataset, we create 4 additional datasets each with varying percentage levels of noise in them. For example from IMDB, we create 4 variants, each having 5%, 10%, 15% and 20% noise in them. Here, the number denotes the percentage of words in the original dataset that have spelling mistakes. Thus, we have one dataset with no noise and 4 variants datasets with increasing levels of noise. Likewise, we do the same for SST-2 and STS-B. All the parameters of the BERTBase model remain the same for all 5 experiments on the IMDB dataset and its 4 variants. This also remains the same across other 2 datasets and their variants. 
For all the experiments, the learning rate is set to 4e-5, for optimization we use Adam optimizer with epsilon value 1e-8. We ran each of the experiments for 10 and 50 epochs. Results Let us discuss the results from the above mentioned experiments. We show the plots of accuracy vs noise for each of the tasks. For IMDB, we fine tune the model for the sentiment analysis task. We plot F1 score vs % of error, as shown in Figure FIGREF6. Figure FIGREF6imdba shows the performance after fine tuning for 10 epochs, while Figure FIGREF6imdbb shows the performance after fine tuning for 50 epochs. Similarly, Figure FIGREF9ssta and Figure FIGREF9sstb) shows F1 score vs % of error for Sentiment analysis on SST-2 dataset after fine tuning for 10 and 50 epochs respectively. Figure FIGREF12stsa and FIGREF12stsb shows Pearson-Spearman correlation vs % of error for textual semantic similarity on STS-B dataset after fine tuning for 10 and 50 epochs respectively. Results ::: Key Findings It is clear from the above plots that as we increase the percentage of error, for each of the three tasks, we see a significant drop in BERT’s performance. Also, from the plots it is evident that the reason for this drop in performance is introduction of noise (spelling mistakes). After all we get very good numbers, for each of the three tasks, when there is no error (0.0 % error). To understand the reason behind the drop in performance, first we need to understand how BERT processes input text data. BERT uses WordPiece tokenizer to tokenize the text. WordPiece tokenizer utterances based on the longest prefix matching algorithm to generate tokens . The tokens thus obtained are fed as input of the BERT model. When it comes to tokenizing noisy data, we see a very interesting behaviour from WordPiece tokenizer. Owing to the spelling mistakes, these words are not directly found in BERT’s dictionary. Hence WordPiece tokenizer tokenizes noisy words into subwords. However, it ends up breaking them into subwords whose meaning can be very different from the meaning of the original word. Often, this changes the meaning of the sentence completely, therefore leading to substantial dip in the performance. To understand this better, let us look into two examples, one each from the IMDB and STS-B datasets respectively, as shown below. Here, (a) is the sentence as it appears in the dataset ( before adding noise) while (b) is the corresponding sentence after adding noise. The mistakes are highlighted with italics. The sentences are followed by the corresponding output of the WordPiece tokenizer on these sentences: In the output ‘##’ is WordPiece tokenizer’s way of distinguishing subwords from words. ‘##’ signifies subwords as opposed to words. 
Example 1 (imdb example): “that loves its characters and communicates something rather beautiful about human nature” (0% error) “that loves 8ts characters abd communicates something rathee beautiful about human natuee” (5% error) Output of wordPiece tokenizer: ['that', 'loves', 'its', 'characters', 'and', 'communicate', '##s', 'something', 'rather', 'beautiful', 'about', 'human','nature'] (0% error IMDB example) ['that', 'loves', '8', '##ts', 'characters', 'abd', 'communicate','##s', 'something','rat', '##hee', 'beautiful', 'about', 'human','nat', '##ue', '##e'] (5% error IMDB example) Example 2(STS example): “poor ben bratt could n't find stardom if mapquest emailed himpoint-to-point driving directions.” (0% error) “poor ben bratt could n't find stardom if mapquest emailed him point-to-point drivibg dirsctioge.” (5% error) Output of wordPiece tokenizer: ['poor', 'ben', 'brat', '##t', 'could', 'n', "'", 't', 'find','star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him','point', '-', 'to', '-', 'point', 'driving', 'directions', '.'] (0% error STS example) ['poor', 'ben', 'brat', '##t', 'could', 'n', "'", 't', 'find','star', '##dom', 'if', 'map', '##quest', 'email', '##ed', 'him', 'point', '-', 'to', '-', 'point', 'dr', '##iv', '##ib','##g','dir','##sc', '##ti', '##oge', '.'] (5% error STS example) In example 1, the tokenizer splits communicates into [‘communicate’, ‘##s’] based on longest prefix matching because there is no exact match for “communicates” in BERT vocabulary. The longest prefix in this case is “communicate” and left over is “s” both of which are present in the vocabulary of BERT. We have contextual embeddings for both “communicate” and “##s”. By using these two embeddings, one can get an approximate embedding for “communicates”. However, this approach goes for a complete toss when the word is misspelled. In example 1(b) the word natuee (‘nature’ is misspelled) is split into ['nat', '##ue', '##e'] based on the longest prefix match. Combining the three embeddings one cannot approximate the embedding of nature. This is because the word nat has a very different meaning (it means ‘a person who advocates political independence for a particular country’). This misrepresentation in turn impacts the performance of downstream subcomponents of BERT bringing down the overall performance of BERT model. Hence, as we systematically introduce more errors, the quality of output of the tokenizer degrades further, resulting in the overall performance drop. Our results and analysis shows that one cannot apply BERT blindly to solve NLP problems especially in industrial settings. If the application you are developing gets data from channels that are known to introduce noise in the text, then BERT will perform badly. Examples of such scenarios are applications working with twitter data, mobile based chat system, user comments on platforms like youtube, reddit to name a few. The reason for the introduction of noise could vary - while for twitter, reddit it's often deliberate because that is how users prefer to write, while for mobile based chat it often suffers from ‘fat finger’ typing error problem. Depending on the amount of noise in the data, BERT can perform well below expectations. We further conducted experiments with different tokenizers other than WordPiece tokenizer. For this we used stanfordNLP WhiteSpace BIBREF14 and Character N-gram BIBREF15 tokenizers. WhiteSpace tokenizer splits text into tokens based on white space. 
Character N-gram tokenizer splits words that have more than n characters in them. Thus, each token has at most n characters in them. The resultant tokens from the respective tokenizer are fed to BERT as inputs. For our case, we work with n = 6. Results of these experiments are presented in Table TABREF25. Even though wordPiece tokenizer has the issues stated earlier, it is still performing better than whitespace and character n-gram tokenizer. This is primarily because of the vocabulary overlap between STS-B dataset and BERT vocabulary. Conclusion and Future Work In this work we systematically studied the effect of noise (spelling mistakes) in user generated text data on the performance of BERT. We demonstrated that as the noise increases, BERT’s performance drops drastically. We further investigated the BERT system to understand the reason for this drop in performance. We show that the problem lies with how misspelt words are tokenized to create a representation of the original word. There are 2 ways to address the problem - either (i) preprocess the data to correct spelling mistakes or (ii) incorporate ways in BERT architecture to make it robust to noise. The problem with (i) is that in most industrial settings this becomes a separate project in itself. We leave (ii) as a future work to fix the issues.
Hence WordPiece tokenizer tokenizes noisy words into subwords. However, it ends up breaking them into subwords whose meaning can be very different from the meaning of the original word. Often, this changes the meaning of the sentence completely, therefore leading to substantial dip in the performance.
dc4096b8bab0afcbbd4fbb015da2bea5d38251cd
dc4096b8bab0afcbbd4fbb015da2bea5d38251cd_0
Q: How they observe that fine-tuning BERT on a specific task does not improve its prunability? Text: Introduction Pre-trained feature extractors, such as BERT BIBREF0 for natural language processing and VGG BIBREF1 for computer vision, have become effective methods for improving the performance of deep learning models. In the last year, models similar to BERT have become state-of-the-art in many NLP tasks, including natural language inference (NLI), named entity recognition (NER), sentiment analysis, etc. These models follow a pre-training paradigm: they are trained on a large amount of unlabeled text via a task that resembles language modeling BIBREF2, BIBREF3 and are then fine-tuned on a smaller amount of “downstream” data, which is labeled for a specific task. Pre-trained models usually achieve higher accuracy than any model trained on downstream data alone. The pre-training paradigm, while effective, still has some problems. While some claim that language model pre-training is a “universal language learning task" BIBREF4, there is no theoretical justification for this, only empirical evidence. Second, due to the size of the pre-training dataset, BERT models tend to be slow and require impractically large amounts of GPU memory. BERT-Large can only be used with access to a Google TPU, and BERT-Base requires some optimization tricks such as gradient checkpointing or gradient accumulation to be trained effectively on consumer hardware BIBREF5. Training BERT-Base from scratch costs $\sim $$7k and emits $\sim $1438 pounds of CO$_2$ BIBREF6. Model compression BIBREF7, which attempts to shrink a model without losing accuracy, is a viable approach to decreasing GPU usage. It might also be used to trade accuracy for memory in some low-resource cases, such as deploying to smartphones for real-time prediction. The main questions this paper attempts to answer are: Does compressing BERT impede it's ability to transfer to new tasks? And does fine-tuning make BERT more or less compressible? To explore these questions, we compressed English BERT using magnitude weight pruning BIBREF8 and observed the results on transfer learning to the General Language Understanding Evaluation (GLUE) benchmark BIBREF9, a diverse set of natural language understanding tasks including sentiment analysis, NLI, and textual similarity evaluation. We chose magnitude weight pruning, which compresses models by removing weights close to 0, because it is one of the most fine-grained and effective compression methods and because there are many interesting ways to view pruning, which we explore in the next section. Our findings are as follows: Low levels of pruning (30-40%) do not increase pre-training loss or affect transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from being transferred to downstream tasks. This information is not equally useful to each task; tasks degrade linearly with pre-train loss, but at different rates. High levels of pruning, depending on the size of the downstream dataset, may additionally degrade performance by preventing models from fitting downstream datasets. Finally, we observe that fine-tuning BERT on a specific task does not improve its prunability or change the order of pruning by a meaningful amount. 
To our knowledge, prior work had not shown whether BERT could be compressed in a task-generic way, keeping the benefits of pre-training while avoiding costly experimentation associated with compressing and re-training BERT multiple times. Nor had it shown whether BERT could be over-pruned for a memory / accuracy trade-off for deployment to low-resource devices. In this work, we conclude that BERT can be pruned prior to distribution without affecting it's universality, and that BERT may be over-pruned during pre-training for a reasonable accuracy trade-off for certain tasks. Pruning: Compression, Regularization, Architecture Search Neural network pruning involves examining a trained network and removing parts deemed to be unnecessary by some heuristic saliency criterion. One might remove weights, neurons, layers, channels, attention heads, etc. depending on which heuristic is used. Below, we describe three different lenses through which we might interpret pruning. Compression Pruning a neural network decreases the number of parameters required to specify the model, which decreases the disk space required to store it. This allows large models to be deployed on edge computing devices like smartphones. Pruning can also increase inference speed if whole neurons or convolutional channels are pruned, which reduces GPU usage. Regularization Pruning a neural network also regularizes it. We might consider pruning to be a form of permanent dropout BIBREF11 or a heuristic-based L0 regularizer BIBREF12. Through this lens, pruning decreases the complexity of the network and therefore narrows the range of possible functions it can express. The main difference between L0 or L1 regularization and weight pruning is that the former induce sparsity via a penalty on the loss function, which is learned during gradient descent via stochastic relaxation. It's not clear which approach is more principled or preferred. BIBREF13 Interestingly, recent work used compression not to induce simplicity but to measure it BIBREF14. Sparse Architecture Search Finally, we can view neural network pruning as a type of sparse architecture search. BIBREF15 and BIBREF16 show that they can train carefully re-initialized pruned architectures to similar performance levels as dense networks. Under this lens, stochastic gradient descent (SGD) induces network sparsity, and pruning simply makes that sparsity explicit. These sparse architectures, along with the appropriate initializations, are sometimes referred to as “lottery tickets.” Sparse networks are difficult to train from scratch BIBREF17. However, BIBREF18 and BIBREF19 present methods to do this by allowing SGD to search over the space of possible subnetworks. Our findings suggest that these methods might be used to train sparse BERT from scratch. Pruning: Compression, Regularization, Architecture Search ::: Magnitude Weight Pruning In this work, we focus on weight magnitude pruning because it is one of the most fine-grained and effective pruning methods. It also has a compelling saliency criterion BIBREF8: if a weight is close to zero, then its input is effectively ignored, which means the weight can be pruned. Magnitude weight pruning itself is a simple procedure: 1. Pick a target percentage of weights to be pruned, say 50%. 2. Calculate a threshold such that 50% of weight magnitudes are under that threshold. 3. Remove those weights. 4. Continue training the network to recover any lost accuracy. 5. Optionally, return to step 1 and increase the percentage of weights pruned. 
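A minimal numpy sketch of steps 1-3 above for a single weight matrix is given below for clarity; the actual experiments use the TensorFlow pruning package mentioned next, which also handles the gradual pruning schedule and the recovery training of steps 4-5.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude.
    Returns the pruned weights and the binary keep-mask."""
    threshold = np.percentile(np.abs(weights), sparsity * 100.0)  # step 2
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask                                   # step 3

rng = np.random.default_rng(0)
w = rng.normal(size=(768, 3072))          # e.g. one feed-forward weight matrix
pruned, mask = magnitude_prune(w, 0.50)   # step 1: target 50% sparsity
print(1.0 - mask.mean())                  # ~0.5 of the entries are now zero
```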
This procedure is conveniently implemented in a Tensorflow BIBREF20 package, which we use BIBREF21. Calculating a threshold and pruning can be done for all network parameters holistically (global pruning) or for each weight matrix individually (matrix-local pruning). Both methods will prune to the same sparsity, but in global pruning the sparsity might be unevenly distributed across weight matrices. We use matrix-local pruning because it is more popular in the community. For information on other pruning techniques, we recommend BIBREF13 and BIBREF15. Experimental Setup BERT is a large Transformer encoder; for background, we refer readers to BIBREF22 or one of these excellent tutorials BIBREF23, BIBREF24. Experimental Setup ::: Implementing BERT Pruning BERT-Base consists of 12 encoder layers, each of which contains 6 prunable matrices: 4 for the multi-headed self-attention and 2 for the layer's output feed-forward network. Recall that self-attention first projects layer inputs into key, query, and value embeddings via linear projections. While there is a separate key, query, and value projection matrix for each attention head, implementations typically “stack” matrices from each attention head, resulting in only 3 parameter matrices: one for key projections, one for value projections, and one for query projections. We prune each of these matrices separately, calculating a threshold for each. We also prune the linear output projection, which combines outputs from each attention head into a single embedding. We prune word embeddings in the same way we prune feed-foward networks and self-attention parameters. The justification is similar: if a word embedding value is close to zero, we can assume it's zero and store the rest in a sparse matrix. This is useful because token / subword embeddings tend to account for a large portion of a natural language model's memory. In BERT-Base specifically, the embeddings account for $\sim $21% of the model's memory. Our experimental code for pruning BERT, based on the public BERT repository, is available here. Experimental Setup ::: Pruning During Pre-Training We perform weight magnitude pruning on a pre-trained BERT-Base model. We select sparsities from 0% to 90% in increments of 10% and gradually prune BERT to this sparsity over the first 10k steps of training. We continue pre-training on English Wikipedia and BookCorpus for another 90k steps to regain any lost accuracy. The resulting pre-training losses are shown in Table TABREF27. We then fine-tune these pruned models on tasks from the General Language Understanding Evaluation (GLUE) benchmark, which is a standard set of 9 tasks that include sentiment analysis, natural language inference, etc. We avoid WNLI, which is known to be problematic. We also avoid tasks with less than 5k training examples because the results tend to be noisy (RTE, MRPC, STS-B). We fine-tune a separate model on each of the remaining 5 GLUE tasks for 3 epochs and try 4 learning rates: $[2, 3, 4, 5] \times 10^{-5}$. The best evaluation accuracies are averaged and plotted in Figure FIGREF15. Individual task results are in Table TABREF27. BERT can be used as a static feature-extractor or as a pre-trained model which is fine-tuned end-to-end. In all experiments, we fine-tune weights in all layers of BERT on downstream tasks. 
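To make the matrix-local vs. global distinction above concrete, here is a small illustrative sketch (again numpy, not the authors' TensorFlow code): matrix-local pruning computes one magnitude threshold per weight matrix, while global pruning computes a single threshold over all parameters, so the same overall sparsity can end up unevenly distributed across matrices.

```python
import numpy as np

def local_thresholds(matrices, sparsity):
    """Matrix-local pruning: one threshold per weight matrix."""
    return [np.percentile(np.abs(m), sparsity * 100.0) for m in matrices]

def global_threshold(matrices, sparsity):
    """Global pruning: a single threshold over all parameters."""
    flat = np.concatenate([np.abs(m).ravel() for m in matrices])
    return np.percentile(flat, sparsity * 100.0)

rng = np.random.default_rng(0)
# Two matrices with different weight scales, standing in for different layers.
layer = [rng.normal(scale=0.02, size=(768, 768)),
         rng.normal(scale=0.05, size=(768, 768))]
print(local_thresholds(layer, 0.5))  # two different thresholds
print(global_threshold(layer, 0.5))  # one shared threshold
```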
Experimental Setup ::: Disentangling Complexity Restriction and Information Deletion Pruning involves two steps: it deletes the information stored in a weight by setting it to 0 and then regularizes the model by preventing that weight from changing during further training. To disentangle these two effects (model complexity restriction and information deletion), we repeat the experiments from Section SECREF9 with an identical pre-training setup, but instead of pruning we simply set the weights to 0 and allow them to vary during downstream training. This deletes the pre-training information associated with the weight but does not prevent the model from fitting downstream datasets by keeping the weight at zero during downstream training. We also fine-tune on downstream tasks until training loss becomes comparable to models with no pruning. We trained most models for 13 epochs rather than 3. Models with 70-90% information deletion required 15 epochs to fit the training data. The results are also included in Figure FIGREF15 and Table TABREF27. Experimental Setup ::: Pruning After Downstream Fine-tuning We might expect that BERT would be more compressible after downstream fine-tuning. Intuitively, the information needed for downstream tasks is a subset of the information learned during pre-training; some tasks require more semantic information than syntactic, and vice-versa. We should be able to discard the “extra" information and only keep what we need for, say, parsing BIBREF25. For magnitude weight pruning specifically, we might expect downstream training to change the distribution of weights in the parameter matrices. This, in turn, changes the sort-order of the absolute values of those weights, which changes the order that we prune them in. This new pruning order, hypothetically, would be less degrading to our specific downstream task. To test this, we fine-tuned pre-trained BERT-Base on downstream data for 3 epochs. We then pruned at various sparsity levels and continued training for 5 more epochs (7 for 80/90% sparsity), at which point the training losses became comparable to those of models pruned during pre-training. We repeat this for learning rates in $[2, 3, 4, 5] \times 10^{-5}$ and show the results with the best development accuracy in Figure FIGREF15 / Table TABREF27. We also measure the difference in which weights are selected for pruning during pre-training vs. downstream fine-tuning and plot the results in Figure FIGREF25. Pruning Regimes ::: 30-40% of Weights Are Not Useful Figure FIGREF15 shows that the first 30-40% of weights pruned by magnitude weight pruning do not impact pre-training loss or inference on any downstream task. These weights can be pruned either before or after fine-tuning. This makes sense from the perspective of pruning as sparse architecture search: when we initialize BERT-Base, we initialize many possible subnetworks. SGD selects the best one for pre-training and pushes the rest of the weights to 0. We can then prune those weights without affecting the output of the network. Pruning Regimes ::: Medium Pruning Levels Prevent Information Transfer Past 40% pruning, performance starts to degrade. Pre-training loss increases as we prune weights necessary for fitting the pre-training data (Table TABREF27). Feature activations of the hidden layers start to diverge from models with low levels of pruning (Figure FIGREF18). Downstream accuracy also begins to degrade at this point. 
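To restate the distinction set up in the "Disentangling Complexity Restriction and Information Deletion" experiments above: both conditions zero the same weights, but only pruning keeps them at zero afterwards. A minimal PyTorch sketch of that difference follows (illustrative only; the paper's implementation uses TensorFlow).

```python
import torch

w = torch.nn.Parameter(torch.randn(768, 768))
keep = (w.abs() > torch.quantile(w.abs(), 0.6)).float()  # keep the top 40%

with torch.no_grad():
    w.mul_(keep)  # both conditions start by deleting the same information

# Information deletion: nothing further happens; the zeroed weights are free
# to move during downstream fine-tuning.

# Pruning: the mask is re-applied after every optimizer step, so the zeroed
# weights also stay at zero (the complexity restriction).
def reapply_mask():
    with torch.no_grad():
        w.mul_(keep)
```

In training code, `reapply_mask()` would be called after each `optimizer.step()` for the pruned condition only. This distinction matters for interpreting the degradation past 40% sparsity noted above.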
We believe this observation may point towards a more principled stopping criterion for pruning. Currently, the only way to know how much to prune is by trial and (dev-set) error. Predictors of performance degradation while pruning might help us decide which level of sparsity is appropriate for a given trained network without trying many at once. Why does pruning at these levels hurt downstream performance? On one hand, pruning deletes pre-training information by setting weights to 0, preventing the transfer of the useful inductive biases learned during pre-training. On the other hand, pruning regularizes the model by keeping certain weights at zero, which might prevent fitting downstream datasets. Figure FIGREF15 and Table TABREF27 show information deletion is the main cause of performance degradation between 40 - 60% sparsity, since pruning and information deletion degrade models by the same amount. Information deletion would not be a problem if pre-training and downstream datasets contained similar information. However, pre-training is effective precisely because the pre-training dataset is much larger than the labeled downstream dataset, which allows learning of more robust representations. We see that the main obstacle to compressing pre-trained models is maintaining the inductive bias of the model learned during pre-training. Encoding this bias requires many more weights than fitting downstream datasets, and it cannot be recovered due to a fundamental information gap between pre-training and downstream datasets. The amount a model can be pruned is limited by the largest dataset the model has been trained on: in this case, the pre-training dataset. Practitioners should be aware of this; pruning may subtly harm downstream generalization without affecting training loss. We might consider finding a lottery ticket for BERT, which we would expect to fit the GLUE training data just as well as pre-trained BERT BIBREF27, BIBREF28. However, we predict that the lottery-ticket will not reach similar generalization levels unless the lottery ticket encodes enough information to close the information gap. Pruning Regimes ::: High Pruning Levels Also Prevent Fitting Downstream Datasets At 70% sparsity and above, models with information deletion recover some accuracy w.r.t. pruned models, so complexity restriction is a secondary cause of performance degradation. However, these models do not recover all evaluation accuracy, despite matching un-pruned model's training loss. Table TABREF27 shows that on the MNLI and QQP tasks, which have the largest amount of training data, information deletion performs much better than pruning. In contrast, models do not recover as well on SST-2 and CoLA, which have less data. We believe this is because the larger datasets require larger models to fit, so complexity restriction becomes an issue earlier. We might be concerned that poorly performing models are over-fitting, since they have lower training losses than unpruned models. But the best performing information-deleted models have the lowest training error of all, so overfitting seems unlikely. Pruning Regimes ::: How Much Is A Bit Of BERT Worth? We've seen that over-pruning BERT deletes information useful for downstream tasks. Is this information equally useful to all tasks? We might consider the pre-training loss as a proxy for how much pre-training information we've deleted in total. Similarly, the performance of information-deletion models is a proxy for how much of that information was useful for each task. 
Figure FIGREF18 shows that the pre-training loss linearly predicts the effects of information deletion on downstream accuracy. For every bit of information we delete from BERT, it appears only a fraction is useful for CoLA, and an even smaller fraction useful for QQP. This relationship should be taken into account when considering the memory / accuracy trade-off of over-pruning. Pruning an extra 30% of BERT's weights is worth only one accuracy point on QQP but 10 points on CoLA. It's unclear, however, whether this is because the pre-training task is less relevant to QQP or whether QQP simply has a bigger dataset with more information content. Downstream Fine-tuning Does Not Improve Prunability Since pre-training information deletion plays a central role in performance degradation while over-pruning, we might expect that downstream fine-tuning would improve prunability by making important weights more salient (increasing their magnitude). However, Figure FIGREF15 shows that models pruned after downstream fine-tuning do not surpass the development accuracies of models pruned during pre-training, despite achieving similar training losses. Figure FIGREF25 shows fine-tuning changes which weights are pruned by less than 6%. Why doesn't fine-tuning change which weights are pruned much? Table TABREF30 shows that the magnitude sorting order of weights is mostly preserved; weights move on average 0-4% away from their starting positions in the sort order. We also see that high magnitude weights are more stable than lower ones (Figure FIGREF31). Our experiments suggest that training on downstream data before pruning is too blunt an instrument to improve prunability. Even so, we might consider simply training on the downstream tasks for much longer, which would increase the difference in weights pruned. However, Figure FIGREF26 shows that even after an epoch of downstream fine-tuning, weights quickly re-stabilize in a new sorting order, meaning longer downstream training will have only a marginal effect on which weights are pruned. Indeed, Figure FIGREF25 shows that the weights selected for 60% pruning quickly stabilize and evaluation accuracy does not improve with more training before pruning. Related Work Compressing BERT for Specific Tasks Section SECREF5 showed that downstream fine-tuning does not increase prunability. However, several alternative compression approaches have been proposed to discard non-task-specific information. BIBREF25 used an information bottleneck to discard non-syntactic information. BIBREF31 used BERT as a knowledge distillation teacher to compress relevant information into smaller Bi-LSTMs, while BIBREF32 took a similar distillation approach. While fine-tuning does not increase prunability, task-specific knowledge might be extracted from BERT with other methods. Attention Head Pruning BIBREF33 previously showed redundancy in transformer models by pruning entire attention heads. BIBREF34 showed that after fine-tuning on MNLI, up to 40% of attention heads can be pruned from BERT without affecting test accuracy. They show redundancy in BERT after fine-tuning on a single downstream task; in contrast, our work emphasizes the interplay between compression and transfer learning to many tasks, pruning both before and after fine-tuning. Also, magnitude weight pruning allows us to additionally prune the feed-foward networks and sub-word embeddings in BERT (not just self-attention), which account for $\sim $72% of BERT's total memory usage. 
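As an aside on the stability measurements discussed under "Downstream Fine-tuning Does Not Improve Prunability" (how much fine-tuning changes which weights get pruned, and how far weights move in the magnitude sort order), one possible way to compute such quantities is sketched below. The exact metric definitions are not given in the text, so this is an assumption-laden reconstruction on synthetic stand-in weights.

```python
import numpy as np

def prune_mask(weights, sparsity):
    """Boolean mask of the weights that magnitude pruning would remove."""
    return np.abs(weights) <= np.percentile(np.abs(weights), sparsity * 100.0)

def mask_disagreement(w_pre, w_fine, sparsity):
    """Fraction of weights whose pruned/kept status differs after fine-tuning."""
    return float(np.mean(prune_mask(w_pre, sparsity) != prune_mask(w_fine, sparsity)))

def mean_rank_movement(w_pre, w_fine):
    """Average movement in the magnitude sort order, as a fraction of matrix size."""
    rank_pre = np.argsort(np.argsort(np.abs(w_pre).ravel()))
    rank_fine = np.argsort(np.argsort(np.abs(w_fine).ravel()))
    return float(np.mean(np.abs(rank_pre - rank_fine)) / rank_pre.size)

rng = np.random.default_rng(0)
w_pre = rng.normal(size=(768, 768))
w_fine = w_pre + 0.01 * rng.normal(size=w_pre.shape)  # stand-in for fine-tuning
print(mask_disagreement(w_pre, w_fine, 0.6), mean_rank_movement(w_pre, w_fine))
```

Checks of this kind underlie the figures quoted above (mask changes of under 6%, average rank movement of 0-4%).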
We suspect that attention head pruning and weight pruning remove different redundancies from BERT. Figure FIGREF26 shows that weight pruning does not prune any specific attention head much more than the pruning rate for the whole model. It is not clear, however, whether weight pruning and recovery training make attention heads less prunable by distributing functionality to unused heads. Conclusion And Future Work We've shown that encoding BERT's inductive bias requires many more weights than are required to fit downstream data. Future work on compressing pre-trained models should focus on maintaining that inductive bias and quantifying its relevance to various tasks during accuracy/memory trade-offs. For magnitude weight pruning, we've shown that 30-40% of the weights do not encode any useful inductive bias and can be discarded without affecting BERT's universality. The relevance of the rest of the weights varies from task to task, and fine-tuning on downstream tasks does not change the nature of this trade-off by changing which weights are pruned. In future work, we will investigate the factors that influence language modeling's relevance to downstream tasks and how to improve compression in a task-general way. It's reasonable to believe that these conclusions will generalize to other pre-trained language models such as Kermit BIBREF3, XLNet BIBREF2, GPT-2 BIBREF4, RoBERTa BIBREF35 or ELMO BIBREF36. All of these learn some variant of language modeling, and most use Transformer architectures. While it remains to be shown in future work, viewing pruning as architecture search implies these models will be prunable due to the training dynamics inherent to neural networks.
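As a companion to the per-head comparison above, the following sketch (ours, on a randomly generated stand-in for a pruned query projection) checks how evenly matrix-local magnitude pruning distributes zeros across attention heads; the column-per-head layout of the stacked matrix is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

HIDDEN, N_HEADS = 768, 12
HEAD_DIM = HIDDEN // N_HEADS

# A stand-in for a stacked query projection after 60% matrix-local pruning.
w_query = rng.normal(size=(HIDDEN, HIDDEN))
thresh = np.quantile(np.abs(w_query), 0.6)
w_query[np.abs(w_query) <= thresh] = 0.0

# Columns are grouped by head in the layout assumed here.
for head in range(N_HEADS):
    block = w_query[:, head * HEAD_DIM:(head + 1) * HEAD_DIM]
    print(f"head {head:2d}: {np.mean(block == 0.0):.1%} of weights pruned")
# For weights without head-specific structure, per-head sparsity stays close to
# the overall 60% rate rather than zeroing out entire heads.
```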
we compressed English BERT using magnitude weight pruning BIBREF8 and observed the results on transfer learning to the General Language Understanding Evaluation (GLUE) benchmark BIBREF9, a diverse set of natural language understanding tasks including sentiment analysis, NLI, and textual similarity evaluation.
c4c9c7900a0480743acc7599efb359bc81cf3a4d
c4c9c7900a0480743acc7599efb359bc81cf3a4d_0
Q: How much is pre-training loss increased in Low/Medium/Hard level of pruning? Text: Introduction Pre-trained feature extractors, such as BERT BIBREF0 for natural language processing and VGG BIBREF1 for computer vision, have become effective methods for improving the performance of deep learning models. In the last year, models similar to BERT have become state-of-the-art in many NLP tasks, including natural language inference (NLI), named entity recognition (NER), sentiment analysis, etc. These models follow a pre-training paradigm: they are trained on a large amount of unlabeled text via a task that resembles language modeling BIBREF2, BIBREF3 and are then fine-tuned on a smaller amount of “downstream” data, which is labeled for a specific task. Pre-trained models usually achieve higher accuracy than any model trained on downstream data alone. The pre-training paradigm, while effective, still has some problems. While some claim that language model pre-training is a “universal language learning task" BIBREF4, there is no theoretical justification for this, only empirical evidence. Second, due to the size of the pre-training dataset, BERT models tend to be slow and require impractically large amounts of GPU memory. BERT-Large can only be used with access to a Google TPU, and BERT-Base requires some optimization tricks such as gradient checkpointing or gradient accumulation to be trained effectively on consumer hardware BIBREF5. Training BERT-Base from scratch costs $\sim $$7k and emits $\sim $1438 pounds of CO$_2$ BIBREF6. Model compression BIBREF7, which attempts to shrink a model without losing accuracy, is a viable approach to decreasing GPU usage. It might also be used to trade accuracy for memory in some low-resource cases, such as deploying to smartphones for real-time prediction. The main questions this paper attempts to answer are: Does compressing BERT impede it's ability to transfer to new tasks? And does fine-tuning make BERT more or less compressible? To explore these questions, we compressed English BERT using magnitude weight pruning BIBREF8 and observed the results on transfer learning to the General Language Understanding Evaluation (GLUE) benchmark BIBREF9, a diverse set of natural language understanding tasks including sentiment analysis, NLI, and textual similarity evaluation. We chose magnitude weight pruning, which compresses models by removing weights close to 0, because it is one of the most fine-grained and effective compression methods and because there are many interesting ways to view pruning, which we explore in the next section. Our findings are as follows: Low levels of pruning (30-40%) do not increase pre-training loss or affect transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from being transferred to downstream tasks. This information is not equally useful to each task; tasks degrade linearly with pre-train loss, but at different rates. High levels of pruning, depending on the size of the downstream dataset, may additionally degrade performance by preventing models from fitting downstream datasets. Finally, we observe that fine-tuning BERT on a specific task does not improve its prunability or change the order of pruning by a meaningful amount. 
To our knowledge, prior work had not shown whether BERT could be compressed in a task-generic way, keeping the benefits of pre-training while avoiding costly experimentation associated with compressing and re-training BERT multiple times. Nor had it shown whether BERT could be over-pruned for a memory / accuracy trade-off for deployment to low-resource devices. In this work, we conclude that BERT can be pruned prior to distribution without affecting it's universality, and that BERT may be over-pruned during pre-training for a reasonable accuracy trade-off for certain tasks. Pruning: Compression, Regularization, Architecture Search Neural network pruning involves examining a trained network and removing parts deemed to be unnecessary by some heuristic saliency criterion. One might remove weights, neurons, layers, channels, attention heads, etc. depending on which heuristic is used. Below, we describe three different lenses through which we might interpret pruning. Compression Pruning a neural network decreases the number of parameters required to specify the model, which decreases the disk space required to store it. This allows large models to be deployed on edge computing devices like smartphones. Pruning can also increase inference speed if whole neurons or convolutional channels are pruned, which reduces GPU usage. Regularization Pruning a neural network also regularizes it. We might consider pruning to be a form of permanent dropout BIBREF11 or a heuristic-based L0 regularizer BIBREF12. Through this lens, pruning decreases the complexity of the network and therefore narrows the range of possible functions it can express. The main difference between L0 or L1 regularization and weight pruning is that the former induce sparsity via a penalty on the loss function, which is learned during gradient descent via stochastic relaxation. It's not clear which approach is more principled or preferred. BIBREF13 Interestingly, recent work used compression not to induce simplicity but to measure it BIBREF14. Sparse Architecture Search Finally, we can view neural network pruning as a type of sparse architecture search. BIBREF15 and BIBREF16 show that they can train carefully re-initialized pruned architectures to similar performance levels as dense networks. Under this lens, stochastic gradient descent (SGD) induces network sparsity, and pruning simply makes that sparsity explicit. These sparse architectures, along with the appropriate initializations, are sometimes referred to as “lottery tickets.” Sparse networks are difficult to train from scratch BIBREF17. However, BIBREF18 and BIBREF19 present methods to do this by allowing SGD to search over the space of possible subnetworks. Our findings suggest that these methods might be used to train sparse BERT from scratch. Pruning: Compression, Regularization, Architecture Search ::: Magnitude Weight Pruning In this work, we focus on weight magnitude pruning because it is one of the most fine-grained and effective pruning methods. It also has a compelling saliency criterion BIBREF8: if a weight is close to zero, then its input is effectively ignored, which means the weight can be pruned. Magnitude weight pruning itself is a simple procedure: 1. Pick a target percentage of weights to be pruned, say 50%. 2. Calculate a threshold such that 50% of weight magnitudes are under that threshold. 3. Remove those weights. 4. Continue training the network to recover any lost accuracy. 5. Optionally, return to step 1 and increase the percentage of weights pruned. 
This procedure is conveniently implemented in a Tensorflow BIBREF20 package, which we use BIBREF21. Calculating a threshold and pruning can be done for all network parameters holistically (global pruning) or for each weight matrix individually (matrix-local pruning). Both methods will prune to the same sparsity, but in global pruning the sparsity might be unevenly distributed across weight matrices. We use matrix-local pruning because it is more popular in the community. For information on other pruning techniques, we recommend BIBREF13 and BIBREF15. Experimental Setup BERT is a large Transformer encoder; for background, we refer readers to BIBREF22 or one of these excellent tutorials BIBREF23, BIBREF24. Experimental Setup ::: Implementing BERT Pruning BERT-Base consists of 12 encoder layers, each of which contains 6 prunable matrices: 4 for the multi-headed self-attention and 2 for the layer's output feed-forward network. Recall that self-attention first projects layer inputs into key, query, and value embeddings via linear projections. While there is a separate key, query, and value projection matrix for each attention head, implementations typically “stack” matrices from each attention head, resulting in only 3 parameter matrices: one for key projections, one for value projections, and one for query projections. We prune each of these matrices separately, calculating a threshold for each. We also prune the linear output projection, which combines outputs from each attention head into a single embedding. We prune word embeddings in the same way we prune feed-foward networks and self-attention parameters. The justification is similar: if a word embedding value is close to zero, we can assume it's zero and store the rest in a sparse matrix. This is useful because token / subword embeddings tend to account for a large portion of a natural language model's memory. In BERT-Base specifically, the embeddings account for $\sim $21% of the model's memory. Our experimental code for pruning BERT, based on the public BERT repository, is available here. Experimental Setup ::: Pruning During Pre-Training We perform weight magnitude pruning on a pre-trained BERT-Base model. We select sparsities from 0% to 90% in increments of 10% and gradually prune BERT to this sparsity over the first 10k steps of training. We continue pre-training on English Wikipedia and BookCorpus for another 90k steps to regain any lost accuracy. The resulting pre-training losses are shown in Table TABREF27. We then fine-tune these pruned models on tasks from the General Language Understanding Evaluation (GLUE) benchmark, which is a standard set of 9 tasks that include sentiment analysis, natural language inference, etc. We avoid WNLI, which is known to be problematic. We also avoid tasks with less than 5k training examples because the results tend to be noisy (RTE, MRPC, STS-B). We fine-tune a separate model on each of the remaining 5 GLUE tasks for 3 epochs and try 4 learning rates: $[2, 3, 4, 5] \times 10^{-5}$. The best evaluation accuracies are averaged and plotted in Figure FIGREF15. Individual task results are in Table TABREF27. BERT can be used as a static feature-extractor or as a pre-trained model which is fine-tuned end-to-end. In all experiments, we fine-tune weights in all layers of BERT on downstream tasks. 
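A minimal sketch of the matrix-local magnitude pruning step described above, written in plain NumPy rather than with the TensorFlow pruning package the paper uses; layer names and shapes are illustrative approximations of BERT-Base.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of smallest-magnitude entries,
    computing a separate threshold for each matrix (matrix-local pruning)."""
    pruned, masks = {}, {}
    for name, w in weights.items():
        thresh = np.quantile(np.abs(w), sparsity)
        mask = np.abs(w) > thresh
        pruned[name] = w * mask
        masks[name] = mask  # masks stay fixed during recovery training
    return pruned, masks

# Illustrative layer-0 parameter matrices (shapes follow BERT-Base).
rng = np.random.default_rng(0)
weights = {
    "layer0/attention/query": rng.normal(size=(768, 768)),
    "layer0/attention/key": rng.normal(size=(768, 768)),
    "layer0/attention/value": rng.normal(size=(768, 768)),
    "layer0/attention/output": rng.normal(size=(768, 768)),
    "layer0/ffn/intermediate": rng.normal(size=(768, 3072)),
    "layer0/ffn/output": rng.normal(size=(3072, 768)),
}
pruned, masks = magnitude_prune(weights, sparsity=0.5)
for name, w in pruned.items():
    print(f"{name}: {np.mean(w == 0):.1%} zeros")
```

Computing a single threshold over the concatenation of all matrices instead of one per matrix would give the global variant discussed above, in which sparsity can be unevenly distributed across matrices.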
Experimental Setup ::: Disentangling Complexity Restriction and Information Deletion Pruning involves two steps: it deletes the information stored in a weight by setting it to 0 and then regularizes the model by preventing that weight from changing during further training. To disentangle these two effects (model complexity restriction and information deletion), we repeat the experiments from Section SECREF9 with an identical pre-training setup, but instead of pruning we simply set the weights to 0 and allow them to vary during downstream training. This deletes the pre-training information associated with the weight but does not prevent the model from fitting downstream datasets by keeping the weight at zero during downstream training. We also fine-tune on downstream tasks until training loss becomes comparable to models with no pruning. We trained most models for 13 epochs rather than 3. Models with 70-90% information deletion required 15 epochs to fit the training data. The results are also included in Figure FIGREF15 and Table TABREF27. Experimental Setup ::: Pruning After Downstream Fine-tuning We might expect that BERT would be more compressible after downstream fine-tuning. Intuitively, the information needed for downstream tasks is a subset of the information learned during pre-training; some tasks require more semantic information than syntactic, and vice-versa. We should be able to discard the “extra" information and only keep what we need for, say, parsing BIBREF25. For magnitude weight pruning specifically, we might expect downstream training to change the distribution of weights in the parameter matrices. This, in turn, changes the sort-order of the absolute values of those weights, which changes the order that we prune them in. This new pruning order, hypothetically, would be less degrading to our specific downstream task. To test this, we fine-tuned pre-trained BERT-Base on downstream data for 3 epochs. We then pruned at various sparsity levels and continued training for 5 more epochs (7 for 80/90% sparsity), at which point the training losses became comparable to those of models pruned during pre-training. We repeat this for learning rates in $[2, 3, 4, 5] \times 10^{-5}$ and show the results with the best development accuracy in Figure FIGREF15 / Table TABREF27. We also measure the difference in which weights are selected for pruning during pre-training vs. downstream fine-tuning and plot the results in Figure FIGREF25. Pruning Regimes ::: 30-40% of Weights Are Not Useful Figure FIGREF15 shows that the first 30-40% of weights pruned by magnitude weight pruning do not impact pre-training loss or inference on any downstream task. These weights can be pruned either before or after fine-tuning. This makes sense from the perspective of pruning as sparse architecture search: when we initialize BERT-Base, we initialize many possible subnetworks. SGD selects the best one for pre-training and pushes the rest of the weights to 0. We can then prune those weights without affecting the output of the network. Pruning Regimes ::: Medium Pruning Levels Prevent Information Transfer Past 40% pruning, performance starts to degrade. Pre-training loss increases as we prune weights necessary for fitting the pre-training data (Table TABREF27). Feature activations of the hidden layers start to diverge from models with low levels of pruning (Figure FIGREF18). Downstream accuracy also begins to degrade at this point. 
We believe this observation may point towards a more principled stopping criterion for pruning. Currently, the only way to know how much to prune is by trial and (dev-set) error. Predictors of performance degradation while pruning might help us decide which level of sparsity is appropriate for a given trained network without trying many at once. Why does pruning at these levels hurt downstream performance? On one hand, pruning deletes pre-training information by setting weights to 0, preventing the transfer of the useful inductive biases learned during pre-training. On the other hand, pruning regularizes the model by keeping certain weights at zero, which might prevent fitting downstream datasets. Figure FIGREF15 and Table TABREF27 show information deletion is the main cause of performance degradation between 40 - 60% sparsity, since pruning and information deletion degrade models by the same amount. Information deletion would not be a problem if pre-training and downstream datasets contained similar information. However, pre-training is effective precisely because the pre-training dataset is much larger than the labeled downstream dataset, which allows learning of more robust representations. We see that the main obstacle to compressing pre-trained models is maintaining the inductive bias of the model learned during pre-training. Encoding this bias requires many more weights than fitting downstream datasets, and it cannot be recovered due to a fundamental information gap between pre-training and downstream datasets. The amount a model can be pruned is limited by the largest dataset the model has been trained on: in this case, the pre-training dataset. Practitioners should be aware of this; pruning may subtly harm downstream generalization without affecting training loss. We might consider finding a lottery ticket for BERT, which we would expect to fit the GLUE training data just as well as pre-trained BERT BIBREF27, BIBREF28. However, we predict that the lottery-ticket will not reach similar generalization levels unless the lottery ticket encodes enough information to close the information gap. Pruning Regimes ::: High Pruning Levels Also Prevent Fitting Downstream Datasets At 70% sparsity and above, models with information deletion recover some accuracy w.r.t. pruned models, so complexity restriction is a secondary cause of performance degradation. However, these models do not recover all evaluation accuracy, despite matching un-pruned model's training loss. Table TABREF27 shows that on the MNLI and QQP tasks, which have the largest amount of training data, information deletion performs much better than pruning. In contrast, models do not recover as well on SST-2 and CoLA, which have less data. We believe this is because the larger datasets require larger models to fit, so complexity restriction becomes an issue earlier. We might be concerned that poorly performing models are over-fitting, since they have lower training losses than unpruned models. But the best performing information-deleted models have the lowest training error of all, so overfitting seems unlikely. Pruning Regimes ::: How Much Is A Bit Of BERT Worth? We've seen that over-pruning BERT deletes information useful for downstream tasks. Is this information equally useful to all tasks? We might consider the pre-training loss as a proxy for how much pre-training information we've deleted in total. Similarly, the performance of information-deletion models is a proxy for how much of that information was useful for each task. 
Figure FIGREF18 shows that the pre-training loss linearly predicts the effects of information deletion on downstream accuracy. For every bit of information we delete from BERT, it appears only a fraction is useful for CoLA, and an even smaller fraction useful for QQP. This relationship should be taken into account when considering the memory / accuracy trade-off of over-pruning. Pruning an extra 30% of BERT's weights is worth only one accuracy point on QQP but 10 points on CoLA. It's unclear, however, whether this is because the pre-training task is less relevant to QQP or whether QQP simply has a bigger dataset with more information content. Downstream Fine-tuning Does Not Improve Prunability Since pre-training information deletion plays a central role in performance degradation while over-pruning, we might expect that downstream fine-tuning would improve prunability by making important weights more salient (increasing their magnitude). However, Figure FIGREF15 shows that models pruned after downstream fine-tuning do not surpass the development accuracies of models pruned during pre-training, despite achieving similar training losses. Figure FIGREF25 shows fine-tuning changes which weights are pruned by less than 6%. Why doesn't fine-tuning change which weights are pruned much? Table TABREF30 shows that the magnitude sorting order of weights is mostly preserved; weights move on average 0-4% away from their starting positions in the sort order. We also see that high magnitude weights are more stable than lower ones (Figure FIGREF31). Our experiments suggest that training on downstream data before pruning is too blunt an instrument to improve prunability. Even so, we might consider simply training on the downstream tasks for much longer, which would increase the difference in weights pruned. However, Figure FIGREF26 shows that even after an epoch of downstream fine-tuning, weights quickly re-stabilize in a new sorting order, meaning longer downstream training will have only a marginal effect on which weights are pruned. Indeed, Figure FIGREF25 shows that the weights selected for 60% pruning quickly stabilize and evaluation accuracy does not improve with more training before pruning. Related Work Compressing BERT for Specific Tasks Section SECREF5 showed that downstream fine-tuning does not increase prunability. However, several alternative compression approaches have been proposed to discard non-task-specific information. BIBREF25 used an information bottleneck to discard non-syntactic information. BIBREF31 used BERT as a knowledge distillation teacher to compress relevant information into smaller Bi-LSTMs, while BIBREF32 took a similar distillation approach. While fine-tuning does not increase prunability, task-specific knowledge might be extracted from BERT with other methods. Attention Head Pruning BIBREF33 previously showed redundancy in transformer models by pruning entire attention heads. BIBREF34 showed that after fine-tuning on MNLI, up to 40% of attention heads can be pruned from BERT without affecting test accuracy. They show redundancy in BERT after fine-tuning on a single downstream task; in contrast, our work emphasizes the interplay between compression and transfer learning to many tasks, pruning both before and after fine-tuning. Also, magnitude weight pruning allows us to additionally prune the feed-foward networks and sub-word embeddings in BERT (not just self-attention), which account for $\sim $72% of BERT's total memory usage. 
We suspect that attention head pruning and weight pruning remove different redundancies from BERT. Figure FIGREF26 shows that weight pruning does not prune any specific attention head much more than the pruning rate for the whole model. It is not clear, however, whether weight pruning and recovery training make attention heads less prunable by distributing functionality to unused heads. Conclusion And Future Work We've shown that encoding BERT's inductive bias requires many more weights than are required to fit downstream data. Future work on compressing pre-trained models should focus on maintaining that inductive bias and quantifying its relevance to various tasks during accuracy/memory trade-offs. For magnitude weight pruning, we've shown that 30-40% of the weights do not encode any useful inductive bias and can be discarded without affecting BERT's universality. The relevance of the rest of the weights varies from task to task, and fine-tuning on downstream tasks does not change the nature of this trade-off by changing which weights are pruned. In future work, we will investigate the factors that influence language modeling's relevance to downstream tasks and how to improve compression in a task-general way. It's reasonable to believe that these conclusions will generalize to other pre-trained language models such as Kermit BIBREF3, XLNet BIBREF2, GPT-2 BIBREF4, RoBERTa BIBREF35 or ELMO BIBREF36. All of these learn some variant of language modeling, and most use Transformer architectures. While it remains to be shown in future work, viewing pruning as architecture search implies these models will be prunable due to the training dynamics inherent to neural networks.
The pre-training loss increases roughly linearly with pruning level: around 2.0 on average at low levels, about 3.5 at medium levels, and up to 6.0 at the highest levels.
0a4e82dc3728be0bd0325bfe944e7e7de0b98b22
0a4e82dc3728be0bd0325bfe944e7e7de0b98b22_0
Q: How do they gather human reviews? Text: Introduction The attention mechanism BIBREF1 in neural networks can be used to interpret and visualize model behavior by selecting the most pertinent pieces of information instead of all available information. For example, in BIBREF0 , a hierarchical attention network (Han) is created and tested on the classification of product and movie reviews. As a side effect of employing the attention mechanism, sentences (and words) that are considered important to the model can be highlighted, and color intensity corresponds to the level of importance (darker color indicates higher importance). Our application is the escalation of Internet chats. To maintain quality of service, users are transferred to human representatives when their conversations with an intelligent virtual assistant (IVA) fail to progress. These transfers are known as escalations. We apply Han to such conversations in a sequential manner by feeding each user turn to Han as they occur, to determine if the conversation should escalate. If so, the user will be transferred to a live chat representative to continue the conversation. To help the human representative quickly determine the cause of the escalation, we generate a visualization of the user's turns using the attention weights to highlight the turns influential in the escalation decision. This helps the representative quickly scan the conversation history and determine the best course of action based on problematic turns. Unfortunately, there are instances where the attention weights for every turn at the point of escalation are nearly equal, requiring the representative to carefully read the history to determine the cause of escalation unassisted. Table TABREF1 shows one such example with uniform attention weights at the point of escalation. Our application requires that the visualizations be generated in real-time at the point of escalation. The user must wait for the human representative to review the IVA chat history and resume the failed task. Therefore, we seek visualization methods that do not add significant latency to the escalation transfer. Using the attention weights for turn influence is fast as they were already computed at the time of classification. However, these weights will not generate useful visualizations for the representatives when their values are similar across all turns (see Han Weight in Table TABREF1 ). To overcome this problem, we develop a visualization method to be applied in the instances where the attention weights are uniform. Our method produces informative visuals for determining influential samples in a sequence by observing the changes in sample importance over the cumulative sequence (see Our Weight in Table TABREF1 ). Note that we present a technique that only serves to resolve situations when the existing attention weights are ambiguous; we are not developing a new attention mechanism, and, as our method is external, it does not require any changes to the existing model to apply. To determine when the turn weights are uniform, we use perplexity BIBREF2 (see more details in subsection SECREF4 ). If a conversation INLINEFORM0 escalates on turn INLINEFORM1 with attention weights INLINEFORM2 , let INLINEFORM3 . Intuitively, INLINEFORM4 should be low when uniformity is high. We measure the INLINEFORM5 of every escalated conversation and provide a user-chosen uniformity threshold for INLINEFORM6 (Figure FIGREF2 ). 
For example, if the INLINEFORM7 threshold for uniformity is INLINEFORM8 , 20% of conversations in our dataset will result in Han visuals where all turns have similar weight; thus, no meaningful visualization can be produced. Companies that deploy IVA solutions for customer service report escalated conversation volumes of INLINEFORM9 per day for one IVA BIBREF3 . Therefore, even at 20%, contact centers handling multiple companies may see hundreds or thousands of conversations per day with no visualizations. If we apply our method in instances where Han weights are uniform, all conversations become non-uniform using the same INLINEFORM10 threshold for INLINEFORM11 , enabling visualization to reduce human effort. Related Work Neural networks are powerful learning algorithms, but are also some of the most complex. This is made worse by the non-deterministic nature of neural network training; a small change in a learning parameter can drastically affect the network's learning ability. This has led to the development of methodologies for understanding and uncovering not just neural networks, but black box models in general. The interpretation of deep networks is a young field of research. We refer readers to BIBREF4 for a comprehensive overview of different methods for understanding and visualizing deep neural networks. More recent developments include DeepLIFT BIBREF5 (not yet applicable to RNNs), layerwise relevance propagation BIBREF6 (only very recently adapted to textual input and LSTMs BIBREF7 , BIBREF8 ), and LIME BIBREF9 . LIME is model-agnostic, relying solely on the input data and classifier prediction probabilities. By perturbing the input and seeing how predictions change, one can approximate the complex model using a simpler, interpretable linear model. However, users must consider how the perturbations are created, which simple model to train, and what features to use in the simpler model. In addition, LIME is an external method not built into the classifier that can add significant latency when creating visuals in real-time as it requires generating perturbations and fitting a regression for every sample point. Attention BIBREF1 , however, is built into Han and commonly implemented in other network structures (see below), and, as a result, visuals are created for free as they are obtained from the attention weights directly. Attention has been used for grammatical error correction BIBREF10 , cloze-style reading tasks BIBREF11 , BIBREF12 , text classification BIBREF13 , abstractive sentence summarization BIBREF14 , and many other sequence transduction tasks. BIBREF15 uses an encoder-decoder framework with attention to model conversations and generate natural responses to user input. BIBREF16 is perhaps most similar to what we wish to achieve, but only uses one-round conversation data (one user input, one computer response). To the best of our knowledge, ours is the first paper that considers the changes in attention during sequential analysis to create more explanatory visuals in situations where attention weights on an entire sequence are uniform. Methodology In Table TABREF3 , we see the bottom visualization where the weights are uniform at the point of escalation. However, on the 2nd turn, the Han had produced more distinct weights. It is clear from this example that the importance of a single sample can change drastically as a sequence progresses. 
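A minimal sketch of the bookkeeping this implies (our own illustration; `fake_han` is a toy stand-in for the trained classifier, not the paper's model): the conversation is re-scored after each new user turn and the per-turn attention weights are recorded so that changes across prefixes can be examined later.

```python
from typing import Callable, List

def collect_weight_history(
    turns: List[str],
    han_turn_weights: Callable[[List[str]], List[float]],
) -> List[List[float]]:
    """Run the classifier on each cumulative prefix of the conversation and
    record the turn-level attention weights it produces at every step."""
    history = []
    for i in range(1, len(turns) + 1):
        weights = han_turn_weights(turns[:i])  # weights over turns 1..i
        history.append(list(weights))
    return history

# Toy stand-in for the model: weight each turn by its (normalized) length.
def fake_han(turns: List[str]) -> List[float]:
    lengths = [len(t) for t in turns]
    total = sum(lengths) or 1
    return [l / total for l in lengths]

conversation = ["hi", "I need to change my flight",
                "this is not working", "agent please"]
for step, weights in enumerate(collect_weight_history(conversation, fake_han), 1):
    print(f"after turn {step}: {[round(w, 2) for w in weights]}")
```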
Using these changes in attention over the sequence, we formalized a set of rules to create an alternative visualization for the entire sequence to be applied in cases where the attention weights are uniform over all samples at the stopping point. Measuring Uniformity We begin with defining what it means for attention weights to be uniform. For a probability distribution INLINEFORM0 over the sample space INLINEFORM1 , the perplexity measure is defined as the exponential of the entropy of INLINEFORM2 . More formally, INLINEFORM3 where the entropy is INLINEFORM0 As entropy is a measure of the degree of randomness in INLINEFORM0 , perplexity is a measure of the number of choices that comprise this randomness. The following properties of perplexity will be applicable. For any distribution INLINEFORM0 , the value of INLINEFORM1 is always positive. ( INLINEFORM2 for all INLINEFORM3 .) For any distribution INLINEFORM0 over INLINEFORM1 values, we have INLINEFORM2 . The larger the value, the closer INLINEFORM3 is to being uniform. The equality holds if and only if INLINEFORM4 is uniform. With respect to property ( UID6 ) above, we define a metric INLINEFORM0 , where INLINEFORM1 is any distribution over INLINEFORM2 values. Thus, for all INLINEFORM3 and all distributions INLINEFORM4 that are uniform over INLINEFORM5 values, it must be the case that INLINEFORM6 . Furthermore, INLINEFORM7 for all INLINEFORM8 and INLINEFORM9 . We drop the subscript INLINEFORM10 from INLINEFORM11 when it is obvious from the context. In our application, obtaining an exact uniform distribution is not feasible; it suffices to consider a distribution to be uniform if it is almost the same over all values. We say that a given distribution INLINEFORM0 on INLINEFORM1 values is INLINEFORM2 -uniform if INLINEFORM3 . Note that since INLINEFORM4 can be at most INLINEFORM5 (as INLINEFORM6 ), this restricts INLINEFORM7 to be any real number between 0 and INLINEFORM8 . In this context, given a distribution INLINEFORM0 over INLINEFORM1 values, we will refer to INLINEFORM2 as the measure of uniformity of INLINEFORM3 . The smaller the value of INLINEFORM4 , the closer INLINEFORM5 is to being uniform. For our specific application, INLINEFORM0 is a user chosen uniformity threshold, INLINEFORM1 consists of turn weights, and INLINEFORM2 is the number of turns in the conversation. For example, in Figure FIGREF2 , if the threshold for INLINEFORM3 is chosen to be INLINEFORM4 , this will result in 20% of conversations in our datasets with uniform Han turn weights. Attention Behaviors Given a conversation INLINEFORM0 that contains INLINEFORM1 turns, let INLINEFORM2 be the vector of attention weights obtained from inputting INLINEFORM3 (where INLINEFORM4 is the INLINEFORM5 -th turn in INLINEFORM6 ) to Han. When turn INLINEFORM7 is added, we consider three forms of behavior that help us create a new visual: attention, context, and variation dependency switches. See section SECREF4 for evidence as to why we chose these particular behaviors. An attention dependency switch occurs when the addition of a turn changes the distribution of weights. Suppose we have a 4 turn conversation. In Figure FIGREF8 , considering only the first 3 turns gives us a uniform distribution of weights (left). However, when we add turn 4 (Figure FIGREF8 , right), the distribution shifts to one of non-uniformity. We consider the addition of any such turn that causes a switch from uniform to non-uniform or vice-versa in the creation of our visuals. 
More formally, there is an attention dependency variable change from turn INLINEFORM0 to INLINEFORM1 with some threshold INLINEFORM2 (note that INLINEFORM3 in section SECREF4 ) if any one of the following occurs: INLINEFORM0 and INLINEFORM1 INLINEFORM0 and INLINEFORM1 With 1, we are switching from a uniform distribution to a non-uniform distribution with the addition of turn INLINEFORM0 . . With 2, we are switching from a non-uniform distribution to a uniform distribution. Note that it is possible that the attention dependency variable change is observed for many turns and not just one. A context dependency switch occurs when the addition of a turn causes a previous turn's weight to change significantly. In Figure FIGREF9 , the addition of turn 6 causes turn 3's weight to spike. Mathematically, there is a context dependency variable change in turn INLINEFORM0 by addition of turn INLINEFORM1 for INLINEFORM2 with some threshold INLINEFORM3 if INLINEFORM4 The final switch of consideration is a variation dependency switch, which occurs when the weight of turn INLINEFORM0 changes significantly over the entire course of a conversation. More formally, there is a variation dependency variable change in turn INLINEFORM0 with some threshold INLINEFORM1 when the conversation has INLINEFORM2 turns if INLINEFORM3 . Note that variation dependency differs from context dependency as the latter determines turn INLINEFORM0 's change with the addition of only one turn. For determining attention dependency, we considered normalized attention weights, but for variation and context, we considered the unnormalized output logits from the Han. It is also important to note that an attention dependency switch can occur without a context dependency switch and vice-versa. In Figure FIGREF9 , neither distribution is uniform; therefore, no attention dependency switch occurred. In Figure FIGREF12 , an attention dependency switch has occurred (uniform to non-uniform distribution), but there is no context dependency variable change. In Figure FIGREF13 , a context dependency variable change has occurred as many previous weights have spiked, but the distribution of weights has not changed (no attention dependency variable change bc it is still non-uniform). In our experiments, we compute the thresholds mentioned in the definitions above as follows: For attention dependency, we experimented with various INLINEFORM0 thresholds and tagged 100 randomly chosen conversations for each of those thresholds to determine a potential candidate. For example, using a threshold of INLINEFORM1 , weight vectors such as INLINEFORM2 would be considered uniform, which we greatly disagreed with. However, we determined that weight distributions below the INLINEFORM3 threshold appeared uniform 90% of the time, which we considered good agreement. For context dependency and variation dependency switches, we chose the value of INLINEFORM0 and INLINEFORM1 , respectively, using the 75th percentile of the values for different turns. Upon comparison with manual tagging of 100 randomly chosen conversations, we agreed on all 100 cases for the context dependency switch and 99 out of 100 cases for the variation dependency switch. Data and Classifier Our escalation data was obtained from BIBREF17 , which consists of INLINEFORM0 conversations ( INLINEFORM1 user turns) from two commercial airline IVAs. INLINEFORM2 of the INLINEFORM3 conversations had been tagged for escalation. See dataset statistics in Table TABREF17 . 
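A minimal sketch of the perplexity-based uniformity check from the Measuring Uniformity subsection above; this is our own rendering, it assumes natural logarithms (the choice of base only rescales the ε threshold), and the example weight vectors are made up.

```python
import math
from typing import Sequence

def entropy(p: Sequence[float]) -> float:
    """Shannon entropy of a distribution (natural log assumed here)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def perplexity(p: Sequence[float]) -> float:
    return math.exp(entropy(p))

def uniformity_gap(p: Sequence[float]) -> float:
    """d = log(N) - H(p); equals 0 iff p is exactly uniform over its N values."""
    return math.log(len(p)) - entropy(p)

def is_eps_uniform(p: Sequence[float], eps: float) -> bool:
    return uniformity_gap(p) <= eps

near_uniform = [0.24, 0.26, 0.25, 0.25]   # near-uniform Han turn weights
spiked = [0.05, 0.05, 0.80, 0.10]         # one turn dominates
for w in (near_uniform, spiked):
    print(f"weights={w} perplexity={perplexity(w):.2f} "
          f"gap={uniformity_gap(w):.3f} uniform@0.05={is_eps_uniform(w, 0.05)}")
```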
Airline dataset 1 has INLINEFORM0 conversations and INLINEFORM1 turns, and airline dataset 2 has INLINEFORM2 conversations and INLINEFORM3 turns. The low turn counts present in dataset 2 are due to the FAQ focus of dataset 2's particular IVA. Users tend to perform single queries such as “baggage policy" instead of engaging in a conversational interaction. In contrast, dataset 1 originated from a more “natural" IVA, and, therefore, users appeared to engage with it more through conversation. The classifier (Han) used for escalation prediction is outlined in BIBREF0 . As the code was unavailable, we implemented Han with TensorFlow BIBREF18 . Our version has substantially the same architecture as in BIBREF0 with the exception that LSTM cells are used in place of GRU. We used the 200-dimensional word embeddings from glove.twitter.27B BIBREF19 and did not adapt them during the training of our model. Each recurrent encoding layer has 50 forward and 50 backward cells, giving 100-dimensional embeddings each for turns and conversations. In predicting escalation, our network obtained an INLINEFORM0 of INLINEFORM1 ( INLINEFORM2 precision, INLINEFORM3 recall, averaged over five random splits). To compute these metrics, turn-level annotations were converted to conversation-level annotations by labeling a conversation as escalate if any turn in the conversation was labeled escalate. For the visualization experiments, a random 80-20 split was used to create training and testing sets. The training set consisted of INLINEFORM0 conversations of which INLINEFORM1 should escalate. The testing set consisted of INLINEFORM2 conversations of which 241 should escalate. Creating Our Visuals Given the occurrences of attention ( INLINEFORM0 ), context ( INLINEFORM1 ), and variation ( INLINEFORM2 ) dependency switches, we now discuss how a visual of the entire conversation can be created. For each turn INLINEFORM3 , create a vector INLINEFORM4 , where each variable inside this vector takes the value 1 when the attention, context, and variation dependency switches trigger, respectively, and 0 otherwise. Compute INLINEFORM0 , and use this value to represent the intensity of a single color (blue in our examples). The higher the value of INLINEFORM1 , the higher the color intensity. Note that INLINEFORM2 . Take, for example, Table TABREF19 where for the first conversation's weights (using our weights), turns 2,3, and 6 have values of INLINEFORM3 , turns 4,5, and 7 have values of INLINEFORM4 , and the first turn has a value of 0. Considering a higher dimension for INLINEFORM5 which would create more values for INLINEFORM6 is an objective for future work. Results and Discussion We first considered the frequency of each of the behaviors discussed in section SECREF7 as well as their co-occurrences with escalation. After removing single turn conversations (as they are uniform by default), the number of turns that had a context dependency switch as a result of adding a new turn was INLINEFORM0 . However, the number of times that such an event coincided at least once with escalation was 766. As it appeared that the effect of context dependency was quite low, we next considered the variation and attention dependency variables. The total number of turns that had a variation dependency switch was INLINEFORM1 , and INLINEFORM2 also coincided with a change of escalation, indicating that a variation dependency switch is potentially valuable in the creation of new visuals. 
In addition, the number of uniform to non-uniform turn pairs (uniform weight distribution for first INLINEFORM3 turns but non-uniform for first INLINEFORM4 turns) was INLINEFORM5 whereas the number of non-uniform to uniform turn pairs was 259. Out of the times when there was a uniform to non-uniform switch, 710 cases coincided with escalation compared to only 22 for non-uniform to uniform changes. As shown in Figure FIGREF2 , the use of our method when the Han weights are uniform greatly reduces or even eliminates the uniformity at lower INLINEFORM0 thresholds. To determine if our visuals were also assigning weights properly, we had three reviewers rate on a 0 to 10 scale (0 being poor, 10 being best) of how well each visualization highlights the influential turns for escalation in the conversation. See Table TABREF20 for an example that was tagged nearly perfectly by reviewers. As our method only serves to highlight influential turns in situations when the existing attention weights are uniform, no direct comparison was done to Han weights over the entire dataset. To avoid bias, the chosen reviewers had never used the specific IVA and were not familiar with its knowledge base although they may have performed similar tagging tasks in the past. The annotators were reminded that if a turn is given a darker color, then that turn supposedly has greater influence in determining escalation. They were, thus, given the task of determining if they agree with the visualization's decision. A rating of 0 was instructed to be given on complete disagreement, and 10 upon perfect agreement. Consider a human representative given Our Weight in Table TABREF1 , which highlights turn 4 as the most influential turn on escalation, as opposed to the Han Weight which requires careful reading to make this determination. From the INLINEFORM0 conversations that escalated in the dataset, we first filtered conversations by a uniformity threshold, INLINEFORM1 (user chosen as described in subsection SECREF7 ). At this threshold, INLINEFORM2 or 138 conversations remained. Next, we filtered the conversations that were not correctly classified by Han, leaving 85 or INLINEFORM3 . The average INLINEFORM0 rating between the three reviewers over the remaining conversations was 6. This demonstrates that on average, reviewers felt that the visualizations were adequate. Put in perspective, adding adequate visuals to the thousands of daily escalations that would otherwise have no visual is a great improvement. In cases of uniform attention weights at the stopping point, this can also make it difficult to spot potential areas for classifier improvement if we do not incorporate turn weight fluctuations as the conversation progresses to the stopping point. For example, in the first escalated conversation displayed in Table TABREF19 , turn 6 has a high weight under our scheme because of the presence of the word “live". Customers will frequently ask for a “live customer representative" which is a sign for escalation. However, in Table TABREF19 , “live" is used in a different context, but the weight given to it is high due to turn weight fluctuations as the conversation progresses to the stopping point. Our weights expose this potential problem for the classifier which may suggest using n-grams or some other methodology for improvement. If we were to use uniform Han weights at the stopping point only, we might miss these areas for improvement. 
In addition to the possible reduction in human review time and spotting potential areas for classifier improvement, the visuals only required INLINEFORM0 milliseconds on average to compute per conversation (on a laptop with an Intel Core i7-4710MQ CPU @ 2.50GHz, 16 GB of RAM, running Ubuntu 16.04). This adds insignificant latency to the transfer while generating the visualization, which is an important goal. In the future, this work would greatly benefit from an expanded dataset. As we only wish to consider conversations with uniform weights on the turn of escalation, this cuts our dataset dramatically, necessitating a larger tagged dataset. Considering more attention behaviors so we can have higher granularity of color intensity is also an objective of future work. As our method only looks at the changes in attention weight, our method is not task-specific. Therefore, it would be beneficial to test our methodology on visualizing other sequential analysis tasks besides escalation, such as fraud or anomaly detection or applications in the medical domain BIBREF20 , BIBREF21 . Conclusion Although attention in deep neural networks was not initially introduced to inform observers, but to help a model make predictions, it can also be used to inform. In the instances where a model thinks all historical samples should be considered equally important in a sequential analysis task, we must look elsewhere for a computationally inexpensive means to understand what happened at the stopping point. In this paper, we have introduced such a means by monitoring attention changes over the sequential analysis to inform observers. This method introduces negligible overhead, an important consideration in real-time systems, and is not tied to the implementation details or task of the model, other than the prerequisite of an attention layer.
human representative to review the IVA chat history and resume the failed task
c635dc8013e63505084b9daaa9ddb021a2d24543
c635dc8013e63505084b9daaa9ddb021a2d24543_0
Q: Do they explain model predictions solely on attention weights? Text: Introduction The attention mechanism BIBREF1 in neural networks can be used to interpret and visualize model behavior by selecting the most pertinent pieces of information instead of all available information. For example, in BIBREF0 , a hierarchical attention network (Han) is created and tested on the classification of product and movie reviews. As a side effect of employing the attention mechanism, sentences (and words) that are considered important to the model can be highlighted, and color intensity corresponds to the level of importance (darker color indicates higher importance). Our application is the escalation of Internet chats. To maintain quality of service, users are transferred to human representatives when their conversations with an intelligent virtual assistant (IVA) fail to progress. These transfers are known as escalations. We apply Han to such conversations in a sequential manner by feeding each user turn to Han as they occur, to determine if the conversation should escalate. If so, the user will be transferred to a live chat representative to continue the conversation. To help the human representative quickly determine the cause of the escalation, we generate a visualization of the user's turns using the attention weights to highlight the turns influential in the escalation decision. This helps the representative quickly scan the conversation history and determine the best course of action based on problematic turns. Unfortunately, there are instances where the attention weights for every turn at the point of escalation are nearly equal, requiring the representative to carefully read the history to determine the cause of escalation unassisted. Table TABREF1 shows one such example with uniform attention weights at the point of escalation. Our application requires that the visualizations be generated in real-time at the point of escalation. The user must wait for the human representative to review the IVA chat history and resume the failed task. Therefore, we seek visualization methods that do not add significant latency to the escalation transfer. Using the attention weights for turn influence is fast as they were already computed at the time of classification. However, these weights will not generate useful visualizations for the representatives when their values are similar across all turns (see Han Weight in Table TABREF1 ). To overcome this problem, we develop a visualization method to be applied in the instances where the attention weights are uniform. Our method produces informative visuals for determining influential samples in a sequence by observing the changes in sample importance over the cumulative sequence (see Our Weight in Table TABREF1 ). Note that we present a technique that only serves to resolve situations when the existing attention weights are ambiguous; we are not developing a new attention mechanism, and, as our method is external, it does not require any changes to the existing model to apply. To determine when the turn weights are uniform, we use perplexity BIBREF2 (see more details in subsection SECREF4 ). If a conversation INLINEFORM0 escalates on turn INLINEFORM1 with attention weights INLINEFORM2 , let INLINEFORM3 . Intuitively, INLINEFORM4 should be low when uniformity is high. We measure the INLINEFORM5 of every escalated conversation and provide a user-chosen uniformity threshold for INLINEFORM6 (Figure FIGREF2 ). 
For example, if the INLINEFORM7 threshold for uniformity is INLINEFORM8 , 20% of conversations in our dataset will result in Han visuals where all turns have similar weight; thus, no meaningful visualization can be produced. Companies that deploy IVA solutions for customer service report escalated conversation volumes of INLINEFORM9 per day for one IVA BIBREF3 . Therefore, even at 20%, contact centers handling multiple companies may see hundreds or thousands of conversations per day with no visualizations. If we apply our method in instances where Han weights are uniform, all conversations become non-uniform using the same INLINEFORM10 threshold for INLINEFORM11 , enabling visualization to reduce human effort. Related Work Neural networks are powerful learning algorithms, but are also some of the most complex. This is made worse by the non-deterministic nature of neural network training; a small change in a learning parameter can drastically affect the network's learning ability. This has led to the development of methodologies for understanding and uncovering not just neural networks, but black box models in general. The interpretation of deep networks is a young field of research. We refer readers to BIBREF4 for a comprehensive overview of different methods for understanding and visualizing deep neural networks. More recent developments include DeepLIFT BIBREF5 (not yet applicable to RNNs), layerwise relevance propagation BIBREF6 (only very recently adapted to textual input and LSTMs BIBREF7 , BIBREF8 ), and LIME BIBREF9 . LIME is model-agnostic, relying solely on the input data and classifier prediction probabilities. By perturbing the input and seeing how predictions change, one can approximate the complex model using a simpler, interpretable linear model. However, users must consider how the perturbations are created, which simple model to train, and what features to use in the simpler model. In addition, LIME is an external method not built into the classifier that can add significant latency when creating visuals in real-time as it requires generating perturbations and fitting a regression for every sample point. Attention BIBREF1 , however, is built into Han and commonly implemented in other network structures (see below), and, as a result, visuals are created for free as they are obtained from the attention weights directly. Attention has been used for grammatical error correction BIBREF10 , cloze-style reading tasks BIBREF11 , BIBREF12 , text classification BIBREF13 , abstractive sentence summarization BIBREF14 , and many other sequence transduction tasks. BIBREF15 uses an encoder-decoder framework with attention to model conversations and generate natural responses to user input. BIBREF16 is perhaps most similar to what we wish to achieve, but only uses one-round conversation data (one user input, one computer response). To the best of our knowledge, ours is the first paper that considers the changes in attention during sequential analysis to create more explanatory visuals in situations where attention weights on an entire sequence are uniform. Methodology In Table TABREF3 , we see the bottom visualization where the weights are uniform at the point of escalation. However, on the 2nd turn, the Han had produced more distinct weights. It is clear from this example that the importance of a single sample can change drastically as a sequence progresses. 
Using these changes in attention over the sequence, we formalized a set of rules to create an alternative visualization for the entire sequence to be applied in cases where the attention weights are uniform over all samples at the stopping point. Measuring Uniformity We begin with defining what it means for attention weights to be uniform. For a probability distribution INLINEFORM0 over the sample space INLINEFORM1 , the perplexity measure is defined as the exponential of the entropy of INLINEFORM2 . More formally, INLINEFORM3 where the entropy is INLINEFORM0 As entropy is a measure of the degree of randomness in INLINEFORM0 , perplexity is a measure of the number of choices that comprise this randomness. The following properties of perplexity will be applicable. For any distribution INLINEFORM0 , the value of INLINEFORM1 is always positive. ( INLINEFORM2 for all INLINEFORM3 .) For any distribution INLINEFORM0 over INLINEFORM1 values, we have INLINEFORM2 . The larger the value, the closer INLINEFORM3 is to being uniform. The equality holds if and only if INLINEFORM4 is uniform. With respect to property ( UID6 ) above, we define a metric INLINEFORM0 , where INLINEFORM1 is any distribution over INLINEFORM2 values. Thus, for all INLINEFORM3 and all distributions INLINEFORM4 that are uniform over INLINEFORM5 values, it must be the case that INLINEFORM6 . Furthermore, INLINEFORM7 for all INLINEFORM8 and INLINEFORM9 . We drop the subscript INLINEFORM10 from INLINEFORM11 when it is obvious from the context. In our application, obtaining an exact uniform distribution is not feasible; it suffices to consider a distribution to be uniform if it is almost the same over all values. We say that a given distribution INLINEFORM0 on INLINEFORM1 values is INLINEFORM2 -uniform if INLINEFORM3 . Note that since INLINEFORM4 can be at most INLINEFORM5 (as INLINEFORM6 ), this restricts INLINEFORM7 to be any real number between 0 and INLINEFORM8 . In this context, given a distribution INLINEFORM0 over INLINEFORM1 values, we will refer to INLINEFORM2 as the measure of uniformity of INLINEFORM3 . The smaller the value of INLINEFORM4 , the closer INLINEFORM5 is to being uniform. For our specific application, INLINEFORM0 is a user chosen uniformity threshold, INLINEFORM1 consists of turn weights, and INLINEFORM2 is the number of turns in the conversation. For example, in Figure FIGREF2 , if the threshold for INLINEFORM3 is chosen to be INLINEFORM4 , this will result in 20% of conversations in our datasets with uniform Han turn weights. Attention Behaviors Given a conversation INLINEFORM0 that contains INLINEFORM1 turns, let INLINEFORM2 be the vector of attention weights obtained from inputting INLINEFORM3 (where INLINEFORM4 is the INLINEFORM5 -th turn in INLINEFORM6 ) to Han. When turn INLINEFORM7 is added, we consider three forms of behavior that help us create a new visual: attention, context, and variation dependency switches. See section SECREF4 for evidence as to why we chose these particular behaviors. An attention dependency switch occurs when the addition of a turn changes the distribution of weights. Suppose we have a 4 turn conversation. In Figure FIGREF8 , considering only the first 3 turns gives us a uniform distribution of weights (left). However, when we add turn 4 (Figure FIGREF8 , right), the distribution shifts to one of non-uniformity. We consider the addition of any such turn that causes a switch from uniform to non-uniform or vice-versa in the creation of our visuals. 
More formally, there is an attention dependency variable change from turn INLINEFORM0 to INLINEFORM1 with some threshold INLINEFORM2 (note that INLINEFORM3 in section SECREF4 ) if any one of the following occurs: (1) INLINEFORM0 and INLINEFORM1 , or (2) INLINEFORM0 and INLINEFORM1 . With (1), we are switching from a uniform distribution to a non-uniform distribution with the addition of turn INLINEFORM0 . With (2), we are switching from a non-uniform distribution to a uniform distribution. Note that it is possible that the attention dependency variable change is observed for many turns and not just one. A context dependency switch occurs when the addition of a turn causes a previous turn's weight to change significantly. In Figure FIGREF9 , the addition of turn 6 causes turn 3's weight to spike. Mathematically, there is a context dependency variable change in turn INLINEFORM0 by addition of turn INLINEFORM1 for INLINEFORM2 with some threshold INLINEFORM3 if INLINEFORM4 . The final switch we consider is a variation dependency switch, which occurs when the weight of turn INLINEFORM0 changes significantly over the entire course of a conversation. More formally, there is a variation dependency variable change in turn INLINEFORM0 with some threshold INLINEFORM1 when the conversation has INLINEFORM2 turns if INLINEFORM3 . Note that variation dependency differs from context dependency, as the latter measures turn INLINEFORM0 's change upon the addition of only one turn. For determining attention dependency, we considered normalized attention weights, but for variation and context dependency, we considered the unnormalized output logits from the Han. It is also important to note that an attention dependency switch can occur without a context dependency switch and vice versa. In Figure FIGREF9 , neither distribution is uniform; therefore, no attention dependency switch occurred. In Figure FIGREF12 , an attention dependency switch has occurred (uniform to non-uniform distribution), but there is no context dependency variable change. In Figure FIGREF13 , a context dependency variable change has occurred as many previous weights have spiked, but the distribution of weights has not changed (no attention dependency variable change because it is still non-uniform). In our experiments, we compute the thresholds mentioned in the definitions above as follows: For attention dependency, we experimented with various INLINEFORM0 thresholds and tagged 100 randomly chosen conversations for each of those thresholds to determine a potential candidate. For example, using a threshold of INLINEFORM1 , weight vectors such as INLINEFORM2 would be considered uniform, which we strongly disagreed with. However, we determined that weight distributions below the INLINEFORM3 threshold appeared uniform 90% of the time, which we considered good agreement. For context dependency and variation dependency switches, we chose the values of INLINEFORM0 and INLINEFORM1 , respectively, using the 75th percentile of the values for different turns. Upon comparison with manual tagging of 100 randomly chosen conversations, we agreed on all 100 cases for the context dependency switch and 99 out of 100 cases for the variation dependency switch. Data and Classifier Our escalation data was obtained from BIBREF17 , which consists of INLINEFORM0 conversations ( INLINEFORM1 user turns) from two commercial airline IVAs. INLINEFORM2 of the INLINEFORM3 conversations had been tagged for escalation. See dataset statistics in Table TABREF17 .
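To make the definitions above concrete, the minimal Python sketch below implements the perplexity-based uniformity measure from the "Measuring Uniformity" subsection and detectors for the three dependency switches. Because the exact inequalities are rendered as INLINEFORM placeholders in this text, the context and variation conditions are written as absolute weight differences against user-chosen thresholds; the names eps, beta and gamma, and the use of absolute differences, are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def perplexity(p):
    """Perplexity of a discrete distribution p: the exponential of its entropy."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                       # normalize, in case p is unnormalized
    nz = p[p > 0]                         # treat 0 * log(0) as 0
    return float(np.exp(-np.sum(nz * np.log(nz))))

def uniformity(p):
    """U(p) = K - perplexity(p) over K values; equals 0 iff p is exactly uniform."""
    return len(p) - perplexity(p)

def is_uniform(p, eps):
    """A distribution is treated as eps-uniform when U(p) <= eps."""
    return uniformity(p) <= eps

def attention_dependency_switch(w_prev, w_curr, eps):
    """Adding the newest turn flips the attention distribution between
    uniform and non-uniform (in either direction)."""
    return is_uniform(w_prev, eps) != is_uniform(w_curr, eps)

def context_dependency_switch(w_prev, w_curr, beta):
    """Adding the newest turn moves some earlier turn's (unnormalized)
    weight by more than beta (assumed absolute-difference form)."""
    w_prev = np.asarray(w_prev, dtype=float)
    w_curr = np.asarray(w_curr, dtype=float)
    return bool(np.any(np.abs(w_curr[: len(w_prev)] - w_prev) > beta))

def variation_dependency_switch(weight_history, turn_idx, gamma):
    """Turn turn_idx's (unnormalized) weight varies by more than gamma over
    the conversation; weight_history[i] holds the weights obtained after
    feeding the first i+1 turns to the classifier."""
    values = [w[turn_idx] for w in weight_history if len(w) > turn_idx]
    return max(values) - min(values) > gamma
```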
Airline dataset 1 has INLINEFORM0 conversations and INLINEFORM1 turns, and airline dataset 2 has INLINEFORM2 conversations and INLINEFORM3 turns. The low turn counts present in dataset 2 are due to the FAQ focus of dataset 2's particular IVA. Users tend to perform single queries such as “baggage policy" instead of engaging in a conversational interaction. In contrast, dataset 1 originated from a more “natural" IVA, and, therefore, users appeared to engage with it more through conversation. The classifier (Han) used for escalation prediction is outlined in BIBREF0 . As the code was unavailable, we implemented Han with TensorFlow BIBREF18 . Our version has substantially the same architecture as in BIBREF0 with the exception that LSTM cells are used in place of GRU. We used the 200-dimensional word embeddings from glove.twitter.27B BIBREF19 and did not adapt them during the training of our model. Each recurrent encoding layer has 50 forward and 50 backward cells, giving 100-dimensional embeddings each for turns and conversations. In predicting escalation, our network obtained an INLINEFORM0 of INLINEFORM1 ( INLINEFORM2 precision, INLINEFORM3 recall, averaged over five random splits). To compute these metrics, turn-level annotations were converted to conversation-level annotations by labeling a conversation as escalate if any turn in the conversation was labeled escalate. For the visualization experiments, a random 80-20 split was used to create training and testing sets. The training set consisted of INLINEFORM0 conversations of which INLINEFORM1 should escalate. The testing set consisted of INLINEFORM2 conversations of which 241 should escalate. Creating Our Visuals Given the occurrences of attention ( INLINEFORM0 ), context ( INLINEFORM1 ), and variation ( INLINEFORM2 ) dependency switches, we now discuss how a visual of the entire conversation can be created. For each turn INLINEFORM3 , create a vector INLINEFORM4 , where each variable inside this vector takes the value 1 when the attention, context, and variation dependency switches trigger, respectively, and 0 otherwise. Compute INLINEFORM0 , and use this value to represent the intensity of a single color (blue in our examples). The higher the value of INLINEFORM1 , the higher the color intensity. Note that INLINEFORM2 . Take, for example, Table TABREF19 where for the first conversation's weights (using our weights), turns 2,3, and 6 have values of INLINEFORM3 , turns 4,5, and 7 have values of INLINEFORM4 , and the first turn has a value of 0. Considering a higher dimension for INLINEFORM5 which would create more values for INLINEFORM6 is an objective for future work. Results and Discussion We first considered the frequency of each of the behaviors discussed in section SECREF7 as well as their co-occurrences with escalation. After removing single turn conversations (as they are uniform by default), the number of turns that had a context dependency switch as a result of adding a new turn was INLINEFORM0 . However, the number of times that such an event coincided at least once with escalation was 766. As it appeared that the effect of context dependency was quite low, we next considered the variation and attention dependency variables. The total number of turns that had a variation dependency switch was INLINEFORM1 , and INLINEFORM2 also coincided with a change of escalation, indicating that a variation dependency switch is potentially valuable in the creation of new visuals. 
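As a small illustration of the "Creating Our Visuals" procedure described above, the sketch below (continuing the earlier Python examples) turns each turn's three binary switch indicators into a single color intensity. Averaging the three flags is an assumption made for illustration, since the exact expression is an INLINEFORM placeholder in this text; it does reproduce the values 0, 1/3 and 2/3 mentioned in the example.

```python
def turn_intensity(switches):
    """switches = (attention, context, variation) binary flags for one turn.
    Returns a color intensity in [0, 1]; averaging the three flags yields
    the values 0, 1/3, 2/3 and 1."""
    return sum(switches) / 3.0

def conversation_visual(switch_vectors):
    """Map each turn's switch vector to an intensity for a single hue
    (blue in the paper's examples); darker means more influential."""
    return [turn_intensity(v) for v in switch_vectors]

# Hypothetical 4-turn conversation: turn 1 triggers no switches, turn 3 triggers all three.
print(conversation_visual([(0, 0, 0), (1, 0, 0), (1, 1, 1), (0, 1, 0)]))
# [0.0, 0.3333..., 1.0, 0.3333...]
```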
In addition, the number of uniform to non-uniform turn pairs (uniform weight distribution for first INLINEFORM3 turns but non-uniform for first INLINEFORM4 turns) was INLINEFORM5 whereas the number of non-uniform to uniform turn pairs was 259. Out of the times when there was a uniform to non-uniform switch, 710 cases coincided with escalation compared to only 22 for non-uniform to uniform changes. As shown in Figure FIGREF2 , the use of our method when the Han weights are uniform greatly reduces or even eliminates the uniformity at lower INLINEFORM0 thresholds. To determine if our visuals were also assigning weights properly, we had three reviewers rate on a 0 to 10 scale (0 being poor, 10 being best) of how well each visualization highlights the influential turns for escalation in the conversation. See Table TABREF20 for an example that was tagged nearly perfectly by reviewers. As our method only serves to highlight influential turns in situations when the existing attention weights are uniform, no direct comparison was done to Han weights over the entire dataset. To avoid bias, the chosen reviewers had never used the specific IVA and were not familiar with its knowledge base although they may have performed similar tagging tasks in the past. The annotators were reminded that if a turn is given a darker color, then that turn supposedly has greater influence in determining escalation. They were, thus, given the task of determining if they agree with the visualization's decision. A rating of 0 was instructed to be given on complete disagreement, and 10 upon perfect agreement. Consider a human representative given Our Weight in Table TABREF1 , which highlights turn 4 as the most influential turn on escalation, as opposed to the Han Weight which requires careful reading to make this determination. From the INLINEFORM0 conversations that escalated in the dataset, we first filtered conversations by a uniformity threshold, INLINEFORM1 (user chosen as described in subsection SECREF7 ). At this threshold, INLINEFORM2 or 138 conversations remained. Next, we filtered the conversations that were not correctly classified by Han, leaving 85 or INLINEFORM3 . The average INLINEFORM0 rating between the three reviewers over the remaining conversations was 6. This demonstrates that on average, reviewers felt that the visualizations were adequate. Put in perspective, adding adequate visuals to the thousands of daily escalations that would otherwise have no visual is a great improvement. In cases of uniform attention weights at the stopping point, this can also make it difficult to spot potential areas for classifier improvement if we do not incorporate turn weight fluctuations as the conversation progresses to the stopping point. For example, in the first escalated conversation displayed in Table TABREF19 , turn 6 has a high weight under our scheme because of the presence of the word “live". Customers will frequently ask for a “live customer representative" which is a sign for escalation. However, in Table TABREF19 , “live" is used in a different context, but the weight given to it is high due to turn weight fluctuations as the conversation progresses to the stopping point. Our weights expose this potential problem for the classifier which may suggest using n-grams or some other methodology for improvement. If we were to use uniform Han weights at the stopping point only, we might miss these areas for improvement. 
In addition to the possible reduction in human review time and spotting potential areas for classifier improvement, the visuals only required INLINEFORM0 milliseconds on average to compute per conversation (on a laptop with an Intel Core i7-4710MQ CPU @ 2.50GHz, 16 GB of RAM, running Ubuntu 16.04). This adds insignificant latency to the transfer while generating the visualization, which is an important goal. In the future, this work would greatly benefit from an expanded dataset. As we only wish to consider conversations with uniform weights on the turn of escalation, this cuts our dataset dramatically, necessitating a larger tagged dataset. Considering more attention behaviors so we can have higher granularity of color intensity is also an objective of future work. As our method only looks at the changes in attention weight, our method is not task-specific. Therefore, it would be beneficial to test our methodology on visualizing other sequential analysis tasks besides escalation, such as fraud or anomaly detection or applications in the medical domain BIBREF20 , BIBREF21 . Conclusion Although attention in deep neural networks was not initially introduced to inform observers, but to help a model make predictions, it can also be used to inform. In the instances where a model thinks all historical samples should be considered equally important in a sequential analysis task, we must look elsewhere for a computationally inexpensive means to understand what happened at the stopping point. In this paper, we have introduced such a means by monitoring attention changes over the sequential analysis to inform observers. This method introduces negligible overhead, an important consideration in real-time systems, and is not tied to the implementation details or task of the model, other than the prerequisite of an attention layer.
Yes
61aac406b648865f007a400dcd69f28e44efc636
61aac406b648865f007a400dcd69f28e44efc636_0
Q: Can their method of creating more informative visuals be applied to tasks other than turn taking in conversations? Text: Introduction The attention mechanism BIBREF1 in neural networks can be used to interpret and visualize model behavior by selecting the most pertinent pieces of information instead of all available information. For example, in BIBREF0 , a hierarchical attention network (Han) is created and tested on the classification of product and movie reviews. As a side effect of employing the attention mechanism, sentences (and words) that are considered important to the model can be highlighted, and color intensity corresponds to the level of importance (darker color indicates higher importance). Our application is the escalation of Internet chats. To maintain quality of service, users are transferred to human representatives when their conversations with an intelligent virtual assistant (IVA) fail to progress. These transfers are known as escalations. We apply Han to such conversations in a sequential manner by feeding each user turn to Han as they occur, to determine if the conversation should escalate. If so, the user will be transferred to a live chat representative to continue the conversation. To help the human representative quickly determine the cause of the escalation, we generate a visualization of the user's turns using the attention weights to highlight the turns influential in the escalation decision. This helps the representative quickly scan the conversation history and determine the best course of action based on problematic turns. Unfortunately, there are instances where the attention weights for every turn at the point of escalation are nearly equal, requiring the representative to carefully read the history to determine the cause of escalation unassisted. Table TABREF1 shows one such example with uniform attention weights at the point of escalation. Our application requires that the visualizations be generated in real-time at the point of escalation. The user must wait for the human representative to review the IVA chat history and resume the failed task. Therefore, we seek visualization methods that do not add significant latency to the escalation transfer. Using the attention weights for turn influence is fast as they were already computed at the time of classification. However, these weights will not generate useful visualizations for the representatives when their values are similar across all turns (see Han Weight in Table TABREF1 ). To overcome this problem, we develop a visualization method to be applied in the instances where the attention weights are uniform. Our method produces informative visuals for determining influential samples in a sequence by observing the changes in sample importance over the cumulative sequence (see Our Weight in Table TABREF1 ). Note that we present a technique that only serves to resolve situations when the existing attention weights are ambiguous; we are not developing a new attention mechanism, and, as our method is external, it does not require any changes to the existing model to apply. To determine when the turn weights are uniform, we use perplexity BIBREF2 (see more details in subsection SECREF4 ). If a conversation INLINEFORM0 escalates on turn INLINEFORM1 with attention weights INLINEFORM2 , let INLINEFORM3 . Intuitively, INLINEFORM4 should be low when uniformity is high. We measure the INLINEFORM5 of every escalated conversation and provide a user-chosen uniformity threshold for INLINEFORM6 (Figure FIGREF2 ). 
For example, if the INLINEFORM7 threshold for uniformity is INLINEFORM8 , 20% of conversations in our dataset will result in Han visuals where all turns have similar weight; thus, no meaningful visualization can be produced. Companies that deploy IVA solutions for customer service report escalated conversation volumes of INLINEFORM9 per day for one IVA BIBREF3 . Therefore, even at 20%, contact centers handling multiple companies may see hundreds or thousands of conversations per day with no visualizations. If we apply our method in instances where Han weights are uniform, all conversations become non-uniform using the same INLINEFORM10 threshold for INLINEFORM11 , enabling visualization to reduce human effort. Related Work Neural networks are powerful learning algorithms, but are also some of the most complex. This is made worse by the non-deterministic nature of neural network training; a small change in a learning parameter can drastically affect the network's learning ability. This has led to the development of methodologies for understanding and uncovering not just neural networks, but black box models in general. The interpretation of deep networks is a young field of research. We refer readers to BIBREF4 for a comprehensive overview of different methods for understanding and visualizing deep neural networks. More recent developments include DeepLIFT BIBREF5 (not yet applicable to RNNs), layerwise relevance propagation BIBREF6 (only very recently adapted to textual input and LSTMs BIBREF7 , BIBREF8 ), and LIME BIBREF9 . LIME is model-agnostic, relying solely on the input data and classifier prediction probabilities. By perturbing the input and seeing how predictions change, one can approximate the complex model using a simpler, interpretable linear model. However, users must consider how the perturbations are created, which simple model to train, and what features to use in the simpler model. In addition, LIME is an external method not built into the classifier that can add significant latency when creating visuals in real-time as it requires generating perturbations and fitting a regression for every sample point. Attention BIBREF1 , however, is built into Han and commonly implemented in other network structures (see below), and, as a result, visuals are created for free as they are obtained from the attention weights directly. Attention has been used for grammatical error correction BIBREF10 , cloze-style reading tasks BIBREF11 , BIBREF12 , text classification BIBREF13 , abstractive sentence summarization BIBREF14 , and many other sequence transduction tasks. BIBREF15 uses an encoder-decoder framework with attention to model conversations and generate natural responses to user input. BIBREF16 is perhaps most similar to what we wish to achieve, but only uses one-round conversation data (one user input, one computer response). To the best of our knowledge, ours is the first paper that considers the changes in attention during sequential analysis to create more explanatory visuals in situations where attention weights on an entire sequence are uniform. Methodology In Table TABREF3 , we see the bottom visualization where the weights are uniform at the point of escalation. However, on the 2nd turn, the Han had produced more distinct weights. It is clear from this example that the importance of a single sample can change drastically as a sequence progresses. 
Using these changes in attention over the sequence, we formalized a set of rules to create an alternative visualization for the entire sequence to be applied in cases where the attention weights are uniform over all samples at the stopping point. Measuring Uniformity We begin with defining what it means for attention weights to be uniform. For a probability distribution INLINEFORM0 over the sample space INLINEFORM1 , the perplexity measure is defined as the exponential of the entropy of INLINEFORM2 . More formally, INLINEFORM3 where the entropy is INLINEFORM0 As entropy is a measure of the degree of randomness in INLINEFORM0 , perplexity is a measure of the number of choices that comprise this randomness. The following properties of perplexity will be applicable. For any distribution INLINEFORM0 , the value of INLINEFORM1 is always positive. ( INLINEFORM2 for all INLINEFORM3 .) For any distribution INLINEFORM0 over INLINEFORM1 values, we have INLINEFORM2 . The larger the value, the closer INLINEFORM3 is to being uniform. The equality holds if and only if INLINEFORM4 is uniform. With respect to property ( UID6 ) above, we define a metric INLINEFORM0 , where INLINEFORM1 is any distribution over INLINEFORM2 values. Thus, for all INLINEFORM3 and all distributions INLINEFORM4 that are uniform over INLINEFORM5 values, it must be the case that INLINEFORM6 . Furthermore, INLINEFORM7 for all INLINEFORM8 and INLINEFORM9 . We drop the subscript INLINEFORM10 from INLINEFORM11 when it is obvious from the context. In our application, obtaining an exact uniform distribution is not feasible; it suffices to consider a distribution to be uniform if it is almost the same over all values. We say that a given distribution INLINEFORM0 on INLINEFORM1 values is INLINEFORM2 -uniform if INLINEFORM3 . Note that since INLINEFORM4 can be at most INLINEFORM5 (as INLINEFORM6 ), this restricts INLINEFORM7 to be any real number between 0 and INLINEFORM8 . In this context, given a distribution INLINEFORM0 over INLINEFORM1 values, we will refer to INLINEFORM2 as the measure of uniformity of INLINEFORM3 . The smaller the value of INLINEFORM4 , the closer INLINEFORM5 is to being uniform. For our specific application, INLINEFORM0 is a user chosen uniformity threshold, INLINEFORM1 consists of turn weights, and INLINEFORM2 is the number of turns in the conversation. For example, in Figure FIGREF2 , if the threshold for INLINEFORM3 is chosen to be INLINEFORM4 , this will result in 20% of conversations in our datasets with uniform Han turn weights. Attention Behaviors Given a conversation INLINEFORM0 that contains INLINEFORM1 turns, let INLINEFORM2 be the vector of attention weights obtained from inputting INLINEFORM3 (where INLINEFORM4 is the INLINEFORM5 -th turn in INLINEFORM6 ) to Han. When turn INLINEFORM7 is added, we consider three forms of behavior that help us create a new visual: attention, context, and variation dependency switches. See section SECREF4 for evidence as to why we chose these particular behaviors. An attention dependency switch occurs when the addition of a turn changes the distribution of weights. Suppose we have a 4 turn conversation. In Figure FIGREF8 , considering only the first 3 turns gives us a uniform distribution of weights (left). However, when we add turn 4 (Figure FIGREF8 , right), the distribution shifts to one of non-uniformity. We consider the addition of any such turn that causes a switch from uniform to non-uniform or vice-versa in the creation of our visuals. 
More formally, there is an attention dependency variable change from turn INLINEFORM0 to INLINEFORM1 with some threshold INLINEFORM2 (note that INLINEFORM3 in section SECREF4 ) if any one of the following occurs: (1) INLINEFORM0 and INLINEFORM1 , or (2) INLINEFORM0 and INLINEFORM1 . With (1), we are switching from a uniform distribution to a non-uniform distribution with the addition of turn INLINEFORM0 . With (2), we are switching from a non-uniform distribution to a uniform distribution. Note that it is possible that the attention dependency variable change is observed for many turns and not just one. A context dependency switch occurs when the addition of a turn causes a previous turn's weight to change significantly. In Figure FIGREF9 , the addition of turn 6 causes turn 3's weight to spike. Mathematically, there is a context dependency variable change in turn INLINEFORM0 by addition of turn INLINEFORM1 for INLINEFORM2 with some threshold INLINEFORM3 if INLINEFORM4 . The final switch we consider is a variation dependency switch, which occurs when the weight of turn INLINEFORM0 changes significantly over the entire course of a conversation. More formally, there is a variation dependency variable change in turn INLINEFORM0 with some threshold INLINEFORM1 when the conversation has INLINEFORM2 turns if INLINEFORM3 . Note that variation dependency differs from context dependency, as the latter measures turn INLINEFORM0 's change upon the addition of only one turn. For determining attention dependency, we considered normalized attention weights, but for variation and context dependency, we considered the unnormalized output logits from the Han. It is also important to note that an attention dependency switch can occur without a context dependency switch and vice versa. In Figure FIGREF9 , neither distribution is uniform; therefore, no attention dependency switch occurred. In Figure FIGREF12 , an attention dependency switch has occurred (uniform to non-uniform distribution), but there is no context dependency variable change. In Figure FIGREF13 , a context dependency variable change has occurred as many previous weights have spiked, but the distribution of weights has not changed (no attention dependency variable change because it is still non-uniform). In our experiments, we compute the thresholds mentioned in the definitions above as follows: For attention dependency, we experimented with various INLINEFORM0 thresholds and tagged 100 randomly chosen conversations for each of those thresholds to determine a potential candidate. For example, using a threshold of INLINEFORM1 , weight vectors such as INLINEFORM2 would be considered uniform, which we strongly disagreed with. However, we determined that weight distributions below the INLINEFORM3 threshold appeared uniform 90% of the time, which we considered good agreement. For context dependency and variation dependency switches, we chose the values of INLINEFORM0 and INLINEFORM1 , respectively, using the 75th percentile of the values for different turns. Upon comparison with manual tagging of 100 randomly chosen conversations, we agreed on all 100 cases for the context dependency switch and 99 out of 100 cases for the variation dependency switch. Data and Classifier Our escalation data was obtained from BIBREF17 , which consists of INLINEFORM0 conversations ( INLINEFORM1 user turns) from two commercial airline IVAs. INLINEFORM2 of the INLINEFORM3 conversations had been tagged for escalation. See dataset statistics in Table TABREF17 .
Airline dataset 1 has INLINEFORM0 conversations and INLINEFORM1 turns, and airline dataset 2 has INLINEFORM2 conversations and INLINEFORM3 turns. The low turn counts present in dataset 2 are due to the FAQ focus of dataset 2's particular IVA. Users tend to perform single queries such as “baggage policy" instead of engaging in a conversational interaction. In contrast, dataset 1 originated from a more “natural" IVA, and, therefore, users appeared to engage with it more through conversation. The classifier (Han) used for escalation prediction is outlined in BIBREF0 . As the code was unavailable, we implemented Han with TensorFlow BIBREF18 . Our version has substantially the same architecture as in BIBREF0 with the exception that LSTM cells are used in place of GRU. We used the 200-dimensional word embeddings from glove.twitter.27B BIBREF19 and did not adapt them during the training of our model. Each recurrent encoding layer has 50 forward and 50 backward cells, giving 100-dimensional embeddings each for turns and conversations. In predicting escalation, our network obtained an INLINEFORM0 of INLINEFORM1 ( INLINEFORM2 precision, INLINEFORM3 recall, averaged over five random splits). To compute these metrics, turn-level annotations were converted to conversation-level annotations by labeling a conversation as escalate if any turn in the conversation was labeled escalate. For the visualization experiments, a random 80-20 split was used to create training and testing sets. The training set consisted of INLINEFORM0 conversations of which INLINEFORM1 should escalate. The testing set consisted of INLINEFORM2 conversations of which 241 should escalate. Creating Our Visuals Given the occurrences of attention ( INLINEFORM0 ), context ( INLINEFORM1 ), and variation ( INLINEFORM2 ) dependency switches, we now discuss how a visual of the entire conversation can be created. For each turn INLINEFORM3 , create a vector INLINEFORM4 , where each variable inside this vector takes the value 1 when the attention, context, and variation dependency switches trigger, respectively, and 0 otherwise. Compute INLINEFORM0 , and use this value to represent the intensity of a single color (blue in our examples). The higher the value of INLINEFORM1 , the higher the color intensity. Note that INLINEFORM2 . Take, for example, Table TABREF19 where for the first conversation's weights (using our weights), turns 2,3, and 6 have values of INLINEFORM3 , turns 4,5, and 7 have values of INLINEFORM4 , and the first turn has a value of 0. Considering a higher dimension for INLINEFORM5 which would create more values for INLINEFORM6 is an objective for future work. Results and Discussion We first considered the frequency of each of the behaviors discussed in section SECREF7 as well as their co-occurrences with escalation. After removing single turn conversations (as they are uniform by default), the number of turns that had a context dependency switch as a result of adding a new turn was INLINEFORM0 . However, the number of times that such an event coincided at least once with escalation was 766. As it appeared that the effect of context dependency was quite low, we next considered the variation and attention dependency variables. The total number of turns that had a variation dependency switch was INLINEFORM1 , and INLINEFORM2 also coincided with a change of escalation, indicating that a variation dependency switch is potentially valuable in the creation of new visuals. 
In addition, the number of uniform to non-uniform turn pairs (uniform weight distribution for first INLINEFORM3 turns but non-uniform for first INLINEFORM4 turns) was INLINEFORM5 whereas the number of non-uniform to uniform turn pairs was 259. Out of the times when there was a uniform to non-uniform switch, 710 cases coincided with escalation compared to only 22 for non-uniform to uniform changes. As shown in Figure FIGREF2 , the use of our method when the Han weights are uniform greatly reduces or even eliminates the uniformity at lower INLINEFORM0 thresholds. To determine if our visuals were also assigning weights properly, we had three reviewers rate on a 0 to 10 scale (0 being poor, 10 being best) of how well each visualization highlights the influential turns for escalation in the conversation. See Table TABREF20 for an example that was tagged nearly perfectly by reviewers. As our method only serves to highlight influential turns in situations when the existing attention weights are uniform, no direct comparison was done to Han weights over the entire dataset. To avoid bias, the chosen reviewers had never used the specific IVA and were not familiar with its knowledge base although they may have performed similar tagging tasks in the past. The annotators were reminded that if a turn is given a darker color, then that turn supposedly has greater influence in determining escalation. They were, thus, given the task of determining if they agree with the visualization's decision. A rating of 0 was instructed to be given on complete disagreement, and 10 upon perfect agreement. Consider a human representative given Our Weight in Table TABREF1 , which highlights turn 4 as the most influential turn on escalation, as opposed to the Han Weight which requires careful reading to make this determination. From the INLINEFORM0 conversations that escalated in the dataset, we first filtered conversations by a uniformity threshold, INLINEFORM1 (user chosen as described in subsection SECREF7 ). At this threshold, INLINEFORM2 or 138 conversations remained. Next, we filtered the conversations that were not correctly classified by Han, leaving 85 or INLINEFORM3 . The average INLINEFORM0 rating between the three reviewers over the remaining conversations was 6. This demonstrates that on average, reviewers felt that the visualizations were adequate. Put in perspective, adding adequate visuals to the thousands of daily escalations that would otherwise have no visual is a great improvement. In cases of uniform attention weights at the stopping point, this can also make it difficult to spot potential areas for classifier improvement if we do not incorporate turn weight fluctuations as the conversation progresses to the stopping point. For example, in the first escalated conversation displayed in Table TABREF19 , turn 6 has a high weight under our scheme because of the presence of the word “live". Customers will frequently ask for a “live customer representative" which is a sign for escalation. However, in Table TABREF19 , “live" is used in a different context, but the weight given to it is high due to turn weight fluctuations as the conversation progresses to the stopping point. Our weights expose this potential problem for the classifier which may suggest using n-grams or some other methodology for improvement. If we were to use uniform Han weights at the stopping point only, we might miss these areas for improvement. 
In addition to the possible reduction in human review time and spotting potential areas for classifier improvement, the visuals only required INLINEFORM0 milliseconds on average to compute per conversation (on a laptop with an Intel Core i7-4710MQ CPU @ 2.50GHz, 16 GB of RAM, running Ubuntu 16.04). This adds insignificant latency to the transfer while generating the visualization, which is an important goal. In the future, this work would greatly benefit from an expanded dataset. As we only wish to consider conversations with uniform weights on the turn of escalation, this cuts our dataset dramatically, necessitating a larger tagged dataset. Considering more attention behaviors so we can have higher granularity of color intensity is also an objective of future work. As our method only looks at the changes in attention weight, our method is not task-specific. Therefore, it would be beneficial to test our methodology on visualizing other sequential analysis tasks besides escalation, such as fraud or anomaly detection or applications in the medical domain BIBREF20 , BIBREF21 . Conclusion Although attention in deep neural networks was not initially introduced to inform observers, but to help a model make predictions, it can also be used to inform. In the instances where a model thinks all historical samples should be considered equally important in a sequential analysis task, we must look elsewhere for a computationally inexpensive means to understand what happened at the stopping point. In this paper, we have introduced such a means by monitoring attention changes over the sequential analysis to inform observers. This method introduces negligible overhead, an important consideration in real-time systems, and is not tied to the implementation details or task of the model, other than the prerequisite of an attention layer.
computationally inexpensive means to understand what happened at the stopping point
07c9863e1e86c31b740b5b5a77fe8000be00c273
07c9863e1e86c31b740b5b5a77fe8000be00c273_0
Q: Does a neural scoring function take both the question and the logical form as inputs? Text: Introduction Teaching computers to answer complex natural language questions requires sophisticated reasoning and human language understanding. We investigate generic natural language interfaces for simple arithmetic questions on semi-structured tables. Typical questions for this task are topic independent and may require performing multiple discrete operations such as aggregation, comparison, superlatives or arithmetics. We propose a weakly supervised neural model that eliminates the need for expensive feature engineering in the candidate ranking stage. Each natural language question is translated using the method of BIBREF0 into a set of machine understandable candidate representations, called logical forms or programs. Then, the most likely such program is retrieved in two steps: i) using a simple algorithm, logical forms are transformed back into paraphrases (textual representations) understandable by non-expert users, ii) next, these strings are further embedded together with their respective questions in a jointly learned vector space using convolutional neural networks over character and word embeddings. Multi-layer neural networks and bilinear mappings are further employed as effective similarity measures and combined to score the candidate interpretations. Finally, the highest ranked logical form is executed against the input data to retrieve the answer. Our method uses only weak-supervision from question-answer-table input triples, without requiring expensive annotations of gold logical forms. We empirically test our approach on a series of experiments on WikiTableQuestions, to our knowledge the only dataset designed for this task. An ensemble of our best models reached state-of-the-art accuracy of 38.7% at the moment of publication. Related Work We briefly mention here two main types of QA systems related to our task: semantic parsing-based and embedding-based. Semantic parsing-based methods perform a functional parse of the question that is further converted to a machine understandable program and executed on a knowledgebase or database. For QA on semi-structured tables with multi-compositional queries, BIBREF0 generate and rank candidate logical forms with a log-linear model, resorting to hand-crafted features for scoring. As opposed, we learn neural features for each question and the paraphrase of each candidate logical form. Paraphrases and hand-crafted features have successfully facilitated semantic parsers targeting simple factoid BIBREF1 and compositional questions BIBREF2 . Compositional questions are also the focus of BIBREF3 that construct logical forms from the question embedding through operations parametrized by RNNs, thus losing interpretability. A similar fully neural, end-to-end differentiable network was proposed by BIBREF4 . Embedding-based methods determine compatibility between a question-answer pair using embeddings in a shared vector space BIBREF5 . Embedding learning using deep learning architectures has been widely explored in other domains, e.g. in the context of sentiment classification BIBREF6 . Model We describe our QA system. 
For every question $q$ : i) a set of candidate logical forms $\lbrace z_i\rbrace _{i = 1, \ldots , n_q}$ is generated using the method of BIBREF0 ; ii) each such candidate program $z_i$ is transformed in an interpretable textual representation $t_i$ ; iii) all $t_i$ 's are jointly embedded with $q$ in the same vector space and scored using a neural similarity function; iv) the logical form $z_i^*$ corresponding to the highest ranked $t_i^*$ is selected as the machine-understandable translation of question $q$ and executed on the input table to retrieve the final answer. Our contributions are the novel models that perform steps ii) and iii), while for step i) we rely on the work of BIBREF0 (henceforth: PL2015). Candidate Logical Form Generation We generate a set of candidate logical forms from a question using the method of BIBREF0 . Only briefly, we review this method. Specifically, a question is parsed into a set of candidate logical forms using a semantic parser that recursively applies deduction rules. Logical forms are represented in Lambda DCS form BIBREF7 and can be executed on a table to yield an answer. An example of a question and its correct logical form are below: How many people attended the last Rolling Stones concert? R[ $\lambda x$ [Attendance.Number. $x$ ]].argmax(Act.RollingStones,Index). Converting Logical Forms to Text In Algorithm 1 we describe how logical forms are transformed into interpretable textual representations called "paraphrases". We choose to embed paraphrases in low dimensional vectors and compare these against the question embedding. Working directly with paraphrases instead of logical forms is a design choice, justified by their interpretability, comprehensibility (understandability by non-technical users) and empirical accuracy gains. Our method recursively traverses the tree representation of the logical form starting at the root. For example, the correct candidate logical form for the question mentioned in section "Candidate Logical Form Generation" , namely How many people attended the last Rolling Stones concert?, is mapped to the paraphrase Attendance as number of last table row where act is Rolling Stones. Joint Embedding Model We embed the question together with the paraphrases of candidate logical forms in a jointly learned vector space. We use two convolutional neural networks (CNNs) for question and paraphrase embeddings, on top of which a max-pooling operation is applied. The CNNs receive as input token embeddings obtained as described below. switch case assert [1](1)SE[SWITCH]SwitchEndSwitch[1] 1 SE[CASE]CaseEndCase[1] 1 *EndSwitch*EndCase Recursive paraphrasing of a Lambda DCS logical form. The + operation means string concatenation with spaces. Lambda DCS language is detailed in BIBREF7 . [1] Paraphrase $z$ $z$ is the root of a Lambda DCS logical form $z$ Aggregation e.g. count, max, min... $t\leftarrow \textsc {Aggregation}(z) + \textsc {Paraphrase}(z.child)$ Join join on relations, e.g. $\lambda x$ .Country( $x$ , Australia) $t\leftarrow \textsc {Paraphrase}(z.relation)$ + $\textsc {Paraphrase}(z.child)$ Reverse reverses a binary relation $t\leftarrow \textsc {Paraphrase}(z.child)$ LambdaFormula lambda expression $\lambda x.[...]$ $z$0 Arithmetic or Merge e.g. plus, minus, union... $z$1 Superlative e.g. argmax(x, value) $z$2 Value i.e. constants $z$3 return $z$4 $z$5 is the textual paraphrase of the Lambda DCS logical form The embedding of an input word sequence (e.g. 
question, paraphrase) is depicted in Figure 1 and is similar to BIBREF8 . Every token is parametrized by learnable word and character embeddings. The latter help dealing with unknown tokens (e.g. rare words, misspellings, numbers or dates). Token vectors are then obtained using a CNN (with multiple filter widths) over the constituent characters , followed by a max-over-time pooling layer and concatenation with the word vector. We map both the question $q$ and the paraphrase $t$ into a joint vector space using sentence embeddings obtained from two jointly trained CNNs. CNNs' filters span a different number of tokens from a width set $L$ . For each filter width $l \in L$ , we learn $n$ different filters, each of dimension $\mathbb {R}^{l\times d}$ , where $d$ is the word embedding size. After the convolution layer, we apply a max-over-time pooling on the resulting feature matrices which yields, per filter-width, a vector of dimension $n$ . Next, we concatenate the resulting max-over-time pooling vectors of the different filter-widths in $L$ to form our sentence embedding. The final sentence embedding size is $n|L|$ . Let $u,v \in \mathbb {R}^{d}$ be the sentence embeddings of question $q$ and of paraphrase $t$ . We experiment with the following similarity scores: i) DOTPRODUCT : $u^{T}v$ ; ii) BILIN : $u^{T}Sv$ , with $S\in \mathbb {R}^{d\times d}$ being a trainable matrix; iii) FC: u and v concatenated, followed by two sequential fully connected layers with ELU non-linearities; iv) FC-BILIN: weighted average of BILIN and FC. These models define parametrized similarity scoring functions $: Q\times T\rightarrow \mathbb {R}$ , where $Q$ is the set of natural language questions and $T$ is the set of paraphrases of logical forms. Training Algorithm For training, we build two sets $\mathcal {P}$ (positive) and $\mathcal {N}$ (negative) consisting of all pairs $(q,t) \in Q \times T$ of questions and paraphrases of candidate logical forms generated as described in Section "Candidate Logical Form Generation" . A pair is positive or negative if its logical form gives the correct or respectively incorrect gold answer when executed on the corresponding table. During training, we use the ranking hinge loss function (with margin $\theta $ ): $ {L(\mathcal {P},\mathcal {N})= \sum _{p\in \mathcal {P}}\sum _{n\in \mathcal {N}}\max (0,}\theta -(p)+(n)) $ Experiments Dataset: For training and testing we use the train-validation-test split of WikiTableQuestions BIBREF0 , a dataset containing 22,033 pairs of questions and answers based on 2,108 Wikipedia tables. This dataset is also used by our baselines, BIBREF0 , BIBREF3 . Tables are not shared across these splits, which requires models to generalize to unseen data. We obtain about 3.8 million training triples $(q,t,l)$ , where $l$ is a binary indicator of whether the logical form gives the correct gold answer when executed on the corresponding table. 76.7% of the questions have at least one correct candidate logical form when generated with the model of BIBREF0 . Training Details: Our models are implemented using TensorFlow and trained on a single Tesla P100 GPU. Training takes approximately 6 hours. We initialize word vectors with 200 dimensional GloVe ( BIBREF9 ) pre-trained vectors. For the character CNN we use widths spanning 1, 2 and 3 characters. The sentence embedding CNNs use widths of $L=\lbrace 2,4,6,8\rbrace $ . The fully connected layers in the FC models have 500 hidden neurons, which we regularize using 0.8-dropout. 
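The following framework-free Python sketch illustrates the four similarity scoring functions and the ranking hinge loss defined above. It is a simplified forward pass only: the parameter shapes, the placement of the ELU non-linearity, and the mixing weight alpha in FC-BILIN are illustrative assumptions, while the actual model is trained end-to-end in TensorFlow.

```python
import numpy as np

def elu(x):
    # ELU non-linearity; expm1 avoids overflow for large negative inputs
    return np.where(x > 0, x, np.expm1(np.minimum(x, 0.0)))

def score_dot(u, v):
    """DOTPRODUCT similarity: u^T v."""
    return float(u @ v)

def score_bilin(u, v, S):
    """BILIN similarity: u^T S v, with S a trainable d x d matrix."""
    return float(u @ S @ v)

def score_fc(u, v, W1, b1, w2, b2):
    """FC similarity: concatenate u and v, apply a fully connected layer with
    ELU, then a second layer mapping to a scalar score (simplified)."""
    h = elu(W1 @ np.concatenate([u, v]) + b1)
    return float(w2 @ h + b2)

def score_fc_bilin(u, v, S, W1, b1, w2, b2, alpha=0.5):
    """FC-BILIN: weighted average of the BILIN and FC scores (alpha is illustrative)."""
    return alpha * score_bilin(u, v, S) + (1.0 - alpha) * score_fc(u, v, W1, b1, w2, b2)

def ranking_hinge_loss(pos_scores, neg_scores, margin=0.2):
    """L(P, N) = sum over positive p and negative n of max(0, margin - s(p) + s(n))."""
    return sum(max(0.0, margin - p + n) for p in pos_scores for n in neg_scores)
```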
The loss margin $\theta $ is set to 0.2. Optimization is done using Adam BIBREF10 with a learning rate of 7e-4. Hyperparameters are tuned on the development data split of the WikiTableQuestions dataset. We choose the best performing model on the validation set using early stopping. Results: Experimental results are shown in Table 1 . Our best performing single model is FC-BILIN with CNNs. Intuitively, BILIN and FC are able to extract different interaction features between the two input vectors, while their linear combination retains the best of both models. An ensemble of 15 single CNN-FC-BILIN models set (at the moment of publication) a new state-of-the-art precision@1 for this dataset: 38.7%. This shows that the same model initialized differently can learn different features. We also experimented with recurrent neural networks (RNNs) for the sentence embedding since these are known to capture word order better than CNNs. However, RNN-FC-BILIN performs worse than its CNN variant. There are a few reasons that contributed to the low accuracy obtained on this task by various methods (including ours) compared to other NLP problems: weak supervision, small training set size, and a high percentage of unanswerable questions. Error Analysis: The questions our models do not answer correctly can be split into two categories: either a correct logical form is not generated, or our scoring models do not rank the correct one at the top. We perform a qualitative analysis presented in Table 2 to reveal common question types our models often rank incorrectly. The first two examples show questions whose correct logical form depends on the structure of the table. In these cases a bias towards the more general logical form is often exhibited. The third example shows that our model has difficulty distinguishing operands with slight modifications (e.g. smaller and smaller equals), which may be due to weak supervision. Ablation Studies: For a better understanding of our model, we investigate the usefulness of various components with an ablation study shown in Table 3 . In particular, we emphasize that replacing the paraphrasing stage with the raw strings of the Lambda DCS expressions resulted in lower precision@1, which confirms the utility of this stage. Analysis of Correct Answers: We analyze how well our best single model performs on various question types. For this, we manually annotate 80 randomly chosen questions that are correctly answered by our model and report statistics in Table 3 . Conclusion In this paper we propose a neural network QA system for semi-structured tables that eliminates the need for manually designed features. Experiments show that an ensemble of our models reaches competitive accuracy on the WikiTableQuestions dataset, thus indicating its capability to answer complex, multi-compositional questions. Our code is available at https://github.com/dalab/neural_qa . Acknowledgments This research was supported by the Swiss National Science Foundation (SNSF) grant number 407540_167176 under the project "Conversational Agent for Interactive Access to Information".
Yes
bf7cb53f4105f2e6a413d1adef5349ff1e673500
bf7cb53f4105f2e6a413d1adef5349ff1e673500_0
Q: What is the source of the paraphrases of the questions? Text: Introduction Teaching computers to answer complex natural language questions requires sophisticated reasoning and human language understanding. We investigate generic natural language interfaces for simple arithmetic questions on semi-structured tables. Typical questions for this task are topic independent and may require performing multiple discrete operations such as aggregation, comparison, superlatives or arithmetics. We propose a weakly supervised neural model that eliminates the need for expensive feature engineering in the candidate ranking stage. Each natural language question is translated using the method of BIBREF0 into a set of machine understandable candidate representations, called logical forms or programs. Then, the most likely such program is retrieved in two steps: i) using a simple algorithm, logical forms are transformed back into paraphrases (textual representations) understandable by non-expert users, ii) next, these strings are further embedded together with their respective questions in a jointly learned vector space using convolutional neural networks over character and word embeddings. Multi-layer neural networks and bilinear mappings are further employed as effective similarity measures and combined to score the candidate interpretations. Finally, the highest ranked logical form is executed against the input data to retrieve the answer. Our method uses only weak-supervision from question-answer-table input triples, without requiring expensive annotations of gold logical forms. We empirically test our approach on a series of experiments on WikiTableQuestions, to our knowledge the only dataset designed for this task. An ensemble of our best models reached state-of-the-art accuracy of 38.7% at the moment of publication. Related Work We briefly mention here two main types of QA systems related to our task: semantic parsing-based and embedding-based. Semantic parsing-based methods perform a functional parse of the question that is further converted to a machine understandable program and executed on a knowledgebase or database. For QA on semi-structured tables with multi-compositional queries, BIBREF0 generate and rank candidate logical forms with a log-linear model, resorting to hand-crafted features for scoring. As opposed, we learn neural features for each question and the paraphrase of each candidate logical form. Paraphrases and hand-crafted features have successfully facilitated semantic parsers targeting simple factoid BIBREF1 and compositional questions BIBREF2 . Compositional questions are also the focus of BIBREF3 that construct logical forms from the question embedding through operations parametrized by RNNs, thus losing interpretability. A similar fully neural, end-to-end differentiable network was proposed by BIBREF4 . Embedding-based methods determine compatibility between a question-answer pair using embeddings in a shared vector space BIBREF5 . Embedding learning using deep learning architectures has been widely explored in other domains, e.g. in the context of sentiment classification BIBREF6 . Model We describe our QA system. 
For every question $q$ : i) a set of candidate logical forms $\lbrace z_i\rbrace _{i = 1, \ldots , n_q}$ is generated using the method of BIBREF0 ; ii) each such candidate program $z_i$ is transformed in an interpretable textual representation $t_i$ ; iii) all $t_i$ 's are jointly embedded with $q$ in the same vector space and scored using a neural similarity function; iv) the logical form $z_i^*$ corresponding to the highest ranked $t_i^*$ is selected as the machine-understandable translation of question $q$ and executed on the input table to retrieve the final answer. Our contributions are the novel models that perform steps ii) and iii), while for step i) we rely on the work of BIBREF0 (henceforth: PL2015). Candidate Logical Form Generation We generate a set of candidate logical forms from a question using the method of BIBREF0 . Only briefly, we review this method. Specifically, a question is parsed into a set of candidate logical forms using a semantic parser that recursively applies deduction rules. Logical forms are represented in Lambda DCS form BIBREF7 and can be executed on a table to yield an answer. An example of a question and its correct logical form are below: How many people attended the last Rolling Stones concert? R[ $\lambda x$ [Attendance.Number. $x$ ]].argmax(Act.RollingStones,Index). Converting Logical Forms to Text In Algorithm 1 we describe how logical forms are transformed into interpretable textual representations called "paraphrases". We choose to embed paraphrases in low dimensional vectors and compare these against the question embedding. Working directly with paraphrases instead of logical forms is a design choice, justified by their interpretability, comprehensibility (understandability by non-technical users) and empirical accuracy gains. Our method recursively traverses the tree representation of the logical form starting at the root. For example, the correct candidate logical form for the question mentioned in section "Candidate Logical Form Generation" , namely How many people attended the last Rolling Stones concert?, is mapped to the paraphrase Attendance as number of last table row where act is Rolling Stones. Joint Embedding Model We embed the question together with the paraphrases of candidate logical forms in a jointly learned vector space. We use two convolutional neural networks (CNNs) for question and paraphrase embeddings, on top of which a max-pooling operation is applied. The CNNs receive as input token embeddings obtained as described below. switch case assert [1](1)SE[SWITCH]SwitchEndSwitch[1] 1 SE[CASE]CaseEndCase[1] 1 *EndSwitch*EndCase Recursive paraphrasing of a Lambda DCS logical form. The + operation means string concatenation with spaces. Lambda DCS language is detailed in BIBREF7 . [1] Paraphrase $z$ $z$ is the root of a Lambda DCS logical form $z$ Aggregation e.g. count, max, min... $t\leftarrow \textsc {Aggregation}(z) + \textsc {Paraphrase}(z.child)$ Join join on relations, e.g. $\lambda x$ .Country( $x$ , Australia) $t\leftarrow \textsc {Paraphrase}(z.relation)$ + $\textsc {Paraphrase}(z.child)$ Reverse reverses a binary relation $t\leftarrow \textsc {Paraphrase}(z.child)$ LambdaFormula lambda expression $\lambda x.[...]$ $z$0 Arithmetic or Merge e.g. plus, minus, union... $z$1 Superlative e.g. argmax(x, value) $z$2 Value i.e. constants $z$3 return $z$4 $z$5 is the textual paraphrase of the Lambda DCS logical form The embedding of an input word sequence (e.g. 
question, paraphrase) is depicted in Figure 1 and is similar to BIBREF8 . Every token is parametrized by learnable word and character embeddings. The latter help dealing with unknown tokens (e.g. rare words, misspellings, numbers or dates). Token vectors are then obtained using a CNN (with multiple filter widths) over the constituent characters , followed by a max-over-time pooling layer and concatenation with the word vector. We map both the question $q$ and the paraphrase $t$ into a joint vector space using sentence embeddings obtained from two jointly trained CNNs. CNNs' filters span a different number of tokens from a width set $L$ . For each filter width $l \in L$ , we learn $n$ different filters, each of dimension $\mathbb {R}^{l\times d}$ , where $d$ is the word embedding size. After the convolution layer, we apply a max-over-time pooling on the resulting feature matrices which yields, per filter-width, a vector of dimension $n$ . Next, we concatenate the resulting max-over-time pooling vectors of the different filter-widths in $L$ to form our sentence embedding. The final sentence embedding size is $n|L|$ . Let $u,v \in \mathbb {R}^{d}$ be the sentence embeddings of question $q$ and of paraphrase $t$ . We experiment with the following similarity scores: i) DOTPRODUCT : $u^{T}v$ ; ii) BILIN : $u^{T}Sv$ , with $S\in \mathbb {R}^{d\times d}$ being a trainable matrix; iii) FC: u and v concatenated, followed by two sequential fully connected layers with ELU non-linearities; iv) FC-BILIN: weighted average of BILIN and FC. These models define parametrized similarity scoring functions $: Q\times T\rightarrow \mathbb {R}$ , where $Q$ is the set of natural language questions and $T$ is the set of paraphrases of logical forms. Training Algorithm For training, we build two sets $\mathcal {P}$ (positive) and $\mathcal {N}$ (negative) consisting of all pairs $(q,t) \in Q \times T$ of questions and paraphrases of candidate logical forms generated as described in Section "Candidate Logical Form Generation" . A pair is positive or negative if its logical form gives the correct or respectively incorrect gold answer when executed on the corresponding table. During training, we use the ranking hinge loss function (with margin $\theta $ ): $ {L(\mathcal {P},\mathcal {N})= \sum _{p\in \mathcal {P}}\sum _{n\in \mathcal {N}}\max (0,}\theta -(p)+(n)) $ Experiments Dataset: For training and testing we use the train-validation-test split of WikiTableQuestions BIBREF0 , a dataset containing 22,033 pairs of questions and answers based on 2,108 Wikipedia tables. This dataset is also used by our baselines, BIBREF0 , BIBREF3 . Tables are not shared across these splits, which requires models to generalize to unseen data. We obtain about 3.8 million training triples $(q,t,l)$ , where $l$ is a binary indicator of whether the logical form gives the correct gold answer when executed on the corresponding table. 76.7% of the questions have at least one correct candidate logical form when generated with the model of BIBREF0 . Training Details: Our models are implemented using TensorFlow and trained on a single Tesla P100 GPU. Training takes approximately 6 hours. We initialize word vectors with 200 dimensional GloVe ( BIBREF9 ) pre-trained vectors. For the character CNN we use widths spanning 1, 2 and 3 characters. The sentence embedding CNNs use widths of $L=\lbrace 2,4,6,8\rbrace $ . The fully connected layers in the FC models have 500 hidden neurons, which we regularize using 0.8-dropout. 
The loss margin $\theta $ is set to 0.2. Optimization is done using Adam BIBREF10 with a learning rate of 7e-4. Hyperparameters are tuned on the development split of the WikiTableQuestions dataset. We choose the best performing model on the validation set using early stopping. Results: Experimental results are shown in Table 1 . Our best performing single model is FC-BILIN with CNNs. Intuitively, BILIN and FC are able to extract different interaction features between the two input vectors, while their linear combination retains the best of both models. An ensemble of 15 single CNN-FC-BILIN models set (at the moment of publication) a new state-of-the-art precision@1 for this dataset: 38.7%. This shows that the same model initialized differently can learn different features. We also experimented with recurrent neural networks (RNNs) for the sentence embedding, since these are known to capture word order better than CNNs. However, RNN-FC-BILIN performs worse than its CNN variant. A few factors contributed to the low accuracy obtained on this task by various methods (including ours) compared to other NLP problems: weak supervision, a small training set and a high percentage of unanswerable questions. Error Analysis: The questions our models do not answer correctly can be split into two categories: either a correct logical form is not generated, or our scoring models do not rank the correct one at the top. We perform a qualitative analysis, presented in Table 2 , to reveal common question types our models often rank incorrectly. The first two examples show questions whose correct logical form depends on the structure of the table. In these cases a bias towards the more general logical form is often exhibited. The third example shows that our model has difficulty distinguishing operands that differ only slightly (e.g. smaller vs. smaller or equal), which may be due to the weak supervision. Ablation Studies: For a better understanding of our model, we investigate the usefulness of various components with an ablation study shown in Table 3 . In particular, we emphasize that replacing the paraphrasing stage with the raw strings of the Lambda DCS expressions resulted in lower precision@1, which confirms the utility of this stage. Analysis of Correct Answers: We analyze how well our best single model performs on various question types. For this, we manually annotate 80 randomly chosen questions that are correctly answered by our model and report statistics in Table 3 . Conclusion In this paper we propose a neural network QA system for semi-structured tables that eliminates the need for manually designed features. Experiments show that an ensemble of our models reaches competitive accuracy on the WikiTableQuestions dataset, thus indicating its capability to answer complex, multi-compositional questions. Our code is available at https://github.com/dalab/neural_qa . Acknowledgments This research was supported by the Swiss National Science Foundation (SNSF) grant number 407540_167176 under the project "Conversational Agent for Interactive Access to Information".
WikiTableQuestions
a6419207d2299f25e2688517d1580b7ba07c8e4b
a6419207d2299f25e2688517d1580b7ba07c8e4b_0
Q: Does the dataset they use differ from the one used by Pasupat and Liang, 2015? Text: Introduction Teaching computers to answer complex natural language questions requires sophisticated reasoning and human language understanding. We investigate generic natural language interfaces for simple arithmetic questions on semi-structured tables. Typical questions for this task are topic independent and may require performing multiple discrete operations such as aggregation, comparison, superlatives or arithmetics. We propose a weakly supervised neural model that eliminates the need for expensive feature engineering in the candidate ranking stage. Each natural language question is translated using the method of BIBREF0 into a set of machine understandable candidate representations, called logical forms or programs. Then, the most likely such program is retrieved in two steps: i) using a simple algorithm, logical forms are transformed back into paraphrases (textual representations) understandable by non-expert users, ii) next, these strings are further embedded together with their respective questions in a jointly learned vector space using convolutional neural networks over character and word embeddings. Multi-layer neural networks and bilinear mappings are further employed as effective similarity measures and combined to score the candidate interpretations. Finally, the highest ranked logical form is executed against the input data to retrieve the answer. Our method uses only weak-supervision from question-answer-table input triples, without requiring expensive annotations of gold logical forms. We empirically test our approach on a series of experiments on WikiTableQuestions, to our knowledge the only dataset designed for this task. An ensemble of our best models reached state-of-the-art accuracy of 38.7% at the moment of publication. Related Work We briefly mention here two main types of QA systems related to our task: semantic parsing-based and embedding-based. Semantic parsing-based methods perform a functional parse of the question that is further converted to a machine understandable program and executed on a knowledgebase or database. For QA on semi-structured tables with multi-compositional queries, BIBREF0 generate and rank candidate logical forms with a log-linear model, resorting to hand-crafted features for scoring. As opposed, we learn neural features for each question and the paraphrase of each candidate logical form. Paraphrases and hand-crafted features have successfully facilitated semantic parsers targeting simple factoid BIBREF1 and compositional questions BIBREF2 . Compositional questions are also the focus of BIBREF3 that construct logical forms from the question embedding through operations parametrized by RNNs, thus losing interpretability. A similar fully neural, end-to-end differentiable network was proposed by BIBREF4 . Embedding-based methods determine compatibility between a question-answer pair using embeddings in a shared vector space BIBREF5 . Embedding learning using deep learning architectures has been widely explored in other domains, e.g. in the context of sentiment classification BIBREF6 . Model We describe our QA system. 
For every question $q$ : i) a set of candidate logical forms $\lbrace z_i\rbrace _{i = 1, \ldots , n_q}$ is generated using the method of BIBREF0 ; ii) each such candidate program $z_i$ is transformed into an interpretable textual representation $t_i$ ; iii) all $t_i$ 's are jointly embedded with $q$ in the same vector space and scored using a neural similarity function; iv) the logical form $z_i^*$ corresponding to the highest ranked $t_i^*$ is selected as the machine-understandable translation of question $q$ and executed on the input table to retrieve the final answer. Our contributions are the novel models that perform steps ii) and iii), while for step i) we rely on the work of BIBREF0 (henceforth: PL2015). Candidate Logical Form Generation We generate a set of candidate logical forms from a question using the method of BIBREF0 , which we briefly review here. A question is parsed into a set of candidate logical forms using a semantic parser that recursively applies deduction rules. Logical forms are represented in Lambda DCS form BIBREF7 and can be executed on a table to yield an answer. An example of a question and its correct logical form are below: How many people attended the last Rolling Stones concert? R[ $\lambda x$ [Attendance.Number. $x$ ]].argmax(Act.RollingStones,Index). Converting Logical Forms to Text In Algorithm 1 we describe how logical forms are transformed into interpretable textual representations called "paraphrases". We choose to embed paraphrases in low dimensional vectors and compare these against the question embedding. Working directly with paraphrases instead of logical forms is a design choice, justified by their interpretability, comprehensibility (understandability by non-technical users) and empirical accuracy gains. Our method recursively traverses the tree representation of the logical form starting at the root. For example, the correct candidate logical form for the question mentioned in section "Candidate Logical Form Generation" , namely How many people attended the last Rolling Stones concert?, is mapped to the paraphrase Attendance as number of last table row where act is Rolling Stones. Joint Embedding Model We embed the question together with the paraphrases of candidate logical forms in a jointly learned vector space. We use two convolutional neural networks (CNNs) for question and paraphrase embeddings, on top of which a max-pooling operation is applied. The CNNs receive as input token embeddings obtained as described below. Algorithm 1: Recursive paraphrasing of a Lambda DCS logical form. The + operation means string concatenation with spaces; the Lambda DCS language is detailed in BIBREF7 . Paraphrase( $z$ ), where $z$ is the root of a Lambda DCS logical form, switches on the type of $z$ : Aggregation (e.g. count, max, min...): $t\leftarrow \textsc {Aggregation}(z) + \textsc {Paraphrase}(z.child)$ ; Join (join on relations, e.g. $\lambda x$ .Country( $x$ , Australia)): $t\leftarrow \textsc {Paraphrase}(z.relation) + \textsc {Paraphrase}(z.child)$ ; Reverse (reverses a binary relation): $t\leftarrow \textsc {Paraphrase}(z.child)$ ; LambdaFormula (lambda expression $\lambda x.[...]$ ): ... ; Arithmetic or Merge (e.g. plus, minus, union...): ... ; Superlative (e.g. argmax(x, value)): ... ; Value (i.e. constants): ... ; finally, return $t$ , the textual paraphrase of the Lambda DCS logical form. The embedding of an input word sequence (e.g.
question, paraphrase) is depicted in Figure 1 and is similar to BIBREF8 . Every token is parametrized by learnable word and character embeddings. The latter help dealing with unknown tokens (e.g. rare words, misspellings, numbers or dates). Token vectors are then obtained using a CNN (with multiple filter widths) over the constituent characters , followed by a max-over-time pooling layer and concatenation with the word vector. We map both the question $q$ and the paraphrase $t$ into a joint vector space using sentence embeddings obtained from two jointly trained CNNs. CNNs' filters span a different number of tokens from a width set $L$ . For each filter width $l \in L$ , we learn $n$ different filters, each of dimension $\mathbb {R}^{l\times d}$ , where $d$ is the word embedding size. After the convolution layer, we apply a max-over-time pooling on the resulting feature matrices which yields, per filter-width, a vector of dimension $n$ . Next, we concatenate the resulting max-over-time pooling vectors of the different filter-widths in $L$ to form our sentence embedding. The final sentence embedding size is $n|L|$ . Let $u,v \in \mathbb {R}^{d}$ be the sentence embeddings of question $q$ and of paraphrase $t$ . We experiment with the following similarity scores: i) DOTPRODUCT : $u^{T}v$ ; ii) BILIN : $u^{T}Sv$ , with $S\in \mathbb {R}^{d\times d}$ being a trainable matrix; iii) FC: u and v concatenated, followed by two sequential fully connected layers with ELU non-linearities; iv) FC-BILIN: weighted average of BILIN and FC. These models define parametrized similarity scoring functions $: Q\times T\rightarrow \mathbb {R}$ , where $Q$ is the set of natural language questions and $T$ is the set of paraphrases of logical forms. Training Algorithm For training, we build two sets $\mathcal {P}$ (positive) and $\mathcal {N}$ (negative) consisting of all pairs $(q,t) \in Q \times T$ of questions and paraphrases of candidate logical forms generated as described in Section "Candidate Logical Form Generation" . A pair is positive or negative if its logical form gives the correct or respectively incorrect gold answer when executed on the corresponding table. During training, we use the ranking hinge loss function (with margin $\theta $ ): $ {L(\mathcal {P},\mathcal {N})= \sum _{p\in \mathcal {P}}\sum _{n\in \mathcal {N}}\max (0,}\theta -(p)+(n)) $ Experiments Dataset: For training and testing we use the train-validation-test split of WikiTableQuestions BIBREF0 , a dataset containing 22,033 pairs of questions and answers based on 2,108 Wikipedia tables. This dataset is also used by our baselines, BIBREF0 , BIBREF3 . Tables are not shared across these splits, which requires models to generalize to unseen data. We obtain about 3.8 million training triples $(q,t,l)$ , where $l$ is a binary indicator of whether the logical form gives the correct gold answer when executed on the corresponding table. 76.7% of the questions have at least one correct candidate logical form when generated with the model of BIBREF0 . Training Details: Our models are implemented using TensorFlow and trained on a single Tesla P100 GPU. Training takes approximately 6 hours. We initialize word vectors with 200 dimensional GloVe ( BIBREF9 ) pre-trained vectors. For the character CNN we use widths spanning 1, 2 and 3 characters. The sentence embedding CNNs use widths of $L=\lbrace 2,4,6,8\rbrace $ . The fully connected layers in the FC models have 500 hidden neurons, which we regularize using 0.8-dropout. 
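To make the similarity functions and the ranking hinge loss described above concrete, here is a rough NumPy sketch. The embedding size, the random score parameters and the function names are illustrative assumptions, not the authors' code; only the DOTPRODUCT and BILIN scores and the hinge objective follow the formulas given in the text.

```python
import numpy as np

d = 8                                   # toy embedding size (the paper uses n|L|)
rng = np.random.default_rng(0)
S = rng.normal(scale=0.1, size=(d, d))  # trainable matrix of the BILIN score

def score_dot(u, v):
    return u @ v                        # DOTPRODUCT: u^T v

def score_bilin(u, v):
    return u @ S @ v                    # BILIN: u^T S v

def hinge_loss(pos_scores, neg_scores, theta=0.2):
    """Ranking hinge loss: sum over positive/negative pairs of
    max(0, theta - score(p) + score(n))."""
    loss = 0.0
    for p in pos_scores:
        for n in neg_scores:
            loss += max(0.0, theta - p + n)
    return loss

# Toy usage: one question embedding scored against a correct and a wrong paraphrase.
q = rng.normal(size=d)
t_correct, t_wrong = rng.normal(size=d), rng.normal(size=d)
pos = [score_bilin(q, t_correct)]
neg = [score_bilin(q, t_wrong)]
print(hinge_loss(pos, neg))
```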
The loss margin $\theta $ is set to 0.2. Optimization is done using Adam BIBREF10 with a learning rate of 7e-4. Hyperparameters are tuned on the development split of the WikiTableQuestions dataset. We choose the best performing model on the validation set using early stopping. Results: Experimental results are shown in Table 1 . Our best performing single model is FC-BILIN with CNNs. Intuitively, BILIN and FC are able to extract different interaction features between the two input vectors, while their linear combination retains the best of both models. An ensemble of 15 single CNN-FC-BILIN models set (at the moment of publication) a new state-of-the-art precision@1 for this dataset: 38.7%. This shows that the same model initialized differently can learn different features. We also experimented with recurrent neural networks (RNNs) for the sentence embedding, since these are known to capture word order better than CNNs. However, RNN-FC-BILIN performs worse than its CNN variant. A few factors contributed to the low accuracy obtained on this task by various methods (including ours) compared to other NLP problems: weak supervision, a small training set and a high percentage of unanswerable questions. Error Analysis: The questions our models do not answer correctly can be split into two categories: either a correct logical form is not generated, or our scoring models do not rank the correct one at the top. We perform a qualitative analysis, presented in Table 2 , to reveal common question types our models often rank incorrectly. The first two examples show questions whose correct logical form depends on the structure of the table. In these cases a bias towards the more general logical form is often exhibited. The third example shows that our model has difficulty distinguishing operands that differ only slightly (e.g. smaller vs. smaller or equal), which may be due to the weak supervision. Ablation Studies: For a better understanding of our model, we investigate the usefulness of various components with an ablation study shown in Table 3 . In particular, we emphasize that replacing the paraphrasing stage with the raw strings of the Lambda DCS expressions resulted in lower precision@1, which confirms the utility of this stage. Analysis of Correct Answers: We analyze how well our best single model performs on various question types. For this, we manually annotate 80 randomly chosen questions that are correctly answered by our model and report statistics in Table 3 . Conclusion In this paper we propose a neural network QA system for semi-structured tables that eliminates the need for manually designed features. Experiments show that an ensemble of our models reaches competitive accuracy on the WikiTableQuestions dataset, thus indicating its capability to answer complex, multi-compositional questions. Our code is available at https://github.com/dalab/neural_qa . Acknowledgments This research was supported by the Swiss National Science Foundation (SNSF) grant number 407540_167176 under the project "Conversational Agent for Interactive Access to Information".
No
5c0b8c1b649df1b07d9af3aa9154ac340ec8b81c
5c0b8c1b649df1b07d9af3aa9154ac340ec8b81c_0
Q: Is the model compared against a linear regression baseline? Text: Introduction Stock prediction is crucial for quantitative analysts and investment companies. Stocks' trends, however, are affected by a lot of factors such as interest rates, inflation rates and financial news [12]. To predict stock prices accurately, one must use these variable information. In particular, in the banking industry and financial services, analysts' armies are dedicated to pouring over, analyzing, and attempting to quantify qualitative data from news. A large amount of stock trend information is extracted from the large amount of text and quantitative information that is involved in the analysis. Investors may judge on the basis of technical analysis, such as charts of a company, market indices, and on textual information such as news blogs or newspapers. It is however difficult for investors to analyze and predict market trends according to all of these information [22]. A lot of artificial intelligence approaches have been investigated to automatically predict those trends [3]. For instance, investment simulation analysis with artificial markets or stock trend analysis with lexical cohesion based metric of financial news' sentiment polarity. Quantitative analysis today is heavily dependent on data. However, the majority of such data is unstructured text that comes from sources like financial news articles. The challenge is not only the amount of data that are involved, but also the kind of language that is used in them to express sentiments, which means emoticons. Sifting through huge volumes of this text data is difficult as well as time-consuming. It also requires a great deal of resources and expertise to analyze all of that [4]. To solve the above problem, in this paper we use sentiment analysis to extract information from textual information. Sentiment analysis is the automated process of understanding an opinion about a given subject from news articles [5]. The analyzed data quantifies reactions or sentiments of the general public toward people, ideas or certain products and reveal the information's contextual polarity. Sentiment analysis allows us to understand if newspapers are talking positively or negatively about the financial market, get key insights about the stock's future trend market. We use valence aware dictionary and sentiment reasoner (VADER) to extract sentiment scores. VADER is a lexicon and rule-based sentiment analysis tool attuned to sentiments that are expressed in social media specifically [6]. VADER has been found to be quite successful when dealing with NY Times editorials and social media texts. This is because VADER not only tells about the negativity score and positively but also tells us about how positive or negative a sentiment is. However, news reports are not all objective. We may increase bias because of some non-objective reports, if we rely on the information that is extracted from the news for prediction fully. Therefore, in order to enhance the prediction model's robustness, we will adopt differential privacy (DP) method. DP is a system for sharing information about a dataset publicly by describing groups' patterns within the dataset while withholding information about individuals in the dataset. DP can be achieved if the we are willing to add random noise to the result. For example, rather than simply reporting the sum, we can inject noise from a Laplace or gaussian distribution, producing a result that’s not quite exact, that masks the contents of any given row. 
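As a toy illustration of the noise-injection idea sketched above (releasing a noisy sum instead of the exact one), here is a short NumPy example; the per-row values and the noise scale are made up for illustration and this is not the calibrated mechanism used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
values = np.array([3.0, 1.0, 4.0, 1.0, 5.0])   # made-up per-row values

exact_sum = values.sum()
# Instead of reporting the exact sum, inject Laplace (or Gaussian) noise so
# that the released result masks the contribution of any single row.
noisy_sum_laplace = exact_sum + rng.laplace(loc=0.0, scale=1.0)
noisy_sum_gauss = exact_sum + rng.normal(loc=0.0, scale=1.0)

print(exact_sum, noisy_sum_laplace, noisy_sum_gauss)
```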
In the last several years a promising approach to private data analysis has emerged, based on DP, which ensures that an analysis outcome is "roughly as likely" to occur independent of whether any individual opts in to, or to opts out of, the database. In consequence, any one individual's specific data can never greatly affect the results. General techniques for ensuring DP have now been proposed, and a lot of datamining tasks can be carried out in a DP method, frequently with very accurate results [21]. We proposed a DP-LSTM neural network, which increase the accuracy of prediction and robustness of model at the same time. The remainder of the paper is organized as follows. In Section 2, we introduce stock price model, the sentiment analysis and differential privacy method. In Section 3, we develop the different privacy-inspired LSTM (DP-LSTM) deep neural network and present the training details. Prediction results are provided in Section 4. Section 5 concludes the paper. Problem Statement In this section, we first introduce the background of the stock price model, which is based on the autoregressive moving average (ARMA) model. Then, we present the sentiment analysis details of the financial news and introduce how to use them to improve prediction performance. At last, we introduce the differential privacy framework and the loss function. Problem Statement ::: ARMA Model The ARMA model, which is one of the most widely used linear models in time series prediction [17], where the future value is assumed as a linear combination of the past errors and past values. ARMA is used to set the stock midterm prediction problem up. Let ${X}_t^\text{A}$ be the variable based on ARMA at time $t$, then we have where $X_{t-i}$ denotes the past value at time $t-i$; $\epsilon _{t}$ denotes the random error at time $t$; $\phi _i$ and $\psi _j$ are the coefficients; $\mu $ is a constant; $p$ and $q$ are integers that are often referred to as autoregressive and moving average polynomials, respectively. Problem Statement ::: Sentiment Analysis Another variable highly related to stock price is the textual information from news, whose changes may be a precursor to price changes. In our paper, news refers to a news article's title on a given trading day. It has been used to infer whether an event had informational content and whether investors' interpretations of the information were positive, negative or neutral. We hence use sentiment analysis to identify and extract opinions within a given text. Sentiment analysis aims at gauging the attitude, sentiments, evaluations and emotions of a speaker or writer based on subjectivity's computational treatment in a text [19]-[20]. Figure FIGREF3 shows an example of the sentiment analysis results obtained from financial news titles that were based on VADER. VADER uses a combination of a sentiment lexicon which are generally labelled according to their semantic orientation as either negative or positive. VADER has been found to be quite successful when dealing with news reviews. It is fully open-sourced under the MIT License. The result of VADER represent as sentiment scores, which include the positive, negative and neutral scores represent the proportion of text that falls in these categories. This means all these three scores should add up to 1. Besides, the Compound score is a metric that calculates the sum of all the lexicon ratings which have been normalized between -1(most extreme negative) and +1 (most extreme positive). 
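For concreteness, the VADER scores described above can be obtained with the NLTK implementation roughly as follows; the headline string is invented for illustration and the vader_lexicon resource must be downloaded once.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")          # one-time download of the VADER lexicon
sia = SentimentIntensityAnalyzer()

headline = "Tech stocks rally as strong earnings beat expectations"  # made-up title
scores = sia.polarity_scores(headline)
# 'pos', 'neg' and 'neu' are proportions that sum to 1; 'compound' is the
# normalized lexicon sum in [-1, +1].
print(scores)
```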
Figure FIGREF5 shows the positive and negative wordcloud, which is an intuitive analysis of the number of words in the news titles. Problem Statement ::: Sentiment-ARMA Model and Loss Function To take the sentiment analysis results of the financial news into account, we introduce the sentiment-ARMA model as follows where $\alpha $ and $\lambda $ are weighting factors; $c$ is a constant; and $f_2(\cdot )$ is similar to $f_1(\cdot )$ in the ARMA model (DISPLAY_FORM2) and is used to describe the prediction problem. In this paper, the LSTM neural network is used to predict the stock price, the input data is the previous stock price and the sentiment analysis results. Hence, the sentiment based LSTM neural network (named sentiment-LSTM) is aimed to minimize the following loss function: where $T$ denotes the number of prediction time slots, i.e., $t = 1,...,p$ are the observations (training input data), $t = p+1,...,p+T$ are the predicts (training output data); and $\hat{X}_t$ is given in (DISPLAY_FORM7). Problem Statement ::: Overview of LSTM Denote $\mathcal {X}_t^{\text{train}} = \lbrace X_{t-i},S_{t-i}\rbrace _{i=1}^p$ as the training input data. Figure FIGREF10 shows the LSTM's structure network, which comprises one or more hidden layers, an output layer and an input layer [16]. LSTM networks' main advantage is that the hidden layer comprises memory cells. Each memory cell recurrently has a core self-connected linear unit called “ Constant Error Carousel (CEC)” [13], which provides short-term memory storage and has three gates: Input gate, which controls the information from a new input to the memory cell, is given by where $h_{t-1}$ is the hidden state at the time step $t-1$; $i_t$ is the output of the input gate layer at the time step $t$; $\hat{c}_t$ is the candidate value to be added to the output at the time step $t$; $b_i$ and $b_c$ are biases of the input gate layer and the candidate value computation, respectively; $W_i$ and $W_c$ are weights of the input gate and the candidate value computation, respectively; and $\sigma (x) = 1/(1+e^{-x})$ is the pointwise nonlinear activation function. Forget gate, which controls the limit up to which a value is saved in the memory, is given by where $f_t$ is the forget state at the time step $t$, $W_f$ is the weight of the forget gate; and $b_f$ is the bias of the forget gate. Output gate, which controls the information output from the memory cell, is given by where new cell states $c_t$ are calculated based on the results of the previous two steps; $o_t$ is the output at the time step $t$; $W_o$ is the weight of the output gate; and $b_o$ is the bias of the output gate [14]. Problem Statement ::: Definition of Differential Privacy Differential privacy is one of privacy's most popular definitions today, which is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. It intuitively requires that the mechanism that outputs information about an underlying dataset is robust to one sample's any change, thus protecting privacy. A mechanism ${f}$ is a random function that takes a dataset $\mathcal {N}$ as input, and outputs a random variable ${f}(\mathcal {N})$. For example, suppose $\mathcal {N}$ is a news articles dataset, then the function that outputs compound score of articles in $\mathcal {N}$ plus noise from the standard normal distribution is a mechanism [7]. 
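Returning to the LSTM cell described above, a minimal NumPy sketch of one step of the gate computations (input, forget and output gates) follows. The dimensions and random weights are placeholders, and the candidate/cell-state update is written in the standard form, which may differ in notation from the paper's equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step. W and b hold the weights/biases of the input (i),
    forget (f), output (o) gates and the candidate value (c)."""
    z = np.concatenate([h_prev, x_t])            # [h_{t-1}, x_t]
    i_t = sigmoid(W["i"] @ z + b["i"])           # input gate
    f_t = sigmoid(W["f"] @ z + b["f"])           # forget gate
    o_t = sigmoid(W["o"] @ z + b["o"])           # output gate
    c_hat = np.tanh(W["c"] @ z + b["c"])         # candidate value
    c_t = f_t * c_prev + i_t * c_hat             # new cell state
    h_t = o_t * np.tanh(c_t)                     # new hidden state
    return h_t, c_t

# Toy dimensions: 5 input features (price + 4 compound scores), 4 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid = 5, 4
W = {k: rng.normal(scale=0.1, size=(n_hid, n_hid + n_in)) for k in "ifoc"}
b = {k: np.zeros(n_hid) for k in "ifoc"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
print(h)
```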
Although differential privacy was originally developed to facilitate secure analysis over sensitive data, it can also enhance the robustness of the data. Note that finance data, especially news data and stock data, is unstable with a lot of noise, with a more robust data the accuracy of prediction will be improved. Since we predict stock price by fusing news come from different sources, which might include fake news. Involving differential privacy in the training to improve the robustness of the finance news is meaningful. Training DP-LSTM Neural Network It is known that it is risky to predict stocks by considering news factors, because news can't guarantee full notarization and objectivity, many times extreme news will have a big impact on prediction models. To solve this problem, we consider entering the idea of the differential privacy when training. In this section, our DP-LSTM deep neural network training strategy is presented. The input data consists of three components: stock price, sentiment analysis compound score and noise. Training DP-LSTM Neural Network ::: Data Preprocessing and Normalization ::: Data Preprocessing The data for this project are two parts, the first part is the historical S&P 500 component stocks, which are downloaded from the Yahoo Finance. We use the data over the period of from 12/07/2017 to 06/01/2018. The second part is the news article from financial domain are collected with the same time period as stock data. Since our paper illustrates the relationship between the sentiment of the news articles and stocks' price. Hence, only news article from financial domain are collected. The data is mainly taken from Webhose archived data, which consists of 306242 news articles present in JSON format, dating from December 2017 up to end of June 2018. The former 85% of the dataset is used as the training data and the remainder 15% is used as the testing data. The News publishers for this data are CNBC.com, Reuters.com, WSJ.com, Fortune.com. The Wall Street Journal is one of the largest newspapers in the United States, which coverage of breaking news and current headlines from the US and around the world include top stories, photos, videos, detailed analysis and in-depth thoughts; CNBC primarily carries business day coverage of U.S. and international financial markets, which following the end of the business day and on non-trading days; Fortune is an American multinational business magazine; Reuters is an international news organization. We preprocess the raw article body and use NLTK sentiment package alence Aware Dictionary and Sentiment Reasoner (VADER) to extract sentiment scores. The stocks with missing data are deleted, and the dataset we used eventually contains 451 stocks and 4 news resources (CNBC.com, Reuters.com, WSJ.comFortune.com.). Each stock records the adjust close price and news compound scores of 121 trading days. A rolling window with size 10 is used to separate data, that is, We predict the stock price of the next trading day based on historical data from the previous 10 days, hence resulting in a point-by-point prediction [15]. In particular, the training window is initialized with all real training data. Then we shift the window and add the next real point to the last point of training window to predict the next point and so forth. Then, according to the length of the window, the training data is divided into 92 sets of training input data (each set length 10) and training output data (each set length 1). 
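A small sketch of the rolling-window split described above (10 past days in, next day out); the price array is a made-up series, not the actual S&P 500 data.

```python
import numpy as np

def rolling_windows(series, window=10):
    """Build (input, output) pairs: each input is `window` consecutive
    observations, the output is the observation that follows them."""
    X, y = [], []
    for start in range(len(series) - window):
        X.append(series[start:start + window])
        y.append(series[start + window])
    return np.array(X), np.array(y)

prices = np.linspace(100.0, 112.0, 121)   # toy series, 121 "trading days"
X, y = rolling_windows(prices, window=10)
print(X.shape, y.shape)                   # (111, 10) (111,)
```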
The testing data is divided into input and output data of 9 windows (see Figure FIGREF20). Training DP-LSTM Neural Network ::: Data Preprocessing and Normalization ::: Normalization To detect stock price pattern, it is necessary to normalize the stock price data. Since the LSTM neural network requires the stock patterns during training, we use “min-max” normalization method to reform dataset, which keeps the pattern of the data [11], as follow: where $X_{t}^{n}$ denotes the data after normalization. Accordingly, de-normalization is required at the end of the prediction process to get the original price, which is given by where $\hat{X}_{t}^{n}$ denotes the predicted data and $\hat{X}_{t}$ denotes the predicted data after de-normalization. Note that compound score is not normalized, since the compound score range from -1 to 1, which means all the compound score data has the same scale, so it is not require the normalization processing. Training DP-LSTM Neural Network ::: Adding Noise We consider the differential privacy as a method to improve the robustness of the LSTM predictions [8]. We explore the interplay between machine learning and differential privacy, and found that differential privacy has several properties that make it particularly useful in application such as robustness to extract textual information [9]. The robustness of textual information means that accuracy is guaranteed to be unaffected by certain false information [10]. The input data of the model has 5 dimensions, which are the stock price and four compound scores as $(X^t, S_1^t, S_2^t, S_3^t, S_4^t), t=1,...,T$, where $X^t$ represents the stock price and $S_i^t,~i=1,...,4$ respectively denote the mean compound score calculated from WSJ, CNBC, Fortune and Reuters. According to the process of differential privacy, we add Gaussian noise with different variances to the news according to the variance of the news, i.e., the news compound score after adding noise is given by where $\text{var}(\cdot )$ is the variance operator, $\lambda $ is a weighting factor and $\mathcal {N}(\cdot )$ denotes the random Gaussian process with zero mean and variance $\lambda \text{var}(S_i)$. We used python to crawl the news from the four sources of each trading day, perform sentiment analysis on the title of the news, and get the compound score. After splitting the data into training sets and test sets, we separately add noise to each of four news sources of the training set, then, for $n$-th stock, four sets of noise-added data $(X^n_t, {\widetilde{S}^t_1}, S^t_2, S^t_3, S^t_4)$, $(X^n_t, {S^t_1}, \widetilde{S}^t_2, S^t_3, S^t_4)$, $(X^n_t, { S^t_1}, S^t_2, \widetilde{S}^t_3, S^t_4)$, $(X^n_t, { S^t_1}, S^t_2, S^t_3, \widetilde{S}^t_4)$ are combined into a new training data through a rolling window. The stock price is then combined with the new compound score training data as input data for our DP-LSTM neural network. Training DP-LSTM Neural Network ::: Training Setting The LSTM model in figure FIGREF10 has six layers, followed by an LSTM layer, a dropout layer, an LSTM layer, an LSTM layer, a dropout layer, a dense layer, respectively. The dropout layers (with dropout rate 0.2) prevent the network from overfitting. The dense layer is used to reshape the output. Since a network will be difficult to train if it contains a large number of LSTM layers [16], we use three LSTM layers here. 
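A minimal Keras sketch of the six-layer stack just described (LSTM, Dropout, LSTM, LSTM, Dropout, Dense), compiled with the MSE loss and the ADAM optimizer used in training. The number of units per LSTM layer is not stated in the text, so the value below is a placeholder, and this is a sketch rather than the authors' exact configuration.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

WINDOW, N_FEATURES, UNITS = 10, 5, 50   # 10-day window; price + 4 compound scores

model = Sequential([
    LSTM(UNITS, return_sequences=True, input_shape=(WINDOW, N_FEATURES)),
    Dropout(0.2),                        # prevents overfitting
    LSTM(UNITS, return_sequences=True),
    LSTM(UNITS),                         # last LSTM returns only the final state
    Dropout(0.2),
    Dense(1),                            # single next-day price output
])

model.compile(loss="mean_squared_error", optimizer="adam")
model.summary()
```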
In each LSTM layer, the loss function is the mean square error (MSE), i.e., the average of the squared distances between the target values and the predicted values. In addition, ADAM [17] is used as the optimizer, since it is straightforward to implement, computationally efficient and well suited for problems with large data sets and many parameters. There are many methods and algorithms to implement sentiment analysis systems. In this paper, we use rule-based systems that perform sentiment analysis based on a set of manually crafted rules. Usually, rule-based approaches define a set of rules in some kind of scripting language that identify subjectivity, polarity, or the subject of an opinion. We use VADER, a simple rule-based model for general sentiment analysis. Performance Evaluation In this section, we validate our DP-LSTM based on the S&P 500 stocks. We calculate the mean prediction accuracy (MPA) to evaluate the proposed methods, which is defined as follows, where $X_{t,\ell }$ is the real stock price of the $\ell $-th stock on the $t$-th day, $L$ is the number of stocks and $\hat{X}_{t,\ell }$ is the corresponding prediction result. Figure FIGREF27 plots the average score for all news on the same day over the period. The compound score fluctuates between -0.3 and 0.15, indicating an overall neutral to slightly negative sentiment. The Positive, Negative and Neutral scores represent the proportion of text that falls in these categories. The Compound score is a metric that calculates the sum of all the lexicon ratings, normalized between -1 (most extreme negative) and +1 (most extreme positive). Figure FIGREF29 shows the $\text{MPAs}$ of the proposed DP-LSTM and the vanilla LSTM for comparison. In Table TABREF30, we give the mean MPA results for the predicted prices, which show that the accuracy of DP-LSTM is 0.32% higher than that of the LSTM with news. This result means the DP framework can make the prediction more accurate and robust. Note that the results are obtained by running many trials, since we train stocks separately and predict each price individually due to the different patterns and scales of stock prices. This in total adds up to 451 runs. The results shown in Table TABREF30 are the average of these 451 runs. Furthermore, we provide results for the 9 prediction windows over the period in Figure FIGREF29. The performance of our DP-LSTM is always better than that of the LSTM with news. Based on the sentiment-ARMA model and the added noise during training, the proposed DP-LSTM is more robust. The investment risk based on these prediction results is therefore reduced. In Figure FIGREF31, we can see that the prediction results of DP-LSTM are closer to the real S&P 500 index price line than those of the other methods. The two lines (prediction results of LSTM with news and LSTM without news) almost coincide in Figure FIGREF31. The subtle differences can be seen in Table TABREF32: DP-LSTM is far ahead, and LSTM with news is slightly better than LSTM without news. Conclusion In this paper, we integrated a deep neural network with a well-known NLP model (VADER) to identify and extract opinions within a given text, combining the stocks' adjusted close prices and compound scores to reduce the investment risk. We first proposed a sentiment-ARMA model to represent the stock price, which incorporates influential variables (price and news) based on the ARMA model.
Then, a DP-LSTM deep neural network was proposed to predict stock price according to the sentiment-ARMA model, which combines the LSTM, compound score of news articles and differential privacy method. News are not all objective. If we rely on the information extracted from the news for prediction fully, we may increase bias because of some non-objective reports. Therefore, the DP-LSTM enhance robustness of the prediction model. Experiment results based on the S&P 500 stocks show that the proposed DP-LSTM network can predict the stock price accurately with robust performance, especially for S&P 500 index that reflects the general trend of the market. S&P 500 prediction results show that the differential privacy method can significantly improve the robustness and accuracy. References [1] X. Li, Y. Li, X.-Y. Liu, D. Wang, “Risk management via anomaly circumvent: mnemonic deep learning for midterm stock prediction.” in Proceedings of 2nd KDD Workshop on Anomaly Detection in Finance (Anchorage ’19), 2019. [2] P. Chang, C. Fan, and C. Liu, “Integrating a piece-wise linear representation method and a neural network model for stock trading points prediction.” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 39, 1 (2009), 80–92. [3] Akita, Ryo, et al. “Deep learning for stock prediction using numerical and textual information.” IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS). IEEE, 2016. [4] Li, Xiaodong, et al. “Does summarization help stock prediction? A news impact analysis.” IEEE Intelligent Systems 30.3 (2015): 26-34. [5] Ding, Xiao, et al. “Deep learning for event-driven stock prediction.” Twenty-fourth International Joint Conference on Artificial Intelligence. 2015. [6] Hutto, Clayton J., and Eric Gilbert. “Vader: A parsimonious rule-based model for sentiment analysis of social media text.” Eighth International AAAI Conference on Weblogs and Social Media, 2014. [7] Ji, Zhanglong, Zachary C. Lipton, and Charles Elkan. “Differential privacy and machine learning: a survey and review.” arXiv preprint arXiv:1412.7584 (2014). [8] Abadi, Martin, et al. “Deep learning with differential privacy.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, 2016. [9] McMahan, H. Brendan, and Galen Andrew. “A general approach to adding differential privacy to iterative training procedures.” arXiv preprint arXiv:1812.06210 (2018). [10] Lecuyer, Mathias, et al. “Certified robustness to adversarial examples with differential privacy.” arXiv preprint arXiv:1802.03471 (2018). [11] Hafezi, Reza, Jamal Shahrabi, and Esmaeil Hadavandi. “A bat-neural network multi-agent system (BNNMAS) for stock price prediction: Case study of DAX stock price.” Applied Soft Computing, 29 (2015): 196-210. [12] Chang, Pei-Chann, Chin-Yuan Fan, and Chen-Hao Liu. “Integrating a piecewise linear representation method and a neural network model for stock trading points prediction.” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 39.1 (2008): 80-92. [13] Gers, Felix A., Nicol N. Schraudolph, and Jürgen Schmidhuber. “Learning precise timing with LSTM recurrent networks.” Journal of Machine Learning Research 3.Aug (2002): 115-143. [14] Qin, Yao, et al. “A dual-stage attention-based recurrent neural network for time series prediction.” arXiv preprint arXiv:1704.02971 (2017). [15] Malhotra, Pankaj, et al. “Long short term memory networks for anomaly detection in time series.” Proceedings. 
Presses universitaires de Louvain, 2015. [16] Sak, Haşim, Andrew Senior, and Françoise Beaufays. “Long short-term memory recurrent neural network architectures for large scale acoustic modeling.” Fifteenth annual conference of the international speech communication association, 2014. [17] Kingma, Diederik P., and Jimmy Ba. “Adam: A method for stochastic optimization.” arXiv preprint arXiv:1412.6980 (2014). [18] Box, George EP, et al. Time series analysis: forecasting and control. John Wiley & Sons, 2015. [19] Pang, Bo, and Lillian Lee. “Opinion mining and sentiment analysis.” Foundations and Trends in Information Retrieval 2.1–2 (2008): 1-135. [20] Cambria, Erik. “Affective computing and sentiment analysis.” IEEE Intelligent Systems 31.2 (2016): 102-107. [21] Dwork C, Lei J. Differential privacy and robust statistics//STOC. 2009, 9: 371-380. [22] X. Li, Y. Li, Y. Zhan, and X.-Y. Liu. “Optimistic bull or pessimistic bear: adaptive deep reinforcement learning for stock portfolio allocation.” in Proceedings of the 36th International Conference on Machine Learning, 2019.
No
2e1ededb7c8460169cf3c38e6cde6de402c1e720
2e1ededb7c8460169cf3c38e6cde6de402c1e720_0
Q: What is the prediction accuracy of the model? Text: Introduction Stock prediction is crucial for quantitative analysts and investment companies. Stocks' trends, however, are affected by a lot of factors such as interest rates, inflation rates and financial news [12]. To predict stock prices accurately, one must use these variable information. In particular, in the banking industry and financial services, analysts' armies are dedicated to pouring over, analyzing, and attempting to quantify qualitative data from news. A large amount of stock trend information is extracted from the large amount of text and quantitative information that is involved in the analysis. Investors may judge on the basis of technical analysis, such as charts of a company, market indices, and on textual information such as news blogs or newspapers. It is however difficult for investors to analyze and predict market trends according to all of these information [22]. A lot of artificial intelligence approaches have been investigated to automatically predict those trends [3]. For instance, investment simulation analysis with artificial markets or stock trend analysis with lexical cohesion based metric of financial news' sentiment polarity. Quantitative analysis today is heavily dependent on data. However, the majority of such data is unstructured text that comes from sources like financial news articles. The challenge is not only the amount of data that are involved, but also the kind of language that is used in them to express sentiments, which means emoticons. Sifting through huge volumes of this text data is difficult as well as time-consuming. It also requires a great deal of resources and expertise to analyze all of that [4]. To solve the above problem, in this paper we use sentiment analysis to extract information from textual information. Sentiment analysis is the automated process of understanding an opinion about a given subject from news articles [5]. The analyzed data quantifies reactions or sentiments of the general public toward people, ideas or certain products and reveal the information's contextual polarity. Sentiment analysis allows us to understand if newspapers are talking positively or negatively about the financial market, get key insights about the stock's future trend market. We use valence aware dictionary and sentiment reasoner (VADER) to extract sentiment scores. VADER is a lexicon and rule-based sentiment analysis tool attuned to sentiments that are expressed in social media specifically [6]. VADER has been found to be quite successful when dealing with NY Times editorials and social media texts. This is because VADER not only tells about the negativity score and positively but also tells us about how positive or negative a sentiment is. However, news reports are not all objective. We may increase bias because of some non-objective reports, if we rely on the information that is extracted from the news for prediction fully. Therefore, in order to enhance the prediction model's robustness, we will adopt differential privacy (DP) method. DP is a system for sharing information about a dataset publicly by describing groups' patterns within the dataset while withholding information about individuals in the dataset. DP can be achieved if the we are willing to add random noise to the result. For example, rather than simply reporting the sum, we can inject noise from a Laplace or gaussian distribution, producing a result that’s not quite exact, that masks the contents of any given row. 
In the last several years a promising approach to private data analysis has emerged, based on DP, which ensures that an analysis outcome is "roughly as likely" to occur independent of whether any individual opts in to, or to opts out of, the database. In consequence, any one individual's specific data can never greatly affect the results. General techniques for ensuring DP have now been proposed, and a lot of datamining tasks can be carried out in a DP method, frequently with very accurate results [21]. We proposed a DP-LSTM neural network, which increase the accuracy of prediction and robustness of model at the same time. The remainder of the paper is organized as follows. In Section 2, we introduce stock price model, the sentiment analysis and differential privacy method. In Section 3, we develop the different privacy-inspired LSTM (DP-LSTM) deep neural network and present the training details. Prediction results are provided in Section 4. Section 5 concludes the paper. Problem Statement In this section, we first introduce the background of the stock price model, which is based on the autoregressive moving average (ARMA) model. Then, we present the sentiment analysis details of the financial news and introduce how to use them to improve prediction performance. At last, we introduce the differential privacy framework and the loss function. Problem Statement ::: ARMA Model The ARMA model, which is one of the most widely used linear models in time series prediction [17], where the future value is assumed as a linear combination of the past errors and past values. ARMA is used to set the stock midterm prediction problem up. Let ${X}_t^\text{A}$ be the variable based on ARMA at time $t$, then we have where $X_{t-i}$ denotes the past value at time $t-i$; $\epsilon _{t}$ denotes the random error at time $t$; $\phi _i$ and $\psi _j$ are the coefficients; $\mu $ is a constant; $p$ and $q$ are integers that are often referred to as autoregressive and moving average polynomials, respectively. Problem Statement ::: Sentiment Analysis Another variable highly related to stock price is the textual information from news, whose changes may be a precursor to price changes. In our paper, news refers to a news article's title on a given trading day. It has been used to infer whether an event had informational content and whether investors' interpretations of the information were positive, negative or neutral. We hence use sentiment analysis to identify and extract opinions within a given text. Sentiment analysis aims at gauging the attitude, sentiments, evaluations and emotions of a speaker or writer based on subjectivity's computational treatment in a text [19]-[20]. Figure FIGREF3 shows an example of the sentiment analysis results obtained from financial news titles that were based on VADER. VADER uses a combination of a sentiment lexicon which are generally labelled according to their semantic orientation as either negative or positive. VADER has been found to be quite successful when dealing with news reviews. It is fully open-sourced under the MIT License. The result of VADER represent as sentiment scores, which include the positive, negative and neutral scores represent the proportion of text that falls in these categories. This means all these three scores should add up to 1. Besides, the Compound score is a metric that calculates the sum of all the lexicon ratings which have been normalized between -1(most extreme negative) and +1 (most extreme positive). 
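To make the ARMA recursion described above concrete, here is a tiny NumPy simulation of an ARMA(p, q) process; the coefficients below are arbitrary illustrative values, not fitted to any market data.

```python
import numpy as np

def simulate_arma(n, phi, psi, mu=0.0, sigma=1.0, seed=0):
    """Simulate X_t = mu + sum_i phi_i * X_{t-i} + sum_j psi_j * eps_{t-j} + eps_t."""
    rng = np.random.default_rng(seed)
    p, q = len(phi), len(psi)
    X = np.zeros(n)
    eps = rng.normal(scale=sigma, size=n)
    for t in range(n):
        ar = sum(phi[i] * X[t - 1 - i] for i in range(p) if t - 1 - i >= 0)
        ma = sum(psi[j] * eps[t - 1 - j] for j in range(q) if t - 1 - j >= 0)
        X[t] = mu + ar + ma + eps[t]
    return X

# Arbitrary illustrative coefficients for an ARMA(2, 1) process over 121 steps.
series = simulate_arma(n=121, phi=[0.6, 0.2], psi=[0.3], mu=0.5)
print(series[:5])
```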
Figure FIGREF5 shows the positive and negative wordcloud, which is an intuitive analysis of the number of words in the news titles. Problem Statement ::: Sentiment-ARMA Model and Loss Function To take the sentiment analysis results of the financial news into account, we introduce the sentiment-ARMA model as follows where $\alpha $ and $\lambda $ are weighting factors; $c$ is a constant; and $f_2(\cdot )$ is similar to $f_1(\cdot )$ in the ARMA model (DISPLAY_FORM2) and is used to describe the prediction problem. In this paper, the LSTM neural network is used to predict the stock price, the input data is the previous stock price and the sentiment analysis results. Hence, the sentiment based LSTM neural network (named sentiment-LSTM) is aimed to minimize the following loss function: where $T$ denotes the number of prediction time slots, i.e., $t = 1,...,p$ are the observations (training input data), $t = p+1,...,p+T$ are the predicts (training output data); and $\hat{X}_t$ is given in (DISPLAY_FORM7). Problem Statement ::: Overview of LSTM Denote $\mathcal {X}_t^{\text{train}} = \lbrace X_{t-i},S_{t-i}\rbrace _{i=1}^p$ as the training input data. Figure FIGREF10 shows the LSTM's structure network, which comprises one or more hidden layers, an output layer and an input layer [16]. LSTM networks' main advantage is that the hidden layer comprises memory cells. Each memory cell recurrently has a core self-connected linear unit called “ Constant Error Carousel (CEC)” [13], which provides short-term memory storage and has three gates: Input gate, which controls the information from a new input to the memory cell, is given by where $h_{t-1}$ is the hidden state at the time step $t-1$; $i_t$ is the output of the input gate layer at the time step $t$; $\hat{c}_t$ is the candidate value to be added to the output at the time step $t$; $b_i$ and $b_c$ are biases of the input gate layer and the candidate value computation, respectively; $W_i$ and $W_c$ are weights of the input gate and the candidate value computation, respectively; and $\sigma (x) = 1/(1+e^{-x})$ is the pointwise nonlinear activation function. Forget gate, which controls the limit up to which a value is saved in the memory, is given by where $f_t$ is the forget state at the time step $t$, $W_f$ is the weight of the forget gate; and $b_f$ is the bias of the forget gate. Output gate, which controls the information output from the memory cell, is given by where new cell states $c_t$ are calculated based on the results of the previous two steps; $o_t$ is the output at the time step $t$; $W_o$ is the weight of the output gate; and $b_o$ is the bias of the output gate [14]. Problem Statement ::: Definition of Differential Privacy Differential privacy is one of privacy's most popular definitions today, which is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. It intuitively requires that the mechanism that outputs information about an underlying dataset is robust to one sample's any change, thus protecting privacy. A mechanism ${f}$ is a random function that takes a dataset $\mathcal {N}$ as input, and outputs a random variable ${f}(\mathcal {N})$. For example, suppose $\mathcal {N}$ is a news articles dataset, then the function that outputs compound score of articles in $\mathcal {N}$ plus noise from the standard normal distribution is a mechanism [7]. 
Although differential privacy was originally developed to facilitate secure analysis over sensitive data, it can also enhance the robustness of the data. Note that finance data, especially news data and stock data, is unstable with a lot of noise, with a more robust data the accuracy of prediction will be improved. Since we predict stock price by fusing news come from different sources, which might include fake news. Involving differential privacy in the training to improve the robustness of the finance news is meaningful. Training DP-LSTM Neural Network It is known that it is risky to predict stocks by considering news factors, because news can't guarantee full notarization and objectivity, many times extreme news will have a big impact on prediction models. To solve this problem, we consider entering the idea of the differential privacy when training. In this section, our DP-LSTM deep neural network training strategy is presented. The input data consists of three components: stock price, sentiment analysis compound score and noise. Training DP-LSTM Neural Network ::: Data Preprocessing and Normalization ::: Data Preprocessing The data for this project are two parts, the first part is the historical S&P 500 component stocks, which are downloaded from the Yahoo Finance. We use the data over the period of from 12/07/2017 to 06/01/2018. The second part is the news article from financial domain are collected with the same time period as stock data. Since our paper illustrates the relationship between the sentiment of the news articles and stocks' price. Hence, only news article from financial domain are collected. The data is mainly taken from Webhose archived data, which consists of 306242 news articles present in JSON format, dating from December 2017 up to end of June 2018. The former 85% of the dataset is used as the training data and the remainder 15% is used as the testing data. The News publishers for this data are CNBC.com, Reuters.com, WSJ.com, Fortune.com. The Wall Street Journal is one of the largest newspapers in the United States, which coverage of breaking news and current headlines from the US and around the world include top stories, photos, videos, detailed analysis and in-depth thoughts; CNBC primarily carries business day coverage of U.S. and international financial markets, which following the end of the business day and on non-trading days; Fortune is an American multinational business magazine; Reuters is an international news organization. We preprocess the raw article body and use NLTK sentiment package alence Aware Dictionary and Sentiment Reasoner (VADER) to extract sentiment scores. The stocks with missing data are deleted, and the dataset we used eventually contains 451 stocks and 4 news resources (CNBC.com, Reuters.com, WSJ.comFortune.com.). Each stock records the adjust close price and news compound scores of 121 trading days. A rolling window with size 10 is used to separate data, that is, We predict the stock price of the next trading day based on historical data from the previous 10 days, hence resulting in a point-by-point prediction [15]. In particular, the training window is initialized with all real training data. Then we shift the window and add the next real point to the last point of training window to predict the next point and so forth. Then, according to the length of the window, the training data is divided into 92 sets of training input data (each set length 10) and training output data (each set length 1). 
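One plausible way to turn the scraped article-level scores into the per-source daily compound scores used as model inputs is a pandas aggregation like the following; the column names and toy rows are assumptions for illustration, not the paper's actual preprocessing code.

```python
import pandas as pd

# Toy article-level records (made up): one row per news title with its VADER
# compound score, publication date and source.
articles = pd.DataFrame({
    "date":     ["2018-01-02", "2018-01-02", "2018-01-02", "2018-01-03"],
    "source":   ["WSJ", "WSJ", "Reuters", "CNBC"],
    "compound": [0.42, -0.10, 0.05, -0.30],
})

# Mean compound score per trading day and news source, reshaped so that each
# day has one column per source (WSJ, CNBC, Fortune, Reuters).
daily = (articles.groupby(["date", "source"])["compound"]
                 .mean()
                 .unstack("source"))
print(daily)
```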
The testing data is divided into input and output data of 9 windows (see Figure FIGREF20). Training DP-LSTM Neural Network ::: Data Preprocessing and Normalization ::: Normalization To detect stock price pattern, it is necessary to normalize the stock price data. Since the LSTM neural network requires the stock patterns during training, we use “min-max” normalization method to reform dataset, which keeps the pattern of the data [11], as follow: where $X_{t}^{n}$ denotes the data after normalization. Accordingly, de-normalization is required at the end of the prediction process to get the original price, which is given by where $\hat{X}_{t}^{n}$ denotes the predicted data and $\hat{X}_{t}$ denotes the predicted data after de-normalization. Note that compound score is not normalized, since the compound score range from -1 to 1, which means all the compound score data has the same scale, so it is not require the normalization processing. Training DP-LSTM Neural Network ::: Adding Noise We consider the differential privacy as a method to improve the robustness of the LSTM predictions [8]. We explore the interplay between machine learning and differential privacy, and found that differential privacy has several properties that make it particularly useful in application such as robustness to extract textual information [9]. The robustness of textual information means that accuracy is guaranteed to be unaffected by certain false information [10]. The input data of the model has 5 dimensions, which are the stock price and four compound scores as $(X^t, S_1^t, S_2^t, S_3^t, S_4^t), t=1,...,T$, where $X^t$ represents the stock price and $S_i^t,~i=1,...,4$ respectively denote the mean compound score calculated from WSJ, CNBC, Fortune and Reuters. According to the process of differential privacy, we add Gaussian noise with different variances to the news according to the variance of the news, i.e., the news compound score after adding noise is given by where $\text{var}(\cdot )$ is the variance operator, $\lambda $ is a weighting factor and $\mathcal {N}(\cdot )$ denotes the random Gaussian process with zero mean and variance $\lambda \text{var}(S_i)$. We used python to crawl the news from the four sources of each trading day, perform sentiment analysis on the title of the news, and get the compound score. After splitting the data into training sets and test sets, we separately add noise to each of four news sources of the training set, then, for $n$-th stock, four sets of noise-added data $(X^n_t, {\widetilde{S}^t_1}, S^t_2, S^t_3, S^t_4)$, $(X^n_t, {S^t_1}, \widetilde{S}^t_2, S^t_3, S^t_4)$, $(X^n_t, { S^t_1}, S^t_2, \widetilde{S}^t_3, S^t_4)$, $(X^n_t, { S^t_1}, S^t_2, S^t_3, \widetilde{S}^t_4)$ are combined into a new training data through a rolling window. The stock price is then combined with the new compound score training data as input data for our DP-LSTM neural network. Training DP-LSTM Neural Network ::: Training Setting The LSTM model in figure FIGREF10 has six layers, followed by an LSTM layer, a dropout layer, an LSTM layer, an LSTM layer, a dropout layer, a dense layer, respectively. The dropout layers (with dropout rate 0.2) prevent the network from overfitting. The dense layer is used to reshape the output. Since a network will be difficult to train if it contains a large number of LSTM layers [16], we use three LSTM layers here. 
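To make the preprocessing steps above concrete — min-max normalizing the price and building the four training copies in which Gaussian noise with variance $\lambda \text{var}(S_i)$ is added to one news source at a time — here is a rough NumPy sketch; the weighting factor and the toy data are placeholders, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def minmax_normalize(x):
    """Min-max normalization; keep (min, max) so predictions can be de-normalized."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo), (lo, hi)

def noisy_copies(price, scores, lam=0.01):
    """Return four copies of the (price, scores) data, each with Gaussian noise
    of variance lam * var(S_i) added to exactly one of the four news sources."""
    copies = []
    for i in range(scores.shape[1]):
        noisy = scores.copy()
        noisy[:, i] += rng.normal(scale=np.sqrt(lam * scores[:, i].var()),
                                  size=len(scores))
        copies.append(np.column_stack([price, noisy]))
    return copies

# Toy data: 121 days of one stock's price and 4 per-source compound scores.
price = np.linspace(100.0, 110.0, 121)
scores = rng.uniform(-1, 1, size=(121, 4))         # WSJ, CNBC, Fortune, Reuters
price_n, _ = minmax_normalize(price)
augmented = noisy_copies(price_n, scores, lam=0.01)
print(len(augmented), augmented[0].shape)           # 4 copies, each (121, 5)
```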
For training, the loss function is the mean square error (MSE), which measures the average of the squared distances between our target variable and the predicted value. In addition, Adam [17] is used as the optimizer, since it is straightforward to implement, computationally efficient and well suited to problems with large data sets and many parameters. There are many methods and algorithms for implementing sentiment analysis systems. In this paper, we use rule-based systems that perform sentiment analysis based on a set of manually crafted rules. Usually, rule-based approaches define a set of rules in some kind of scripting language that identify subjectivity, polarity, or the subject of an opinion. We use VADER, a simple rule-based model for general sentiment analysis. Performance Evaluation In this section, we validate our DP-LSTM based on the S&P 500 stocks. We calculate the mean prediction accuracy (MPA) to evaluate the proposed methods, which is defined as follows, where $X_{t,\ell }$ is the real stock price of the $\ell $-th stock on the $t$-th day, $L$ is the number of stocks and $\hat{X}_{t,\ell }$ is the corresponding prediction result. Figure FIGREF27 plots the average score for all news on the same day over the period. The compound score fluctuates between -0.3 and 0.15, indicating an overall neutral to slightly negative sentiment. The Positive, Negative and Neutral scores represent the proportion of text that falls into these categories. The Compound score is a metric that calculates the sum of all the lexicon ratings, normalized between -1 (most extreme negative) and +1 (most extreme positive). Figure FIGREF29 shows the MPAs of the proposed DP-LSTM and the vanilla LSTM for comparison. In Table TABREF30, we give the mean MPA results for the predicted prices, which show that the accuracy of DP-LSTM is 0.32% higher than that of the LSTM with news. This result means that the DP framework makes the prediction results more accurate and robust. Note that the results are obtained by running many trials, since we train stocks separately and predict each price individually due to the different patterns and scales of stock prices; this adds up to 451 runs in total. The results shown in Table TABREF30 are the average of these 451 runs. Furthermore, we provide results for the 9 testing durations over the period in Figure FIGREF29. The performance of our DP-LSTM is always better than that of the LSTM with news. Based on the sentiment-ARMA model and the addition of noise during training, the proposed DP-LSTM is more robust, and the investment risk based on these prediction results is reduced. In Figure FIGREF31, we can see that the prediction results of DP-LSTM are closer to the real S&P 500 index price line than those of the other methods. The two lines (the prediction results of LSTM with news and LSTM without news) almost coincide in Figure FIGREF31. The subtle differences can be seen in Table TABREF32: DP-LSTM is far ahead, and LSTM with news is slightly better than LSTM without news. Conclusion In this paper, we integrated a deep neural network with a well-known NLP model (VADER) to identify and extract opinions within a given text, combining the stock adjusted close price and the compound score to reduce investment risk. We first proposed a sentiment-ARMA model to represent the stock price, which incorporates influential variables (price and news) based on the ARMA model.
Then, a DP-LSTM deep neural network was proposed to predict the stock price according to the sentiment-ARMA model; it combines the LSTM, the compound scores of news articles and the differential privacy method. News is not always objective, and if we rely fully on the information extracted from the news for prediction, we may introduce bias because of non-objective reports. The DP-LSTM therefore enhances the robustness of the prediction model. Experimental results based on the S&P 500 stocks show that the proposed DP-LSTM network can predict the stock price accurately with robust performance, especially for the S&P 500 index, which reflects the general trend of the market. The S&P 500 prediction results show that the differential privacy method can significantly improve both robustness and accuracy. References [1] X. Li, Y. Li, X.-Y. Liu, D. Wang, “Risk management via anomaly circumvent: mnemonic deep learning for midterm stock prediction.” in Proceedings of 2nd KDD Workshop on Anomaly Detection in Finance (Anchorage ’19), 2019. [2] P. Chang, C. Fan, and C. Liu, “Integrating a piece-wise linear representation method and a neural network model for stock trading points prediction.” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 39, 1 (2009), 80–92. [3] Akita, Ryo, et al. “Deep learning for stock prediction using numerical and textual information.” IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS). IEEE, 2016. [4] Li, Xiaodong, et al. “Does summarization help stock prediction? A news impact analysis.” IEEE Intelligent Systems 30.3 (2015): 26-34. [5] Ding, Xiao, et al. “Deep learning for event-driven stock prediction.” Twenty-fourth International Joint Conference on Artificial Intelligence. 2015. [6] Hutto, Clayton J., and Eric Gilbert. “Vader: A parsimonious rule-based model for sentiment analysis of social media text.” Eighth International AAAI Conference on Weblogs and Social Media, 2014. [7] Ji, Zhanglong, Zachary C. Lipton, and Charles Elkan. “Differential privacy and machine learning: a survey and review.” arXiv preprint arXiv:1412.7584 (2014). [8] Abadi, Martin, et al. “Deep learning with differential privacy.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, 2016. [9] McMahan, H. Brendan, and Galen Andrew. “A general approach to adding differential privacy to iterative training procedures.” arXiv preprint arXiv:1812.06210 (2018). [10] Lecuyer, Mathias, et al. “Certified robustness to adversarial examples with differential privacy.” arXiv preprint arXiv:1802.03471 (2018). [11] Hafezi, Reza, Jamal Shahrabi, and Esmaeil Hadavandi. “A bat-neural network multi-agent system (BNNMAS) for stock price prediction: Case study of DAX stock price.” Applied Soft Computing, 29 (2015): 196-210. [12] Chang, Pei-Chann, Chin-Yuan Fan, and Chen-Hao Liu. “Integrating a piecewise linear representation method and a neural network model for stock trading points prediction.” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 39.1 (2008): 80-92. [13] Gers, Felix A., Nicol N. Schraudolph, and Jürgen Schmidhuber. “Learning precise timing with LSTM recurrent networks.” Journal of Machine Learning Research 3.Aug (2002): 115-143. [14] Qin, Yao, et al. “A dual-stage attention-based recurrent neural network for time series prediction.” arXiv preprint arXiv:1704.02971 (2017). [15] Malhotra, Pankaj, et al. “Long short term memory networks for anomaly detection in time series.” Proceedings.
Presses universitaires de Louvain, 2015. [16] Sak, Haşim, Andrew Senior, and Françoise Beaufays. “Long short-term memory recurrent neural network architectures for large scale acoustic modeling.” Fifteenth annual conference of the international speech communication association, 2014. [17] Kingma, Diederik P., and Jimmy Ba. “Adam: A method for stochastic optimization.” arXiv preprint arXiv:1412.6980 (2014). [18] Box, George EP, et al. Time series analysis: forecasting and control. John Wiley & Sons, 2015. [19] Pang, Bo, and Lillian Lee. “Opinion mining and sentiment analysis.” Foundations and Trends in Information Retrieval 2.1–2 (2008): 1-135. [20] Cambria, Erik. “Affective computing and sentiment analysis.” IEEE Intelligent Systems 31.2 (2016): 102-107. [21] Dwork, Cynthia, and Jing Lei. “Differential privacy and robust statistics.” Proceedings of the ACM Symposium on Theory of Computing (STOC), 2009: 371-380. [22] X. Li, Y. Li, Y. Zhan, and X.-Y. Liu. “Optimistic bull or pessimistic bear: adaptive deep reinforcement learning for stock portfolio allocation.” in Proceedings of the 36th International Conference on Machine Learning, 2019.
mean prediction accuracy 0.99582651 S&P 500 Accuracy 0.99582651
3b391cd58cf6a61fe8c8eff2095e33794e80f0e3
3b391cd58cf6a61fe8c8eff2095e33794e80f0e3_0
Q: What is the dataset used in the paper? Text: Introduction Stock prediction is crucial for quantitative analysts and investment companies. Stock trends, however, are affected by many factors such as interest rates, inflation rates and financial news [12]. To predict stock prices accurately, one must make use of this variable information. In particular, in the banking industry and financial services, armies of analysts are dedicated to poring over, analyzing, and attempting to quantify qualitative data from news. A large amount of stock trend information is extracted from the large volume of text and quantitative information involved in the analysis. Investors may judge on the basis of technical analysis, such as charts of a company and market indices, and on textual information such as news blogs or newspapers. It is, however, difficult for investors to analyze and predict market trends from all of this information [22]. Many artificial intelligence approaches have been investigated to predict those trends automatically [3], for instance, investment simulation analysis with artificial markets, or stock trend analysis with a lexical-cohesion-based metric of the sentiment polarity of financial news. Quantitative analysis today is heavily dependent on data. However, the majority of such data is unstructured text that comes from sources like financial news articles. The challenge is not only the amount of data involved, but also the kind of language used in it to express sentiments, including emoticons. Sifting through huge volumes of this text data is difficult as well as time-consuming, and it requires a great deal of resources and expertise to analyze all of it [4]. To solve the above problem, in this paper we use sentiment analysis to extract information from textual data. Sentiment analysis is the automated process of understanding an opinion about a given subject from news articles [5]. The analyzed data quantifies reactions or sentiments of the general public toward people, ideas or certain products and reveals the information's contextual polarity. Sentiment analysis allows us to understand whether newspapers are talking positively or negatively about the financial market and to obtain key insights about a stock's future trend. We use the Valence Aware Dictionary and Sentiment Reasoner (VADER) to extract sentiment scores. VADER is a lexicon- and rule-based sentiment analysis tool attuned specifically to sentiments expressed in social media [6]. VADER has been found to be quite successful when dealing with NY Times editorials and social media texts, because it not only reports whether a sentiment is negative or positive but also how negative or positive it is. However, news reports are not all objective. If we rely fully on the information extracted from the news for prediction, we may introduce bias because of some non-objective reports. Therefore, in order to enhance the prediction model's robustness, we adopt the differential privacy (DP) method. DP is a system for sharing information about a dataset publicly by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. DP can be achieved if we are willing to add random noise to the result. For example, rather than simply reporting a sum, we can inject noise from a Laplace or Gaussian distribution, producing a result that is not quite exact but that masks the contents of any given row.
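To make the noisy-sum example above concrete, here is a minimal sketch of the Laplace mechanism. It is a generic illustration rather than anything specified in the paper; the sensitivity and epsilon values are illustrative assumptions.

import numpy as np

def noisy_sum(values, epsilon=1.0, sensitivity=1.0):
    # Laplace mechanism: report the true sum plus Laplace noise whose scale
    # is sensitivity / epsilon, so any single row's contribution is masked
    true_sum = float(np.sum(values))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_sum + noise

# example: a column of per-row values, each assumed to lie in [0, 1]
rows = np.array([0.2, 0.9, 0.4, 0.7])
print(noisy_sum(rows, epsilon=0.5))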
In the last several years a promising approach to private data analysis has emerged, based on DP, which ensures that an analysis outcome is "roughly as likely" to occur independently of whether any individual opts in to, or opts out of, the database. In consequence, any one individual's specific data can never greatly affect the results. General techniques for ensuring DP have now been proposed, and many data-mining tasks can be carried out in a differentially private manner, frequently with very accurate results [21]. We propose a DP-LSTM neural network, which increases the prediction accuracy and the robustness of the model at the same time. The remainder of the paper is organized as follows. In Section 2, we introduce the stock price model, the sentiment analysis and the differential privacy method. In Section 3, we develop the differential-privacy-inspired LSTM (DP-LSTM) deep neural network and present the training details. Prediction results are provided in Section 4. Section 5 concludes the paper. Problem Statement In this section, we first introduce the background of the stock price model, which is based on the autoregressive moving average (ARMA) model. Then, we present the sentiment analysis details of the financial news and introduce how to use them to improve prediction performance. Finally, we introduce the differential privacy framework and the loss function. Problem Statement ::: ARMA Model The ARMA model is one of the most widely used linear models in time series prediction [18], in which the future value is assumed to be a linear combination of past errors and past values. ARMA is used to formulate the midterm stock prediction problem. Let ${X}_t^\text{A}$ be the variable based on ARMA at time $t$; then we have the following, where $X_{t-i}$ denotes the past value at time $t-i$; $\epsilon _{t}$ denotes the random error at time $t$; $\phi _i$ and $\psi _j$ are the coefficients; $\mu $ is a constant; and $p$ and $q$ are integers that are often referred to as the orders of the autoregressive and moving average polynomials, respectively. Problem Statement ::: Sentiment Analysis Another variable highly related to the stock price is the textual information from news, whose changes may be a precursor to price changes. In our paper, news refers to a news article's title on a given trading day. It has been used to infer whether an event had informational content and whether investors' interpretations of the information were positive, negative or neutral. We hence use sentiment analysis to identify and extract opinions within a given text. Sentiment analysis aims at gauging the attitude, sentiments, evaluations and emotions of a speaker or writer based on the computational treatment of subjectivity in a text [19], [20]. Figure FIGREF3 shows an example of the sentiment analysis results obtained from financial news titles using VADER. VADER uses a sentiment lexicon whose entries are generally labelled according to their semantic orientation as either negative or positive. VADER has been found to be quite successful when dealing with news reviews, and it is fully open-sourced under the MIT License. The results of VADER are represented as sentiment scores: the positive, negative and neutral scores represent the proportion of the text that falls into each of these categories, so these three scores add up to 1. In addition, the Compound score is a metric that calculates the sum of all the lexicon ratings, normalized to lie between -1 (most extreme negative) and +1 (most extreme positive).
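As a brief illustration of how the compound score described above can be obtained, here is a minimal VADER usage sketch with NLTK; the headline is an invented example, not one drawn from the dataset.

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the VADER lexicon
analyzer = SentimentIntensityAnalyzer()

# invented headline for illustration only
scores = analyzer.polarity_scores("Tech stocks rally as earnings beat expectations")
# scores is a dict like {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
print(scores["compound"])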
Figure FIGREF5 shows the positive and negative word clouds, which give an intuitive view of the word frequencies in the news titles. Problem Statement ::: Sentiment-ARMA Model and Loss Function To take the sentiment analysis results of the financial news into account, we introduce the sentiment-ARMA model as follows, where $\alpha $ and $\lambda $ are weighting factors; $c$ is a constant; and $f_2(\cdot )$ is similar to $f_1(\cdot )$ in the ARMA model (DISPLAY_FORM2) and is used to describe the prediction problem. In this paper, the LSTM neural network is used to predict the stock price; the input data are the previous stock prices and the sentiment analysis results. Hence, the sentiment-based LSTM neural network (named sentiment-LSTM) aims to minimize the following loss function, where $T$ denotes the number of prediction time slots, i.e., $t = 1,...,p$ are the observations (training input data) and $t = p+1,...,p+T$ are the predictions (training output data); and $\hat{X}_t$ is given in (DISPLAY_FORM7). Problem Statement ::: Overview of LSTM Denote $\mathcal {X}_t^{\text{train}} = \lbrace X_{t-i},S_{t-i}\rbrace _{i=1}^p$ as the training input data. Figure FIGREF10 shows the structure of the LSTM network, which comprises an input layer, one or more hidden layers and an output layer [16]. The main advantage of LSTM networks is that the hidden layer comprises memory cells. Each memory cell has a recurrently self-connected linear unit at its core, called the “Constant Error Carousel (CEC)” [13], which provides short-term memory storage, and has three gates: The input gate, which controls the information from a new input to the memory cell, is given by the following, where $h_{t-1}$ is the hidden state at time step $t-1$; $i_t$ is the output of the input gate layer at time step $t$; $\hat{c}_t$ is the candidate value to be added to the output at time step $t$; $b_i$ and $b_c$ are the biases of the input gate layer and the candidate value computation, respectively; $W_i$ and $W_c$ are the weights of the input gate and the candidate value computation, respectively; and $\sigma (x) = 1/(1+e^{-x})$ is the pointwise nonlinear activation function. The forget gate, which controls how long a value is kept in memory, is given by the following, where $f_t$ is the forget state at time step $t$; $W_f$ is the weight of the forget gate; and $b_f$ is the bias of the forget gate. The output gate, which controls the information output from the memory cell, is given by the following, where the new cell state $c_t$ is calculated based on the results of the previous two steps; $o_t$ is the output at time step $t$; $W_o$ is the weight of the output gate; and $b_o$ is the bias of the output gate [14]. Problem Statement ::: Definition of Differential Privacy Differential privacy is one of the most popular definitions of privacy today. It is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. Intuitively, it requires that the mechanism that outputs information about an underlying dataset be robust to any change of a single sample, thus protecting privacy. A mechanism ${f}$ is a random function that takes a dataset $\mathcal {N}$ as input, and outputs a random variable ${f}(\mathcal {N})$. For example, suppose $\mathcal {N}$ is a dataset of news articles; then the function that outputs the compound scores of the articles in $\mathcal {N}$ plus noise from the standard normal distribution is a mechanism [7].
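To make the mechanism example above concrete, here is a minimal sketch of such a compound-score mechanism. The function name and the sample scores are illustrative assumptions, not from the paper; the sketch simply adds standard normal noise to each article's compound score, as the text describes.

import numpy as np

def compound_score_mechanism(compound_scores):
    # A mechanism f: takes a dataset (here, the articles' compound scores)
    # and outputs a randomized function of it — each score plus N(0, 1) noise.
    scores = np.asarray(compound_scores, dtype=float)
    return scores + np.random.normal(loc=0.0, scale=1.0, size=scores.shape)

# invented scores for illustration only
print(compound_score_mechanism([0.42, -0.17, 0.05]))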
historical S&P 500 component stocks 306242 news articles
4d8ca3f7aa65dcb42eba72acf3584d37b416b19c
4d8ca3f7aa65dcb42eba72acf3584d37b416b19c_0
Q: How does the differential privacy mechanism work?
In the last several years a promising approach to private data analysis has emerged, based on DP, which ensures that an analysis outcome is "roughly as likely" to occur independent of whether any individual opts in to, or to opts out of, the database. In consequence, any one individual's specific data can never greatly affect the results. General techniques for ensuring DP have now been proposed, and a lot of datamining tasks can be carried out in a DP method, frequently with very accurate results [21]. We proposed a DP-LSTM neural network, which increase the accuracy of prediction and robustness of model at the same time. The remainder of the paper is organized as follows. In Section 2, we introduce stock price model, the sentiment analysis and differential privacy method. In Section 3, we develop the different privacy-inspired LSTM (DP-LSTM) deep neural network and present the training details. Prediction results are provided in Section 4. Section 5 concludes the paper. Problem Statement In this section, we first introduce the background of the stock price model, which is based on the autoregressive moving average (ARMA) model. Then, we present the sentiment analysis details of the financial news and introduce how to use them to improve prediction performance. At last, we introduce the differential privacy framework and the loss function. Problem Statement ::: ARMA Model The ARMA model, which is one of the most widely used linear models in time series prediction [17], where the future value is assumed as a linear combination of the past errors and past values. ARMA is used to set the stock midterm prediction problem up. Let ${X}_t^\text{A}$ be the variable based on ARMA at time $t$, then we have where $X_{t-i}$ denotes the past value at time $t-i$; $\epsilon _{t}$ denotes the random error at time $t$; $\phi _i$ and $\psi _j$ are the coefficients; $\mu $ is a constant; $p$ and $q$ are integers that are often referred to as autoregressive and moving average polynomials, respectively. Problem Statement ::: Sentiment Analysis Another variable highly related to stock price is the textual information from news, whose changes may be a precursor to price changes. In our paper, news refers to a news article's title on a given trading day. It has been used to infer whether an event had informational content and whether investors' interpretations of the information were positive, negative or neutral. We hence use sentiment analysis to identify and extract opinions within a given text. Sentiment analysis aims at gauging the attitude, sentiments, evaluations and emotions of a speaker or writer based on subjectivity's computational treatment in a text [19]-[20]. Figure FIGREF3 shows an example of the sentiment analysis results obtained from financial news titles that were based on VADER. VADER uses a combination of a sentiment lexicon which are generally labelled according to their semantic orientation as either negative or positive. VADER has been found to be quite successful when dealing with news reviews. It is fully open-sourced under the MIT License. The result of VADER represent as sentiment scores, which include the positive, negative and neutral scores represent the proportion of text that falls in these categories. This means all these three scores should add up to 1. Besides, the Compound score is a metric that calculates the sum of all the lexicon ratings which have been normalized between -1(most extreme negative) and +1 (most extreme positive). 
Figure FIGREF5 shows the positive and negative wordcloud, which is an intuitive analysis of the number of words in the news titles. Problem Statement ::: Sentiment-ARMA Model and Loss Function To take the sentiment analysis results of the financial news into account, we introduce the sentiment-ARMA model as follows where $\alpha $ and $\lambda $ are weighting factors; $c$ is a constant; and $f_2(\cdot )$ is similar to $f_1(\cdot )$ in the ARMA model (DISPLAY_FORM2) and is used to describe the prediction problem. In this paper, the LSTM neural network is used to predict the stock price, the input data is the previous stock price and the sentiment analysis results. Hence, the sentiment based LSTM neural network (named sentiment-LSTM) is aimed to minimize the following loss function: where $T$ denotes the number of prediction time slots, i.e., $t = 1,...,p$ are the observations (training input data), $t = p+1,...,p+T$ are the predicts (training output data); and $\hat{X}_t$ is given in (DISPLAY_FORM7). Problem Statement ::: Overview of LSTM Denote $\mathcal {X}_t^{\text{train}} = \lbrace X_{t-i},S_{t-i}\rbrace _{i=1}^p$ as the training input data. Figure FIGREF10 shows the LSTM's structure network, which comprises one or more hidden layers, an output layer and an input layer [16]. LSTM networks' main advantage is that the hidden layer comprises memory cells. Each memory cell recurrently has a core self-connected linear unit called “ Constant Error Carousel (CEC)” [13], which provides short-term memory storage and has three gates: Input gate, which controls the information from a new input to the memory cell, is given by where $h_{t-1}$ is the hidden state at the time step $t-1$; $i_t$ is the output of the input gate layer at the time step $t$; $\hat{c}_t$ is the candidate value to be added to the output at the time step $t$; $b_i$ and $b_c$ are biases of the input gate layer and the candidate value computation, respectively; $W_i$ and $W_c$ are weights of the input gate and the candidate value computation, respectively; and $\sigma (x) = 1/(1+e^{-x})$ is the pointwise nonlinear activation function. Forget gate, which controls the limit up to which a value is saved in the memory, is given by where $f_t$ is the forget state at the time step $t$, $W_f$ is the weight of the forget gate; and $b_f$ is the bias of the forget gate. Output gate, which controls the information output from the memory cell, is given by where new cell states $c_t$ are calculated based on the results of the previous two steps; $o_t$ is the output at the time step $t$; $W_o$ is the weight of the output gate; and $b_o$ is the bias of the output gate [14]. Problem Statement ::: Definition of Differential Privacy Differential privacy is one of privacy's most popular definitions today, which is a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset. It intuitively requires that the mechanism that outputs information about an underlying dataset is robust to one sample's any change, thus protecting privacy. A mechanism ${f}$ is a random function that takes a dataset $\mathcal {N}$ as input, and outputs a random variable ${f}(\mathcal {N})$. For example, suppose $\mathcal {N}$ is a news articles dataset, then the function that outputs compound score of articles in $\mathcal {N}$ plus noise from the standard normal distribution is a mechanism [7]. 
Although differential privacy was originally developed to facilitate secure analysis over sensitive data, it can also enhance the robustness of the data. Note that finance data, especially news data and stock data, is unstable with a lot of noise, with a more robust data the accuracy of prediction will be improved. Since we predict stock price by fusing news come from different sources, which might include fake news. Involving differential privacy in the training to improve the robustness of the finance news is meaningful. Training DP-LSTM Neural Network It is known that it is risky to predict stocks by considering news factors, because news can't guarantee full notarization and objectivity, many times extreme news will have a big impact on prediction models. To solve this problem, we consider entering the idea of the differential privacy when training. In this section, our DP-LSTM deep neural network training strategy is presented. The input data consists of three components: stock price, sentiment analysis compound score and noise. Training DP-LSTM Neural Network ::: Data Preprocessing and Normalization ::: Data Preprocessing The data for this project are two parts, the first part is the historical S&P 500 component stocks, which are downloaded from the Yahoo Finance. We use the data over the period of from 12/07/2017 to 06/01/2018. The second part is the news article from financial domain are collected with the same time period as stock data. Since our paper illustrates the relationship between the sentiment of the news articles and stocks' price. Hence, only news article from financial domain are collected. The data is mainly taken from Webhose archived data, which consists of 306242 news articles present in JSON format, dating from December 2017 up to end of June 2018. The former 85% of the dataset is used as the training data and the remainder 15% is used as the testing data. The News publishers for this data are CNBC.com, Reuters.com, WSJ.com, Fortune.com. The Wall Street Journal is one of the largest newspapers in the United States, which coverage of breaking news and current headlines from the US and around the world include top stories, photos, videos, detailed analysis and in-depth thoughts; CNBC primarily carries business day coverage of U.S. and international financial markets, which following the end of the business day and on non-trading days; Fortune is an American multinational business magazine; Reuters is an international news organization. We preprocess the raw article body and use NLTK sentiment package alence Aware Dictionary and Sentiment Reasoner (VADER) to extract sentiment scores. The stocks with missing data are deleted, and the dataset we used eventually contains 451 stocks and 4 news resources (CNBC.com, Reuters.com, WSJ.comFortune.com.). Each stock records the adjust close price and news compound scores of 121 trading days. A rolling window with size 10 is used to separate data, that is, We predict the stock price of the next trading day based on historical data from the previous 10 days, hence resulting in a point-by-point prediction [15]. In particular, the training window is initialized with all real training data. Then we shift the window and add the next real point to the last point of training window to predict the next point and so forth. Then, according to the length of the window, the training data is divided into 92 sets of training input data (each set length 10) and training output data (each set length 1). 
The testing data is divided into input and output data of 9 windows (see Figure FIGREF20). Training DP-LSTM Neural Network ::: Data Preprocessing and Normalization ::: Normalization To detect stock price pattern, it is necessary to normalize the stock price data. Since the LSTM neural network requires the stock patterns during training, we use “min-max” normalization method to reform dataset, which keeps the pattern of the data [11], as follow: where $X_{t}^{n}$ denotes the data after normalization. Accordingly, de-normalization is required at the end of the prediction process to get the original price, which is given by where $\hat{X}_{t}^{n}$ denotes the predicted data and $\hat{X}_{t}$ denotes the predicted data after de-normalization. Note that compound score is not normalized, since the compound score range from -1 to 1, which means all the compound score data has the same scale, so it is not require the normalization processing. Training DP-LSTM Neural Network ::: Adding Noise We consider the differential privacy as a method to improve the robustness of the LSTM predictions [8]. We explore the interplay between machine learning and differential privacy, and found that differential privacy has several properties that make it particularly useful in application such as robustness to extract textual information [9]. The robustness of textual information means that accuracy is guaranteed to be unaffected by certain false information [10]. The input data of the model has 5 dimensions, which are the stock price and four compound scores as $(X^t, S_1^t, S_2^t, S_3^t, S_4^t), t=1,...,T$, where $X^t$ represents the stock price and $S_i^t,~i=1,...,4$ respectively denote the mean compound score calculated from WSJ, CNBC, Fortune and Reuters. According to the process of differential privacy, we add Gaussian noise with different variances to the news according to the variance of the news, i.e., the news compound score after adding noise is given by where $\text{var}(\cdot )$ is the variance operator, $\lambda $ is a weighting factor and $\mathcal {N}(\cdot )$ denotes the random Gaussian process with zero mean and variance $\lambda \text{var}(S_i)$. We used python to crawl the news from the four sources of each trading day, perform sentiment analysis on the title of the news, and get the compound score. After splitting the data into training sets and test sets, we separately add noise to each of four news sources of the training set, then, for $n$-th stock, four sets of noise-added data $(X^n_t, {\widetilde{S}^t_1}, S^t_2, S^t_3, S^t_4)$, $(X^n_t, {S^t_1}, \widetilde{S}^t_2, S^t_3, S^t_4)$, $(X^n_t, { S^t_1}, S^t_2, \widetilde{S}^t_3, S^t_4)$, $(X^n_t, { S^t_1}, S^t_2, S^t_3, \widetilde{S}^t_4)$ are combined into a new training data through a rolling window. The stock price is then combined with the new compound score training data as input data for our DP-LSTM neural network. Training DP-LSTM Neural Network ::: Training Setting The LSTM model in figure FIGREF10 has six layers, followed by an LSTM layer, a dropout layer, an LSTM layer, an LSTM layer, a dropout layer, a dense layer, respectively. The dropout layers (with dropout rate 0.2) prevent the network from overfitting. The dense layer is used to reshape the output. Since a network will be difficult to train if it contains a large number of LSTM layers [16], we use three LSTM layers here. 
In each LSTM layer, the loss function is the mean square error (MSE), which is the sum of the squared distances between our target variable and the predicted value. In addition, the ADAM [17] is used as optimizer, since it is straightforward to implement, computationally efficient and well suited for problems with large data set and parameters. There are many methods and algorithms to implement sentiment analysis systems. In this paper, we use rule-based systems that perform sentiment analysis based on a set of manually crafted rules. Usually, rule-based approaches define a set of rules in some kind of scripting language that identify subjectivity, polarity, or the subject of an opinion. We use VADER, a simple rule-based model for general sentiment analysis. Performance Evaluation In this section, we validate our DP-LSTM based on the S&P 500 stocks. We calculate the mean prediction accuracy (MPA) to evaluate the proposed methods, which is defined as where $X_{t,\ell }$ is the real stock price of the $\ell $-th stock on the $t$-th day, $L$ is the number of stocks and $\hat{X}_{t,\ell }$ is the corresponding prediction result. Figure FIGREF27 plots the average score for all news on the same day over the period. The compound score is fluctuating between -0.3 and 0.15, indicating an overall neutral to slightly negative sentiment. The Positive, Negative and Neutral scores represent the proportion of text that falls in these categories. The Compound score is a metric that calculates the sum of all the lexicon ratings which have been normalized between -1 (most extreme negative) and +1 (most extreme positive). Figure FIGREF29 shows the $\text{MPAs}$ of the proposed DP-LSTM and vanilla LSTM for comparison. In Table TABREF30, we give the mean MPA results for the prediction prices, which shows the accuracy performance of DP-LSTM is 0.32% higer than the LSTM with news. The result means the DP framework can make the prediction result more accuracy and robustness. Note that the results are obtained by running many trials, since we train stocks separately and predict each price individually due to the different patterns and scales of stock prices. This in total adds up to 451 runs. The results shown in Table TABREF30 is the average of these 451 runs. Furthermore, we provide results for 9 duration over a period in Figure FIGREF29. The performance of our DP-LSTM is always better than the LSTM with news. Based on the sentiment-ARMA model and adding noise for training, the proposed DP-LSTM is more robust. The investment risk based on this prediction results is reduced. In Figure FIGREF31, we can see the prediction results of DP-LSTM with is closer to the real S&P 500 index price line than other methods. The two lines (prediction results of LSTM with news and LSTM without news) almost coincide in Figure FIGREF31. We can tell the subtle differences from the Table TABREF32, that DP-LSTM is far ahead, and LSTM with news is slightly better than LSTM without news. Conclusion In this paper, we integrated the deep neural network with the famous NLP models (VADER) to identify and extract opinions within a given text, combining the stock adjust close price and compound score to reduce the investment risk. We first proposed a sentiment-ARMA model to represent the stock price, which incorporates influential variables (price and news) based on the ARMA model. 
Then, a DP-LSTM deep neural network was proposed to predict stock price according to the sentiment-ARMA model, which combines the LSTM, compound score of news articles and differential privacy method. News are not all objective. If we rely on the information extracted from the news for prediction fully, we may increase bias because of some non-objective reports. Therefore, the DP-LSTM enhance robustness of the prediction model. Experiment results based on the S&P 500 stocks show that the proposed DP-LSTM network can predict the stock price accurately with robust performance, especially for S&P 500 index that reflects the general trend of the market. S&P 500 prediction results show that the differential privacy method can significantly improve the robustness and accuracy. References [1] X. Li, Y. Li, X.-Y. Liu, D. Wang, “Risk management via anomaly circumvent: mnemonic deep learning for midterm stock prediction.” in Proceedings of 2nd KDD Workshop on Anomaly Detection in Finance (Anchorage ’19), 2019. [2] P. Chang, C. Fan, and C. Liu, “Integrating a piece-wise linear representation method and a neural network model for stock trading points prediction.” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 39, 1 (2009), 80–92. [3] Akita, Ryo, et al. “Deep learning for stock prediction using numerical and textual information.” IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS). IEEE, 2016. [4] Li, Xiaodong, et al. “Does summarization help stock prediction? A news impact analysis.” IEEE Intelligent Systems 30.3 (2015): 26-34. [5] Ding, Xiao, et al. “Deep learning for event-driven stock prediction.” Twenty-fourth International Joint Conference on Artificial Intelligence. 2015. [6] Hutto, Clayton J., and Eric Gilbert. “Vader: A parsimonious rule-based model for sentiment analysis of social media text.” Eighth International AAAI Conference on Weblogs and Social Media, 2014. [7] Ji, Zhanglong, Zachary C. Lipton, and Charles Elkan. “Differential privacy and machine learning: a survey and review.” arXiv preprint arXiv:1412.7584 (2014). [8] Abadi, Martin, et al. “Deep learning with differential privacy.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ACM, 2016. [9] McMahan, H. Brendan, and Galen Andrew. “A general approach to adding differential privacy to iterative training procedures.” arXiv preprint arXiv:1812.06210 (2018). [10] Lecuyer, Mathias, et al. “Certified robustness to adversarial examples with differential privacy.” arXiv preprint arXiv:1802.03471 (2018). [11] Hafezi, Reza, Jamal Shahrabi, and Esmaeil Hadavandi. “A bat-neural network multi-agent system (BNNMAS) for stock price prediction: Case study of DAX stock price.” Applied Soft Computing, 29 (2015): 196-210. [12] Chang, Pei-Chann, Chin-Yuan Fan, and Chen-Hao Liu. “Integrating a piecewise linear representation method and a neural network model for stock trading points prediction.” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 39.1 (2008): 80-92. [13] Gers, Felix A., Nicol N. Schraudolph, and Jürgen Schmidhuber. “Learning precise timing with LSTM recurrent networks.” Journal of Machine Learning Research 3.Aug (2002): 115-143. [14] Qin, Yao, et al. “A dual-stage attention-based recurrent neural network for time series prediction.” arXiv preprint arXiv:1704.02971 (2017). [15] Malhotra, Pankaj, et al. “Long short term memory networks for anomaly detection in time series.” Proceedings. 
Presses universitaires de Louvain, 2015. [16] Sak, Haşim, Andrew Senior, and Françoise Beaufays. “Long short-term memory recurrent neural network architectures for large scale acoustic modeling.” Fifteenth Annual Conference of the International Speech Communication Association, 2014. [17] Kingma, Diederik P., and Jimmy Ba. “Adam: A method for stochastic optimization.” arXiv preprint arXiv:1412.6980 (2014). [18] Box, George EP, et al. Time Series Analysis: Forecasting and Control. John Wiley & Sons, 2015. [19] Pang, Bo, and Lillian Lee. “Opinion mining and sentiment analysis.” Foundations and Trends in Information Retrieval 2.1–2 (2008): 1-135. [20] Cambria, Erik. “Affective computing and sentiment analysis.” IEEE Intelligent Systems 31.2 (2016): 102-107. [21] Dwork, Cynthia, and Jing Lei. “Differential privacy and robust statistics.” Proceedings of the ACM Symposium on Theory of Computing (STOC), 2009: 371-380. [22] X. Li, Y. Li, Y. Zhan, and X.-Y. Liu. “Optimistic bull or pessimistic bear: adaptive deep reinforcement learning for stock portfolio allocation.” in Proceedings of the 36th International Conference on Machine Learning, 2019.
A mechanism ${f}$ is a random function that takes a dataset $\mathcal {N}$ as input, and outputs a random variable ${f}(\mathcal {N})$.
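To make this abstract definition concrete, here is a minimal, hypothetical sketch of such a mechanism in Python: a deterministic query over a dataset (here, its mean) is perturbed with Gaussian noise, so the output is a random variable rather than a fixed function of the data. The function name and the choice of query are illustrative assumptions, not details taken from the text.

```python
import numpy as np

def randomized_mechanism(dataset, sigma=0.1, rng=None):
    """A mechanism f: takes a dataset N as input and returns a random variable f(N).

    The deterministic query here is the mean of the dataset; randomness comes from
    adding zero-mean Gaussian noise with standard deviation `sigma` (an assumption
    for illustration only).
    """
    rng = np.random.default_rng() if rng is None else rng
    query_result = float(np.mean(dataset))        # deterministic query on the dataset
    return query_result + rng.normal(0.0, sigma)  # random output: a noisy version of the query

# Two calls on the same dataset generally return different values,
# which is exactly what makes f(N) a random variable.
prices = [2936.5, 2941.8, 2929.3]
print(randomized_mechanism(prices), randomized_mechanism(prices))
```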
Q: How does the SCAN dataset evaluate compositional generalization? Text: Introduction A crucial property underlying the expressive power of human language is its systematicity BIBREF0, BIBREF1: syntactic or grammatical rules allow arbitrary elements to be combined in novel ways, making the number of possible sentences in a language exponential in the number of its basic elements. Recent work has shown that standard deep learning methods in natural language processing fail to capture this important property: when tested on unseen combinations of known elements, state-of-the-art models fail to generalize BIBREF2, BIBREF3, BIBREF4. It has been suggested that this failure represents a major deficiency of current deep learning models, especially when they are compared to human learners BIBREF5, BIBREF0. A recently published dataset called SCAN BIBREF2 (Simplified version of the CommAI Navigation tasks) tests compositional generalization in a sequence-to-sequence (seq2seq) setting by systematically holding out of the training set all inputs containing a basic primitive verb ("jump"), and testing on sequences containing that verb. Success on this difficult problem requires models to generalize knowledge gained about the other primitive verbs ("walk", "run" and "look") to the novel verb "jump," without having seen "jump" in any but the most basic context ("jump" $\rightarrow $ JUMP). It is trivial for human learners to generalize in this way (e.g. if I tell you that "dax" is a verb, you can generalize its usage to all kinds of constructions, like "dax twice and then dax again", without even knowing what the word means) BIBREF2. However, standard recurrent seq2seq models fail miserably on this task, with the best-reported model (a gated recurrent unit augmented with an attention mechanism) achieving only 12.5% accuracy on the test set BIBREF2, BIBREF4. Recently, convolutional neural networks (CNN) were shown to perform better on this test, but still only achieved 69.2% accuracy on the test set. From a statistical-learning perspective, this failure is quite natural. The neural networks trained on the SCAN task fail to generalize because they have memorized biases that do indeed exist in the training set. Because "jump" has never been seen with any adverb, it would not be irrational to assume that "jump twice" is an invalid sentence in this language. The SCAN task requires networks to make an inferential leap about the entire structure of part of the distribution that they have not seen - that is, it requires them to make an out-of-domain (o.o.d.) extrapolation BIBREF5, rather than merely interpolate according to the assumption that train and test data are independent and identically distributed (i.i.d.) (see Figure 1). Seen another way, the SCAN task and its analogues in human learning (e.g. "dax") require models not to learn some of the correlations that are actually present in the training data BIBREF6. Given that humans can perform well on certain kinds of o.o.d. extrapolation tasks, the human brain must be implementing principles that allow humans to generalize systematically, but which are lacking in current deep learning models. One prominent idea from neuroscience research on language processing that may offer such a principle is that the brain contains partially separate systems for processing syntax and semantics. In this paper, we motivate such a separation from a machine-learning perspective, and test a simple implementation on the SCAN dataset.
Our novel model, which we call Syntactic Attention, encodes syntactic and semantic information in separate streams before producing output sequences. Our experiments show that our novel architecture achieves substantially improved compositional generalization performance over other recurrent networks on the SCAN dataset. Syntax and prefrontal cortex Syntax is the aspect of language underlying its systematicity BIBREF1 . When given a novel verb like "dax," humans can generalize its usage to many different constructions that they have never seen before, by applying known syntactic or grammatical rules about verbs (e.g. rules about how to conjugate to a different tense or about how adverbs modify verbs). It has long been thought that humans possess specialized cognitive machinery for learning the syntactic or grammatical structure of language BIBREF7 . A part of the prefrontal cortex called Broca's area, originally thought only to be involved in language production, was later found to be important for comprehending syntactically complex sentences, leading some to conclude that it is important for syntactic processing in general BIBREF8 , BIBREF9 . For example, patients with lesions to this area showed poor comprehension on sentences such as "The girl that the boy is chasing is tall". Sentences such as this one require listeners to process syntactic information because semantics is not enough to understand their meanings - e.g. either the boy or the girl could be doing the chasing, and either could be tall. A more nuanced view situates the functioning of Broca's area within the context of prefrontal cortex in general, noting that it may simply be a part of prefrontal cortex specialized for language BIBREF9 . The prefrontal cortex is known to be important for cognitive control, or the active maintenance of top-down attentional signals that bias processing in other areas of the brain BIBREF10 (see diagram on the right of Figure 2 ). In this framework, Broca's area can be thought of as a part of prefrontal cortex specialized for language, and responsible for selectively attending to linguistic representations housed in other areas of the brain BIBREF9 . The prefrontal cortex has received much attention from computational neuroscientists BIBREF10 , BIBREF11 , and one model even showed a capacity for compositional generalization BIBREF6 . However, these ideas have not been taken up in deep learning research. Here, we emphasize the idea that the brain contains two separate systems for processing syntax and semantics, where the semantic system learns and stores representations of the meanings of words, and the syntactic system, housed in Broca's area of the prefrontal cortex, learns how to selectively attend to these semantic representations according to grammatical rules. Syntactic Attention The Syntactic Attention model improves the compositional generalization capability of an existing attention mechanism BIBREF12 by implementing two separate streams of information processing for syntax and semantics (see Figure 2 ). Here, by "semantics" we mean the information in each word in the input that determines its meaning (in terms of target outputs), and by "syntax" we mean the information contained in the input sequence that should determine the alignment of input to target words. We describe the mechanisms of this separation and the other details of the model below, following the notation of BIBREF12 , where possible. 
Separation assumption In the seq2seq problem, models must learn a mapping from arbitrary-length sequences of inputs $\mathbf {x} = \lbrace x_1, x_2, ..., x_{T_x}\rbrace $ to arbitrary-length sequences of outputs $\mathbf {y} = \lbrace y_1, y_2, ..., y_{T_y} \rbrace $: $p(\mathbf {y} | \mathbf {x})$. The attention mechanism of BIBREF12 models the conditional probability of each target word given the input sequence and previous targets: $p(y_i|y_1, y_2, ..., y_{i-1}, \mathbf {x})$. This is accomplished by processing the input sequence with a recurrent neural network (RNN) in the encoder. The outputs of this RNN are used both for encoding individual words in the input for later translation, and for determining their alignment to targets during decoding. The underlying assumption made by the Syntactic Attention architecture is that the dependence of target words on the input sequence can be separated into two independent factors. One factor, $p(y_i|x_j)$, which we refer to as "semantics," models the conditional distribution from individual words in the input to individual words in the target. Note that, unlike in the model of BIBREF12, these $x_j$ do not contain any information about the other words in the input sequence because they are not processed with an RNN. They are "semantic" in the sense that they contain the information relevant to translating into the target language. The other factor, $p(j \rightarrow i | \mathbf {x})$, which we refer to as "syntax," models the conditional probability that word $j$ in the input is relevant to word $i$ in the target sequence, given the entire input sequence. This alignment is computed from encodings of the inputs produced by an RNN. This factor is "syntactic" in the sense that it must capture all of the temporal information in the input that is relevant to determining the serial order of outputs. The crucial architectural assumption, then, is that any temporal dependency between individual words in the input that can be captured by an RNN should only be relevant to their alignment to words in the target sequence, and not to the translation of individual words. This assumption will be made clearer in the model description below. Encoder The encoder produces two separate vector representations for each word in the input sequence. Unlike the previous attention model BIBREF12, we separately extract the semantic information from each word with a linear transformation: $$m_j = W_m x_j,$$ (Eq. 8) where $W_m$ is a learned weight matrix that multiplies the one-hot encodings $\lbrace x_1, ..., x_{T_x}\rbrace $. Note that the semantic representation of each word does not contain any information about the other words in the sentence. As in the previous attention mechanism BIBREF12, we use a bidirectional RNN (biRNN) to extract what we now interpret as the syntactic information from each word in the input sequence. The biRNN produces a vector for each word on the forward pass, $(\overrightarrow{h_1}, ..., \overrightarrow{h_{T_x}})$, and a vector for each word on the backward pass, $(\overleftarrow{h_1}, ..., \overleftarrow{h_{T_x}})$. The syntactic information (or "annotations" BIBREF12) of each word $x_j$ is determined by the two vectors $\overrightarrow{h_{j-1}}$, $\overleftarrow{h_{j+1}}$ corresponding to the words surrounding it: $$h_j = [\overrightarrow{h_{j-1}};\overleftarrow{h_{j+1}}]$$ (Eq. 9) In all experiments, we used a bidirectional Long Short-Term Memory (LSTM) for this purpose.
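The following is a minimal PyTorch sketch of this encoder, written to mirror Eq. 8 and Eq. 9. The dimensions follow the hyperparameters reported later (120-dimensional semantic vectors, 200 hidden units); the separate embedding that feeds the biLSTM and the zero-padding at the sentence edges are our own assumptions, not details stated in the text.

```python
import torch
import torch.nn as nn

class SyntacticAttentionEncoder(nn.Module):
    """Sketch: semantic codes m_j (Eq. 8) and syntactic annotations h_j (Eq. 9)."""

    def __init__(self, vocab_size, sem_dim=120, syn_dim=200):
        super().__init__()
        # Eq. 8: m_j = W_m x_j.  With one-hot x_j, this linear map is an embedding lookup.
        self.W_m = nn.Embedding(vocab_size, sem_dim)
        # Input embedding for the biLSTM (an assumption; the text does not specify it).
        self.syn_embed = nn.Embedding(vocab_size, sem_dim)
        self.bilstm = nn.LSTM(sem_dim, syn_dim, bidirectional=True, batch_first=True)

    def forward(self, tokens):           # tokens: (batch, T_x) integer ids
        m = self.W_m(tokens)             # (batch, T_x, sem_dim): no cross-word information
        states, _ = self.bilstm(self.syn_embed(tokens))
        fwd, bwd = states.chunk(2, dim=-1)            # forward / backward LSTM states
        # Eq. 9: h_j = [fwd_{j-1}; bwd_{j+1}], padding with zeros at the sentence edges.
        fwd_prev = torch.cat([torch.zeros_like(fwd[:, :1]), fwd[:, :-1]], dim=1)
        bwd_next = torch.cat([bwd[:, 1:], torch.zeros_like(bwd[:, :1])], dim=1)
        h = torch.cat([fwd_prev, bwd_next], dim=-1)   # (batch, T_x, 2 * syn_dim)
        return m, h

# Toy usage: a batch of one 7-token command from a 20-word vocabulary.
enc = SyntacticAttentionEncoder(vocab_size=20)
m, h = enc(torch.randint(0, 20, (1, 7)))
print(m.shape, h.shape)   # torch.Size([1, 7, 120]) torch.Size([1, 7, 400])
```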
Note that because there is no sequence information in the semantic representations, all of the information required to parse (i.e. align) the input sequence correctly (e.g. phrase structure, modifying relationships, etc.) must be encoded by the biRNN. Decoder The decoder models the conditional probability of each target word given the input and the previous targets: $p(y_i | y_1, y_2, ..., y_{i-1}, \mathbf {x})$, where $y_i$ is the target translation and $\mathbf {x}$ is the whole input sequence. As in the previous model, we use an RNN to determine an attention distribution over the inputs at each time step (i.e. to align words in the input to the current target). However, our decoder diverges from this model in that the mapping from inputs to outputs is computed from a weighted average of the semantic representations of the input words: $$d_i = \sum _{j=1}^{T_x} \alpha _{ij} m_j \qquad p(y_i | y_1, y_2, ..., y_{i-1}, \mathbf {x}) = f(d_i)$$ (Eq. 11) where $f$ is parameterized by a linear function with a softmax nonlinearity, and the $\alpha _{ij}$ are the weights determined by the attention model. We note again that the $m_j$ are produced directly from the corresponding $x_j$, and do not depend on the other inputs. The attention weights are computed by a function measuring how well the syntactic information of a given word in the input sequence aligns with the current hidden state of the decoder RNN, $s_i$: $$\alpha _{ij} = \frac{\exp (e_{ij})}{\sum _{k=1}^{T_x}\exp (e_{ik})} \qquad e_{ij} = a(s_{i}, h_j)$$ (Eq. 12) where $e_{ij}$ can be thought of as measuring the importance of a given input word $x_j$ to the current target word $y_i$, and $s_{i}$ is the current hidden state of the decoder RNN. BIBREF12 model the function $a$ with a feedforward network, but following BIBREF14, we choose to use a simple dot product: $$a(s_{i},h_j) = s_{i} \cdot h_j,$$ (Eq. 13) relying on the end-to-end backpropagation during training to allow the model to learn to make appropriate use of this function. Finally, the hidden state of the RNN is updated with the same weighted combination of the syntactic representations of the inputs: $$s_i = g(s_{i-1}, c_{i}) \qquad c_i = \sum _{j=1}^{T_x} \alpha _{ij} h_j$$ (Eq. 14) where $g$ is the decoder RNN, $s_i$ is the current hidden state, and $c_i$ can be thought of as the information in the attended words that can be used to determine what to attend to on the next time step. Again, in all experiments an LSTM was used. SCAN dataset The SCAN dataset is composed of sequences of commands that must be mapped to sequences of actions BIBREF2 (see Figure 3 and supplementary materials for further details). The dataset is generated from a simple finite phrase-structure grammar that includes things like adverbs and conjunctions. There are 20,910 total examples in the dataset that can be split systematically into training and testing sets in different ways. These splits include the following: (1) Simple split: training and testing data are split randomly; (2) Length split: training includes only shorter sequences; (3) Add primitive split: a primitive command (e.g. "turn left" or "jump") is held out of the training set, except in its most basic form (e.g. "jump" $\rightarrow $ JUMP). Here we focus on the most difficult problem in the SCAN dataset, the add-jump split, where "jump" is held out of the training set. The best test accuracy reported in the original paper BIBREF2, using standard seq2seq models, was 1.2%.
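As a concrete illustration of how the add-jump split is constructed, the sketch below partitions SCAN-style (command, action) pairs so that the primitive "jump" appears in training only in its bare form. The in-memory example list and helper name are our own illustrations, not part of the released dataset code.

```python
def make_add_jump_split(pairs):
    """Hold out every command containing "jump" except the primitive "jump" -> JUMP itself.

    `pairs` is an iterable of (command, action) string tuples, e.g.
    ("jump twice", "JUMP JUMP") or ("run thrice", "RUN RUN RUN").
    """
    train, test = [], []
    for command, action in pairs:
        tokens = command.split()
        if "jump" in tokens and command != "jump":
            test.append((command, action))   # novel compositions of the held-out verb
        else:
            train.append((command, action))  # all other commands, plus the bare "jump"
    return train, test

examples = [
    ("jump", "JUMP"),
    ("jump twice", "JUMP JUMP"),
    ("walk and jump", "WALK JUMP"),
    ("run thrice", "RUN RUN RUN"),
]
train, test = make_add_jump_split(examples)
print(len(train), len(test))   # 2 training examples, 2 held-out test examples
```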
More recent work has tested other kinds of seq2seq models, including Gated Recurrent Units (GRU) augmented with attention BIBREF4 and convolutional neural networks (CNNs) BIBREF15 . Here, we compare the Syntactic Attention model to the best previously reported results. Implementation details Experimental procedure is described in detail in the supplementary materials. Train and test sets were kept as they were in the original dataset, but following BIBREF4 , we used early stopping by validating on a 20% held out sample of the training set. All reported results are from runs of 200,000 iterations with a batch size of 1. Unless stated otherwise, each architecture was trained 5 times with different random seeds for initialization, to measure variability in results. All experiments were implemented in PyTorch. Details of the hyperparameter search are given in supplementary materials. Our best model used LSTMs, with 2 layers and 200 hidden units in the encoder, and 1 layer and 400 hidden units in the decoder, and 120-dimensional semantic vectors. The model included a dropout rate of 0.5, and was optimized using an Adam optimizer BIBREF16 with a learning rate of 0.001. Results The Syntactic Attention model achieves state-of-the-art performance on the key compositional generalization task of the SCAN dataset (see table 1 ). The table shows results (mean test accuracy (%) $\pm $ standard deviation) on the test splits of the dataset. Syntactic Attention is compared to the previous best models, which were a CNN BIBREF15 , and GRUs augmented with an attention mechanism ("+ attn"), which either included or did not include a dependency ("- dep") in the decoder on the previous action BIBREF4 . The best model from the hyperparameter search showed strong compositional generalization performance, attaining a mean accuracy of 91.1% (median = 98.5%) on the test set of the add-jump split. However, as in BIBREF15 , we found that our model showed variance across initialization seeds. We suggest that this may be due to the nature of the add-jump split: since "jump" has only been encountered in the simplest context, it may be that slight changes to the way that this verb is encoded can make big differences when models are tested on more complicated constructions. For this reason, we ran the best model 25 times on the add-jump split to get a more accurate assessment of performance. These results were highly skewed, with a mean accuracy of 78.4 % but a median of 91.0 % (see supplementary materials for detailed results). Overall, this represents an improvement over the best previously reported results on this task BIBREF4 , BIBREF15 , and does so without any hand-engineered features or additional supervision. Additional experiments To test our hypothesis that compositional generalization requires a separation between syntax (i.e. sequential information used for alignment), and semantics (i.e. the mapping from individual source words to individual targets), we conducted two more experiments: Sequential semantics. An additional biLSTM was used to process the semantics of the sentence: $m_j = [\overrightarrow{m_j};\overleftarrow{m_j}]$ , where $\overrightarrow{m_j}$ and $\overleftarrow{m_j}$ are the vectors produced for the source word $x_j$ by a biLSTM on the forward and backward passes, respectively. These $m_j$ replace those generated by the simple linear layer in the Syntactic Attention model (in equation ( 8 )). Syntax-action. 
Syntactic information was allowed to directly influence the output at each time step in the decoder: $p(y_i|y_1, y_2, ..., y_{i-1}, \mathbf {x}) = f([d_i; c_i])$, where again $f$ is parameterized with a linear function and a softmax output nonlinearity. The results of the additional experiments (mean test accuracy (%) $\pm $ standard deviations) are shown in table 2. These results partially confirmed our hypothesis: performance on the jump-split test set was worse when the strict separation between syntax and semantics was violated by allowing sequential information to be processed in the semantic stream. However, "syntax-action," which included sequential information produced by a biLSTM (in the syntactic stream) in the final production of actions, maintained good compositional generalization performance. We hypothesize that this was because in this setup, it was easier for the model to learn to use the semantic information to directly translate actions, so it largely ignored the syntactic information. This experiment suggests that the separation between syntax and semantics does not have to be perfectly strict, as long as non-sequential semantic representations are available for direct translation. Discussion The Syntactic Attention model was designed to incorporate a key principle that has been hypothesized to describe the organization of the linguistic brain: mechanisms for learning rule-like or syntactic information are separated from mechanisms for learning semantic information. Our experiments confirm that this simple organizational principle encourages systematicity in recurrent neural networks in the seq2seq setting, as shown by the substantial improvement in the model's performance on the compositional generalization tasks in the SCAN dataset. The model makes the assumption that the translation of individual words in the input should be independent of their alignment to words in the target sequence. To this end, two separate encodings are produced for the words in the input: semantic representations in which each word is not influenced by other words in the sentence, and syntactic representations which are produced by an RNN that can capture temporal dependencies in the input sequence (e.g. modifying relationships, binding to grammatical roles). Just as Broca's area of the prefrontal cortex is thought to play a role in syntactic processing through a dynamic selective-attention mechanism that biases processing in other areas of the brain, the syntactic system in our model encodes serial information and is constrained to influence outputs through an attention mechanism alone. Patients with lesions to Broca's area are able to comprehend sentences like "The girl is kicking a green ball", where semantics can be used to infer the grammatical roles of the words (e.g. that the girl, not the ball, is doing the kicking) BIBREF8. However, these patients struggle with sentences such as "The girl that the boy is chasing is tall", where the sequential order of the words, rather than semantics, must be used to infer grammatical roles (e.g. either the boy or the girl could be doing the chasing). In our model, the syntactic stream can be seen as analogous to Broca's area, because without it the model would not be able to learn about the temporal dependencies that determine the grammatical roles of words in the input.
The separation of semantics and syntax, which is in the end a constraint, forces the model to learn, in a relatively independent fashion, 1) the individual meanings of words and 2) how the words are being used in a sentence (e.g. how they can modify one another, what grammatical role each is playing, etc.). This encourages systematic generalization because, even if a word has only been encountered in a single context (e.g. "jump" in the add-jump split), as long as its syntactic role is known (e.g. that it is a verb that can be modified by adverbs such as "twice"), it can be used in many other constructions that follow the rules for that syntactic role (see supplementary materials for visualizations). Additional experiments confirmed this intuition, showing that when sequential information is allowed to be processed by the semantic system ("sequential semantics"), systematic generalization performance is substantially reduced. The Syntactic Attention model bears some resemblance to a symbolic system - the paradigm example of systematicity - in the following sense: in symbolic systems, representational content (e.g. the value of a variable stored in memory) is maintained separately from the computations that are performed on that content. This separation ensures that the manipulation of the content stored in variables is fairly independent of the content itself, and will therefore generalize to arbitrary elements. Our model implements an analogous separation, but in a purely neural architecture that does not rely on hand-coded rules or additional supervision. In this way, it can be seen as transforming a difficult out-of-domain (o.o.d.) generalization problem into two separate i.i.d. generalization problems - one where the individual meanings of words are learned, and one where how words are used (e.g. how adverbs modify verbs) is learned (see Figure 4 ). It is unlikely that the human brain has such a strict separation between semantic and syntactic processing, and in the end, there must be more of an interaction between the two streams. We expect that the separation between syntax and semantics in the brain is only a relative one, but we have shown here that this kind of separation can be useful for encouraging systematicity and allowing for compositional generalization. Other related work Our model integrates ideas from computational and cognitive neuroscience BIBREF9 , BIBREF11 , BIBREF6 , BIBREF10 , into the neural machine translation framework. Much of the work in neural machine translation uses an encoder-decoder framework, where one RNN is used to encode the source sentence, and then a decoder neural network decodes the representations given by the encoder to produce the words in the target sentence BIBREF17 . Earlier work attempted to encode the source sentence into a single fixed-length vector (the final hidden state of the encoder RNN), but it was subsequently shown that better performance could be achieved by encoding each word in the source, and using an attention mechanism to align these encodings with each target word during the decoding process BIBREF12 . The current work builds directly on this attention model, while incorporating a separation between syntactic and semantic information streams. The principle of compositionality has recently regained the attention of deep learning researchers BIBREF18 , BIBREF19 , BIBREF0 , BIBREF2 , BIBREF20 , BIBREF21 . 
In particular, the issue has been explored in the visual-question answering (VQA) setting BIBREF18 , BIBREF14 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . Many of the successful models in this setting learn hand-coded operations BIBREF18 , BIBREF23 , use highly specialized components BIBREF14 , BIBREF24 , or use additional supervision BIBREF23 , BIBREF25 . In contrast, our model uses standard recurrent networks and simply imposes the additional constraint that syntactic and semantic information are processed in separate streams. Some of the recent research on compositionality in machine learning has had a special focus on the use of attention. For example, in the Compositional Attention Network, built for VQA, a strict separation is maintained between the representations used to encode images and the representations used to encode questions BIBREF14 . This separation is enforced by restricting them to interact only through attention distributions. Our model utilizes a similar restriction, reinforcing the idea that compositionality is enhanced when information from different modalities (in our case syntax and semantics) are only allowed to interact through discrete probability distributions. Previous research on compositionality in machine learning has also focused on the incorporation of symbol-like processing into deep learning models BIBREF18 , BIBREF23 , BIBREF25 . These methods generally rely on hand-coding or additional supervision for the symbolic representations or algorithmic processes to emerge. For example, in neural module networks BIBREF18 , a neural network is constructed out of composable neural modules that each learn a specific operation. These networks have shown an impressive capacity for systematic generalization on VQA tasks BIBREF19 . These models can be seen as accomplishing a similar transformation as depicted in Figure 4 , because the learning in each module is somewhat independent of the mechanism that composes them. However, BIBREF19 find that when these networks are trained end-to-end (i.e. without hand-coded parameterizations and layouts) their systematicity is significantly degraded. In contrast, our model learns in an end-to-end way to generalize systematically without any explicit symbolic processes built in. This offers an alternative way in which symbol-like processing can be achieved with neural networks - by enforcing a separation between mechanisms for learning representational content (semantics) and mechanisms for learning how to dynamically attend to or manipulate that content (syntax) in the context of a cognitive operation or reasoning problem. Conclusion The Syntactic Attention model incorporates ideas from cognitive and computational neuroscience into the neural machine translation framework, and produces the kind of systematic generalization thought to be a key component of human language-learning and intelligence. The key feature of the architecture is the separation of sequential information used for alignment (syntax) from information used for mapping individual inputs to outputs (semantics). This separation allows the model to generalize the usage of a word with known syntax to many of its valid grammatical constructions. This principle may be a useful heuristic in other natural language processing tasks, and in other systematic or compositional generalization tasks. 
The success of our approach suggests a conceptual link between dynamic selective-attention mechanisms in the prefrontal cortex and the systematicity of human cognition, and points to the untapped potential of incorporating ideas from cognitive science and neuroscience into modern approaches in deep learning and artificial intelligence BIBREF26. SCAN dataset details The SCAN dataset BIBREF2 generates sequences of commands using the phrase-structure grammar described in Figure 5. This simple grammar is not recursive, and so can generate a finite number of command sequences (20,910 total). These commands are interpreted according to the rules shown in Figure 6. Although the grammar used to generate and interpret the commands is simple compared to any natural language, it captures the basic properties that are important for testing compositionality (e.g. modifying relationships, discrete grammatical roles, etc.). The add-primitive splits (described in the main text) are meant to be analogous to the capacity of humans to generalize the usage of a novel verb (e.g. "dax") to many constructions BIBREF2. Experimental procedure details The cluster used for all experiments consists of 3 nodes, with 68 cores in total (48 times Intel(R) Xeon(R) CPU E5-2650 v4 at 2.20GHz, 20 times Intel(R) Xeon(R) CPU E5-2650 v3 at 2.30GHz), with 128GB of RAM each, connected through a 56Gbit InfiniBand network. It has 8 Pascal Titan X GPUs and runs Ubuntu 16.04. All experiments were conducted with the SCAN dataset as it was originally published BIBREF2. No data were excluded, and no preprocessing was done except to encode words in the input and action sequences into one-hot vectors, and to add special start-of-sequence and end-of-sequence tokens. Train and test sets were kept as they were in the original dataset, but following BIBREF4, we used early stopping by validating on a 20% held out sample of the training set. All reported results are from runs of 200,000 iterations with a batch size of 1. Except for the additional batch of 25 runs for the add-jump split, each architecture was trained 5 times with different random seeds for initialization, to measure variability in results. All experiments were implemented in PyTorch. Initial experimentation included different implementations of the assumption that syntactic information be separated from semantic information. After the architecture described in the main text showed promising results, a hyperparameter search was conducted to determine optimization (stochastic gradient descent vs. Adam), RNN-type (GRU vs. LSTM), regularizers (dropout, weight decay), and number of layers (1 vs. 2 layers for encoder and decoder RNNs). We found that the Adam optimizer BIBREF16 with a learning rate of 0.001, two layers in the encoder RNN and 1 layer in the decoder RNN, and dropout worked the best, so all further experiments used these specifications. Then, a grid-search was conducted to find the number of hidden units (in both semantic and syntactic streams) and dropout rate. We tried hidden dimensions ranging from 50 to 400, and dropout rates ranging from 0.0 to 0.5. The best model used an LSTM with 2 layers and 200 hidden units in the encoder, an LSTM with 1 layer and 400 hidden units in the decoder, 120-dimensional semantic vectors, and a dropout rate of 0.5. The results for this model are reported in the main text. All additional experiments were done with models derived from this one, with the same hyperparameter settings.
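For convenience, the best-performing settings described above can be gathered into a single configuration object. The key names below are our own; the values simply restate the reported hyperparameters.

```python
# Best hyperparameters reported for the Syntactic Attention model (key names are illustrative).
BEST_CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "encoder": {"type": "LSTM", "num_layers": 2, "hidden_units": 200, "bidirectional": True},
    "decoder": {"type": "LSTM", "num_layers": 1, "hidden_units": 400},
    "semantic_dim": 120,          # dimensionality of the semantic vectors m_j
    "dropout": 0.5,
    "batch_size": 1,
    "training_iterations": 200_000,
    "early_stopping_split": 0.2,  # fraction of the training set held out for validation
}
```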
All evaluation runs are reported in the main text: for each evaluation except for the add-jump split, models were trained 5 times with different random seeds, and performance was measured with means and standard deviations of accuracy. For the add-jump split, we included 25 runs to get a more accurate assessment of performance. This revealed a strong skew in the distribution of results, so we included the median as the main measure of performance. Occasionally, the model did not train at all due to an unknown error (possibly very poor random initialization, high learning rate or numerical error). For this reason, we excluded runs in which training accuracy did not get above 10%. No other runs were excluded. Skew of add-jump results As mentioned in the results section of the main text, we found that test accuracy on the add-jump split was variable and highly skewed. Figure 7 shows a histogram of these results (proportion correct). The model performs near-perfectly most of the time, but is also prone to catastrophic failures. This may be because, at least for our model, the add-jump split represents a highly nonlinear problem in the sense that slight differences in the way the primitive verb "jump" is encoded during training can make huge differences in how the model performs on more complicated constructions. We recommend that future experiments with this kind of compositional generalization problem take note of this phenomenon, and conduct especially comprehensive analyses of variability in results. Future research will also be needed to better understand the factors that determine this variability, and whether it can be overcome with other priors or regularization techniques. Supplementary experiments Our main hypothesis is that the separation between sequential information used for alignment (syntax) and information about the meanings of individual words (semantics) encourages systematicity. The results reported in the main text are largely consistent with this hypothesis, as shown by the performance of the Syntactic Attention model on the compositional generalization tests of the SCAN dataset. However, it is possible that the simplicity of the semantic stream in the model is also important for improving compositional generalization. To test this, we replaced the linear layer in the semantic stream with a nonlinear neural network. From the model description in the main text: $$p(y_i|y_1, y_2, ..., y_{i-1}, \mathbf {x}) = f(d_i),$$ (Eq. 37) In the original model, $f$ was parameterized with a simple linear layer, but here we use a two-layer feedforward network with a ReLU nonlinearity, before a softmax is applied to generate a distribution over the possible actions. We tested this model on the add-primitive splits of the SCAN dataset. The results (mean (%) with standard deviations) are shown in Table 3, with comparison to the baseline Syntactic Attention model. The results show that this modification did not substantially degrade compositional generalization performance, suggesting that the success of the Syntactic Attention model does not depend on the parameterization of the semantic stream with a simple linear function. The original SCAN dataset was published with compositional generalization splits that have more than one example of the held-out primitive verb BIBREF2.
The training sets in these splits of the dataset include 1, 2, 4, 8, 16, or 32 random samples of command sequences with the "jump" command, allowing for a more fine-grained measurement of the ability to generalize the usage of a primitive verb from few examples. For each number of "jump" commands included in the training set, five different random samples were taken to capture any variance in results due to the selection of particular commands to train on. BIBREF2 found that their best model (an LSTM without an attention mechanism) did not generalize well (below 39%), even when it was trained on 8 random examples that included the "jump" command, but that the addition of further examples to the training set improved performance. Subsequent work showed better performance at lower numbers of "jump" examples, with GRUs augmented with an attention mechanism ("+ attn"), and either with or without a dependence in the decoder on the previous target ("- dep") BIBREF4. Here, we compare the Syntactic Attention model to these results. The Syntactic Attention model shows a substantial improvement over previously reported results at the lowest numbers of "jump" examples used for training (see Figure 8 and Table 4). Compositional generalization performance is already quite high at 1 example, and at 2 examples is almost perfect (99.997% correct). The compositional generalization splits of the SCAN dataset were originally designed to test for the ability to generalize known primitive verbs to valid unseen constructions BIBREF2. Further work with SCAN augmented this set of tests to include compositional generalization based not on known verbs but on known templates BIBREF3. These template splits included the following (see Figure 9 for examples): (1) Jump around right: all command sequences with the phrase "jump around right" are held out of the training set and subsequently tested. (2) Primitive right: all command sequences containing primitive verbs modified by "right" are held out of the training set and subsequently tested. (3) Primitive opposite right: all command sequences containing primitive verbs modified by "opposite right" are held out of the training set and subsequently tested. (4) Primitive around right: all command sequences containing primitive verbs modified by "around right" are held out of the training set and subsequently tested. Results of the Syntactic Attention model on these template splits are compared to those originally published BIBREF3 in Table 5. The model, like the one reported in BIBREF3, performs well on the jump around right split, consistent with the idea that this task does not present a problem for neural networks. The rest of the results are mixed: Syntactic Attention shows good compositional generalization performance on the Primitive right split, but fails on the Primitive opposite right and Primitive around right splits. All of the template tasks require models to generalize based on the symmetry between "left" and "right" in the dataset. However, in the opposite right and around right splits, this symmetry is substantially violated, as one of the two prepositional phrases in which they can occur is never seen with "right." Further research is required to determine whether a model implementing similar principles to Syntactic Attention can perform well on this task. Visualizing attention The way that the attention mechanism of BIBREF12 is set up allows for easy visualization of the model's attention.
Here, we visualize the attention distributions over the words in the command sequence at each step during the decoding process. In the following figures (Figures 10 to 15 ), the attention weights on each command (in the columns of the image) is shown for each of the model's outputs (in the rows of the image) for some illustrative examples. Darker blue indicates a higher weight. The examples are shown in pairs for a model trained and tested on the add-jump split, with one example drawn from the training set and a corresponding example drawn from the test set. Examples are shown in increasing complexity, with a failure mode depicted in Figure 15 . In general, it can be seen that although the attention distributions on the test examples are not exactly the same as those from the corresponding training examples, they are usually good enough for the model to produce the correct action sequence. This shows the model's ability to apply the same syntactic rules it learned on the other verbs to the novel verb "jump." In the example shown in Figure 15 , the model fails to attend to the correct sequence of commands, resulting in an error.
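A minimal matplotlib sketch of this kind of visualization is given below; the attention matrix, command tokens, and output actions are dummy values standing in for the weights $\alpha_{ij}$ produced by the model, so only the plotting recipe is meaningful.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_attention(alpha, commands, actions):
    """Plot attention weights: one row per output action, one column per input command token."""
    fig, ax = plt.subplots()
    ax.imshow(alpha, cmap="Blues", vmin=0.0, vmax=1.0)  # darker blue = higher weight
    ax.set_xticks(range(len(commands)))
    ax.set_xticklabels(commands, rotation=45, ha="right")
    ax.set_yticks(range(len(actions)))
    ax.set_yticklabels(actions)
    ax.set_xlabel("command tokens")
    ax.set_ylabel("output actions")
    fig.tight_layout()
    plt.show()

# Dummy example for "jump twice" -> JUMP JUMP (weights are illustrative, not model outputs).
alpha = np.array([[0.9, 0.1],
                  [0.8, 0.2]])
plot_attention(alpha, commands=["jump", "twice"], actions=["JUMP", "JUMP"])
```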
it systematically holds out of the training set all inputs containing the basic primitive verb "jump", and tests on sequences containing that verb.
Q: Is their model also fine-tuned on all available data, and what are the results? Text: Introduction Representation learning has been an active research area for more than 30 years BIBREF1, with the goal of learning high level representations which separate different explanatory factors of the phenomena represented by the input data BIBREF2, BIBREF3. Disentangled representations provide models with an exponentially higher ability to generalize, using a small amount of labels, to new conditions by combining multiple sources of variation. Building Automatic Speech Recognition (ASR) systems, for example, requires a large volume of training data to represent different factors contributing to the creation of speech signals, e.g. background noise, recording channel, speaker identity, accent, emotional state, topic under discussion, and the language used in communication. The practical need for building ASR systems for new conditions with limited resources spurred a lot of work focused on unsupervised speech recognition and representation learning BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, in addition to semi- and weakly-supervised learning techniques aiming at reducing the supervised data needed in real-world scenarios BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17. Recently, impressive results have been reported for representation learning that generalizes to different downstream tasks, through self-supervised learning for text and speech BIBREF18, BIBREF19, BIBREF10, BIBREF11, BIBREF0. Self-supervised representation learning is done through tasks to predict masked parts of the input, reconstruct inputs through low bit-rate channels, or contrast similar data points against different ones. Different from BIBREF0, where a BERT-like model is trained with the masked language model loss, frozen, and then used as a feature extractor in tandem with a final fully supervised convolutional ASR model BIBREF20, in this work, our “Discrete BERT” approach achieves an average relative Word Error Rate (WER) reduction of 9% by pre-training and fine-tuning the same BERT model using a Connectionist Temporal Classification BIBREF21 loss. In addition, we present a new approach for pre-training bi-directional transformer models on continuous speech data using the InfoNCE loss BIBREF10 – dubbed “continuous BERT”. To understand the nature of the learned representations, we train models using the continuous and the discrete BERT approaches on spectral features, e.g. Mel-frequency cepstral coefficients (MFCC), as well as on pre-trained Wav2vec features BIBREF22. These comparisons provide insights on how complementary the acoustically motivated contrastive loss function is to the other masked language model one. The unsupervised and semi-supervised ASR approaches are in need of test suites like the unified downstream tasks available for language representation models BIBREF18. BIBREF23, BIBREF24, BIBREF25 evaluated semi-supervised self-labeling WER performance on the standard test “clean” and test “other” benchmarks of the Librispeech dataset BIBREF26 when using only a 100-hour subset as labeled data. BIBREF22, BIBREF0, BIBREF10 use the same 960h Librispeech data as unlabeled pre-training data; however, they use Phone Error Rates (PER) on the 3h TIMIT dataset BIBREF27 as their performance metric. The zero-resource ASR literature BIBREF7, BIBREF28 uses the ABX task to evaluate the quality of learned features.
To combine the best of these evaluation approaches, we pre-train our models on the unlabeled 960h Librispeech data, with a close-to-zero supervised set of only 1 hour and 10 hours, sampled equally from the “clean” and “other” conditions of Librispeech. Then, we report final WER performance on its standard dev and test sets. Using our proposed approaches we achieve a best WER of 10.2% and 23.5% on the clean and other subsets respectively, which is competitive with previous work that uses 100h of labeled data. Preliminaries ::: BERT Using self-supervision, BERT BIBREF18, a deep bidirectional transformer model, builds an internal language representation that generalizes to other downstream NLP tasks. Self-attention over the whole input word sequence enables BERT to jointly condition on both the left and right context of the data. For training, it uses both a masked language model loss, by randomly removing some input words for the model to predict, and a contrastive loss to distinguish the next sentence in the document from a randomly selected one. Preliminaries ::: Wav2Vec Wav2vec BIBREF22 learns representations of audio data by solving a self-supervised context-prediction task with the same loss function as word2vec BIBREF29, BIBREF10. The model is based on two convolutional neural networks, where the encoder $f: \mathcal {X} \mapsto \mathcal {Z}$ produces a representation $\mathbf {z}_{i}$ for each time step $i$ at a rate of 100 Hz and the aggregator $g: \mathcal {Z} \mapsto \mathcal {C}$ combines multiple encoder time steps into a new representation $\mathbf {c}_i$ for each time step $i$. Given $\mathbf {c}_i$, the model is trained to distinguish a sample $\mathbf {z}_{i+k}$ that is $k$ steps in the future from distractor samples $\tilde{\mathbf {z}}$ drawn from a distribution $p_n$, by minimizing the contrastive loss for steps $k=1,\dots ,K$: $$\mathcal {L}_k = -\sum _{i=1}^{T-k} \left( \log \sigma (\mathbf {z}_{i+k}^\top h_k(\mathbf {c}_i)) + \lambda \, \mathbb {E}_{\tilde{\mathbf {z}} \sim p_n} [\log \sigma (-\tilde{\mathbf {z}}^\top h_k(\mathbf {c}_i))] \right)$$ where $T$ is the sequence length, $\sigma (x) = 1/(1+\exp (-x))$, and where $\sigma (\mathbf {z}_{i+k}^\top h_k(\mathbf {c}_i))$ is the probability of $\mathbf {z}_{i+k}$ being the true sample. A step-specific affine transformation $h_k(\mathbf {c}_i) = W_k \mathbf {c}_i + \mathbf {b}_k$ is applied to $\mathbf {c}_i$ BIBREF10. The loss $\mathcal {L} = \sum _{k=1}^K \mathcal {L}_k$ is optimized by summing the per-step loss above over the different step sizes. The learned high-level features produced by the context network, $\mathbf {c}_i$, are shown to be better acoustic representations for speech recognition compared to standard spectral features. Preliminaries ::: vq-wav2vec vq-wav2vec BIBREF0 learns vector quantized (VQ) representations of audio data using a future time-step prediction task. Similar to wav2vec, there are convolutional encoder and decoder networks $f: \mathcal {X} \mapsto \mathcal {Z}$ and $g: \hat{\mathcal {Z}} \mapsto \mathcal {C}$ for feature extraction and aggregation. However, in between them there is a quantization module $q: \mathcal {Z} \mapsto \hat{\mathcal {Z}}$ that builds discrete representations which are input to the aggregator. First, 30ms segments of raw speech are mapped to a dense feature representation $\mathbf {z}$ at a stride of 10ms using the encoder $f$. Next, the quantizer $q$ turns these dense representations into discrete indices which are mapped to a reconstruction $\hat{\mathbf {z}}$ of the original representation $\mathbf {z}$. The $\hat{\mathbf {z}}$ is fed into the aggregator $g$ and the model is optimized via the same context prediction task as wav2vec (cf. §SECREF3). The quantization module replaces the original representation $\mathbf {z}$ by $\hat{\mathbf {z}} = \mathbf {e}_i$ from a fixed-size codebook $\mathbf {e} \in \mathbb {R}^{V \times d}$ which contains $V$ representations of size $d$. Approach ::: Discrete BERT Our work builds on the recently proposed approach of BIBREF0, where audio is quantized using a contrastive loss and features are then learned on top of the discrete units by a BERT model BIBREF18.
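As a toy illustration of the quantization step, the sketch below replaces each dense frame vector with its nearest codeword from a small codebook and keeps the corresponding discrete index. Nearest-neighbour assignment is used here only for simplicity; the actual vq-wav2vec model learns the codebook with gumbel-softmax or k-means style objectives, and the toy codebook and feature values are made up.

```python
import numpy as np

def quantize(z, codebook):
    """Replace each row z_i of `z` with its nearest codeword e_j from `codebook` (V x d).

    Returns the discrete indices (the "tokens" later fed to BERT) and the reconstructions z_hat.
    """
    # Squared Euclidean distance between every frame and every codeword.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # shape (T, V)
    idx = dists.argmin(axis=1)                                      # discrete index per frame
    z_hat = codebook[idx]                                           # z_hat_i = e_{idx_i}
    return idx, z_hat

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))    # toy codebook: V=8 codewords of dimension d=4
z = rng.normal(size=(5, 4))           # toy dense features for T=5 frames
idx, z_hat = quantize(z, codebook)
print(idx)                            # e.g. a sequence of 5 code indices
```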
For the vq-wav2vec quantization, we use the gumbel-softmax vq-wav2vec model with the same setup as described in BIBREF0. This model quantizes the Librispeech dataset into 13.5k unique codes. To understand the impact of the acoustic representations baked into the wav2vec features, we explore, as alternatives, quantizing the standard mel-frequency cepstral coefficients (MFCC) and log-mel filterbank coefficients (FBANK), choosing a subset small enough to fit into GPU memory and running k-means with 13.5k centroids (to match the vq-wav2vec setup) to convergence. We then assign the index of the closest centroid to represent each time-step. We train a standard BERT model BIBREF18, BIBREF30 with only the masked language modeling task on each set of inputs in the same way as described in BIBREF0, namely by choosing tokens for masking with probability 0.05, expanding each chosen token to a span of 10 masked tokens (spans may overlap) and then computing a cross-entropy loss which attempts to maximize the likelihood of predicting the true token for each one that was masked (Figure ). Approach ::: Continuous BERT A masked language modeling task cannot be performed with continuous inputs and outputs, as there are no targets to predict in place of the masked tokens. Instead of reconstructing the input as in BIBREF31, we classify the masked positive example among a set of negatives. The inputs to the model are dense wav2vec features BIBREF22, MFCC or FBANK features representing 10ms of audio data. Some of these inputs are replaced with a mask embedding and are then fed into a transformer encoder. We then compute dot products between the output corresponding to each masked input and both the true input that was masked and a set of negatives sampled from other masked inputs within the same batch. The model is optimized with the InfoNCE loss BIBREF10: given one positive score $s_i$ and $N$ negative scores $\tilde{s}_1, \dots , \tilde{s}_N$, we minimize $$\mathcal {L} = -\sum _i \log \frac{\exp (s_i)}{\exp (s_i) + \sum _{j=1}^{N} \exp (\tilde{s}_j)}$$ where each score is computed as a dot product of the output of the model at timestep $i$ with either the true unmasked value of the positive example at timestep $i$ (giving $s_i$) or a randomly sampled negative example (giving $\tilde{s}_j$). To stabilize training, we add the squared sum of logits produced by the dot-product to the loss, and then apply a soft clamp $\hat{s_i}=\lambda \tanh (s_i/\lambda )$ for each logit $s_i$ to prevent the model's tendency to continually increase the magnitude of logits during training BIBREF32. Approach ::: Supervised fine-tuning The pre-trained models are fine-tuned to perform the ASR task by adding a randomly initialized linear projection, on top of the features computed by the transformer models, into $V$ classes representing the vocabulary of the task. The vocabulary is 29 tokens for character targets plus a word boundary token. The models are optimized by minimizing the CTC loss. Fine-tuning requires only a few epochs on a single GPU. Experiments All of our experiments are implemented by extending the fairseq BIBREF33 toolkit. Experiments ::: Data All of our experiments are performed by pre-training on the 960-hour Librispeech BIBREF26 training set, fine-tuning on labeled 10-hour and 1-hour sets sampled equally from the two conditions of the training set, and evaluating on the standard dev and test splits. Experiments ::: Models ::: Quantized Inputs Training We first train the vq-wav2vec quantization model following the gumbel-softmax recipe described in BIBREF0.
After training this model on 960h of Librispeech and quantizing the training dataset, we are left with 13.5k unique codeword combinations. For quantizing MFCC and log-mel filterbanks we first compute dense features using the scripts from the Kaldi BIBREF34 toolkit. We then compute 13.5k K-Means centroids, to match the number of unique tokens produced by the vq-wav2vec model, using 8 32GB Volta GPUs. To fit into GPU memory, we subsample 50% of MFCC features and 25% of FBANK features from the training set before running the K-Means algorithm. The model we use for the masked language modeling task is a standard BERT model with 12 layers, model dimension 768, inner dimension (FFN) 3072 and 12 attention heads BIBREF18. The learning rate is warmed up over the first 10,000 updates to a peak value of 1e-5, and then linearly decayed over a total of 250k updates. We train on 128 GPUs with a batch size of 3072 tokens per GPU giving a total batch size of 393k tokens BIBREF35. Each token represents 10ms of audio data. To mask the input sequence, we follow BIBREF0 and randomly sample $p=0.05$ of all tokens to be a starting index, without replacement, and mask $M=10$ consecutive tokens from every sampled index; spans may overlap. Experiments ::: Models ::: Continuous Inputs Training For training on dense features, we use a model similar to a standard BERT model with the same parameterization as the one used for quantized input training, but we use the wav2vec, MFCC or FBANK inputs directly. We add 128 relative positional embeddings at every multi-head attention block as formulated in BIBREF36 instead of fixed positional embeddings to ease handling longer examples. We train this model on only 8 GPUs, with a batch size of 9600 inputs per GPU resulting in a total batch size of 76,800. We find that increasing the number of GPUs (which increases the effective batch size) does not lead to better results with this particular setup. Wav2vec features are 512-dimensional, while MFCC features have 39 dimensions and log-mel features have 80. We introduce a simple linear projection from the feature dimension to the BERT dimension (768) for all models. Similarly to the approach in SECREF12, we choose time-steps to mask by randomly sampling, without replacement, $p=0.05$ of all time-steps to be a starting index, and mask $M=10$ consecutive time-steps from every sampled index; spans may overlap. We sample 10 negative examples from other masked time-steps from the same example, and an additional 10 negative examples from masked time-steps occurring anywhere in the batch. We compute a dot product between the original features and the output corresponding to the same time-step after they are processed by the BERT model. We add the squared sum of logits from these computations multiplied by $\lambda =0.04$ to the loss, and then apply a smooth clamp by recomputing each logit $\hat{s_i}=20\tanh (s_i/20)$. The learning rate is warmed up over the first 10,000 updates to a peak value of 1e-5, and then linearly decayed over a total of 250k updates. Experiments ::: Methodology For quantized inputs, we compute token indices using the gumbel-softmax based vq-wav2vec model. For MFCC and FBANK features we take the index of the closest centroid (as measured by finding the minimum Euclidean distance) to each corresponding feature in the Librispeech dataset. We then train a BERT model as described in §SECREF12.
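The span-masking scheme used for both the discrete and continuous models can be sketched in a few lines. This is our own paraphrase of the described procedure ($p=0.05$ starting indices sampled without replacement, spans of $M=10$, overlaps allowed), not the exact fairseq implementation.

```python
import numpy as np

def sample_mask(seq_len, p=0.05, span=10, rng=None):
    """Return a boolean mask: True marks positions (tokens or time-steps) to be masked.

    A fraction p of all positions is chosen as span starts (without replacement),
    and every start masks `span` consecutive positions; spans may overlap.
    """
    rng = np.random.default_rng() if rng is None else rng
    num_starts = int(round(p * seq_len))
    starts = rng.choice(seq_len, size=num_starts, replace=False)
    mask = np.zeros(seq_len, dtype=bool)
    for s in starts:
        mask[s:s + span] = True          # spans are clipped at the sequence end
    return mask

mask = sample_mask(seq_len=400, rng=np.random.default_rng(0))
print(mask.sum(), "of 400 positions masked")   # roughly p * span * seq_len, minus overlaps
```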
For wav2vec continuous inputs, we use features extracted by the publicly available wav2vec BIBREF22 model which contains 6 convolutional blocks in the feature extractor and 11 convolutional blocks in the aggregator module. We use the outputs of the aggregator as features. For MFCC and FBANK, we use those features directly after applying a single linear projection to upsample them to the model dimensionality. We fine-tune our pre-trained models on 1 or 10 hours of labelled data sampled from the Librispeech training set. We use the standard CTC loss and train for up to 20k updates. We find that the pre-trained models converge after only around 4k updates, while the models trained from scratch tend to converge much later, around 18k updates. We fine-tune all models with a learning rate of $0.0001$ that is linearly warmed up over the first 2k updates and then annealed following a cosine learning rate schedule over the last 18k updates. We set the dropout of the pre-trained BERT models to 0.1 and sweep over the dropout of the BERT model outputs before the final projection layer, over values between 0.0 and 0.4 in increments of 0.1. For each model, we choose the single checkpoint that has the best loss on the validation set, which is a combination of the standard dev-clean and dev-other Librispeech splits. We use the publicly available wav2letter++ BIBREF37 decoder integrated into the Fairseq framework with the official Librispeech 4-gram language model. We run a sweep over the language model score, word score and silence token weights for each model, where parameters are chosen randomly and evaluated on the dev-other Librispeech set. We use the weights found by these sweeps to evaluate and report results for all other splits. The sweeps are run with a beam size of 250, while the final decoding is done with a beam size of 1500. The quantized BERT models have a limit of 2048 source tokens due to their use of fixed positional embeddings. During training we discard longer examples and during evaluation we discard randomly chosen tokens from each example until they are at most 2048 tokens long. We expect that increasing the size of the fixed positional embeddings, or switching to relative positional embeddings, will improve performance on longer examples, but in this work we wanted to stay consistent with the setup in BIBREF0. The tandem model, which uses the features extracted from the pre-trained BERT models, is the character-based Wav2Letter setup of BIBREF38 which uses seven consecutive blocks of convolutions (kernel size 5 with 1000 channels), followed by a PReLU nonlinearity and a dropout rate of 0.1. The final representation is projected to a 28-dimensional probability over the vocabulary and decoded using the standard 4-gram language model, following the same protocol as for the fine-tuned models. Experiments ::: Results Table TABREF15 presents WERs of different input features and pre-training methods on the standard Librispeech clean and other subsets using 10 hours and 1 hour of labeled data for fine-tuning. Compared to the two-model tandem system proposed in BIBREF0, which uses the discrete BERT features to train another ASR system from scratch, our discrete BERT model provides an average of 13% and 6% WER reduction on the clean and other subsets respectively, by pre-training and fine-tuning the same BERT model on the 10h labeled set.
Experiments ::: Results
Table TABREF15 presents WERs of different input features and pre-training methods on the standard Librispeech clean and other subsets using 10 hours and 1 hour of labeled data for fine-tuning. Compared to the two-model tandem system proposed in BIBREF0, which uses the discrete BERT features to train another ASR system from scratch, our discrete BERT model provides average WER reductions of 13% and 6% on the clean and other subsets respectively, by pre-training and fine-tuning the same BERT model on the 10h labeled set. (Our reproduction of the tandem system in BIBREF0 trains a convolutional model from scratch on features extracted from the discrete BERT model with wav2vec input features, and is evaluated on the standard Librispeech “clean” and “other” subsets.)

The wav2vec inputs represent one level of unsupervised feature discovery, which provides a better space for quantization compared to raw spectral features. The discrete BERT training augments the wav2vec features with a higher level of representation that captures the sequential structure of the full utterance through the masked language modeling loss. The continuous BERT training, on the other hand, given its contrastive InfoNCE loss, can be viewed as another level of acoustic representation that captures longer-range regularities.

Using MFCC and FBANK features as inputs to the continuous and discrete BERT models provides insight into the synergies between different levels of acoustic and language model representations. Similar to the observations in BIBREF40, the FBANK features are more amenable to unsupervised local acoustic representation learning methods like continuous BERT, leading to consistent gains over MFCC features for both the 10h and 1h sets.

When MFCC and FBANK features are used for discrete BERT training, the naive k-means clustering provides poor input acoustic centroids, so FBANK features offer no benefit over MFCC features. This shifts the entire representation learning load onto the language modeling (discrete BERT) component, which is identical for both FBANK and MFCC, leading to nearly identical performance for the two input features in both the 10h and 1h fine-tuning conditions. Using the quantized wav2vec features instead provides an average relative improvement of about 40% over the quantized FBANK features in the 10h fine-tuning case.

In line with our hypothesis that the discrete BERT model plays the role of a language model while the input wav2vec features learn high-level acoustic representations, in the very low-resource condition of 1h fine-tuning the average relative improvement of quantized wav2vec over quantized FBANK inputs is larger on the “clean” subsets, which require better local acoustic representations (55%), than on the noisy “other” subsets, which rely more on global language modeling capabilities (45% WER reduction).

With wav2vec features providing good acoustic representations, the discrete BERT model provides an average of about 28% relative improvement over the continuous BERT model in the 10h fine-tuning condition. We attribute this to the complementary nature of the discrete BERT language modeling loss and the acoustically motivated wav2vec pre-training, as opposed to the relatively redundant acoustic pre-training losses of the continuous BERT and wav2vec. In the 1h fine-tuning case, however, better local acoustic features provide more gains on the “clean” subsets than on the “other” ones, following the same trend as the quantized FBANK and wav2vec features under the same conditions.

Table TABREF16 shows the competitive performance of the discrete BERT approach compared to previously published work that is fine-tuned on more than 10 times as much labeled data.

Experiments ::: Ablations
To understand the value of self-supervision in our setup, Table TABREF18 shows WERs for both continuous and discrete input features fine-tuned from random weights, without BERT pre-training, using 10 hours of labeled data.
The performance of the discrete features completely collapses, since the randomly initialized input embedding tables do not have enough training data to learn meaningful representations. This is not a problem for continuous input features, where, understandably, wav2vec input features show much better WERs than the MFCC and FBANK features.

The impact of adding a second layer of acoustic representation is shown by comparing the continuous BERT model trained on top of wav2vec features against the wav2vec model fine-tuned directly using the CTC loss, i.e., with only one level of learned representations. Continuous BERT training on top of wav2vec features provides substantial gains (Table TABREF19). Adding a second layer of representation more than halved the WER, with larger gains on the “clean” subset, consistent with the trend in SECREF17.

Discussion and Related Work
The success of BERT BIBREF18 and Word2Vec BIBREF29 for NLP tasks motivated more research on self-supervised approaches for acoustic word embeddings and unsupervised acoustic feature representations BIBREF41, BIBREF42, BIBREF43, BIBREF44, BIBREF9, BIBREF45, BIBREF22, BIBREF10, BIBREF46, BIBREF0, either by predicting masked discrete or continuous input, or by contrastive prediction of neighboring or similarly sounding segments, using distant supervision or proximity in the audio signal as an indication of similarity. In BIBREF47, a dynamic time warping alignment is used to discover similar segment pairs. Our work is inspired by research efforts in reducing the dependence on labeled data for building ASR systems through unsupervised unit discovery and acoustic representation learning BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, through multi- and cross-lingual transfer learning in low-resource conditions BIBREF48, BIBREF49, BIBREF50, BIBREF51, BIBREF52, BIBREF53, and through semi-supervised learning BIBREF12, BIBREF13, BIBREF14, BIBREF15.

Conclusion and Future Work
We presented two variants, continuous and discrete, of BERT models that are pre-trained on the Librispeech 960h data and fine-tuned for speech recognition rather than used as feature extractors in tandem with another ASR system. Along with the discrete-input BERT model, we used a contrastive loss for training a continuous variant of BERT. The acoustic and language modeling roles in the system are played by the vq-wav2vec and BERT components, respectively. Our ablation experiments showed the contribution and importance of each component for final ASR performance. Our system reaches final WERs of 10.2% and 23.5% on the standard Librispeech test clean and other sets, respectively, using only 10h of labeled data, almost matching the 100h supervised baselines. Our future directions include testing our model on a 1000x larger volume of unlabeled data that is more acoustically challenging, along with multi- and cross-lingual transfer learning extensions.
No
af75ad21dda25ec72311c2be4589efed9df2f482
af75ad21dda25ec72311c2be4589efed9df2f482_0
Q: How much does this system outperform prior work? Text: Credits This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Michell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by JohannaD. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the International Joint Conference on Artificial Intelligence and the Conference on Computer Vision and Pattern Recognition. Introduction The following instructions are directed to authors of papers submitted to NAACL-HLT 2019 or accepted for publication in its proceedings. All authors are required to adhere to these specifications. Authors are required to provide a Portable Document Format (PDF) version of their papers. The proceedings are designed for printing on A4 paper. General Instructions Manuscripts must be in two-column format. Exceptions to the two-column format include the title, authors' names and complete addresses, which must be centered at the top of the first page, and any full-width figures or tables (see the guidelines in Subsection "The First Page" ). Type single-spaced. Start all pages directly under the top margin. See the guidelines later regarding formatting the first page. The manuscript should be printed single-sided and its length should not exceed the maximum page limit described in Section "Length of Submission" . Pages are numbered for initial submission. However, do not number the pages in the camera-ready version. By uncommenting \aclfinalcopy at the top of this document, it will compile to produce an example of the camera-ready formatting; by leaving it commented out, the document will be anonymized for initial submission. When you first create your submission on softconf, please fill in your submitted paper ID where *** appears in the \def\aclpaperid{***} definition at the top. The review process is double-blind, so do not include any author information (names, addresses) when submitting a paper for review. However, you should maintain space for names and addresses so that they will fit in the final (accepted) version. The NAACL-HLT 2019 style will create a titlebox space of 2.5in for you when \aclfinalcopy is commented out. The author list for submissions should include all (and only) individuals who made substantial contributions to the work presented. Each author listed on a submission to NAACL-HLT 2019 will be notified of submissions, revisions and the final decision. No authors may be added to or removed from submissions to NAACL-HLT 2019 after the submission deadline. The Ruler The NAACL-HLT 2019 style defines a printed ruler which should be presented in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document without the provided style files, please arrange for an equivalent ruler to appear on the final output pages. 
The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. ( users may uncomment the \aclfinalcopy command in the document preamble.) Reviewers: note that the ruler measurements do not align well with lines in the paper – this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. In most cases one would expect that the approximate location will be adequate, although you can also use fractional references (e.g., the first paragraph on this page ends at mark $108.5$ ). Electronically-available resources NAACL-HLT provides this description in 2e (naaclhlt2019.tex) and PDF format (naaclhlt2019.pdf), along with the 2e style file used to format it (naaclhlt2019.sty) and an ACL bibliography style (acl_natbib.bst) and example bibliography (naaclhlt2019.bib). These files are all available at http://naacl2019.org/downloads/ naaclhlt2019-latex.zip. We strongly recommend the use of these style files, which have been appropriately tailored for the NAACL-HLT 2019 proceedings. Format of Electronic Manuscript For the production of the electronic manuscript you must use Adobe's Portable Document Format (PDF). PDF files are usually produced from using the pdflatex command. If your version of produces Postscript files, you can convert these into PDF using ps2pdf or dvipdf. On Windows, you can also use Adobe Distiller to generate PDF. Please make sure that your PDF file includes all the necessary fonts (especially tree diagrams, symbols, and fonts with Asian characters). When you print or create the PDF file, there is usually an option in your printer setup to include none, all or just non-standard fonts. Please make sure that you select the option of including ALL the fonts. Before sending it, test your PDF by printing it from a computer different from the one where it was created. Moreover, some word processors may generate very large PDF files, where each page is rendered as an image. Such images may reproduce poorly. In this case, try alternative ways to obtain the PDF. One way on some systems is to install a driver for a postscript printer, send your document to the printer specifying “Output to a file”, then convert the file to PDF. It is of utmost importance to specify the A4 format (21 cm x 29.7 cm) when formatting the paper. When working with dvips, for instance, one should specify -t a4. Or using the command \special{papersize=210mm,297mm} in the latex preamble (directly below the \usepackage commands). Then using dvipdf and/or pdflatex which would make it easier for some. Print-outs of the PDF file on A4 paper should be identical to the hardcopy version. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs as soon as possible. Layout Format manuscripts two columns to a page, in the manner these instructions are formatted. The exact dimensions for a page on A4 paper are: Left and right margins: 2.5 cm Top margin: 2.5 cm Bottom margin: 2.5 cm Column width: 7.7 cm Column height: 24.7 cm Gap between columns: 0.6 cm Papers should not be submitted on any other paper size. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs above as soon as possible. Fonts For reasons of uniformity, Adobe's Times Roman font should be used. 
In 2e this is accomplished by putting \usepackage{times} \usepackage{latexsym} in the preamble. If Times Roman is unavailable, use Computer Modern Roman (2e's default). Note that the latter is about 10% less dense than Adobe's Times Roman font. The First Page Center the title, author's name(s) and affiliation(s) across both columns. Do not use footnotes for affiliations. Do not include the paper ID number assigned during the submission process. Use the two-column format only when you begin the abstract. Title: Place the title centered at the top of the first page, in a 15-point bold font. (For a complete guide to font sizes and styles, see Table 1 ) Long titles should be typed on two lines without a blank line intervening. Approximately, put the title at 2.5 cm from the top of the page, followed by a blank line, then the author's names(s), and the affiliation on the following line. Do not use only initials for given names (middle initials are allowed). Do not format surnames in all capitals (e.g., use “Mitchell” not “MITCHELL”). Do not format title and section headings in all capitals as well except for proper names (such as “BLEU”) that are conventionally in all capitals. The affiliation should contain the author's complete address, and if possible, an electronic mail address. Start the body of the first page 7.5 cm from the top of the page. The title, author names and addresses should be completely identical to those entered to the electronical paper submission website in order to maintain the consistency of author information among all publications of the conference. If they are different, the publication chairs may resolve the difference without consulting with you; so it is in your own interest to double-check that the information is consistent. Abstract: Type the abstract at the beginning of the first column. The width of the abstract text should be smaller than the width of the columns for the text in the body of the paper by about 0.6 cm on each side. Center the word Abstract in a 12 point bold font above the body of the abstract. The abstract should be a concise summary of the general thesis and conclusions of the paper. It should be no longer than 200 words. The abstract text should be in 10 point font. Text: Begin typing the main body of the text immediately after the abstract, observing the two-column format as shown in the present document. Do not include page numbers. Indent: Indent when starting a new paragraph, about 0.4 cm. Use 11 points for text and subsection headings, 12 points for section headings and 15 points for the title. Sections Headings: Type and label section and subsection headings in the style shown on the present document. Use numbered sections (Arabic numerals) in order to facilitate cross references. Number subsections with the section number and the subsection number separated by a dot, in Arabic numerals. Do not number subsubsections. Citations: Citations within the text appear in parentheses as BIBREF0 or, if the author's name appears in the text itself, as Gusfield Gusfield:97. Using the provided style, the former is accomplished using \cite and the latter with \shortcite or \newcite. Collapse multiple citations as in BIBREF0 , BIBREF1 ; this is accomplished with the provided style using commas within the \cite command, e.g., \cite{Gusfield:97,Aho:72}. Append lowercase letters to the year in cases of ambiguities. Treat double authors as in BIBREF1 , but write as in BIBREF2 when more than two authors are involved. 
Collapse multiple citations as in BIBREF0 , BIBREF1 . Also refrain from using full citations as sentence constituents. We suggest that instead of “ BIBREF0 showed that ...” you use “Gusfield Gusfield:97 showed that ...” If you are using the provided and Bib style files, you can use the command \citet (cite in text) to get “author (year)” citations. If the Bib file contains DOI fields, the paper title in the references section will appear as a hyperlink to the DOI, using the hyperref package. To disable the hyperref package, load the style file with the nohyperref option: \usepackage[nohyperref]{naaclhlt2019} Digital Object Identifiers: As part of our work to make ACL materials more widely used and cited outside of our discipline, ACL has registered as a CrossRef member, as a registrant of Digital Object Identifiers (DOIs), the standard for registering permanent URNs for referencing scholarly materials. As of 2017, we are requiring all camera-ready references to contain the appropriate DOIs (or as a second resort, the hyperlinked ACL Anthology Identifier) to all cited works. Thus, please ensure that you use Bib records that contain DOI or URLs for any of the ACL materials that you reference. Appropriate records should be found for most materials in the current ACL Anthology at http://aclanthology.info/. As examples, we cite BIBREF3 to show you how papers with a DOI will appear in the bibliography. We cite BIBREF4 to show how papers without a DOI but with an ACL Anthology Identifier will appear in the bibliography. As reviewing will be double-blind, the submitted version of the papers should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, e.g., “We previously showed BIBREF0 ...” should be avoided. Instead, use citations such as “ BIBREF0 Gusfield:97 previously showed ... ” Any preliminary non-archival versions of submitted papers should be listed in the submission form but not in the review version of the paper. NAACL-HLT 2019 reviewers are generally aware that authors may present preliminary versions of their work in other venues, but will not be provided the list of previous presentations from the submission form. Please do not use anonymous citations and do not include when submitting your papers. Papers that do not conform to these requirements may be rejected without review. References: Gather the full set of references together under the heading References; place the section before any Appendices. Arrange the references alphabetically by first author, rather than by order of occurrence in the text. By using a .bib file, as in this template, this will be automatically handled for you. See the \bibliography commands near the end for more. Provide as complete a citation as possible, using a consistent format, such as the one for Computational Linguistics or the one in the Publication Manual of the American Psychological Association BIBREF5 . Use of full names for authors rather than initials is preferred. A list of abbreviations for common computer science journals can be found in the ACM Computing Reviews BIBREF6 . The and Bib style files provided roughly fit the American Psychological Association format, allowing regular citations, short citations and multiple citations as described above. Example citing an arxiv paper: BIBREF7 . Example article in journal citation: BIBREF8 . Example article in proceedings, with location: BIBREF9 . Example article in proceedings, without location: BIBREF10 . 
See corresponding .bib file for further details. Submissions should accurately reference prior and related work, including code and data. If a piece of prior work appeared in multiple venues, the version that appeared in a refereed, archival venue should be referenced. If multiple versions of a piece of prior work exist, the one used by the authors should be referenced. Authors should not rely on automated citation indices to provide accurate references for prior and related work. Appendices: Appendices, if any, directly follow the text and the references (but see above). Letter them in sequence and provide an informative title: Appendix A. Title of Appendix. Footnotes Footnotes: Put footnotes at the bottom of the page and use 9 point font. They may be numbered or referred to by asterisks or other symbols. Footnotes should be separated from the text by a line. Graphics Illustrations: Place figures, tables, and photographs in the paper near where they are first discussed, rather than at the end, if possible. Wide illustrations may run across both columns. Color illustrations are discouraged, unless you have verified that they will be understandable when printed in black ink. Captions: Provide a caption for every illustration; number each one sequentially in the form: “Figure 1. Caption of the Figure.” “Table 1. Caption of the Table.” Type the captions of the figures and tables below the body, using 10 point text. Captions should be placed below illustrations. Captions that are one line are centered (see Table 1 ). Captions longer than one line are left-aligned (see Table 2 ). Do not overwrite the default caption sizes. The naaclhlt2019.sty file is compatible with the caption and subcaption packages; do not add optional arguments. Accessibility In an effort to accommodate people who are color-blind (as well as those printing to paper), grayscale readability for all accepted papers will be encouraged. Color is not forbidden, but authors should ensure that tables and figures do not rely solely on color to convey critical distinctions. A simple criterion: All curves and points in your figures should be clearly distinguishable without color. Translation of non-English Terms It is also advised to supplement non-English characters and terms with appropriate transliterations and/or translations since not all readers understand all such characters and terms. Inline transliteration or translation can be represented in the order of: original-form transliteration “translation”. Length of Submission The NAACL-HLT 2019 main conference accepts submissions of long papers and short papers. Long papers may consist of up to eight (8) pages of content plus unlimited pages for references. Upon acceptance, final versions of long papers will be given one additional page – up to nine (9) pages of content plus unlimited pages for references – so that reviewers' comments can be taken into account. Short papers may consist of up to four (4) pages of content, plus unlimited pages for references. Upon acceptance, short papers will be given five (5) pages in the proceedings and unlimited pages for references. For both long and short papers, all illustrations and tables that are part of the main text must be accommodated within these page limits, observing the formatting instructions given in the present document. Papers that do not conform to the specified length and formatting requirements are subject to be rejected without review. 
NAACL-HLT 2019 does encourage the submission of additional material that is relevant to the reviewers but not an integral part of the paper. There are two such types of material: appendices, which can be read, and non-readable supplementary materials, often data or code. Do not include this additional material in the same document as your main paper. Additional material must be submitted as one or more separate files, and must adhere to the same anonymity guidelines as the main paper. The paper must be self-contained: it is optional for reviewers to look at the supplementary material. Papers should not refer, for further detail, to documents, code or data resources that are not available to the reviewers. Refer to Appendix "Appendices" and Appendix "Supplemental Material" for further information. Workshop chairs may have different rules for allowed length and whether supplemental material is welcome. As always, the respective call for papers is the authoritative source. Acknowledgments The acknowledgments should go immediately before the references. Do not number the acknowledgments section. Do not include this section when submitting your paper for review. Preparing References: Include your own bib file like this: \bibliographystyle{acl_natbib} \begin{thebibliography}{50} Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings EMNLP 2015, pages 632–642. Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the ACL 2016, Volume 1: Long Papers. Michael B. Chang, Abhishek Gupta, Sergey Levine, and Thomas L. Griffiths. 2018. Automatically composing representation transformations as a means for generalization. CoRR, abs/1807.04640. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of ACL 2017, Volume 1: Long Papers, pages 1657–1668. Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2018. Learning to compose task-specific tree structures. In Proceedings of AAAI 2018. Caio Corro and Ivan Titov. 2018. Differentiable perturb-and-parse: Semi-supervised parsing with a structured variational autoencoder. CoRR, abs/1807.09875. Sreerupa Das, C Lee Giles, and Guo-Zheng Sun. 1992. Learning context-free grammars: Capabilities and limitations of a recurrent neural network with an external stack memory. In Proceedings of CogSci 1992, page 14. Andrew Drozdov and Samuel Bowman. 2017. The coadaptation problem when learning how and what to compose. Proceedings of the 2nd Workshop on Representation Learning for NLP. C. Lee Giles, Guo-Zheng Sun, Hsing-Hen Chen, Yee-Chun Lee, and Dong Chen. 1989. Higher order recurrent networks and grammatical inference. In Proceedings of NIPS 1989, pages 380–387. Christoph Goller and Andreas Kuchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. Neural Networks, 1:347–352. Will Grathwohl, Dami Choi, Yuhuai Wu, Geoffrey Roeder, and David K. Duvenaud. 2017. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. CoRR, abs/1711.00123. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. CoRR, abs/1611.01144. 
Armand Joulin and Tomas Mikolov. 2015. Inferring algorithmic patterns with stack-augmented recurrent nets. In Proceedings of NIPS 2015, pages 190–198. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Diederik P. Kingma and Max Welling. 2013. Auto-encoding variational bayes. CoRR, abs/1312.6114. Phong Le and Willem H. Zuidema. 2015. The forest convolutional network: Compositional distributional semantics with a neural chart and without binarization. In Proceedings of EMNLP 2015, pages 1155–1164. Moshe Looks, Marcello Herreshoff, DeLesley Hutchins, and Peter Norvig. 2017. Deep learning with dynamic computation graphs. arXiv preprint arXiv:1702.02181. Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2016. The concrete distribution: A continuous relaxation of discrete random variables. CoRR, abs/1611.00712. Jean Maillard and Stephen Clark. 2018. Latent tree learning with differentiable parsers: Shift-reduce parsing and chart parsing. CoRR, abs/1806.00840. Jean Maillard, Stephen Clark, and Dani Yogatama. 2017. Jointly learning sentence embeddings and syntax with unsupervised tree-lstms. CoRR, abs/1705.09189. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Proceedings of NIPS 2017, pages 6294–6305. Andriy Mnih and Karol Gregor. 2014. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030. Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. In Proceedings of ICML 2016, pages 1928–1937. Michael Mozer and Sreerupa Das. 1992. A connectionist symbol manipulator that discovers the structure of context-free languages. In Proceedings of NIPS 1992, pages 863–870. Tsendsuren Munkhdalai and Hong Yu. 2017. Neural tree indexers for text understanding. In Proceedings of EACL 2017, volume 1, page 11. NIH Public Access. Nikita Nangia and Samuel R. Bowman. 2018. Listops: A diagnostic dataset for latent tree learning. In Proceedings of NAACL-HLT 2018, Student Research Workshop, pages 92–99. Barbara BH Partee, Alice G ter Meulen, and Robert Wall. 1990. Mathematical methods in linguistics, volume 30. Springer Science & Business Media. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP 2014, pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444. Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceedings of CVPR 2017, pages 1179–1195. Sheldon M. Ross. 1997. Simulation (2. ed.). Statistical modeling and decision science. Academic Press. Himanshu Sahni, Saurabh Kumar, Farhan Tejani, and Charles L. Isbell. 2017. Learning to compose skills. CoRR, abs/1711.11289. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. CoRR, abs/1707.06347. Haoyue Shi, Hao Zhou, Jiaze Chen, and Lei Li. 2018. On tree-based neural sentence modeling. CoRR, abs/1808.09644. 
Satinder P. Singh. 1992. Transfer of learning by composing solutions of elemental sequential tasks. Machine Learning, 8:323–339. Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of ICML 2011, pages 129–136. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013a. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP 2013, pages 1631–1642. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP 2013, pages 1631–1642. G Sun. 1990. Connectionist pushdownautomata that learn context-free gram-mars. In Proc. IJCNN'90, volume 1, pages 577–580. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of ACL 2015, Volume 1: Long Papers, pages 1556–1566. George Tucker, Andriy Mnih, Chris J. Maddison, John Lawson, and Jascha Sohl-Dickstein. 2017. REBAR: low-variance, unbiased gradient estimates for discrete latent variable models. In Proceedings of NIPS 2017, pages 2624–2633. Claire Cardie Vlad Niculae, André F. T. Martins. 2018. Towards dynamic computation graphs via sparse latent structure. CoRR, abs/1809.00653. Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018a. Do latent tree learning models identify meaningful structure in sentences? TACL, 6:253–267. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018b. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL-HLT 2018, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256. Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2016. Learning to compose words into sentences with reinforcement learning. CoRR, abs/1611.09100. Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701. Xiao-Dan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In Proceedings of ICML 2015, pages 1604–1612. | where naaclhlt2019 corresponds to a naaclhlt2019.bib file. Appendices Appendices are material that can be read, and include lemmas, formulas, proofs, and tables that are not critical to the reading and understanding of the paper. Appendices should be uploaded as supplementary material when submitting the paper for review. Upon acceptance, the appendices come after the references, as shown here. Use \appendix before any appendix section to switch the section numbering over to letters. Supplemental Material Submissions may include non-readable supplementary material used in the work and described in the paper. Any accompanying software and/or data should include licenses and documentation of research review as appropriate. Supplementary material may report preprocessing decisions, model parameters, and other details necessary for the replication of the experiments reported in the paper. 
Seemingly small preprocessing decisions can sometimes make a large difference in performance, so it is crucial to record such decisions to precisely characterize state-of-the-art methods. Nonetheless, supplementary material should be supplementary (rather than central) to the paper. Submissions that misuse the supplementary material may be rejected without review. Supplementary material may include explanations or details of proofs or derivations that do not fit into the paper, lists of features or feature templates, sample inputs and outputs for a system, pseudo-code or source code, and data. (Source code and data should be separate uploads, rather than part of the paper). The paper should not rely on the supplementary material: while the paper may refer to and cite the supplementary material and the supplementary material will be available to the reviewers, they will not be asked to review the supplementary material.
The system outperforms the LSTM model by 27.7%, the RL-SPINN model by 38.5% and the Gumbel Tree-LSTM by 41.6%
de12e059088e4800d7d89e4214a3997994dbc0d9
de12e059088e4800d7d89e4214a3997994dbc0d9_0
Q: What are the baseline systems that are compared against? Text: Credits This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Michell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by JohannaD. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the International Joint Conference on Artificial Intelligence and the Conference on Computer Vision and Pattern Recognition. Introduction The following instructions are directed to authors of papers submitted to NAACL-HLT 2019 or accepted for publication in its proceedings. All authors are required to adhere to these specifications. Authors are required to provide a Portable Document Format (PDF) version of their papers. The proceedings are designed for printing on A4 paper. General Instructions Manuscripts must be in two-column format. Exceptions to the two-column format include the title, authors' names and complete addresses, which must be centered at the top of the first page, and any full-width figures or tables (see the guidelines in Subsection "The First Page" ). Type single-spaced. Start all pages directly under the top margin. See the guidelines later regarding formatting the first page. The manuscript should be printed single-sided and its length should not exceed the maximum page limit described in Section "Length of Submission" . Pages are numbered for initial submission. However, do not number the pages in the camera-ready version. By uncommenting \aclfinalcopy at the top of this document, it will compile to produce an example of the camera-ready formatting; by leaving it commented out, the document will be anonymized for initial submission. When you first create your submission on softconf, please fill in your submitted paper ID where *** appears in the \def\aclpaperid{***} definition at the top. The review process is double-blind, so do not include any author information (names, addresses) when submitting a paper for review. However, you should maintain space for names and addresses so that they will fit in the final (accepted) version. The NAACL-HLT 2019 style will create a titlebox space of 2.5in for you when \aclfinalcopy is commented out. The author list for submissions should include all (and only) individuals who made substantial contributions to the work presented. Each author listed on a submission to NAACL-HLT 2019 will be notified of submissions, revisions and the final decision. No authors may be added to or removed from submissions to NAACL-HLT 2019 after the submission deadline. The Ruler The NAACL-HLT 2019 style defines a printed ruler which should be presented in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document without the provided style files, please arrange for an equivalent ruler to appear on the final output pages. 
The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. ( users may uncomment the \aclfinalcopy command in the document preamble.) Reviewers: note that the ruler measurements do not align well with lines in the paper – this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. In most cases one would expect that the approximate location will be adequate, although you can also use fractional references (e.g., the first paragraph on this page ends at mark $108.5$ ). Electronically-available resources NAACL-HLT provides this description in 2e (naaclhlt2019.tex) and PDF format (naaclhlt2019.pdf), along with the 2e style file used to format it (naaclhlt2019.sty) and an ACL bibliography style (acl_natbib.bst) and example bibliography (naaclhlt2019.bib). These files are all available at http://naacl2019.org/downloads/ naaclhlt2019-latex.zip. We strongly recommend the use of these style files, which have been appropriately tailored for the NAACL-HLT 2019 proceedings. Format of Electronic Manuscript For the production of the electronic manuscript you must use Adobe's Portable Document Format (PDF). PDF files are usually produced from using the pdflatex command. If your version of produces Postscript files, you can convert these into PDF using ps2pdf or dvipdf. On Windows, you can also use Adobe Distiller to generate PDF. Please make sure that your PDF file includes all the necessary fonts (especially tree diagrams, symbols, and fonts with Asian characters). When you print or create the PDF file, there is usually an option in your printer setup to include none, all or just non-standard fonts. Please make sure that you select the option of including ALL the fonts. Before sending it, test your PDF by printing it from a computer different from the one where it was created. Moreover, some word processors may generate very large PDF files, where each page is rendered as an image. Such images may reproduce poorly. In this case, try alternative ways to obtain the PDF. One way on some systems is to install a driver for a postscript printer, send your document to the printer specifying “Output to a file”, then convert the file to PDF. It is of utmost importance to specify the A4 format (21 cm x 29.7 cm) when formatting the paper. When working with dvips, for instance, one should specify -t a4. Or using the command \special{papersize=210mm,297mm} in the latex preamble (directly below the \usepackage commands). Then using dvipdf and/or pdflatex which would make it easier for some. Print-outs of the PDF file on A4 paper should be identical to the hardcopy version. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs as soon as possible. Layout Format manuscripts two columns to a page, in the manner these instructions are formatted. The exact dimensions for a page on A4 paper are: Left and right margins: 2.5 cm Top margin: 2.5 cm Bottom margin: 2.5 cm Column width: 7.7 cm Column height: 24.7 cm Gap between columns: 0.6 cm Papers should not be submitted on any other paper size. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs above as soon as possible. Fonts For reasons of uniformity, Adobe's Times Roman font should be used. 
In 2e this is accomplished by putting \usepackage{times} \usepackage{latexsym} in the preamble. If Times Roman is unavailable, use Computer Modern Roman (2e's default). Note that the latter is about 10% less dense than Adobe's Times Roman font. The First Page Center the title, author's name(s) and affiliation(s) across both columns. Do not use footnotes for affiliations. Do not include the paper ID number assigned during the submission process. Use the two-column format only when you begin the abstract. Title: Place the title centered at the top of the first page, in a 15-point bold font. (For a complete guide to font sizes and styles, see Table 1 ) Long titles should be typed on two lines without a blank line intervening. Approximately, put the title at 2.5 cm from the top of the page, followed by a blank line, then the author's names(s), and the affiliation on the following line. Do not use only initials for given names (middle initials are allowed). Do not format surnames in all capitals (e.g., use “Mitchell” not “MITCHELL”). Do not format title and section headings in all capitals as well except for proper names (such as “BLEU”) that are conventionally in all capitals. The affiliation should contain the author's complete address, and if possible, an electronic mail address. Start the body of the first page 7.5 cm from the top of the page. The title, author names and addresses should be completely identical to those entered to the electronical paper submission website in order to maintain the consistency of author information among all publications of the conference. If they are different, the publication chairs may resolve the difference without consulting with you; so it is in your own interest to double-check that the information is consistent. Abstract: Type the abstract at the beginning of the first column. The width of the abstract text should be smaller than the width of the columns for the text in the body of the paper by about 0.6 cm on each side. Center the word Abstract in a 12 point bold font above the body of the abstract. The abstract should be a concise summary of the general thesis and conclusions of the paper. It should be no longer than 200 words. The abstract text should be in 10 point font. Text: Begin typing the main body of the text immediately after the abstract, observing the two-column format as shown in the present document. Do not include page numbers. Indent: Indent when starting a new paragraph, about 0.4 cm. Use 11 points for text and subsection headings, 12 points for section headings and 15 points for the title. Sections Headings: Type and label section and subsection headings in the style shown on the present document. Use numbered sections (Arabic numerals) in order to facilitate cross references. Number subsections with the section number and the subsection number separated by a dot, in Arabic numerals. Do not number subsubsections. Citations: Citations within the text appear in parentheses as BIBREF0 or, if the author's name appears in the text itself, as Gusfield Gusfield:97. Using the provided style, the former is accomplished using \cite and the latter with \shortcite or \newcite. Collapse multiple citations as in BIBREF0 , BIBREF1 ; this is accomplished with the provided style using commas within the \cite command, e.g., \cite{Gusfield:97,Aho:72}. Append lowercase letters to the year in cases of ambiguities. Treat double authors as in BIBREF1 , but write as in BIBREF2 when more than two authors are involved. 
Collapse multiple citations as in BIBREF0 , BIBREF1 . Also refrain from using full citations as sentence constituents. We suggest that instead of “ BIBREF0 showed that ...” you use “Gusfield Gusfield:97 showed that ...” If you are using the provided and Bib style files, you can use the command \citet (cite in text) to get “author (year)” citations. If the Bib file contains DOI fields, the paper title in the references section will appear as a hyperlink to the DOI, using the hyperref package. To disable the hyperref package, load the style file with the nohyperref option: \usepackage[nohyperref]{naaclhlt2019} Digital Object Identifiers: As part of our work to make ACL materials more widely used and cited outside of our discipline, ACL has registered as a CrossRef member, as a registrant of Digital Object Identifiers (DOIs), the standard for registering permanent URNs for referencing scholarly materials. As of 2017, we are requiring all camera-ready references to contain the appropriate DOIs (or as a second resort, the hyperlinked ACL Anthology Identifier) to all cited works. Thus, please ensure that you use Bib records that contain DOI or URLs for any of the ACL materials that you reference. Appropriate records should be found for most materials in the current ACL Anthology at http://aclanthology.info/. As examples, we cite BIBREF3 to show you how papers with a DOI will appear in the bibliography. We cite BIBREF4 to show how papers without a DOI but with an ACL Anthology Identifier will appear in the bibliography. As reviewing will be double-blind, the submitted version of the papers should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, e.g., “We previously showed BIBREF0 ...” should be avoided. Instead, use citations such as “ BIBREF0 Gusfield:97 previously showed ... ” Any preliminary non-archival versions of submitted papers should be listed in the submission form but not in the review version of the paper. NAACL-HLT 2019 reviewers are generally aware that authors may present preliminary versions of their work in other venues, but will not be provided the list of previous presentations from the submission form. Please do not use anonymous citations and do not include when submitting your papers. Papers that do not conform to these requirements may be rejected without review. References: Gather the full set of references together under the heading References; place the section before any Appendices. Arrange the references alphabetically by first author, rather than by order of occurrence in the text. By using a .bib file, as in this template, this will be automatically handled for you. See the \bibliography commands near the end for more. Provide as complete a citation as possible, using a consistent format, such as the one for Computational Linguistics or the one in the Publication Manual of the American Psychological Association BIBREF5 . Use of full names for authors rather than initials is preferred. A list of abbreviations for common computer science journals can be found in the ACM Computing Reviews BIBREF6 . The and Bib style files provided roughly fit the American Psychological Association format, allowing regular citations, short citations and multiple citations as described above. Example citing an arxiv paper: BIBREF7 . Example article in journal citation: BIBREF8 . Example article in proceedings, with location: BIBREF9 . Example article in proceedings, without location: BIBREF10 . 
See corresponding .bib file for further details. Submissions should accurately reference prior and related work, including code and data. If a piece of prior work appeared in multiple venues, the version that appeared in a refereed, archival venue should be referenced. If multiple versions of a piece of prior work exist, the one used by the authors should be referenced. Authors should not rely on automated citation indices to provide accurate references for prior and related work. Appendices: Appendices, if any, directly follow the text and the references (but see above). Letter them in sequence and provide an informative title: Appendix A. Title of Appendix. Footnotes Footnotes: Put footnotes at the bottom of the page and use 9 point font. They may be numbered or referred to by asterisks or other symbols. Footnotes should be separated from the text by a line. Graphics Illustrations: Place figures, tables, and photographs in the paper near where they are first discussed, rather than at the end, if possible. Wide illustrations may run across both columns. Color illustrations are discouraged, unless you have verified that they will be understandable when printed in black ink. Captions: Provide a caption for every illustration; number each one sequentially in the form: “Figure 1. Caption of the Figure.” “Table 1. Caption of the Table.” Type the captions of the figures and tables below the body, using 10 point text. Captions should be placed below illustrations. Captions that are one line are centered (see Table 1 ). Captions longer than one line are left-aligned (see Table 2 ). Do not overwrite the default caption sizes. The naaclhlt2019.sty file is compatible with the caption and subcaption packages; do not add optional arguments. Accessibility In an effort to accommodate people who are color-blind (as well as those printing to paper), grayscale readability for all accepted papers will be encouraged. Color is not forbidden, but authors should ensure that tables and figures do not rely solely on color to convey critical distinctions. A simple criterion: All curves and points in your figures should be clearly distinguishable without color. Translation of non-English Terms It is also advised to supplement non-English characters and terms with appropriate transliterations and/or translations since not all readers understand all such characters and terms. Inline transliteration or translation can be represented in the order of: original-form transliteration “translation”. Length of Submission The NAACL-HLT 2019 main conference accepts submissions of long papers and short papers. Long papers may consist of up to eight (8) pages of content plus unlimited pages for references. Upon acceptance, final versions of long papers will be given one additional page – up to nine (9) pages of content plus unlimited pages for references – so that reviewers' comments can be taken into account. Short papers may consist of up to four (4) pages of content, plus unlimited pages for references. Upon acceptance, short papers will be given five (5) pages in the proceedings and unlimited pages for references. For both long and short papers, all illustrations and tables that are part of the main text must be accommodated within these page limits, observing the formatting instructions given in the present document. Papers that do not conform to the specified length and formatting requirements are subject to be rejected without review. 
NAACL-HLT 2019 does encourage the submission of additional material that is relevant to the reviewers but not an integral part of the paper. There are two such types of material: appendices, which can be read, and non-readable supplementary materials, often data or code. Do not include this additional material in the same document as your main paper. Additional material must be submitted as one or more separate files, and must adhere to the same anonymity guidelines as the main paper. The paper must be self-contained: it is optional for reviewers to look at the supplementary material. Papers should not refer, for further detail, to documents, code or data resources that are not available to the reviewers. Refer to Appendix "Appendices" and Appendix "Supplemental Material" for further information. Workshop chairs may have different rules for allowed length and whether supplemental material is welcome. As always, the respective call for papers is the authoritative source. Acknowledgments The acknowledgments should go immediately before the references. Do not number the acknowledgments section. Do not include this section when submitting your paper for review. Preparing References: Include your own bib file like this: \bibliographystyle{acl_natbib} \begin{thebibliography}{50} Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings EMNLP 2015, pages 632–642. Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the ACL 2016, Volume 1: Long Papers. Michael B. Chang, Abhishek Gupta, Sergey Levine, and Thomas L. Griffiths. 2018. Automatically composing representation transformations as a means for generalization. CoRR, abs/1807.04640. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of ACL 2017, Volume 1: Long Papers, pages 1657–1668. Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2018. Learning to compose task-specific tree structures. In Proceedings of AAAI 2018. Caio Corro and Ivan Titov. 2018. Differentiable perturb-and-parse: Semi-supervised parsing with a structured variational autoencoder. CoRR, abs/1807.09875. Sreerupa Das, C Lee Giles, and Guo-Zheng Sun. 1992. Learning context-free grammars: Capabilities and limitations of a recurrent neural network with an external stack memory. In Proceedings of CogSci 1992, page 14. Andrew Drozdov and Samuel Bowman. 2017. The coadaptation problem when learning how and what to compose. Proceedings of the 2nd Workshop on Representation Learning for NLP. C. Lee Giles, Guo-Zheng Sun, Hsing-Hen Chen, Yee-Chun Lee, and Dong Chen. 1989. Higher order recurrent networks and grammatical inference. In Proceedings of NIPS 1989, pages 380–387. Christoph Goller and Andreas Kuchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. Neural Networks, 1:347–352. Will Grathwohl, Dami Choi, Yuhuai Wu, Geoffrey Roeder, and David K. Duvenaud. 2017. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. CoRR, abs/1711.00123. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. CoRR, abs/1611.01144. 
Armand Joulin and Tomas Mikolov. 2015. Inferring algorithmic patterns with stack-augmented recurrent nets. In Proceedings of NIPS 2015, pages 190–198. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Diederik P. Kingma and Max Welling. 2013. Auto-encoding variational bayes. CoRR, abs/1312.6114. Phong Le and Willem H. Zuidema. 2015. The forest convolutional network: Compositional distributional semantics with a neural chart and without binarization. In Proceedings of EMNLP 2015, pages 1155–1164. Moshe Looks, Marcello Herreshoff, DeLesley Hutchins, and Peter Norvig. 2017. Deep learning with dynamic computation graphs. arXiv preprint arXiv:1702.02181. Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2016. The concrete distribution: A continuous relaxation of discrete random variables. CoRR, abs/1611.00712. Jean Maillard and Stephen Clark. 2018. Latent tree learning with differentiable parsers: Shift-reduce parsing and chart parsing. CoRR, abs/1806.00840. Jean Maillard, Stephen Clark, and Dani Yogatama. 2017. Jointly learning sentence embeddings and syntax with unsupervised tree-lstms. CoRR, abs/1705.09189. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Proceedings of NIPS 2017, pages 6294–6305. Andriy Mnih and Karol Gregor. 2014. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030. Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. In Proceedings of ICML 2016, pages 1928–1937. Michael Mozer and Sreerupa Das. 1992. A connectionist symbol manipulator that discovers the structure of context-free languages. In Proceedings of NIPS 1992, pages 863–870. Tsendsuren Munkhdalai and Hong Yu. 2017. Neural tree indexers for text understanding. In Proceedings of EACL 2017, volume 1, page 11. NIH Public Access. Nikita Nangia and Samuel R. Bowman. 2018. Listops: A diagnostic dataset for latent tree learning. In Proceedings of NAACL-HLT 2018, Student Research Workshop, pages 92–99. Barbara BH Partee, Alice G ter Meulen, and Robert Wall. 1990. Mathematical methods in linguistics, volume 30. Springer Science & Business Media. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP 2014, pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444. Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proceedings of CVPR 2017, pages 1179–1195. Sheldon M. Ross. 1997. Simulation (2. ed.). Statistical modeling and decision science. Academic Press. Himanshu Sahni, Saurabh Kumar, Farhan Tejani, and Charles L. Isbell. 2017. Learning to compose skills. CoRR, abs/1711.11289. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. CoRR, abs/1707.06347. Haoyue Shi, Hao Zhou, Jiaze Chen, and Lei Li. 2018. On tree-based neural sentence modeling. CoRR, abs/1808.09644. 
Satinder P. Singh. 1992. Transfer of learning by composing solutions of elemental sequential tasks. Machine Learning, 8:323–339. Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of ICML 2011, pages 129–136. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013a. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP 2013, pages 1631–1642. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP 2013, pages 1631–1642. G Sun. 1990. Connectionist pushdownautomata that learn context-free gram-mars. In Proc. IJCNN'90, volume 1, pages 577–580. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of ACL 2015, Volume 1: Long Papers, pages 1556–1566. George Tucker, Andriy Mnih, Chris J. Maddison, John Lawson, and Jascha Sohl-Dickstein. 2017. REBAR: low-variance, unbiased gradient estimates for discrete latent variable models. In Proceedings of NIPS 2017, pages 2624–2633. Claire Cardie Vlad Niculae, André F. T. Martins. 2018. Towards dynamic computation graphs via sparse latent structure. CoRR, abs/1809.00653. Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018a. Do latent tree learning models identify meaningful structure in sentences? TACL, 6:253–267. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018b. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL-HLT 2018, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256. Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2016. Learning to compose words into sentences with reinforcement learning. CoRR, abs/1611.09100. Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701. Xiao-Dan Zhu, Parinaz Sobhani, and Hongyu Guo. 2015. Long short-term memory over recursive structures. In Proceedings of ICML 2015, pages 1604–1612. | where naaclhlt2019 corresponds to a naaclhlt2019.bib file. Appendices Appendices are material that can be read, and include lemmas, formulas, proofs, and tables that are not critical to the reading and understanding of the paper. Appendices should be uploaded as supplementary material when submitting the paper for review. Upon acceptance, the appendices come after the references, as shown here. Use \appendix before any appendix section to switch the section numbering over to letters. Supplemental Material Submissions may include non-readable supplementary material used in the work and described in the paper. Any accompanying software and/or data should include licenses and documentation of research review as appropriate. Supplementary material may report preprocessing decisions, model parameters, and other details necessary for the replication of the experiments reported in the paper. 
Seemingly small preprocessing decisions can sometimes make a large difference in performance, so it is crucial to record such decisions to precisely characterize state-of-the-art methods. Nonetheless, supplementary material should be supplementary (rather than central) to the paper. Submissions that misuse the supplementary material may be rejected without review. Supplementary material may include explanations or details of proofs or derivations that do not fit into the paper, lists of features or feature templates, sample inputs and outputs for a system, pseudo-code or source code, and data. (Source code and data should be separate uploads, rather than part of the paper). The paper should not rely on the supplementary material: while the paper may refer to and cite the supplementary material and the supplementary material will be available to the reviewers, they will not be asked to review the supplementary material.
The system is compared to baseline models: LSTM, RL-SPINN and Gumbel Tree-LSTM
3241f90a03853fa85d287007d2d51e7843ee3d9b
3241f90a03853fa85d287007d2d51e7843ee3d9b_0
Q: What standard large speaker verification corpora is used for evaluation? Text: Introduction Automatic spoken assessment systems are becoming increasingly popular, especially for English with the high demand around the world for learning of English as a second language BIBREF0, BIBREF1, BIBREF2, BIBREF3. In addition to assessing a candidate's English ability such as fluency and pronunciation and giving feedback to the candidate, these automatic systems also need to ensure the integrity of the candidate's score by detecting malpractice, as shown in Figure FIGREF1. Malpractice is the action by a candidate that breaks the assessment regulation and potentially threatens the reliability of the exam and associated certification. Malpractice can take a range of forms in spoken language assessment scenarios, such as using or trying to use unauthorised materials, impersonation, speaking irrelevant to prompts/questions, speaking in his/her first language (L1) instead of the target language for spoken tests, etc. This work aims to investigate the problem of automatically detecting impersonation, in which a candidate attempts to impersonate another in a speaking test. This is closely related to speaker verification. Speaker verification is the process to accept or reject an identity claim by comparing the speaker-specific information extracted from the verification speech with that from the enrolment speech of the claimed identity. These approaches can be directly applied to detect impersonation in spoken language tests. The performance of speaker verification systems has advanced considerably in the last decade with the development of i-vector modelling BIBREF4, in which a speech segment or a speaker is represented as a low-dimensional feature vector. Extraction of i-vectors is normally based on a Gaussian mixture model (GMM) based universal background model (UBM). This fixed length representation can then be used with a probabilistic linear discriminant analysis (PLDA) model to produce verification scores by comparing speaker representations, which are then used to make valid or impostor speaker decisions BIBREF5, BIBREF6, BIBREF7, BIBREF8. Recently, with developments in deep learning, performance of speaker verification systems has been improved by replacing the GMM with a deep neural network (DNN) to derive statistics for extracting speaker representations. This DNN is usually trained to take a fixed length window of the acoustics and discriminate between speakers using supplied speaker labels as targets. To handle the variable-length nature of the acoustic signal, a pooling layer is used to yield the final fixed-dimensional speaker representation. In BIBREF9, a DNN was trained at the frame level, and pooling was performed by averaging activation vectors of the last hidden layer over all frames of an input utterance. In BIBREF10, BIBREF11, BIBREF12, segment-level embeddings were extracted, which are referred to as x-vectors BIBREF12 with data augmentation. By leveraging data augmentation based on background noise and acoustic reverberation, these x-vectors based systems can achieve better performance than i-vector and d-vector based systems on standard speaker verification tasks. There has been some previous work on tasks related to non-native speech data using speaker verification approaches, such as detection of non-native speech BIBREF13, classification of native/non-native English BIBREF14 and L1 detection BIBREF15. 
In BIBREF16, meta-data (L1) sensitive bottleneck features were employed within the i-vector framework to improve the performance of speaker verification with non-native speech. In contrast, this paper focuses on making use of the state-of-the-art deep-learning based speaker verification approaches to detect candidate impersonation in an English speaking test. As there is limited amounts of data available for the non-native learner task, it is of interest to investigate adapting a standard speaker verification task to this non-native task. Here a system based on the VoxCeleb dataset BIBREF17, BIBREF18 is adapted to the BULATS task. Two forms of adaptation are examined: modifying the PLDA distance measure; and adapting the process for extracting the speaker representation by “fine-tuning" the network to the target domain. Furthermore, detailed analysis of performance is also done with respect to speaker attributes. Gender is an important attribute in impostor selection for standard speaker verification tasks, and for non-native speech, there are two additional speaker attributes: the L1 and the language proficiency level, which should also be taken into consideration for speaker verification. This paper is organised as follows. Section 2 gives an overview of speaker verification systems, and Section 3 introduces the non-native spoken English corpora used in this work. Experimental setup is described in Section 4, results and analysis are detailed in Section 5, and finally, conclusions are drawn in Section 6. Speaker Verification Systems In this work both i-vector and x-vector representations are used. For the i-vector speaker representation the form described in BIBREF4, BIBREF19 is used. This section will just discuss the x-vector speaker representation as this is the form that is adapted to the non-native verification task. Speaker Verification Systems ::: Deep neural network embedding extractor There are three blocks to form the DNN for extracting the utterance-level speaker representation, or embedding. The first block of the deep embedding extractor is a frame-level feature extractor. The input to this block is a sequence of acoustic feature vectors $\lbrace \mathbf {x}_{1},\mathbf {x}_{2},\cdots \mathbf {x}_{T}\rbrace $ of $T$ frames. This part normally consists of a number of hidden layers such as long short-term memory (LSTM) BIBREF20 or time delay neural network (TDNN) layers BIBREF11, BIBREF12. The activations of the last hidden layer of this block for the input frames, $\lbrace \mathbf {h}_{1},\mathbf {h}_{2},\cdots \mathbf {h}_{T}\rbrace $, form the input to the second block which is a statistics pooling layer. This layer converts variable-length frame-level features into a fixed-dimensional vector by calculating the mean vector, $\mu $ and standard deviation vector $\sigma $ of the frame-level feature vectors over the $T$ frames. The third block takes the statistics as the input and produces utterance-level representations using a number of stacked fully-connected hidden layers. The output of the DNN extractor is a softmax layer, and each of the nodes corresponds to one speaker identity. This DNN extractor is trained based on a cross-entropy loss function using the supplied speaker labels to get the targets. Consider there are $N$ training segments and $S$ speakers, the cross-entropy can be written as where $\theta $ represents the parameters of the DNN and $\delta \left(\cdot \right)$ represents the Kronecker delta function. 
$s_{k}^{\left(n\right)}$ represents that the speaker label for segment $n$ is $s_{k}$. After the DNN is trained, the utterance-level embeddings, $\mathbf {e}_{d}$, are normally extracted from the output of the affine component that is with or without the nonlinear activation function applied of one hidden layer in the third block of the DNN BIBREF11, BIBREF12. Speaker Verification Systems ::: PLDA classifier and adaptation After the speaker embeddings are extracted, they are used to train a PLDA model that yields the score (distance) between speaker embeddings. The training of the PLDA models aims to maximise the between-speaker difference and minimise the within-speaker variation, typically using expectation maximisation (EM). A number of variants of PLDA models have been introduced into the speaker verification task based on this “standard" PLDA BIBREF5: two-covariance PLDA BIBREF21 and heavy-tailed PLDA BIBREF6. The variant implemented in the Kaldi toolkit BIBREF19, and used in this work, follows BIBREF22 and is similar to the two-covariance model. This model can be written as where $\mathbf {e}$ is the speaker embedding. The vector $\mathbf {y}$ represents the underlying speaker vector and $\mu $ represents its mean. $\mathbf {z}$ is the Gaussian noise vector. For speaker verification tasks, estimation of this PLDA model can be performed by estimating the between-speaker covariance matrix, $\Gamma $, and within-speaker covariance matrix, $\Lambda $, using the EM algorithm. PLDA is a powerful approach to classifying speakers given a large amounts of training data with speaker labels BIBREF23, BIBREF24, BIBREF25. However, large amounts of labelled training data may not be available in the domain of interest such as the one considered in this paper, the non-native speaker verification. One approach to alleviate this problem is to do adaptation from a pre-trained out-of-domain model to the target domain. There are a number of methods for adapting the PLDA model in both supervised and unsupervised manners BIBREF26, BIBREF25. The Kaldi toolkit implements an unsupervised adaptation method which does not require knowledge of speaker labels BIBREF19. This method aims at adapting $\Gamma $ and $\Lambda $ of the out-of-domain PLDA model to better match the total covariance of the in-domain adaptation data. Non-native Spoken English Corpora The Business Language Testing Service (BULATS) test of Cambridge Assessment English BIBREF27 is a multi-level computer-based English test. It consists of read speech and free-speaking components, with the candidate responding to prompts. The BULATS spoken test has five sections, all with materials appropriate to business scenarios. The first section (A) contains eight questions about the candidate and their work. The second section (B) is a read-aloud section in which the candidates are asked to read eight sentences. The last three sections (C, D and E) have longer utterances of spontaneous speech elicited by prompts. In section C the candidates are asked to talk for one minute about a prompted business related topic. In section D, the candidate has one minute to describe a business situation illustrated in graphs or charts, such as pie or bar charts. The prompt for section E asks the candidate to imagine they are in a specific conversation and to respond to questions they may be asked in that situation (e.g. advice about planning a conference). This section is made up of 5x 20 seconds responses. 
Each section is scored between 0 and 6; the overall score is therefore between 0 and 30. This score is then mapped into Common European Framework of Reference (CEFR) BIBREF28 language proficiency levels, which is an international standard for describing language ability on a six-level scale. Each candidate is finally assigned a “grade", ranging from minimal (A1) and basic (A2) command, through limited but effective (B1) and generally effective (B2) command, to good operational (C1) and fully operational (C2) command of the spoken language. In this work, non-native speech from the BULATS test is used as both training and test data for the speaker verification systems. To investigate how the systems generalise, data for testing is also taken from the Cambridge Assessment English Linguaskill online test. Like BULATS, this is also a multi-level test and has a similar format composed of the same five sections as described before but assesses general English ability. Experimental Setup A set of 8,480 candidates from BULATS was used for training. The approximately 280 hours of speech covers a wide range of more than 70 different L1s. There are 15 major L1s with more than 100 candidates for each, including Tamil, Gujarati, Hindi, Telugu, Malayalam, Bengali, Spanish, Russian, Kannada, Portuguese, French, etc. Data augmentation was applied to the training set, and each recording was processed with a randomly selected source from “babble", “music", “noise" and “reverb" BIBREF12, which roughly doubled the size of the original training set. Another set of 8,318 BULATS candidates was used as one test set to evaluate the system performance. There are 7 major L1s in this set, each of which has more than 100 candidates: Spanish, Thai, Tamil, Arabic, Vietnamese, Polish and Dutch. There are no overlapping candidates between the BULATS training and test sets. The other test set of 2,540 candidates came from the Linguaskill test, of which there are 6 major L1s each with more than 100 candidates: Hindi, Portuguese, Japanese, Spanish, Thai and Vietnamese. Each of the training set and two test sets was fairly gender balanced, with approximately one third of candidates graded as B1, one third graded as B2, and the rest graded as A1, A2, C1, or C2, according to CEFR ability levels. For each test set candidate, responses from sections A and B were used for speaker enrolment (approximately 180s), while the more challenging free-speaking sections C, D, and E were used for whole section-level verification (approximately 60s for each section). Experimental results ::: Baseline system performance Gender is generally considered an important speaker attribute, and impostor trials were first selected from the same gender group as the reference speaker, as commonly done in standard speaker verification tasks. This resulted in a total of 104.8 million verification trials for the BULATS test set and 9.7 million trials for the Linguaskill test set. An i-vector/PLDA system and an x-vector/PLDA system were first trained on the “in-domain" BULATS training set. For the i-vector system, 13-dimensional perceptual linear predictive (PLP) features were extracted using the HTK toolkit BIBREF29 with a frame-length of 25ms. A UBM of 2,048 mixture components was first trained with full-covariance matrices, and then 600-dimensional i-vectors were extracted for both training and test sets. For the x-vector system, 40-dimensional filterbank features were also extracted using HTK with a frame-length of 25ms. 
DNN configurations were the same as used in BIBREF12, and 512-dimensional x-vectors were extracted from the affine component of the segment-level layer immediately following the statistics pooling layer. Performance of the two baseline systems is shown in Table TABREF9 in terms of equal error rate (EER). The x-vector system yielded lower EERs on both BULATS and Linguaskill test sets. In addition to the models trained on the BULATS data, it is also interesting to investigate the application of “out-of-the-box" models for standard speaker verification tasks to this non-native speaker verification task as there is limited amounts of non-native learner English data that is publicly available. In this paper, the Kaldi-released BIBREF19 VoxCeleb x-vector/PLDA system was used as imported models, which was trained on augmented VoxCeleb 1 BIBREF17 and VoxCeleb 2 BIBREF18. There are more than 7,000 speakers in the VoxCeleb dataset with more than 2,000 hours of audio data, making it the largest publicly available speaker recognition dataset. 30 dimensional mel-frequency cepstral coefficients (MFCCs) were used as input features and system configurations were the same as the BULATS x-vector/PLDA one. It can be seen from Table TABREF10 that these out-of-domain models gave worse performance than baseline systems trained on a far smaller amount of BULATS data due to domain mismatch. Thus, two kinds of in-domain adaptation strategies were explored to make use of the BULATS training set: PLDA adaptation and x-vector extractor fine-tuning. For PLDA adaptation, x-vectors of the BULATS training set were first extracted using the VoxCeleb-trained x-vector extractor, and then employed to adapt the VoxCeleb-trained PLDA model with their mean and variance. For x-vector extractor fine-tuning, with all other layers of the VoxCeleb-trained model kept still, the output layer was re-initialised using the BULATS training set with the number of targets adjusted accordingly, and then all layers were fine-tuned on the BULATS training set. Here the PLDA adaptation system is referred to as X1 and the extractor fine-tuning system is referred to as X2. Both adaptation approaches can yield good performance gains as can be seen from Table TABREF10. PLDA adaptation is a straightforward yet effective way, while the system with x-vector extractor fine-tuning gave slightly lower EERs on both BULATS and Linguaskill test sets by virtue of a relatively “in-domain" extractor prior to the PLDA back-end. Detection Error Tradeoff (DET) curves of the four x-vector/PLDA systems on the BULATS test set were illustrated in Figure FIGREF11. It can be seen that, both adaptation systems outperformed the original VoxCeleb-trained system in any threshold of the false alarm (FA) probability and the miss (MS) probability. The extractor fine-tuning system only gave higher MS probability than the PLDA adapted one with FA probability below 0.4%, while for a large range of FA probabilities above 0.4%, the extractor fine-tuning system outperformed the PLDA adapted one. Furthermore, by leveraging the large-scale VoxCeleb dataset, both adaptation systems produced lower EERs than baseline systems solely trained on BULATS data, especially the extractor fine-tuning one, which gave a reduction rate of 26$\%$ in EER over the baseline x-vector/PLDA system on the BULATS test set. It can also be seen from Figure FIGREF11 that, the extractor fine-tuning system gave consistently better performance than the baseline systems for almost any threshold of FA and MS. 
Experimental results ::: Impostor attributes analysis As mentioned in Section SECREF8, gender is an important attribute when selecting impostors. For the non-native English speech data considered in this work, there are two additional attributes that may significantly impact performance, the candidate speaking ability (grade) and L1. In this section, the impact of both attributes on verification performance is analysed on the BULATS test set using the extractor fine-tuning system (X2) detailed in Section SECREF8 with impostors selected from the same gender group as the reference speaker. Taking EER as the operating threshold, both grade and L1 breakdown are investigated with respect to the number of impostor trials resulting in false alarm (FA) errors. As there were only a small number of speakers graded as C1 or C2 in the BULATS test set, the two grade groups were merged into one group as C in the following analysis. Also for a fair comparison, 200 speakers were randomly selected (roughly gender balanced) for each grade group from the BULATS test set, and the grade breakdown is shown in Table TABREF13. For lower grades, impostor trials from the grade group of A1 dominated FA errors as A1 speakers tend to speak short utterances, which is more challenging for the systems. For higher grades (B2 and C), impostor trials from the grade group of C constituted a larger portion of FA errors probably due to the fact that C speakers tend to speak long utterances in a more “native" way and they are also similar to B2 speakers. The numbers of speakers from different L1 groups also varied in the BULATS test set. For a fair comparison, 200 speakers were randomly selected (roughly gender balanced) for each of 6 major L1s. The L1 breakdown is shown in Table TABREF14, where impostor trials from the same L1 group as the reference speaker generally dominated FA errors. English learners from the same L1 group tend to have similar accents when speaking English, which makes them more confusable to speaker verification systems compared to learners from a different L1 group. Particularly, impostors of Thai L1 constitute a considerable portion of FA errors for each L1, as A1 and A2 speakers dominate Thai L1 in the BULATS test set, which is different from other L1s where B1 and B2 speakers dominate. Experimental results ::: Overall system performance Based on the analysis in the previous section, the impact of speaker attributes beyond gender, the grade and L1, were used as additional restrictions on the imposter set selection. The following forms of impostor selection were examined: gender, impostors from the same gender group as the reference speaker, as in Section SECREF8; grade, impostors from the same grade group as the reference speaker; $>$grade, impostors from higher grade groups than the reference speaker if the grade of the reference speaker is lower than C, otherwise from C; this case is of practical interest for impersonation in spoken language tests; L1, impostors from the same L1 group as the reference speaker; The number of total verification trials decreases with further restriction on impostors, which is shown in Table TABREF20. Table TABREF21 shows the impact on EER of restricting the possible set of impostors according to gender, L1 or grade on both BULATS and Linguaskill test sets. Due to the lack of data for each L1 or grade, X1 and X2 systems that are adapted or fine-tuned on all of the BULATS training set are used for verification. 
As expected, restricting possible impostors according to speaker attributes yielded higher EERs as the percentage of impostors “close" to the reference speaker increased. Take gender as the starting point, which is the configuration used in previous experiments in Section SECREF8. Further restricting the set of impostors to L1 again increased EERs agreeing with the results shown in Table TABREF14, similarly to grade. An interesting result in terms of handling impersonation is that, if the set of impostors is further restricted to $>$grade, EERs decrease compared to simply restricted to gender. The highest EER for both systems was achieved by restricted to gender+L1+grade, which indicates that all these are important speaker attributes of non-native data. The gender+L1+$>$grade case is more related to practical scenarios of impersonation, since it is more likely that a candidate chooses a substitute from the same gender and L1 group but speak the target language better to impersonate him/herself in order to obtain a higher grade in a spoken language test. For the impersonation scenario where the impostor trials are restricted to gender+L1+$>$grade, the DET curves for all systems including the unadapted VoxCeleb and BULATS trained systems are shown in Figure FIGREF22 for the BULATS test set. This allows the overall distribution of FA and MS errors for the aforementioned systems to be evaluated. It can be seen that, compared with the fine-tuned X2 system, the PLDA-adapted X1 system had a lower MS probability when the FA probability was low and had a higher MS probability when the FA probability was high. This implies that the X1 system tends to accept imposters as reference speakers while the X2 system tends to reject reference speakers as impostors. For malpractice candidate impersonation in spoken language tests, the X2 system may have a high cost as it may incorrectly identify malpractice in valid candidates. This would require manual checks to confirm this classification. In contrast, the X1 system may result in a lower level of security because it has a higher chance of misidentifying the candidate who is impersonating another. Based on these complementary trends, a score-level linear combination of the two systems was performed with weights of 0.7 and 0.3 for X1 and X2 systems, respectively. The combination system gave consistently better performance for a wide range of FA and MS probabilities than the aforementioned systems with an EER of 0.58% on the BULATS test set, as demonstrated in Figure FIGREF22. The same trend was also observed at these weightings on the Linguaskill test set with an EER of 0.72% for the combination system, approximately 8% relative reduction in EER from the X1 system. Thus, the combination of the two adapted systems making use of both large-scale VoxCeleb data and in-domain BULATS data, can serve as a sensible configuration for impersonation detection in spoken language tests. Conclusions This paper has investigated malpractice in the form of candidate impersonation for spoken language assessment. This task has close relationships to standard speaker verification, but applied to the domain of non-native speech. Advanced neural network based speaker verification systems were built on both limited non-native spoken English data from the BULATS test, and a large standard corpus VoxCeleb. For the configuration used all systems yielded relatively low EERs of less than 1%. 
Though built with only limited data the systems trained on just BULATS systems outperformed the “out-of-the-box" VoxCeleb based system. However by adapting both the PLDA model and the deep speaker representation, the VoxCeleb-based systems could yield lower EERs. The attributes of the “impostors" was then analysed in terms of both the impostor's grade and L1. As expected, L1 was the most important attribute of the impostor selected, though the grade did also influence performance. With the most likely scenario of impersonation by restricting impostors to be from the same gender, same L1, and higher grade group, the combination of the two adapted systems gave consistently better performance for a wide range of FA and MS probabilities, making it a sensible configuration for impersonation detection.
non-native speech from the BULATS test
52e8f79814736fea96fd9b642881b476243e1698
52e8f79814736fea96fd9b642881b476243e1698_0
Q: What systems are tested? Text: Introduction Automatic spoken assessment systems are becoming increasingly popular, especially for English with the high demand around the world for learning of English as a second language BIBREF0, BIBREF1, BIBREF2, BIBREF3. In addition to assessing a candidate's English ability such as fluency and pronunciation and giving feedback to the candidate, these automatic systems also need to ensure the integrity of the candidate's score by detecting malpractice, as shown in Figure FIGREF1. Malpractice is the action by a candidate that breaks the assessment regulation and potentially threatens the reliability of the exam and associated certification. Malpractice can take a range of forms in spoken language assessment scenarios, such as using or trying to use unauthorised materials, impersonation, speaking irrelevant to prompts/questions, speaking in his/her first language (L1) instead of the target language for spoken tests, etc. This work aims to investigate the problem of automatically detecting impersonation, in which a candidate attempts to impersonate another in a speaking test. This is closely related to speaker verification. Speaker verification is the process to accept or reject an identity claim by comparing the speaker-specific information extracted from the verification speech with that from the enrolment speech of the claimed identity. These approaches can be directly applied to detect impersonation in spoken language tests. The performance of speaker verification systems has advanced considerably in the last decade with the development of i-vector modelling BIBREF4, in which a speech segment or a speaker is represented as a low-dimensional feature vector. Extraction of i-vectors is normally based on a Gaussian mixture model (GMM) based universal background model (UBM). This fixed length representation can then be used with a probabilistic linear discriminant analysis (PLDA) model to produce verification scores by comparing speaker representations, which are then used to make valid or impostor speaker decisions BIBREF5, BIBREF6, BIBREF7, BIBREF8. Recently, with developments in deep learning, performance of speaker verification systems has been improved by replacing the GMM with a deep neural network (DNN) to derive statistics for extracting speaker representations. This DNN is usually trained to take a fixed length window of the acoustics and discriminate between speakers using supplied speaker labels as targets. To handle the variable-length nature of the acoustic signal, a pooling layer is used to yield the final fixed-dimensional speaker representation. In BIBREF9, a DNN was trained at the frame level, and pooling was performed by averaging activation vectors of the last hidden layer over all frames of an input utterance. In BIBREF10, BIBREF11, BIBREF12, segment-level embeddings were extracted, which are referred to as x-vectors BIBREF12 with data augmentation. By leveraging data augmentation based on background noise and acoustic reverberation, these x-vectors based systems can achieve better performance than i-vector and d-vector based systems on standard speaker verification tasks. There has been some previous work on tasks related to non-native speech data using speaker verification approaches, such as detection of non-native speech BIBREF13, classification of native/non-native English BIBREF14 and L1 detection BIBREF15. 
In BIBREF16, meta-data (L1) sensitive bottleneck features were employed within the i-vector framework to improve the performance of speaker verification with non-native speech. In contrast, this paper focuses on making use of the state-of-the-art deep-learning based speaker verification approaches to detect candidate impersonation in an English speaking test. As there is limited amounts of data available for the non-native learner task, it is of interest to investigate adapting a standard speaker verification task to this non-native task. Here a system based on the VoxCeleb dataset BIBREF17, BIBREF18 is adapted to the BULATS task. Two forms of adaptation are examined: modifying the PLDA distance measure; and adapting the process for extracting the speaker representation by “fine-tuning" the network to the target domain. Furthermore, detailed analysis of performance is also done with respect to speaker attributes. Gender is an important attribute in impostor selection for standard speaker verification tasks, and for non-native speech, there are two additional speaker attributes: the L1 and the language proficiency level, which should also be taken into consideration for speaker verification. This paper is organised as follows. Section 2 gives an overview of speaker verification systems, and Section 3 introduces the non-native spoken English corpora used in this work. Experimental setup is described in Section 4, results and analysis are detailed in Section 5, and finally, conclusions are drawn in Section 6. Speaker Verification Systems In this work both i-vector and x-vector representations are used. For the i-vector speaker representation the form described in BIBREF4, BIBREF19 is used. This section will just discuss the x-vector speaker representation as this is the form that is adapted to the non-native verification task. Speaker Verification Systems ::: Deep neural network embedding extractor There are three blocks to form the DNN for extracting the utterance-level speaker representation, or embedding. The first block of the deep embedding extractor is a frame-level feature extractor. The input to this block is a sequence of acoustic feature vectors $\lbrace \mathbf {x}_{1},\mathbf {x}_{2},\cdots \mathbf {x}_{T}\rbrace $ of $T$ frames. This part normally consists of a number of hidden layers such as long short-term memory (LSTM) BIBREF20 or time delay neural network (TDNN) layers BIBREF11, BIBREF12. The activations of the last hidden layer of this block for the input frames, $\lbrace \mathbf {h}_{1},\mathbf {h}_{2},\cdots \mathbf {h}_{T}\rbrace $, form the input to the second block which is a statistics pooling layer. This layer converts variable-length frame-level features into a fixed-dimensional vector by calculating the mean vector, $\mu $ and standard deviation vector $\sigma $ of the frame-level feature vectors over the $T$ frames. The third block takes the statistics as the input and produces utterance-level representations using a number of stacked fully-connected hidden layers. The output of the DNN extractor is a softmax layer, and each of the nodes corresponds to one speaker identity. This DNN extractor is trained based on a cross-entropy loss function using the supplied speaker labels to get the targets. Consider there are $N$ training segments and $S$ speakers, the cross-entropy can be written as where $\theta $ represents the parameters of the DNN and $\delta \left(\cdot \right)$ represents the Kronecker delta function. 
$s_{k}^{\left(n\right)}$ represents that the speaker label for segment $n$ is $s_{k}$. After the DNN is trained, the utterance-level embeddings, $\mathbf {e}_{d}$, are normally extracted from the output of the affine component that is with or without the nonlinear activation function applied of one hidden layer in the third block of the DNN BIBREF11, BIBREF12. Speaker Verification Systems ::: PLDA classifier and adaptation After the speaker embeddings are extracted, they are used to train a PLDA model that yields the score (distance) between speaker embeddings. The training of the PLDA models aims to maximise the between-speaker difference and minimise the within-speaker variation, typically using expectation maximisation (EM). A number of variants of PLDA models have been introduced into the speaker verification task based on this “standard" PLDA BIBREF5: two-covariance PLDA BIBREF21 and heavy-tailed PLDA BIBREF6. The variant implemented in the Kaldi toolkit BIBREF19, and used in this work, follows BIBREF22 and is similar to the two-covariance model. This model can be written as where $\mathbf {e}$ is the speaker embedding. The vector $\mathbf {y}$ represents the underlying speaker vector and $\mu $ represents its mean. $\mathbf {z}$ is the Gaussian noise vector. For speaker verification tasks, estimation of this PLDA model can be performed by estimating the between-speaker covariance matrix, $\Gamma $, and within-speaker covariance matrix, $\Lambda $, using the EM algorithm. PLDA is a powerful approach to classifying speakers given a large amounts of training data with speaker labels BIBREF23, BIBREF24, BIBREF25. However, large amounts of labelled training data may not be available in the domain of interest such as the one considered in this paper, the non-native speaker verification. One approach to alleviate this problem is to do adaptation from a pre-trained out-of-domain model to the target domain. There are a number of methods for adapting the PLDA model in both supervised and unsupervised manners BIBREF26, BIBREF25. The Kaldi toolkit implements an unsupervised adaptation method which does not require knowledge of speaker labels BIBREF19. This method aims at adapting $\Gamma $ and $\Lambda $ of the out-of-domain PLDA model to better match the total covariance of the in-domain adaptation data. Non-native Spoken English Corpora The Business Language Testing Service (BULATS) test of Cambridge Assessment English BIBREF27 is a multi-level computer-based English test. It consists of read speech and free-speaking components, with the candidate responding to prompts. The BULATS spoken test has five sections, all with materials appropriate to business scenarios. The first section (A) contains eight questions about the candidate and their work. The second section (B) is a read-aloud section in which the candidates are asked to read eight sentences. The last three sections (C, D and E) have longer utterances of spontaneous speech elicited by prompts. In section C the candidates are asked to talk for one minute about a prompted business related topic. In section D, the candidate has one minute to describe a business situation illustrated in graphs or charts, such as pie or bar charts. The prompt for section E asks the candidate to imagine they are in a specific conversation and to respond to questions they may be asked in that situation (e.g. advice about planning a conference). This section is made up of 5x 20 seconds responses. 
Each section is scored between 0 and 6; the overall score is therefore between 0 and 30. This score is then mapped into Common European Framework of Reference (CEFR) BIBREF28 language proficiency levels, which is an international standard for describing language ability on a six-level scale. Each candidate is finally assigned a “grade", ranging from minimal (A1) and basic (A2) command, through limited but effective (B1) and generally effective (B2) command, to good operational (C1) and fully operational (C2) command of the spoken language. In this work, non-native speech from the BULATS test is used as both training and test data for the speaker verification systems. To investigate how the systems generalise, data for testing is also taken from the Cambridge Assessment English Linguaskill online test. Like BULATS, this is also a multi-level test and has a similar format composed of the same five sections as described before but assesses general English ability. Experimental Setup A set of 8,480 candidates from BULATS was used for training. The approximately 280 hours of speech covers a wide range of more than 70 different L1s. There are 15 major L1s with more than 100 candidates for each, including Tamil, Gujarati, Hindi, Telugu, Malayalam, Bengali, Spanish, Russian, Kannada, Portuguese, French, etc. Data augmentation was applied to the training set, and each recording was processed with a randomly selected source from “babble", “music", “noise" and “reverb" BIBREF12, which roughly doubled the size of the original training set. Another set of 8,318 BULATS candidates was used as one test set to evaluate the system performance. There are 7 major L1s in this set, each of which has more than 100 candidates: Spanish, Thai, Tamil, Arabic, Vietnamese, Polish and Dutch. There are no overlapping candidates between the BULATS training and test sets. The other test set of 2,540 candidates came from the Linguaskill test, of which there are 6 major L1s each with more than 100 candidates: Hindi, Portuguese, Japanese, Spanish, Thai and Vietnamese. Each of the training set and two test sets was fairly gender balanced, with approximately one third of candidates graded as B1, one third graded as B2, and the rest graded as A1, A2, C1, or C2, according to CEFR ability levels. For each test set candidate, responses from sections A and B were used for speaker enrolment (approximately 180s), while the more challenging free-speaking sections C, D, and E were used for whole section-level verification (approximately 60s for each section). Experimental results ::: Baseline system performance Gender is generally considered an important speaker attribute, and impostor trials were first selected from the same gender group as the reference speaker, as commonly done in standard speaker verification tasks. This resulted in a total of 104.8 million verification trials for the BULATS test set and 9.7 million trials for the Linguaskill test set. An i-vector/PLDA system and an x-vector/PLDA system were first trained on the “in-domain" BULATS training set. For the i-vector system, 13-dimensional perceptual linear predictive (PLP) features were extracted using the HTK toolkit BIBREF29 with a frame-length of 25ms. A UBM of 2,048 mixture components was first trained with full-covariance matrices, and then 600-dimensional i-vectors were extracted for both training and test sets. For the x-vector system, 40-dimensional filterbank features were also extracted using HTK with a frame-length of 25ms. 
DNN configurations were the same as used in BIBREF12, and 512-dimensional x-vectors were extracted from the affine component of the segment-level layer immediately following the statistics pooling layer. Performance of the two baseline systems is shown in Table TABREF9 in terms of equal error rate (EER). The x-vector system yielded lower EERs on both BULATS and Linguaskill test sets. In addition to the models trained on the BULATS data, it is also interesting to investigate the application of “out-of-the-box" models for standard speaker verification tasks to this non-native speaker verification task as there is limited amounts of non-native learner English data that is publicly available. In this paper, the Kaldi-released BIBREF19 VoxCeleb x-vector/PLDA system was used as imported models, which was trained on augmented VoxCeleb 1 BIBREF17 and VoxCeleb 2 BIBREF18. There are more than 7,000 speakers in the VoxCeleb dataset with more than 2,000 hours of audio data, making it the largest publicly available speaker recognition dataset. 30 dimensional mel-frequency cepstral coefficients (MFCCs) were used as input features and system configurations were the same as the BULATS x-vector/PLDA one. It can be seen from Table TABREF10 that these out-of-domain models gave worse performance than baseline systems trained on a far smaller amount of BULATS data due to domain mismatch. Thus, two kinds of in-domain adaptation strategies were explored to make use of the BULATS training set: PLDA adaptation and x-vector extractor fine-tuning. For PLDA adaptation, x-vectors of the BULATS training set were first extracted using the VoxCeleb-trained x-vector extractor, and then employed to adapt the VoxCeleb-trained PLDA model with their mean and variance. For x-vector extractor fine-tuning, with all other layers of the VoxCeleb-trained model kept still, the output layer was re-initialised using the BULATS training set with the number of targets adjusted accordingly, and then all layers were fine-tuned on the BULATS training set. Here the PLDA adaptation system is referred to as X1 and the extractor fine-tuning system is referred to as X2. Both adaptation approaches can yield good performance gains as can be seen from Table TABREF10. PLDA adaptation is a straightforward yet effective way, while the system with x-vector extractor fine-tuning gave slightly lower EERs on both BULATS and Linguaskill test sets by virtue of a relatively “in-domain" extractor prior to the PLDA back-end. Detection Error Tradeoff (DET) curves of the four x-vector/PLDA systems on the BULATS test set were illustrated in Figure FIGREF11. It can be seen that, both adaptation systems outperformed the original VoxCeleb-trained system in any threshold of the false alarm (FA) probability and the miss (MS) probability. The extractor fine-tuning system only gave higher MS probability than the PLDA adapted one with FA probability below 0.4%, while for a large range of FA probabilities above 0.4%, the extractor fine-tuning system outperformed the PLDA adapted one. Furthermore, by leveraging the large-scale VoxCeleb dataset, both adaptation systems produced lower EERs than baseline systems solely trained on BULATS data, especially the extractor fine-tuning one, which gave a reduction rate of 26$\%$ in EER over the baseline x-vector/PLDA system on the BULATS test set. It can also be seen from Figure FIGREF11 that, the extractor fine-tuning system gave consistently better performance than the baseline systems for almost any threshold of FA and MS. 
Experimental results ::: Impostor attributes analysis As mentioned in Section SECREF8, gender is an important attribute when selecting impostors. For the non-native English speech data considered in this work, there are two additional attributes that may significantly impact performance, the candidate speaking ability (grade) and L1. In this section, the impact of both attributes on verification performance is analysed on the BULATS test set using the extractor fine-tuning system (X2) detailed in Section SECREF8 with impostors selected from the same gender group as the reference speaker. Taking EER as the operating threshold, both grade and L1 breakdown are investigated with respect to the number of impostor trials resulting in false alarm (FA) errors. As there were only a small number of speakers graded as C1 or C2 in the BULATS test set, the two grade groups were merged into one group as C in the following analysis. Also for a fair comparison, 200 speakers were randomly selected (roughly gender balanced) for each grade group from the BULATS test set, and the grade breakdown is shown in Table TABREF13. For lower grades, impostor trials from the grade group of A1 dominated FA errors as A1 speakers tend to speak short utterances, which is more challenging for the systems. For higher grades (B2 and C), impostor trials from the grade group of C constituted a larger portion of FA errors probably due to the fact that C speakers tend to speak long utterances in a more “native" way and they are also similar to B2 speakers. The numbers of speakers from different L1 groups also varied in the BULATS test set. For a fair comparison, 200 speakers were randomly selected (roughly gender balanced) for each of 6 major L1s. The L1 breakdown is shown in Table TABREF14, where impostor trials from the same L1 group as the reference speaker generally dominated FA errors. English learners from the same L1 group tend to have similar accents when speaking English, which makes them more confusable to speaker verification systems compared to learners from a different L1 group. Particularly, impostors of Thai L1 constitute a considerable portion of FA errors for each L1, as A1 and A2 speakers dominate Thai L1 in the BULATS test set, which is different from other L1s where B1 and B2 speakers dominate. Experimental results ::: Overall system performance Based on the analysis in the previous section, the impact of speaker attributes beyond gender, the grade and L1, were used as additional restrictions on the imposter set selection. The following forms of impostor selection were examined: gender, impostors from the same gender group as the reference speaker, as in Section SECREF8; grade, impostors from the same grade group as the reference speaker; $>$grade, impostors from higher grade groups than the reference speaker if the grade of the reference speaker is lower than C, otherwise from C; this case is of practical interest for impersonation in spoken language tests; L1, impostors from the same L1 group as the reference speaker; The number of total verification trials decreases with further restriction on impostors, which is shown in Table TABREF20. Table TABREF21 shows the impact on EER of restricting the possible set of impostors according to gender, L1 or grade on both BULATS and Linguaskill test sets. Due to the lack of data for each L1 or grade, X1 and X2 systems that are adapted or fine-tuned on all of the BULATS training set are used for verification. 
As expected, restricting possible impostors according to speaker attributes yielded higher EERs as the percentage of impostors “close" to the reference speaker increased. Take gender as the starting point, which is the configuration used in previous experiments in Section SECREF8. Further restricting the set of impostors to L1 again increased EERs agreeing with the results shown in Table TABREF14, similarly to grade. An interesting result in terms of handling impersonation is that, if the set of impostors is further restricted to $>$grade, EERs decrease compared to simply restricted to gender. The highest EER for both systems was achieved by restricted to gender+L1+grade, which indicates that all these are important speaker attributes of non-native data. The gender+L1+$>$grade case is more related to practical scenarios of impersonation, since it is more likely that a candidate chooses a substitute from the same gender and L1 group but speak the target language better to impersonate him/herself in order to obtain a higher grade in a spoken language test. For the impersonation scenario where the impostor trials are restricted to gender+L1+$>$grade, the DET curves for all systems including the unadapted VoxCeleb and BULATS trained systems are shown in Figure FIGREF22 for the BULATS test set. This allows the overall distribution of FA and MS errors for the aforementioned systems to be evaluated. It can be seen that, compared with the fine-tuned X2 system, the PLDA-adapted X1 system had a lower MS probability when the FA probability was low and had a higher MS probability when the FA probability was high. This implies that the X1 system tends to accept imposters as reference speakers while the X2 system tends to reject reference speakers as impostors. For malpractice candidate impersonation in spoken language tests, the X2 system may have a high cost as it may incorrectly identify malpractice in valid candidates. This would require manual checks to confirm this classification. In contrast, the X1 system may result in a lower level of security because it has a higher chance of misidentifying the candidate who is impersonating another. Based on these complementary trends, a score-level linear combination of the two systems was performed with weights of 0.7 and 0.3 for X1 and X2 systems, respectively. The combination system gave consistently better performance for a wide range of FA and MS probabilities than the aforementioned systems with an EER of 0.58% on the BULATS test set, as demonstrated in Figure FIGREF22. The same trend was also observed at these weightings on the Linguaskill test set with an EER of 0.72% for the combination system, approximately 8% relative reduction in EER from the X1 system. Thus, the combination of the two adapted systems making use of both large-scale VoxCeleb data and in-domain BULATS data, can serve as a sensible configuration for impersonation detection in spoken language tests. Conclusions This paper has investigated malpractice in the form of candidate impersonation for spoken language assessment. This task has close relationships to standard speaker verification, but applied to the domain of non-native speech. Advanced neural network based speaker verification systems were built on both limited non-native spoken English data from the BULATS test, and a large standard corpus VoxCeleb. For the configuration used all systems yielded relatively low EERs of less than 1%. 
Though built with only limited data the systems trained on just BULATS systems outperformed the “out-of-the-box" VoxCeleb based system. However by adapting both the PLDA model and the deep speaker representation, the VoxCeleb-based systems could yield lower EERs. The attributes of the “impostors" was then analysed in terms of both the impostor's grade and L1. As expected, L1 was the most important attribute of the impostor selected, though the grade did also influence performance. With the most likely scenario of impersonation by restricting impostors to be from the same gender, same L1, and higher grade group, the combination of the two adapted systems gave consistently better performance for a wide range of FA and MS probabilities, making it a sensible configuration for impersonation detection.
BULATS i-vector/PLDA BULATS x-vector/PLDA VoxCeleb x-vector/PLDA PLDA adaptation (X1) Extractor fine-tuning (X2)
2af66730a85b29ff28dbfa58342e0ae6265d2963
2af66730a85b29ff28dbfa58342e0ae6265d2963_0
Q: How many examples are there in the source domain? Text: Introduction Domain adaptation is a machine learning paradigm that aims at improving the generalization performance of a new (target) domain by using a dataset from the original (source) domain. Suppose that, as the source domain dataset, we have a captioning corpus, consisting of images of daily lives and each image has captions. Suppose also that we would like to generate captions for exotic cuisine, which are rare in the corpus. It is usually very costly to make a new corpus for the target domain, i.e., taking and captioning those images. The research question here is how we can leverage the source domain dataset to improve the performance on the target domain. As described by Daumé daume:07, there are mainly two settings of domain adaptation: fully supervised and semi-supervised. Our focus is the supervised setting, where both of the source and target domain datasets are labeled. We would like to use the label information of the source domain to improve the performance on the target domain. Recently, Recurrent Neural Networks (RNNs) have been successfully applied to various tasks in the field of natural language processing (NLP), including language modeling BIBREF0 , caption generation BIBREF1 and parsing BIBREF2 . For neural networks, there are two standard methods for supervised domain adaptation BIBREF3 . The first method is fine tuning: we first train the model with the source dataset and then tune it with the target domain dataset BIBREF4 , BIBREF5 . Since the objective function of neural network training is non-convex, the performance of the trained model can depend on the initialization of the parameters. This is in contrast with the convex methods such as Support Vector Machines (SVMs). We expect that the first training gives a good initialization of the parameters, and therefore the latter training gives a good generalization even if the target domain dataset is small. The downside of this approach is the lack of the optimization objective. The other method is to design the neural network so that it has two outputs. The first output is trained with the source dataset and the other output is trained with the target dataset, where the input part is shared among the domains. We call this method dual outputs. This type of network architecture has been successfully applied to multi-task learning in NLP such as part-of-speech tagging and named-entity recognition BIBREF6 , BIBREF7 . In the NLP community, there has been a large body of previous work on domain adaptation. One of the state-of-the-art methods for the supervised domain adaptation is feature augmentation BIBREF8 . The central idea of this method is to augment the original features/parameters in order to model the source specific, target specific and general behaviors of the data. However, it is not straight-forward to apply it to neural network models in which the cost function has a form of log probabilities. In this paper, we propose a new domain adaptation method for neural networks. We reformulate the method of daume:07 and derive an objective function using convexity of the loss function. From a high-level perspective, this method shares the idea of feature augmentation. We use redundant parameters for the source, target and general domains, where the general parameters are tuned to model the common characteristics of the datasets and the source/target parameters are tuned for domain specific aspects. 
In the latter part of this paper, we apply our domain adaptation method to a neural captioning model and show performance improvement over other standard methods on several datasets and metrics. In the datasets, the source and target have different word distributions, and thus adaptation of output parameters is important. We augment the output parameters to facilitate adaptation. Although we use captioning models in the experiments, our method can be applied to any neural networks trained with a cross-entropy loss. Related Work There are several recent studies applying domain adaptation methods to deep neural networks. However, few studies have focused on improving the fine tuning and dual outputs methods in the supervised setting. sun2015return have proposed an unsupervised domain adaptation method and apply it to the features from deep neural networks. Their idea is to minimize the domain shift by aligning the second-order statistics of source and target distributions. In our setting, it is not necessarily true that there is a correspondence between the source and target input distributions, and therefore we cannot expect their method to work well. wen2016multi have proposed a procedure to generate natural language for multiple domains of spoken dialogue systems. They improve the fine tuning method by pre-training with synthesized data. However, the synthesis protocol is only applicable to spoken dialogue systems. In this paper, we focus on domain adaptation methods which can be applied without dataset-specific tricks. yang2016multitask have conducted a series of experiments to investigate the transferability of neural networks for NLP. They compare the performance of two transfer methods called INIT and MULT, which correspond to the fine tuning and dual outputs methods in our terms. They conclude that MULT is slightly better than or comparable to INIT; this is consistent with our experiments shown in section "Experiments". Although they obtain little improvement by transferring the output parameters, we achieve significant improvement by augmenting parameters in the output layers. Domain adaptation and language generation We start with the basic notations and formalization for domain adaptation. Let $\mathcal {X}$ be the set of inputs and $\mathcal {Y}$ be the outputs. We have a source domain dataset $D^s$, which is sampled from some distribution $\mathcal {D}^s$. Also, we have a target domain dataset $D^t$, which is sampled from another distribution $\mathcal {D}^t$. Since we are considering supervised settings, each element of the datasets has the form of an input-output pair $(x,y)$. The goal of domain adaptation is to learn a function $f : \mathcal {X} \rightarrow \mathcal {Y}$ that models the input-output relation of $D^t$. We implicitly assume that there is a connection between the source and target distributions and thus can leverage the information of the source domain dataset. In the case of image caption generation, the input $x$ is an image (or the feature vector of an image) and the output $y$ is the caption (a sequence of words). In language generation tasks, a sequence of words is generated from an input $x$. A state-of-the-art model for language generation is an LSTM (Long Short Term Memory) network initialized by a context vector computed from the input BIBREF1. LSTM is a particular form of recurrent neural network, which has three gates and a memory cell.
For each time step $t$ , the vectors $c_t$ and $h_t$ are computed from $u_t, c_{t-1}$ and $h_{t-1}$ by the following equations: $ &i = \sigma (W_{ix} u_t + W_{ih} h_{t-1}) \\ &f = \sigma (W_{fx} u_t + W_{fh} h_{t-1}) \\ &o = \sigma (W_{ox} u_t + W_{oh} h_{t-1}) \\ &g = \tanh (W_{gx} u_t + W_{gh} h_{t-1}) \\ &c_t = f \odot c_{t-1} + i \odot g \\ &h_t = o \odot \tanh (c_t), $ where $\sigma $ is the sigmoid function and $\odot $ is the element-wise product. Note that all the vectors in the equations have the same dimension $n$ , called the cell size. The probability of the output word at the $t$ -th step, $y_t$ , is computed by $$p(y_t|y_1,\ldots ,y_{t-1},x) = {\rm Softmax}(W h_t), $$ (Eq. 1) where $W$ is a matrix with a size of vocabulary size times $n$ . We call this matrix as the parameter of the output layer. The input $u_t$ is given by the word embedding of $y_{t-1}$ . To generate a caption, we first compute feature vectors of the image, and put it into the beginning of the LSTM as $$u_{0} = W_{0} {\rm CNN}(x),$$ (Eq. 2) where $W_0$ is a tunable parameter matrix and ${\rm CNN}$ is a feature extractor usually given by a convolutional neural network. Output words, $y_t$ , are selected in order and each caption ends with special symbol <EOS>. The process is illustrated in Figure 1 . Note that the cost function for the generated caption is $ \log p(y|x) = \sum _{t} \log p(y_t|y_1,\ldots ,y_{t-1}, x), $ where the conditional distributions are given by Eq. ( 1 ). The parameters of the model are optimized to minimize the cost on the training dataset. We also note that there are extensions of the models with attentions BIBREF9 , BIBREF10 , but the forms of the cost functions are the same. Domain adaptation for language generation In this section, we review standard domain adaptation techniques which are applicable to the neural language generation. The performance of these methods is compared in the next section. Standard and baseline methods A trivial method of domain adaptation is simply ignoring the source dataset, and train the model using only the target dataset. This method is hereafter denoted by TgtOnly. This is a baseline and any meaningful method must beat it. Another trivial method is SrcOnly, where only the source dataset is used for the training. Typically, the source dataset is bigger than that of the target, and this method sometimes works better than TgtOnly. Another method is All, in which the source and target datasets are combined and used for the training. Although this method uses all the data, the training criteria enforce the model to perform well on both of the domains, and therefore the performance on the target domain is not necessarily high. An approach widely used in the neural network community is FineTune. We first train the model with the source dataset and then it is used as the initial parameters for training the model with the target dataset. The training process is stopped in reference to the development set in order to avoid over-fitting. We could extend this method by posing a regularization term (e.g. $l_2$ regularization) in order not to deviate from the pre-trained parameter. In the latter experiments, however, we do not pursue this direction because we found no performance gain. Note that it is hard to control the scales of the regularization for each part of the neural net because there are many parameters having different roles. Another common approach for neural domain adaptation is Dual. In this method, the output of the network is “dualized”. 
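The following is a minimal NumPy sketch of the decoding step defined above: one LSTM cell update followed by the softmax over the vocabulary in Eq. ( 1 ). It is illustrative only; the dimensions, the random initialization, and the absence of bias terms are assumptions for the sketch, not the authors' configuration.

import numpy as np

rng = np.random.default_rng(0)
n, vocab = 300, 1000                      # cell size n and a toy vocabulary size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gate parameters W_{*x}, W_{*h}; biases are omitted, as in the equations above.
Wx = {g: rng.normal(scale=0.01, size=(n, n)) for g in "ifog"}
Wh = {g: rng.normal(scale=0.01, size=(n, n)) for g in "ifog"}
W_out = rng.normal(scale=0.01, size=(vocab, n))   # output-layer parameter W

def lstm_step(u_t, c_prev, h_prev):
    i = sigmoid(Wx["i"] @ u_t + Wh["i"] @ h_prev)
    f = sigmoid(Wx["f"] @ u_t + Wh["f"] @ h_prev)
    o = sigmoid(Wx["o"] @ u_t + Wh["o"] @ h_prev)
    g = np.tanh(Wx["g"] @ u_t + Wh["g"] @ h_prev)
    c_t = f * c_prev + i * g
    h_t = o * np.tanh(c_t)
    return c_t, h_t

def word_distribution(h_t):
    z = W_out @ h_t                       # W h_t as in Eq. (1)
    z -= z.max()                          # for numerical stability
    p = np.exp(z)
    return p / p.sum()                    # Softmax(W h_t)

u0 = rng.normal(size=n)                   # stands in for u_0 = W_0 CNN(x)
c, h = np.zeros(n), np.zeros(n)
c, h = lstm_step(u0, c, h)                # first decoding step
print(word_distribution(h).shape)         # (1000,): p(y_1 | x)

In later steps u_t would be the word embedding of the previously generated word, which is omitted from this sketch.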
In other words, we use different parameters $W$ in Eq. ( 1 ) for the source and target domains. For the source dataset, the model is trained with the first output and the second for the target dataset. The rest of the parameters are shared among the domains. This type of network design is often used for multi-task learning. Revisiting the feature augmentation method Before proceeding to our new method, we describe the feature augmentation method BIBREF8 from our perspective. let us start with the feature augmentation method. Here we consider the domain adaptation of a binary classification problem. Suppose that we train SVM models for the source and target domains separately. The objective functions have the form of $ \frac{1}{n_s} \sum _{(x,y) \in \mathcal {D}_s} \max (0, 1 - y(w_s^T \Phi (x))) + \lambda \Vert w_s \Vert ^2 \\ \frac{1}{n_t} \sum _{(x,y) \in \mathcal {D}_t} \max (0, 1 - y(w_t^T \Phi (x))) + \lambda \Vert w_t \Vert ^2 , $ where $\Phi (x)$ is the feature vector and $w_s, w_t$ are the SVM parameters. In the feature augmentation method, the parameters are decomposed to $w_s = \theta _g + \theta _s$ and $w_t = \theta _g + \theta _t$ . The optimization objective is different from the sum of the above functions: $ & \frac{1}{n_s} \sum _{(x,y) \in \mathcal {D}_s} \max (0, 1 - y(w_s^T \Phi (x))) \\ &+\lambda (\Vert \theta _g \Vert ^2 + \Vert \theta _s \Vert ^2 ) \\ &+ \frac{1}{n_t} \sum _{(x,y) \in \mathcal {D}_t} \max (0, 1 - y(w_t^T \Phi (x))) \\ &+ \lambda (\Vert \theta _g \Vert ^2 + \Vert \theta _t \Vert ^2 ), $ where the quadratic regularization terms $\Vert \theta _g + \theta _s \Vert ^2$ and $\Vert \theta _g + \theta _t \Vert ^2$ are changed to $\Vert \theta _g \Vert ^2 + \Vert \theta _s \Vert ^2$ and $\Vert \theta _g \Vert ^2 + \Vert \theta _t \Vert ^2$ , respectively. Since the parameters $\theta _g$ are shared, we cannot optimize the problems separately. This change of the objective function can be understood as adding additional regularization terms $ 2(\Vert \theta _g \Vert ^2 + \Vert \theta _t \Vert ^2 ) - \Vert \theta _g + \theta _t \Vert ^2, \\ 2(\Vert \theta _g \Vert ^2 + \Vert \theta _s \Vert ^2 ) - \Vert \theta _g + \theta _s \Vert ^2. $ We can easily see that those are equal to $\Vert \theta _g - \theta _t \Vert ^2$ and $\Vert \theta _g - \theta _s \Vert ^2$ , respectively and thus this additional regularization enforces $\theta _g$ and $\theta _t$ (and also $\theta _g$ and $\theta _s$ ) not to be far away. This is how the feature augmentation method shares the domain information between the parameters $w_s$ and $w_t$ . Proposed method Although the above formalization is for an SVM, which has the quadratic cost of parameters, we can apply the idea to the log probability case. In the case of RNN language generation, the loss function of each output is a cross entropy applied to the softmax output $$-\log & p_s(y|y_1, \ldots , y_{t-1}, x) \nonumber \\ &= -w_{s,y}^T h + \log Z(w_s;h), $$ (Eq. 8) where $Z$ is the partition function and $h$ is the hidden state of the LSTM computed by $y_0, \ldots , y_{t-1}$ and $x$ . Again we decompose the word output parameter as $w_s = \theta _g + \theta _s$ . Since $\log Z$ is convex with respect to $w_s$ , we can easily show that the Eq. ( 8 ) is bounded above by $ -&\theta _{g,y}^T h + \frac{1}{2} \log Z(2 \theta _g;x) \\ &-\theta _{s,y}^T h +\frac{1}{2} \log Z(2 \theta _s;x). $ The equality holds if and only if $\theta _g = \theta _s$ . 
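Two quick numerical checks of the identities above, using random vectors of arbitrary size (a sketch only): the extra regularization introduced by feature augmentation equals $\Vert \theta _g - \theta _t \Vert ^2$, and the convexity of $\log Z$ in the output parameter gives the bound with the halved $\log Z(2\theta ;h)$ terms.

import numpy as np

rng = np.random.default_rng(0)

# (1) The extra regularization of feature augmentation equals ||theta_g - theta_t||^2.
theta_g, theta_t = rng.normal(size=5), rng.normal(size=5)
sq = lambda v: float(v @ v)
extra = 2.0 * (sq(theta_g) + sq(theta_t)) - sq(theta_g + theta_t)
print(np.isclose(extra, sq(theta_g - theta_t)))              # True

# (2) Convexity of log Z gives the bound used above:
#     log Z(theta_g + theta_s; h) <= 0.5*log Z(2*theta_g; h) + 0.5*log Z(2*theta_s; h).
vocab, n = 50, 8
Theta_g = rng.normal(size=(vocab, n))
Theta_s = rng.normal(size=(vocab, n))
h = rng.normal(size=n)

def log_Z(W, h):
    z = W @ h
    m = z.max()
    return float(m + np.log(np.exp(z - m).sum()))

lhs = log_Z(Theta_g + Theta_s, h)
rhs = 0.5 * log_Z(2.0 * Theta_g, h) + 0.5 * log_Z(2.0 * Theta_s, h)
print(lhs <= rhs + 1e-9)                                     # True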
Therefore, optimizing this upper bound effectively enforces the parameters to stay close while reducing the cost. The exact same argument applies to the target parameter $w_t = \theta _g + \theta _t$. We combine the source and target cost functions and optimize the sum of the above upper bounds. The derived objective function is $ \frac{1}{n_s} \sum _{(x,y) \in \mathcal {D}_s} [ -\theta _{g,y}^T h& + \frac{1}{2} \log Z(2 \theta _g;x) \\ &-\theta _{s,y}^T h + \frac{1}{2} \log Z(2 \theta _s;x) ] \\ + \frac{1}{n_t} \sum _{(x,y) \in \mathcal {D}_t} [ -\theta _{g,y}^T h &+ \frac{1}{2} \log Z(2 \theta _g;x) \\ & -\theta _{t,y}^T h + \frac{1}{2} \log Z(2 \theta _t;x) ]. $ If we work with the sum of the source and target versions of Eq. ( 8 ), the method is actually the same as Dual because the parameter $\theta _g$ is completely redundant. The difference between that objective and the proposed upper bound works as a regularization term, which results in good generalization performance. Although our formulation has a single objective, there are three types of cross entropy loss terms given by $\theta _g$, $\theta _s$ and $\theta _t$. We denote them by $\ell (\theta _g), \ell (\theta _s)$ and $\ell (\theta _t)$, respectively. For the source data, the sum of the general and source loss terms is optimized, and for the target dataset the sum of the general and target loss terms is optimized. The proposed algorithm is summarized in Algorithm "Proposed method". Note that $\theta _h$ denotes the parameters of the LSTM other than the output part. In one epoch of the training, we use all data once. We can combine any parameter update method for neural network training, such as Adam BIBREF11. Algorithm "Proposed method" repeats the following until the development error increases: select a minibatch from the source or the target dataset; if it comes from the source, optimize $\ell (\theta _g) + \ell (\theta _s)$ with respect to $\theta _g, \theta _s, \theta _h$ on the minibatch; otherwise, optimize $\ell (\theta _g) + \ell (\theta _t)$ with respect to $\theta _g, \theta _t, \theta _h$. After training, compute $w_t = \theta _g + \theta _t$ and $w_s = \theta _g + \theta _s$ and use these as the output parameters for the respective domains (a schematic sketch of this loop follows after this passage). Experiments We have conducted domain adaptation experiments on the following three datasets. The first experiment focuses on the situation where domain adaptation is useful. The second experiment shows the benefit of domain adaptation in both directions: from source to target and from target to source. The third experiment shows an improvement in another metric. Although our method is applicable to any neural network with a cross entropy loss, all the experiments use caption generation models because captioning is one of the most successful neural network applications in NLP. Adaptation to food domain captioning This experiment highlights a typical scenario in which domain adaptation is useful. Suppose that we have a large dataset of captioned images, which are taken from daily lives, but we would like to generate high quality captions for more specialized domain images such as minor sports and exotic food. However, captioned images for those domains are quite limited due to the annotation cost. We use domain adaptation methods to improve the captions of the target domain. To simulate the scenario, we split the Microsoft COCO dataset into food and non-food domain datasets. The MS COCO dataset contains approximately 80K images for training and 40K images for validation; each image has 5 captions BIBREF12.
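As referenced above, the following is a schematic Python sketch of the alternating loop in Algorithm "Proposed method", not the authors' implementation. The parameters are scalars and the loss/gradients are toy placeholders so the skeleton runs on its own; in the actual method they are the cross-entropy terms $\ell (\theta _g)+\ell (\theta _s)$ or $\ell (\theta _g)+\ell (\theta _t)$ of the captioning LSTM, optimized with Adam and early-stopped on the development loss.

import random

# Toy parameters: scalars standing in for theta_g (shared output part),
# theta_s / theta_t (domain-specific output parts) and theta_h (the rest of the LSTM).
params = {"theta_g": 0.0, "theta_s": 0.0, "theta_t": 0.0, "theta_h": 0.0}

def placeholder_grads(batch, names):
    # Stand-in for the gradients of l(theta_g) + l(theta_domain): a quadratic loss
    # pulling the named parameters toward the batch mean, just to make the loop run.
    target = sum(batch) / len(batch)
    return {k: 2.0 * (params[k] - target) for k in names}

def update(grads, lr=0.05):
    for k, g in grads.items():
        params[k] -= lr * g

source_batches = [("source", [1.0, 1.2, 0.8])] * 20
target_batches = [("target", [2.0, 2.1, 1.9])] * 20
batches = source_batches + target_batches
random.shuffle(batches)

for domain, batch in batches:             # one epoch over all the data
    specific = "theta_s" if domain == "source" else "theta_t"
    update(placeholder_grads(batch, ["theta_g", "theta_h", specific]))
# (Early stopping on the development loss is omitted from this sketch.)

# After training, the per-domain output parameters are recombined:
w_s = params["theta_g"] + params["theta_s"]
w_t = params["theta_g"] + params["theta_t"]
print(round(w_s, 3), round(w_t, 3))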
The dataset contains images of diverse categories, including animals, indoor scenes, sports, and foods. We selected the “food category” data by scoring the captions according to how closely they are related to the food category. The score is computed based on wordnet similarities BIBREF13 (a hypothetical reconstruction of this scoring is sketched after this passage). The training and validation datasets are split by the score with the same threshold. Consequently, the food dataset has 3,806 images for training and 1,775 for validation. The non-food dataset has 78,976 images for training and 38,749 for validation. The selected pictures from the food domain are typically a close-up of foods or people eating some foods. Table 1 shows some captions from the food and non-food domain datasets. Table 2 shows the top twenty frequent words in the two datasets except for the stop words. We observe that the frequent words are largely different, but there are still some words common to both datasets. To model the image captioning, we use LSTMs as described in the previous section. The image features are computed by the trained GoogLeNet and all the LSTMs have a single layer with 300 hidden units BIBREF14. We use a standard optimization method, Adam BIBREF11, with hyperparameters $\alpha =0.001$, $\beta _1=0.9$ and $\beta _2=0.999$. We stop the training based on the loss on the development set. After the training we generate captions by beam search, where the size of the beam is 5. These settings are the same in the latter experiments. We compare the proposed method with the other baseline methods. For all the methods, we use Adam with the same hyperparameters. In FineTune, we did not freeze any parameters during the target training. In Dual, all samples in the source and target datasets are weighted equally. We evaluated the performance of the domain adaptation methods by the quality of the generated captions, using BLEU, METEOR and CIDEr scores. The results are summarized in Table 3. We see that the proposed method improves in most of the metrics. The baseline methods SrcOnly and TgtOnly are worse than the other methods because they use limited data for the training. Note that the CIDEr scores correlate with human evaluations better than BLEU and METEOR scores BIBREF15. Generated captions for sample images are shown in Table 4. In the first example, All fails to identify the chocolate cake because there are birds in the source dataset which somehow look similar to chocolate cake. We argue that Proposed learns birds by the source parameters and chocolate cakes by the target parameters, and thus succeeds in generating appropriate captions. Adaptation between MS COCO and Flickr30K In this experiment, we explore the benefit of adaptation from both sides of the domains. Flickr30K is another captioning dataset, consisting of 30K images, and each image has five captions BIBREF16. Although the formats of the datasets are almost the same, the model trained on the MS COCO dataset does not work well for the Flickr30K dataset and vice versa. The word distributions of the captions are considerably different. If we ignore words with fewer than 30 counts, MS COCO has 3,655 words and Flickr30K has 2,732 words; only 1,486 words are shared. Also, the average lengths of captions are different: the average length of captions in Flickr30K is 12.3 while that of MS COCO is 10.5. The first result is the domain adaptation from MS COCO to Flickr30K, summarized in Table 5. Again, we observe that the proposed method achieves the best score among all the methods.
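The paper does not give the exact formula for the wordnet-based food score, so the following is a hypothetical reconstruction rather than the authors' procedure: a caption is scored by the best path similarity between any noun reading of its words and the synset food.n.01, and the food / non-food split is obtained by thresholding this score. It assumes NLTK with the wordnet corpus installed (nltk.download("wordnet")).

from nltk.corpus import wordnet as wn     # requires: pip install nltk; nltk.download("wordnet")

FOOD = wn.synset("food.n.01")

def food_score(caption: str) -> float:
    """Best path similarity between any noun sense of a caption word and food.n.01."""
    best = 0.0
    for word in caption.lower().split():
        for syn in wn.synsets(word, pos=wn.NOUN):
            sim = syn.path_similarity(FOOD)
            if sim is not None and sim > best:
                best = sim
    return best

# Captions would then be thresholded on this score to form the food / non-food split.
print(food_score("a plate of pasta with tomato sauce"))     # food-related: higher score
print(food_score("a man riding a skateboard down a ramp"))  # not food-related: lower score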
The difference between All and FineTune is bigger than in the previous setting because the two datasets have different captions even for similar images. The scores of FineTune and Dual are at almost the same level. The second result is the domain adaptation from Flickr30K to MS COCO, shown in Table 6. This may not be a typical situation because the number of samples in the target domain is larger than that of the source domain. The SrcOnly model is trained only with Flickr30K and tested on the MS COCO dataset. We observe that FineTune gives little benefit over TgtOnly, which implies that the difference of the initial parameters has little effect in this case. Also, Dual gives little benefit over TgtOnly, meaning that the parameter sharing except for the output layer is not important in this case. Note that the CIDEr score of Proposed is slightly improved. Figure 2 shows the comparison of FineTune and Proposed, changing the number of the Flickr samples to 1600, 6400 and 30K. We observe that FineTune works relatively well when the target domain dataset is small. Answer sentence selection In this experiment, we use the captioning model as an affinity measure of images and sentences. The TOEIC part 1 test consists of four-choice questions for English learners. The correct choice is the sentence that best describes the shown image. Questions are not easy because there are confusing keywords in the wrong choices. An example question is shown in Table 7. We downloaded 610 questions from http://www.english-test.net/toeic/listening/. Our approach here is to select the most probable choice given the image by the captioning models (a minimal sketch of this selection rule follows after this passage). We train captioning models with the images and correct answers from the training set. Since the TOEIC dataset is small, domain adaptation can give a large benefit. We compared the domain adaptation methods by the percentage of correct answers. The source dataset is 40K samples from MS COCO and the target dataset is the TOEIC dataset. We split the TOEIC dataset into 400 samples for training and 210 samples for testing. The percentages of correct answers for each method are summarized in Table 8. Since the questions have four choices, all methods should perform better than 25%. TgtOnly is close to this chance level because the model is trained with only 400 samples. As in the previous experiments, FineTune and Dual are better than All, and Proposed is better than the other methods. Conclusion and Future Work We have proposed a new method for supervised domain adaptation of neural networks. On captioning datasets, we have shown that the method outperforms other standard adaptation methods applicable to neural networks. The proposed method only decomposes the output word parameters, while other parameters, such as the word embeddings, are completely shared across the domains. Augmentation of parameters in the other parts of the network would be an interesting direction for future work.
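As referenced above, a minimal sketch of the answer-selection rule: each of the four choices is scored by the captioning model's log-likelihood of the sentence given the image, and the highest-scoring choice is returned. The scorer caption_log_prob is a stand-in (a trivial dummy here so the snippet runs on its own); in the experiments it would be the trained LSTM model's $\log p(y|x)$.

from typing import Callable, Sequence

def select_answer(image: object,
                  choices: Sequence[str],
                  caption_log_prob: Callable[[object, str], float]) -> int:
    """Return the index of the choice with the highest log p(choice | image)."""
    scores = [caption_log_prob(image, c) for c in choices]
    return max(range(len(choices)), key=scores.__getitem__)

# Dummy scorer so the snippet is self-contained; replace with the trained captioning model.
dummy_log_prob = lambda image, sentence: -float(len(sentence.split()))
choices = ["A man is cooking.",
           "A man is riding a very large blue bicycle.",
           "Two dogs are sleeping on the couch.",
           "People are waiting at a bus stop."]
print(select_answer(None, choices, dummy_log_prob))   # index of the selected sentence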
78,976
146fe3e97d8080f04222ed20903dd0d5fd2f551c
146fe3e97d8080f04222ed20903dd0d5fd2f551c_0
Q: How many examples are there in the target domain? Text: Introduction Domain adaptation is a machine learning paradigm that aims at improving the generalization performance of a new (target) domain by using a dataset from the original (source) domain. Suppose that, as the source domain dataset, we have a captioning corpus, consisting of images of daily lives and each image has captions. Suppose also that we would like to generate captions for exotic cuisine, which are rare in the corpus. It is usually very costly to make a new corpus for the target domain, i.e., taking and captioning those images. The research question here is how we can leverage the source domain dataset to improve the performance on the target domain. As described by Daumé daume:07, there are mainly two settings of domain adaptation: fully supervised and semi-supervised. Our focus is the supervised setting, where both of the source and target domain datasets are labeled. We would like to use the label information of the source domain to improve the performance on the target domain. Recently, Recurrent Neural Networks (RNNs) have been successfully applied to various tasks in the field of natural language processing (NLP), including language modeling BIBREF0 , caption generation BIBREF1 and parsing BIBREF2 . For neural networks, there are two standard methods for supervised domain adaptation BIBREF3 . The first method is fine tuning: we first train the model with the source dataset and then tune it with the target domain dataset BIBREF4 , BIBREF5 . Since the objective function of neural network training is non-convex, the performance of the trained model can depend on the initialization of the parameters. This is in contrast with the convex methods such as Support Vector Machines (SVMs). We expect that the first training gives a good initialization of the parameters, and therefore the latter training gives a good generalization even if the target domain dataset is small. The downside of this approach is the lack of the optimization objective. The other method is to design the neural network so that it has two outputs. The first output is trained with the source dataset and the other output is trained with the target dataset, where the input part is shared among the domains. We call this method dual outputs. This type of network architecture has been successfully applied to multi-task learning in NLP such as part-of-speech tagging and named-entity recognition BIBREF6 , BIBREF7 . In the NLP community, there has been a large body of previous work on domain adaptation. One of the state-of-the-art methods for the supervised domain adaptation is feature augmentation BIBREF8 . The central idea of this method is to augment the original features/parameters in order to model the source specific, target specific and general behaviors of the data. However, it is not straight-forward to apply it to neural network models in which the cost function has a form of log probabilities. In this paper, we propose a new domain adaptation method for neural networks. We reformulate the method of daume:07 and derive an objective function using convexity of the loss function. From a high-level perspective, this method shares the idea of feature augmentation. We use redundant parameters for the source, target and general domains, where the general parameters are tuned to model the common characteristics of the datasets and the source/target parameters are tuned for domain specific aspects. 
In the latter part of this paper, we apply our domain adaptation method to a neural captioning model and show performance improvement over other standard methods on several datasets and metrics. In the datasets, the source and target have different word distributions, and thus adaptation of output parameters is important. We augment the output parameters to facilitate adaptation. Although we use captioning models in the experiments, our method can be applied to any neural networks trained with a cross-entropy loss. Related Work There are several recent studies applying domain adaptation methods to deep neural networks. However, few studies have focused on improving the fine tuning and dual outputs methods in the supervised setting. sun2015return have proposed an unsupervised domain adaptation method and apply it to the features from deep neural networks. Their idea is to minimize the domain shift by aligning the second-order statistics of source and target distributions. In our setting, it is not necessarily true that there is a correspondence between the source and target input distributions, and therefore we cannot expect their method to work well. wen2016multi have proposed a procedure to generate natural language for multiple domains of spoken dialogue systems. They improve the fine tuning method by pre-trainig with synthesized data. However, the synthesis protocol is only applicable to the spoken dialogue system. In this paper, we focus on domain adaptation methods which can be applied without dataset-specific tricks. yang2016multitask have conducted a series of experiments to investigate the transferability of neural networks for NLP. They compare the performance of two transfer methods called INIT and MULT, which correspond to the fine tuning and dual outputs methods in our terms. They conclude that MULT is slightly better than or comparable to INIT; this is consistent with our experiments shown in section "Experiments" . Although they obtain little improvement by transferring the output parameters, we achieve significant improvement by augmenting parameters in the output layers. Domain adaptation and language generation We start with the basic notations and formalization for domain adaptation. Let $\mathcal {X}$ be the set of inputs and $\mathcal {Y}$ be the outputs. We have a source domain dataset $D^s$ , which is sampled from some distribution $\mathcal {D}^s$ . Also, we have a target domain dataset $D^t$ , which is sampled from another distribution $\mathcal {D}^t$ . Since we are considering supervised settings, each element of the datasets has a form of input output pair $(x,y)$ . The goal of domain adaptation is to learn a function $f : \mathcal {X} \rightarrow \mathcal {Y}$ that models the input-output relation of $D^t$ . We implicitly assume that there is a connection between the source and target distributions and thus can leverage the information of the source domain dataset. In the case of image caption generation, the input $x$ is an image (or the feature vector of an image) and $\mathcal {Y}$0 is the caption (a sequence of words). In language generation tasks, a sequence of words is generated from an input $x$ . A state-of-the-art model for language generation is LSTM (Long Short Term Memory) initialized by a context vector computed by the input BIBREF1 . LSTM is a particular form of recurrent neural network, which has three gates and a memory cell. 
For each time step $t$ , the vectors $c_t$ and $h_t$ are computed from $u_t, c_{t-1}$ and $h_{t-1}$ by the following equations: $ &i = \sigma (W_{ix} u_t + W_{ih} h_{t-1}) \\ &f = \sigma (W_{fx} u_t + W_{fh} h_{t-1}) \\ &o = \sigma (W_{ox} u_t + W_{oh} h_{t-1}) \\ &g = \tanh (W_{gx} u_t + W_{gh} h_{t-1}) \\ &c_t = f \odot c_{t-1} + i \odot g \\ &h_t = o \odot \tanh (c_t), $ where $\sigma $ is the sigmoid function and $\odot $ is the element-wise product. Note that all the vectors in the equations have the same dimension $n$ , called the cell size. The probability of the output word at the $t$ -th step, $y_t$ , is computed by $$p(y_t|y_1,\ldots ,y_{t-1},x) = {\rm Softmax}(W h_t), $$ (Eq. 1) where $W$ is a matrix with a size of vocabulary size times $n$ . We call this matrix as the parameter of the output layer. The input $u_t$ is given by the word embedding of $y_{t-1}$ . To generate a caption, we first compute feature vectors of the image, and put it into the beginning of the LSTM as $$u_{0} = W_{0} {\rm CNN}(x),$$ (Eq. 2) where $W_0$ is a tunable parameter matrix and ${\rm CNN}$ is a feature extractor usually given by a convolutional neural network. Output words, $y_t$ , are selected in order and each caption ends with special symbol <EOS>. The process is illustrated in Figure 1 . Note that the cost function for the generated caption is $ \log p(y|x) = \sum _{t} \log p(y_t|y_1,\ldots ,y_{t-1}, x), $ where the conditional distributions are given by Eq. ( 1 ). The parameters of the model are optimized to minimize the cost on the training dataset. We also note that there are extensions of the models with attentions BIBREF9 , BIBREF10 , but the forms of the cost functions are the same. Domain adaptation for language generation In this section, we review standard domain adaptation techniques which are applicable to the neural language generation. The performance of these methods is compared in the next section. Standard and baseline methods A trivial method of domain adaptation is simply ignoring the source dataset, and train the model using only the target dataset. This method is hereafter denoted by TgtOnly. This is a baseline and any meaningful method must beat it. Another trivial method is SrcOnly, where only the source dataset is used for the training. Typically, the source dataset is bigger than that of the target, and this method sometimes works better than TgtOnly. Another method is All, in which the source and target datasets are combined and used for the training. Although this method uses all the data, the training criteria enforce the model to perform well on both of the domains, and therefore the performance on the target domain is not necessarily high. An approach widely used in the neural network community is FineTune. We first train the model with the source dataset and then it is used as the initial parameters for training the model with the target dataset. The training process is stopped in reference to the development set in order to avoid over-fitting. We could extend this method by posing a regularization term (e.g. $l_2$ regularization) in order not to deviate from the pre-trained parameter. In the latter experiments, however, we do not pursue this direction because we found no performance gain. Note that it is hard to control the scales of the regularization for each part of the neural net because there are many parameters having different roles. Another common approach for neural domain adaptation is Dual. In this method, the output of the network is “dualized”. 
In other words, we use different parameters $W$ in Eq. ( 1 ) for the source and target domains. For the source dataset, the model is trained with the first output and the second for the target dataset. The rest of the parameters are shared among the domains. This type of network design is often used for multi-task learning. Revisiting the feature augmentation method Before proceeding to our new method, we describe the feature augmentation method BIBREF8 from our perspective. let us start with the feature augmentation method. Here we consider the domain adaptation of a binary classification problem. Suppose that we train SVM models for the source and target domains separately. The objective functions have the form of $ \frac{1}{n_s} \sum _{(x,y) \in \mathcal {D}_s} \max (0, 1 - y(w_s^T \Phi (x))) + \lambda \Vert w_s \Vert ^2 \\ \frac{1}{n_t} \sum _{(x,y) \in \mathcal {D}_t} \max (0, 1 - y(w_t^T \Phi (x))) + \lambda \Vert w_t \Vert ^2 , $ where $\Phi (x)$ is the feature vector and $w_s, w_t$ are the SVM parameters. In the feature augmentation method, the parameters are decomposed to $w_s = \theta _g + \theta _s$ and $w_t = \theta _g + \theta _t$ . The optimization objective is different from the sum of the above functions: $ & \frac{1}{n_s} \sum _{(x,y) \in \mathcal {D}_s} \max (0, 1 - y(w_s^T \Phi (x))) \\ &+\lambda (\Vert \theta _g \Vert ^2 + \Vert \theta _s \Vert ^2 ) \\ &+ \frac{1}{n_t} \sum _{(x,y) \in \mathcal {D}_t} \max (0, 1 - y(w_t^T \Phi (x))) \\ &+ \lambda (\Vert \theta _g \Vert ^2 + \Vert \theta _t \Vert ^2 ), $ where the quadratic regularization terms $\Vert \theta _g + \theta _s \Vert ^2$ and $\Vert \theta _g + \theta _t \Vert ^2$ are changed to $\Vert \theta _g \Vert ^2 + \Vert \theta _s \Vert ^2$ and $\Vert \theta _g \Vert ^2 + \Vert \theta _t \Vert ^2$ , respectively. Since the parameters $\theta _g$ are shared, we cannot optimize the problems separately. This change of the objective function can be understood as adding additional regularization terms $ 2(\Vert \theta _g \Vert ^2 + \Vert \theta _t \Vert ^2 ) - \Vert \theta _g + \theta _t \Vert ^2, \\ 2(\Vert \theta _g \Vert ^2 + \Vert \theta _s \Vert ^2 ) - \Vert \theta _g + \theta _s \Vert ^2. $ We can easily see that those are equal to $\Vert \theta _g - \theta _t \Vert ^2$ and $\Vert \theta _g - \theta _s \Vert ^2$ , respectively and thus this additional regularization enforces $\theta _g$ and $\theta _t$ (and also $\theta _g$ and $\theta _s$ ) not to be far away. This is how the feature augmentation method shares the domain information between the parameters $w_s$ and $w_t$ . Proposed method Although the above formalization is for an SVM, which has the quadratic cost of parameters, we can apply the idea to the log probability case. In the case of RNN language generation, the loss function of each output is a cross entropy applied to the softmax output $$-\log & p_s(y|y_1, \ldots , y_{t-1}, x) \nonumber \\ &= -w_{s,y}^T h + \log Z(w_s;h), $$ (Eq. 8) where $Z$ is the partition function and $h$ is the hidden state of the LSTM computed by $y_0, \ldots , y_{t-1}$ and $x$ . Again we decompose the word output parameter as $w_s = \theta _g + \theta _s$ . Since $\log Z$ is convex with respect to $w_s$ , we can easily show that the Eq. ( 8 ) is bounded above by $ -&\theta _{g,y}^T h + \frac{1}{2} \log Z(2 \theta _g;x) \\ &-\theta _{s,y}^T h +\frac{1}{2} \log Z(2 \theta _s;x). $ The equality holds if and only if $\theta _g = \theta _s$ . 
Therefore, optimizing this upper-bound effectively enforces the parameters to be close as well as reducing the cost. The exact same story can be applied to the target parameter $w_t = \theta _g + \theta _t$ . We combine the source and target cost functions and optimize the sum of the above upper-bounds. Then the derived objective function is $ \frac{1}{n_s} \sum _{(x,y) \in \mathcal {D}_s} [ -\theta _{g,y}^T h& + \frac{1}{2} \log Z(2 \theta _g;x) \\ &-\theta _{s,y}^T h + \frac{1}{2} \log Z(2 \theta _s;x) ] \\ + \frac{1}{n_t} \sum _{(x,y) \in \mathcal {D}_t} [ -\theta _{g,y}^T h &+ \frac{1}{2} \log Z(2 \theta _g;x) \\ & -\theta _{t,y}^T h + \frac{1}{2} \log Z(2 \theta _t;x) ]. $ If we work with the sum of the source and target versions of Eq. ( 8 ), the method is actually the same as Dual because the parameters $\theta _g$ is completely redundant. The difference between this objective and the proposed upper bound works as a regularization term, which results in a good generalization performance. Although our formulation has the unique objective, there are three types of cross entropy loss terms given by $\theta _g$ , $\theta _s$ and $\theta _t$ . We denote them by $\ell (\theta _g), \ell (\theta _s)$ and $\ell (\theta _t)$ , respectively. For the source data, the sum of general and source loss terms is optimized, and for the target dataset the sum of general and target loss terms is optimized. The proposed algorithm is summarized in Algorithm "Proposed method" . Note that $\theta _h$ is the parameters of the LSTM except for the output part. In one epoch of the training, we use all data once. We can combine any parameter update methods for neural network training such as Adam BIBREF11 . boxruled [t] Proposed Method True Select a minibatch of data from source or target dataset source Optimize $\ell (\theta _g) + \ell (\theta _s)$ with respect to $\theta _g, \theta _s, \theta _h$ for the minibatch Optimize $\ell (\theta _g) + \ell (\theta _t)$ with respect to $\theta _g, \theta _t, \theta _h$ for the minibatch development error increases break Compute $w_t = \theta _g + \theta _t$ and $w_s = \theta _g + \theta _s$ . Use these parameters as the output parameters for each domain. Experiments We have conducted domain adaptation experiments on the following three datasets. The first experiment focuses on the situation where the domain adaptation is useful. The second experiment show the benefit of domain adaptation for both directions: from source to target and target to source. The third experiment shows an improvement in another metric. Although our method is applicable to any neural network with a cross entropy loss, all the experiments use caption generation models because it is one of the most successful neural network applications in NLP. Adaptation to food domain captioning This experiment highlights a typical scenario in which domain adaptation is useful. Suppose that we have a large dataset of captioned images, which are taken from daily lives, but we would like to generate high quality captions for more specialized domain images such as minor sports and exotic food. However, captioned images for those domains are quite limited due to the annotation cost. We use domain adaptation methods to improve the captions of the target domain. To simulate the scenario, we split the Microsoft COCO dataset into food and non-food domain datasets. The MS COCO dataset contains approximately 80K images for training and 40K images for validation; each image has 5 captions BIBREF12 . 
The dataset contains images of diverse categories, including animals, indoor scenes, sports, and foods. We selected the “food category” data by scoring the captions according to how much those are related to the food category. The score is computed based on wordnet similarities BIBREF13 . The training and validation datasets are split by the score with the same threshold. Consequently, the food dataset has 3,806 images for training and 1,775 for validation. The non-food dataset has 78,976 images for training and 38,749 for validation. The selected pictures from the food domain are typically a close-up of foods or people eating some foods. Table 1 shows some captions from the food and non-food domain datasets. Table 2 shows the top twenty frequent words in the two datasets except for the stop words. We observe that the frequent words are largely different, but still there are some words common in both datasets. To model the image captaining, we use LSTMs as described in the previous section. The image features are computed by the trained GoogLeNet and all the LSTMs have a single layer with 300 hidden units BIBREF14 . We use a standard optimization method, Adam BIBREF11 with hyper parameters $\alpha =0.001$ , $\beta _1=0.9$ and $\beta _2=0.999$ . We stop the training based on the loss on the development set. After the training we generate captions by beam search, where the size of the beam is 5. These settings are the same in the latter experiments. We compare the proposed method with other baseline methods. For all the methods, we use Adam with the same hyper parameters. In FineTune, we did not freeze any parameters during the target training. In Dual, all samples in source and target datasets are weighted equally. We evaluated the performance of the domain adaptation methods by the qualities of the generated captions. We used BLEU, METOR and CIDEr scores for the evaluation. The results are summarized in Table 3 . We see that the proposed method improves in most of the metrics. The baseline methods SrcOnly and TgtOnly are worse than other methods, because they use limited data for the training. Note that the CIDEr scores correlate with human evaluations better than BLEU and METOR scores BIBREF15 . Generated captions for sample images are shown in Table 4 . In the first example, All fails to identify the chocolate cake because there are birds in the source dataset which somehow look similar to chocolate cake. We argue that Proposed learns birds by the source parameters and chocolate cakes by the target parameters, and thus succeeded in generating appropriate captions. Adaptation between MS COCO and Flickr30K In this experiment, we explore the benefit of adaptation from both sides of the domains. Flickr30K is another captioning dataset, consisting of 30K images, and each image has five captions BIBREF16 . Although the formats of the datasets are almost the same, the model trained by the MS COCO dataset does not work well for the Flickr 30K dataset and vice versa. The word distributions of the captions are considerably different. If we ignore words with less than 30 counts, MS COCO has 3,655 words and Flicker30K has 2732 words; and only 1,486 words are shared. Also, the average lengths of captions are different. The average length of captions in Flickr30K is 12.3 while that of MS COCO is 10.5. The first result is the domain adaptation from MS COCO to Flickr30K, summarized in Table 5 . Again, we observe that the proposed method achieves the best score among the other methods. 
The difference between All and FineTune is bigger than in the previous setting because two datasets have different captions even for similar images. The scores of FineTune and Dual are at almost the same level. The second result is the domain adaptation from Flickr30K to MS COCO shown in Table 6 . This may not be a typical situation because the number of samples in the target domain is larger than that of the source domain. The SrcOnly model is trained only with Flickr30K and tested on the MS COCO dataset. We observe that FineTune gives little benefit over TgtOnly, which implies that the difference of the initial parameters has little effect in this case. Also, Dual gives little benefit over TgtOnly, meaning that the parameter sharing except for the output layer is not important in this case. Note that the CIDEr score of Proposed is slightly improved. Figure 2 shows the comparison of FineTune and Proposed, changing the number of the Flickr samples to 1600, 6400 and 30K. We observe that FineTune works relatively well when the target domain dataset is small. Answer sentence selection In this experiment, we use the captioning model as an affinity measure of images and sentences. TOEIC part 1 test consists of four-choice questions for English learners. The correct choice is the sentence that best describes the shown image. Questions are not easy because there are confusing keywords in wrong choices. An example of the question is shown in Table 7 . We downloaded 610 questions from http://www.english-test.net/toeic/ listening/. Our approach here is to select the most probable choice given the image by captioning models. We train captioning models with the images and correct answers from the training set. Since the TOEIC dataset is small, domain adaptation can give a large benefit. We compared the domain adaptation methods by the percentage of correct answers. The source dataset is 40K samples from MS COCO and the target dataset is the TOEIC dataset. We split the TOEIC dataset to 400 samples for training and 210 samples for testing. The percentages of correct answers for each method are summarized in Table 8 . Since the questions have four choices, all methods should perform better than 25%. TgtOnly is close to the baseline because the model is trained with only 400 samples. As in the previous experiments, FineTune and Dual are better than All and Proposed is better than the other methods. Conclusion and Future Work We have proposed a new method for supervised domain adaptation of neural networks. On captioning datasets, we have shown that the method outperforms other standard adaptation methods applicable to neural networks. The proposed method only decomposes the output word parameters, where other parameters, such as word embedding, are completely shared across the domains. Augmentation of parameters in the other part of the network would be an interesting direction of future work.
the food dataset has 3,806 images for training
0fc17e51a17efce17577e2db89a24abd6607bb2b
0fc17e51a17efce17577e2db89a24abd6607bb2b_0
Q: Did they only experiment with captioning task? Text: Introduction Domain adaptation is a machine learning paradigm that aims at improving the generalization performance of a new (target) domain by using a dataset from the original (source) domain. Suppose that, as the source domain dataset, we have a captioning corpus, consisting of images of daily lives and each image has captions. Suppose also that we would like to generate captions for exotic cuisine, which are rare in the corpus. It is usually very costly to make a new corpus for the target domain, i.e., taking and captioning those images. The research question here is how we can leverage the source domain dataset to improve the performance on the target domain. As described by Daumé daume:07, there are mainly two settings of domain adaptation: fully supervised and semi-supervised. Our focus is the supervised setting, where both of the source and target domain datasets are labeled. We would like to use the label information of the source domain to improve the performance on the target domain. Recently, Recurrent Neural Networks (RNNs) have been successfully applied to various tasks in the field of natural language processing (NLP), including language modeling BIBREF0 , caption generation BIBREF1 and parsing BIBREF2 . For neural networks, there are two standard methods for supervised domain adaptation BIBREF3 . The first method is fine tuning: we first train the model with the source dataset and then tune it with the target domain dataset BIBREF4 , BIBREF5 . Since the objective function of neural network training is non-convex, the performance of the trained model can depend on the initialization of the parameters. This is in contrast with the convex methods such as Support Vector Machines (SVMs). We expect that the first training gives a good initialization of the parameters, and therefore the latter training gives a good generalization even if the target domain dataset is small. The downside of this approach is the lack of the optimization objective. The other method is to design the neural network so that it has two outputs. The first output is trained with the source dataset and the other output is trained with the target dataset, where the input part is shared among the domains. We call this method dual outputs. This type of network architecture has been successfully applied to multi-task learning in NLP such as part-of-speech tagging and named-entity recognition BIBREF6 , BIBREF7 . In the NLP community, there has been a large body of previous work on domain adaptation. One of the state-of-the-art methods for the supervised domain adaptation is feature augmentation BIBREF8 . The central idea of this method is to augment the original features/parameters in order to model the source specific, target specific and general behaviors of the data. However, it is not straight-forward to apply it to neural network models in which the cost function has a form of log probabilities. In this paper, we propose a new domain adaptation method for neural networks. We reformulate the method of daume:07 and derive an objective function using convexity of the loss function. From a high-level perspective, this method shares the idea of feature augmentation. We use redundant parameters for the source, target and general domains, where the general parameters are tuned to model the common characteristics of the datasets and the source/target parameters are tuned for domain specific aspects. 
In the latter part of this paper, we apply our domain adaptation method to a neural captioning model and show performance improvement over other standard methods on several datasets and metrics. In the datasets, the source and target have different word distributions, and thus adaptation of output parameters is important. We augment the output parameters to facilitate adaptation. Although we use captioning models in the experiments, our method can be applied to any neural networks trained with a cross-entropy loss. Related Work There are several recent studies applying domain adaptation methods to deep neural networks. However, few studies have focused on improving the fine tuning and dual outputs methods in the supervised setting. sun2015return have proposed an unsupervised domain adaptation method and apply it to the features from deep neural networks. Their idea is to minimize the domain shift by aligning the second-order statistics of source and target distributions. In our setting, it is not necessarily true that there is a correspondence between the source and target input distributions, and therefore we cannot expect their method to work well. wen2016multi have proposed a procedure to generate natural language for multiple domains of spoken dialogue systems. They improve the fine tuning method by pre-trainig with synthesized data. However, the synthesis protocol is only applicable to the spoken dialogue system. In this paper, we focus on domain adaptation methods which can be applied without dataset-specific tricks. yang2016multitask have conducted a series of experiments to investigate the transferability of neural networks for NLP. They compare the performance of two transfer methods called INIT and MULT, which correspond to the fine tuning and dual outputs methods in our terms. They conclude that MULT is slightly better than or comparable to INIT; this is consistent with our experiments shown in section "Experiments" . Although they obtain little improvement by transferring the output parameters, we achieve significant improvement by augmenting parameters in the output layers. Domain adaptation and language generation We start with the basic notations and formalization for domain adaptation. Let $\mathcal {X}$ be the set of inputs and $\mathcal {Y}$ be the outputs. We have a source domain dataset $D^s$ , which is sampled from some distribution $\mathcal {D}^s$ . Also, we have a target domain dataset $D^t$ , which is sampled from another distribution $\mathcal {D}^t$ . Since we are considering supervised settings, each element of the datasets has a form of input output pair $(x,y)$ . The goal of domain adaptation is to learn a function $f : \mathcal {X} \rightarrow \mathcal {Y}$ that models the input-output relation of $D^t$ . We implicitly assume that there is a connection between the source and target distributions and thus can leverage the information of the source domain dataset. In the case of image caption generation, the input $x$ is an image (or the feature vector of an image) and $\mathcal {Y}$0 is the caption (a sequence of words). In language generation tasks, a sequence of words is generated from an input $x$ . A state-of-the-art model for language generation is LSTM (Long Short Term Memory) initialized by a context vector computed by the input BIBREF1 . LSTM is a particular form of recurrent neural network, which has three gates and a memory cell. 
For each time step $t$ , the vectors $c_t$ and $h_t$ are computed from $u_t, c_{t-1}$ and $h_{t-1}$ by the following equations: $ &i = \sigma (W_{ix} u_t + W_{ih} h_{t-1}) \\ &f = \sigma (W_{fx} u_t + W_{fh} h_{t-1}) \\ &o = \sigma (W_{ox} u_t + W_{oh} h_{t-1}) \\ &g = \tanh (W_{gx} u_t + W_{gh} h_{t-1}) \\ &c_t = f \odot c_{t-1} + i \odot g \\ &h_t = o \odot \tanh (c_t), $ where $\sigma $ is the sigmoid function and $\odot $ is the element-wise product. Note that all the vectors in the equations have the same dimension $n$ , called the cell size. The probability of the output word at the $t$ -th step, $y_t$ , is computed by $$p(y_t|y_1,\ldots ,y_{t-1},x) = {\rm Softmax}(W h_t), $$ (Eq. 1) where $W$ is a matrix with a size of vocabulary size times $n$ . We call this matrix as the parameter of the output layer. The input $u_t$ is given by the word embedding of $y_{t-1}$ . To generate a caption, we first compute feature vectors of the image, and put it into the beginning of the LSTM as $$u_{0} = W_{0} {\rm CNN}(x),$$ (Eq. 2) where $W_0$ is a tunable parameter matrix and ${\rm CNN}$ is a feature extractor usually given by a convolutional neural network. Output words, $y_t$ , are selected in order and each caption ends with special symbol <EOS>. The process is illustrated in Figure 1 . Note that the cost function for the generated caption is $ \log p(y|x) = \sum _{t} \log p(y_t|y_1,\ldots ,y_{t-1}, x), $ where the conditional distributions are given by Eq. ( 1 ). The parameters of the model are optimized to minimize the cost on the training dataset. We also note that there are extensions of the models with attentions BIBREF9 , BIBREF10 , but the forms of the cost functions are the same. Domain adaptation for language generation In this section, we review standard domain adaptation techniques which are applicable to the neural language generation. The performance of these methods is compared in the next section. Standard and baseline methods A trivial method of domain adaptation is simply ignoring the source dataset, and train the model using only the target dataset. This method is hereafter denoted by TgtOnly. This is a baseline and any meaningful method must beat it. Another trivial method is SrcOnly, where only the source dataset is used for the training. Typically, the source dataset is bigger than that of the target, and this method sometimes works better than TgtOnly. Another method is All, in which the source and target datasets are combined and used for the training. Although this method uses all the data, the training criteria enforce the model to perform well on both of the domains, and therefore the performance on the target domain is not necessarily high. An approach widely used in the neural network community is FineTune. We first train the model with the source dataset and then it is used as the initial parameters for training the model with the target dataset. The training process is stopped in reference to the development set in order to avoid over-fitting. We could extend this method by posing a regularization term (e.g. $l_2$ regularization) in order not to deviate from the pre-trained parameter. In the latter experiments, however, we do not pursue this direction because we found no performance gain. Note that it is hard to control the scales of the regularization for each part of the neural net because there are many parameters having different roles. Another common approach for neural domain adaptation is Dual. In this method, the output of the network is “dualized”. 
In other words, we use different parameters $W$ in Eq. ( 1 ) for the source and target domains. For the source dataset, the model is trained with the first output and the second for the target dataset. The rest of the parameters are shared among the domains. This type of network design is often used for multi-task learning. Revisiting the feature augmentation method Before proceeding to our new method, we describe the feature augmentation method BIBREF8 from our perspective. let us start with the feature augmentation method. Here we consider the domain adaptation of a binary classification problem. Suppose that we train SVM models for the source and target domains separately. The objective functions have the form of $ \frac{1}{n_s} \sum _{(x,y) \in \mathcal {D}_s} \max (0, 1 - y(w_s^T \Phi (x))) + \lambda \Vert w_s \Vert ^2 \\ \frac{1}{n_t} \sum _{(x,y) \in \mathcal {D}_t} \max (0, 1 - y(w_t^T \Phi (x))) + \lambda \Vert w_t \Vert ^2 , $ where $\Phi (x)$ is the feature vector and $w_s, w_t$ are the SVM parameters. In the feature augmentation method, the parameters are decomposed to $w_s = \theta _g + \theta _s$ and $w_t = \theta _g + \theta _t$ . The optimization objective is different from the sum of the above functions: $ & \frac{1}{n_s} \sum _{(x,y) \in \mathcal {D}_s} \max (0, 1 - y(w_s^T \Phi (x))) \\ &+\lambda (\Vert \theta _g \Vert ^2 + \Vert \theta _s \Vert ^2 ) \\ &+ \frac{1}{n_t} \sum _{(x,y) \in \mathcal {D}_t} \max (0, 1 - y(w_t^T \Phi (x))) \\ &+ \lambda (\Vert \theta _g \Vert ^2 + \Vert \theta _t \Vert ^2 ), $ where the quadratic regularization terms $\Vert \theta _g + \theta _s \Vert ^2$ and $\Vert \theta _g + \theta _t \Vert ^2$ are changed to $\Vert \theta _g \Vert ^2 + \Vert \theta _s \Vert ^2$ and $\Vert \theta _g \Vert ^2 + \Vert \theta _t \Vert ^2$ , respectively. Since the parameters $\theta _g$ are shared, we cannot optimize the problems separately. This change of the objective function can be understood as adding additional regularization terms $ 2(\Vert \theta _g \Vert ^2 + \Vert \theta _t \Vert ^2 ) - \Vert \theta _g + \theta _t \Vert ^2, \\ 2(\Vert \theta _g \Vert ^2 + \Vert \theta _s \Vert ^2 ) - \Vert \theta _g + \theta _s \Vert ^2. $ We can easily see that those are equal to $\Vert \theta _g - \theta _t \Vert ^2$ and $\Vert \theta _g - \theta _s \Vert ^2$ , respectively and thus this additional regularization enforces $\theta _g$ and $\theta _t$ (and also $\theta _g$ and $\theta _s$ ) not to be far away. This is how the feature augmentation method shares the domain information between the parameters $w_s$ and $w_t$ . Proposed method Although the above formalization is for an SVM, which has the quadratic cost of parameters, we can apply the idea to the log probability case. In the case of RNN language generation, the loss function of each output is a cross entropy applied to the softmax output $$-\log & p_s(y|y_1, \ldots , y_{t-1}, x) \nonumber \\ &= -w_{s,y}^T h + \log Z(w_s;h), $$ (Eq. 8) where $Z$ is the partition function and $h$ is the hidden state of the LSTM computed by $y_0, \ldots , y_{t-1}$ and $x$ . Again we decompose the word output parameter as $w_s = \theta _g + \theta _s$ . Since $\log Z$ is convex with respect to $w_s$ , we can easily show that the Eq. ( 8 ) is bounded above by $ -&\theta _{g,y}^T h + \frac{1}{2} \log Z(2 \theta _g;x) \\ &-\theta _{s,y}^T h +\frac{1}{2} \log Z(2 \theta _s;x). $ The equality holds if and only if $\theta _g = \theta _s$ . 
Optimizing this upper bound therefore enforces the parameters to remain close while also reducing the cost. The same argument applies to the target parameter $w_t = \theta _g + \theta _t$ . We combine the source and target cost functions and optimize the sum of the above upper-bounds. The derived objective function is $ \frac{1}{n_s} \sum _{(x,y) \in \mathcal {D}_s} [ -\theta _{g,y}^T h& + \frac{1}{2} \log Z(2 \theta _g;x) \\ &-\theta _{s,y}^T h + \frac{1}{2} \log Z(2 \theta _s;x) ] \\ + \frac{1}{n_t} \sum _{(x,y) \in \mathcal {D}_t} [ -\theta _{g,y}^T h &+ \frac{1}{2} \log Z(2 \theta _g;x) \\ & -\theta _{t,y}^T h + \frac{1}{2} \log Z(2 \theta _t;x) ]. $ If we work with the sum of the source and target versions of Eq. ( 8 ), the method is actually the same as Dual because the parameter $\theta _g$ is completely redundant. The difference between that objective and the proposed upper bound acts as a regularization term, which results in good generalization performance. Although our formulation has a single objective, there are three types of cross entropy loss terms, given by $\theta _g$ , $\theta _s$ and $\theta _t$ . We denote them by $\ell (\theta _g), \ell (\theta _s)$ and $\ell (\theta _t)$ , respectively. For the source data, the sum of the general and source loss terms is optimized, and for the target data the sum of the general and target loss terms is optimized. The proposed algorithm is summarized in Algorithm "Proposed Method". Note that $\theta _h$ denotes the parameters of the LSTM except for the output part. In one epoch of training, we use all the data once. We can combine any parameter update method for neural network training, such as Adam BIBREF11 .

Algorithm: Proposed Method
repeat
  Select a minibatch of data from the source or target dataset
  if the minibatch is from the source dataset:
    Optimize $\ell (\theta _g) + \ell (\theta _s)$ with respect to $\theta _g, \theta _s, \theta _h$ for the minibatch
  else:
    Optimize $\ell (\theta _g) + \ell (\theta _t)$ with respect to $\theta _g, \theta _t, \theta _h$ for the minibatch
until the development error increases
Compute $w_s = \theta _g + \theta _s$ and $w_t = \theta _g + \theta _t$ and use them as the output parameters for the respective domains.

Experiments We have conducted domain adaptation experiments on the following three datasets. The first experiment focuses on a situation where domain adaptation is useful. The second experiment shows the benefit of domain adaptation in both directions: from source to target and from target to source. The third experiment shows an improvement in another metric. Although our method is applicable to any neural network with a cross entropy loss, all the experiments use caption generation models because captioning is one of the most successful neural network applications in NLP. Adaptation to food domain captioning This experiment highlights a typical scenario in which domain adaptation is useful. Suppose that we have a large dataset of captioned images taken from daily life, but we would like to generate high quality captions for images from more specialized domains such as minor sports and exotic food. Captioned images for those domains are quite limited due to the annotation cost, so we use domain adaptation methods to improve the captions of the target domain. To simulate this scenario, we split the Microsoft COCO dataset into food and non-food domain datasets. The MS COCO dataset contains approximately 80K images for training and 40K images for validation; each image has 5 captions BIBREF12 .
The dataset contains images of diverse categories, including animals, indoor scenes, sports, and foods. We selected the "food category" data by scoring the captions according to how related they are to the food category. The score is computed based on WordNet similarities BIBREF13 . The training and validation datasets are split by the score with the same threshold. Consequently, the food dataset has 3,806 images for training and 1,775 for validation. The non-food dataset has 78,976 images for training and 38,749 for validation. The selected pictures from the food domain are typically close-ups of food or people eating food. Table 1 shows some captions from the food and non-food domain datasets. Table 2 shows the top twenty most frequent words in the two datasets, excluding stop words. We observe that the frequent words are largely different, but there are still some words common to both datasets. To model the image captioning, we use LSTMs as described in the previous section. The image features are computed by the trained GoogLeNet, and all the LSTMs have a single layer with 300 hidden units BIBREF14 . We use a standard optimization method, Adam BIBREF11 , with hyperparameters $\alpha =0.001$ , $\beta _1=0.9$ and $\beta _2=0.999$ . We stop the training based on the loss on the development set. After training, we generate captions by beam search with a beam size of 5. These settings are the same in the later experiments. We compare the proposed method with the baseline methods. For all the methods, we use Adam with the same hyperparameters. In FineTune, we did not freeze any parameters during the target training. In Dual, all samples in the source and target datasets are weighted equally. We evaluated the performance of the domain adaptation methods by the quality of the generated captions, using BLEU, METEOR and CIDEr scores. The results are summarized in Table 3 . We see that the proposed method improves most of the metrics. The baseline methods SrcOnly and TgtOnly are worse than the other methods because they use limited data for training. Note that CIDEr scores correlate with human evaluations better than BLEU and METEOR scores BIBREF15 . Generated captions for sample images are shown in Table 4 . In the first example, All fails to identify the chocolate cake because there are birds in the source dataset that somehow look similar to chocolate cake. We argue that Proposed learns birds through the source parameters and chocolate cakes through the target parameters, and thus succeeds in generating appropriate captions. Adaptation between MS COCO and Flickr30K In this experiment, we explore the benefit of adaptation in both directions between the domains. Flickr30K is another captioning dataset, consisting of 30K images, each with five captions BIBREF16 . Although the formats of the datasets are almost the same, a model trained on the MS COCO dataset does not work well on the Flickr30K dataset and vice versa. The word distributions of the captions are considerably different. If we ignore words with fewer than 30 counts, MS COCO has 3,655 words and Flickr30K has 2,732 words, and only 1,486 words are shared. The average caption lengths also differ: the average length in Flickr30K is 12.3 while that of MS COCO is 10.5. The first result is the domain adaptation from MS COCO to Flickr30K, summarized in Table 5 . Again, we observe that the proposed method achieves the best scores among the compared methods.
The difference between All and FineTune is bigger than in the previous setting because two datasets have different captions even for similar images. The scores of FineTune and Dual are at almost the same level. The second result is the domain adaptation from Flickr30K to MS COCO shown in Table 6 . This may not be a typical situation because the number of samples in the target domain is larger than that of the source domain. The SrcOnly model is trained only with Flickr30K and tested on the MS COCO dataset. We observe that FineTune gives little benefit over TgtOnly, which implies that the difference of the initial parameters has little effect in this case. Also, Dual gives little benefit over TgtOnly, meaning that the parameter sharing except for the output layer is not important in this case. Note that the CIDEr score of Proposed is slightly improved. Figure 2 shows the comparison of FineTune and Proposed, changing the number of the Flickr samples to 1600, 6400 and 30K. We observe that FineTune works relatively well when the target domain dataset is small. Answer sentence selection In this experiment, we use the captioning model as an affinity measure of images and sentences. TOEIC part 1 test consists of four-choice questions for English learners. The correct choice is the sentence that best describes the shown image. Questions are not easy because there are confusing keywords in wrong choices. An example of the question is shown in Table 7 . We downloaded 610 questions from http://www.english-test.net/toeic/ listening/. Our approach here is to select the most probable choice given the image by captioning models. We train captioning models with the images and correct answers from the training set. Since the TOEIC dataset is small, domain adaptation can give a large benefit. We compared the domain adaptation methods by the percentage of correct answers. The source dataset is 40K samples from MS COCO and the target dataset is the TOEIC dataset. We split the TOEIC dataset to 400 samples for training and 210 samples for testing. The percentages of correct answers for each method are summarized in Table 8 . Since the questions have four choices, all methods should perform better than 25%. TgtOnly is close to the baseline because the model is trained with only 400 samples. As in the previous experiments, FineTune and Dual are better than All and Proposed is better than the other methods. Conclusion and Future Work We have proposed a new method for supervised domain adaptation of neural networks. On captioning datasets, we have shown that the method outperforms other standard adaptation methods applicable to neural networks. The proposed method only decomposes the output word parameters, where other parameters, such as word embedding, are completely shared across the domains. Augmentation of parameters in the other part of the network would be an interesting direction of future work.
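Looking back at the answer-sentence selection experiment above, the selection step can be summarized in a short sketch; this is our own hedged illustration, and the model(image_feat, tokens) interface returning per-step logits is an assumption rather than the authors' actual code.

import torch
import torch.nn.functional as F

def sentence_log_prob(model, image_feat, token_ids):
    # Sum of log p(y_t | y_1, ..., y_{t-1}, x) for one candidate sentence.
    # `model(image_feat, inputs)` is assumed to return logits of shape (T-1, vocab),
    # where row t predicts token_ids[t+1] from the preceding tokens (teacher forcing).
    logits = model(image_feat, token_ids[:-1])
    log_probs = F.log_softmax(logits, dim=-1)
    targets = token_ids[1:]
    return log_probs.gather(1, targets.unsqueeze(1)).sum().item()

def select_answer(model, image_feat, choices):
    # Pick the TOEIC choice (out of four) with the highest log-probability under the captioner.
    return max(choices, key=lambda ids: sentence_log_prob(model, image_feat, ids))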
Yes
e86d381322c8db2b74a13a8e23082ddb010c1e40
e86d381322c8db2b74a13a8e23082ddb010c1e40_0
Q: how well does this method compare to other methods? Text: Crowd Sourced Data Analysis: Mapping of Programming Concepts to Syntactical Patterns Deepak Thukral ([email protected]) & Darvesh Punia ([email protected]) Since programming concepts do not match their syntactic representations, code search is a very tedious task. For instance, in Java or C, array does not match [], so using "array" as a query, one cannot find what one is looking for. Developers often have to search code, whether to understand it, to reuse some part of it, or simply to read it; without natural language search, they often have to scroll back and forth or use variable names as their queries. In our work, we have used Stack Overflow (SO) questions and answers to build a mapping of programming concepts to their respective natural language keywords, and then tag these natural language terms to every line of code, which can further be used for searching with natural language keywords. Keywords: Data Analysis, Stack Overflow, Code Search, Natural Language Processing, Information Retrieval, Entity Discovery, Classification, Topic Modelling. Introduction In many community-based information web sites, such as Stack Overflow, users contribute content in the form of questions and answers, which allows others to learn through the contributions of the community. In our project we intend to use Stack Overflow as a tool to improve source code search. We focus on questions related to Java for research purposes. We consulted the paper "Ranking Crowd Knowledge to Assist Software Development", which categorizes all the questions into 4 types. We realized that categorization of questions could be a good step in solving our problem. The questions were categorized into broadly 4 categories: Debug, How To Do It, Seeking Different Solution and Need To Know/Conceptual. Searching in code is often done by developers, and looking through thousands of lines of code to find things is not only time consuming but also tiring. Developers always need to search for code fragments when they write code, when they want to debug, when they look up code fragments to reuse, or when they try to understand somebody else's code. In our work we create a mapping of programming concepts to their syntactical patterns using Stack Overflow questions and answers. We created it using three major steps: Entity Discovery: a Parts of Speech (POS) technique is used to discover more entities; Mapping Creation: we extract syntactical patterns matching programming concepts; Entity Linking: finally, each line of source code is annotated with its associated natural language terms using the mapping created, which further allows keyword based searching. Literature Survey Word2Vec Word2Vec is a two-layer neural network that is trained to represent words. It requires a huge corpus of text as input, and its output is a vector for each word present in the input corpus. The word vectors also carry contextual information for each word. A typical example of how well the technique can work is that, in the vector representation, "King" - "Man" + "Woman" = "Queen". Doc2Vec Doc2vec adapts word2vec to a paragraph or a document. It represents a document by an n-dimensional vector. Each paragraph is often represented by a tag, and this tag can be used to find similar documents. It is often used for document-level matching. Topic Modelling Topic modelling is an unsupervised technique used in text mining that can extract topics from a given text.
Topic modelling is typically done using LDA. It is able to surface topics that may be hidden in the intricacies of the text. Latent Dirichlet Allocation (LDA) LDA assumes that there is some distribution of topics in each document that generates its words, and hence the document itself. Initially, a random topic distribution is assumed. Then, for each document, the mixture of topics is found. Then, assuming the underlying distribution of topics, the probability of generating that document is computed. This is done over several iterations, ultimately yielding a stable generative model. Term frequency There are several ways to incorporate the frequency of a word into its weight. A typical way is to simply use the frequency count of that word as its term frequency. Inverse document frequency This term penalises the weight of a word if it appears in many documents. It measures how important a given term could be across the entire document collection, depending on how rare it is. Mathematically, it is written as $$idf(w,D) = \log \frac{N}{|\lbrace d : w \in d, d \in D \rbrace |}\ $$ (Eq. 1) where N is the total number of documents and D represents all the documents. Term frequency Inverse document frequency Tf-idf is finally computed simply as the product of tf and idf for any given word. Dataset and Preprocessing We chose Java as the topic for research purposes, although at times the questions were further rewritten to test various techniques. We adopted the following preprocessing steps. Entity Discovery: We started off with 20 manually selected entities belonging to the Java language, taken from TutorialsPoint, and then ran the discovery procedure over questions obtained from posts tagged "java". Entity-Profile Construction: For this step we require SO posts that have the tag attribute of a specific language (in our case Java) and contain at least one code snippet. We took 7,500,000 posts (15 files of 500,000 posts each), and after processing we obtained 110,000 posts that had the tag "Java" and contained at least one code snippet per post. Categorization of questions We started with the paper 'Ranking Crowd Knowledge to Assist Software Development', which discusses the categorization of questions into different types and identifies the characteristics/attributes of effective answers. It also focuses on finding the kinds of answers or code examples that help developers and maintainers solve their problems, and how they differ from not-so-helpful examples. Topic Modeling using LDA LDA was applied to the questions of Stack Overflow to divide the text into topics. The top few words of each topic were then examined, as shown in the results. The main parameter of the model was the number of topics. To get clearer results we mixed in 200 articles from https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Java/List_of_articles. We ensured that the noise level was low so that there was not much difference in the results. We initially kept the number of topics equal to 4, as discussed in the paper; a minimal sketch of this step is given below.
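The paper does not specify its LDA implementation; a minimal scikit-learn-based sketch of this step might look as follows, where the preprocessing choices (stop-word removal, frequency cut-offs) and function names are our assumptions.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def lda_topics(questions, n_topics=4, top_k=15):
    # Fit LDA on the SO questions (plus the mixed-in Wikipedia articles) and return
    # the top words of each topic, mirroring the 4-topic setup described above.
    vectorizer = CountVectorizer(stop_words="english", max_df=0.95, min_df=5)
    counts = vectorizer.fit_transform(questions)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    vocab = vectorizer.get_feature_names_out()
    topics = []
    for topic_weights in lda.components_:
        top_idx = topic_weights.argsort()[::-1][:top_k]
        topics.append([vocab[i] for i in top_idx])
    return topics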
Topic Modeling with 4 topics (LDA model) Topic #0 : class, file, example, data, public, type, classes, gt, lt, interface, client, objects, method, xml, database Topic #1 : project, version, new, development, released, web, application, release, server, framework, sun, apache, available, users, based Topic #2 : object, api, org, string, return, method, function, called, case, private, class, reference, add, added, net Topic #3 : java, code, language, source, following, implementation, platform, software, applications, oracle, provides, program, features, run, based Our aim was to find 4 topics but the result was not satisfactory as the words similar to the top words for each topic was not good for classifying questions. However, we found that topics were created around the concepts of java eg. topics related database, networking etc. Word2Vec and Doc2Vec In the paper they did categorization of questions into four categories from where we discovered the the top word corresponding to the each category as follows : We need to set the context window which determines how many words before and after a given word would be included as context words of the given word. We ran Word2Vec for context window size of 10. Then we found the most similar words from the model trained corresponding to the top words found above. Here also, as above WikiPedia articles were used. The vectors obtained can be used to find a word similar to a given word, or to perform clustering. 4 topics( Word2Vec ) : Window Size = 10 The numerical value against each word indicate how similar it is to the top word of that category. suggest: [('advise', 0.9350534081459045), ('explain', 0.8972327709197998), ('tell', 0.8767426013946533), ('help', 0.8588043451309204), ('assist', 0.8563621640205383), ('bear', 0.853961706161499), ('suggestion', 0.8530131578445435), ('know', 0.8440178632736206), ('enlighten', 0.8383816480636597), ('assistance', 0.8383312225341797)] debug: [('isdebugenabled', 0.6271491646766663), ('stacktrace', 0.6043941378593445), ('getcontentlength', 0.6030460000038147), ('inflight', 0.5879602432250977), ('p_get_class_schedule', 0.5865908265113831), ('refreshing', 0.5822206735610962), ('setdebug', 0.5703291893005371), ('defaulting', 0.5698220729827881), ('loglevel', 0.565199613571167), ('logger', 0.5500662422180176)] implement: [(‘algorithm', 0.8106338977813721), ('abstract', 0.7818029522895813), ('serializable', 0.7369078993797302), ('interface', 0.7162636518478394), ('extend', 0.7146245241165161), ('implementing', 0.7100733518600464), ('extending', 0.691871166229248), ('subclass', 0.6590582132339478), ('webapplicationinitializer', 0.6560215353965759), ('class', 0.6359965801239014)] explain: [('quite', 0.8827763795852661), ('clarify', 0.8689993023872375), ('perhaps', 0.8645872473716736), ('especially', 0.863167941570282), ('better', 0.8565487861633301), ('difficult', 0.8529151678085327), ('express', 0.8518675565719604), ('situation', 0.849044680595398), ('performance', 0.8483530879020691), ('fast', 0.8476318120956421)] The numerical value against each word indicate how similar it is to the top word of that category/topic. 
suggest: [('advise', 0.7253733277320862), ('help', 0.7107036113739014), (‘guide', 0.7041284441947937), ('assist', 0.6895503997802734), ('tell', 0.6714411973953247), ('enlighten', 0.6282055377960205), ('know', 0.6244207620620728), (‘bear', 0.6184274554252625), ('excuse', 0.5978437066078186), ('suggestion', 0.5930529832839966)] implement: [('implementing', 0.5342041850090027), (‘algorithm', 0.5311474800109863), (‘extend', 0.52436097860336304), (‘execute', 0.5210355854034424), (‘function', 0.5163625431060791), ('abstract', 0.5103716850280762), ('interface', 0.46266812086105347), ('public', 0.3308292627334595), ('keypressthread', 0.3260781168937683), ('instantiable', 0.3248624801635742)] debug: [('isdebugenabled', 0.3359779119491577), (‘rectify’, 0.3264111280441284), ('stacktrace', 0.32063740491867065), ('logcat', 0.32000619173049927), ('setdebug', 0.3003194332122803), ('exec', 0.2963804602622986), ('logger', 0.2900662422180176), ('getanonymouslogger', 0.2807658016681671), ('conn', 0.2756619453430176), ('rootlogger', 0.26862865686416626)] explain: [(‘clarify', 0.5776931047439575), (‘difference', 0.5335918664932251), ('quite', 0.5324383974075317), (‘express', 0.5210134983062744), ('understanding', 0.4876328408718109), (‘sense', 0.48241132497787476), ('perhaps', 0.470624178647995), ('might', 0.4641563892364502), ('much', 0.4596584439277649), ('perhaps', 0.470624178647995), ('situation', 0.440624178647995)] Classifying We applied classifiers such as Naive Bayes, Logistic Regression and LSTM to predict the result. For the evaluation( of 4000 posts data ), we kept 80% data as training data i.e. 3200 samples and 20% data for testing i.e. 800 samples per class. After we got a list of words and the different words to which they are similar, we manually annotate the data according the class we want. The How-to-do-it category is very close to scenario in which a developer has a programming task at hand and need to solve it. For this reason, in our approach, we only consider Q&A pairs that are classified as How-to-do-it. tableAccuracy of Classifiers Used Approach Our objective is to automatically tag each line of a given source code, in such a manner that the line matches it’s associated named entity. To achieve this, we used: (1) Entity Discovery, (2) Entity Profile Construction (3) Entity Linking. Entity Discovery We extracted 20 entities from TutorialsPoint, and then applied this scheme to extract more. So, what we did was for each entity which appears in title of question, we extracted those questions. Then divided this data into 80% and 20%. For the 80% data, we did POS tagging of questions, and then used those patterns which have a minimum support of 0.1 (normalized). For each pattern we saw if it existed in 20% of data, and if yes we added it in vector. Now, we sorted all the generated patterns from maximum to minimum, and then took 5 first distinct entities. After discovering 5 entities from a pattern, we stopped to remove redundancy and avoid not so useful entities. Consider example: For array, the most frequent pattern was NN IN DT ENTITY IN NNS where ENTITY is the placeholder for array. As an example, the SO title, "How to determine type of object/NN in/IN an/DT array/ENTITY of/IN objects/NNS" has this frequent pattern. Same pattern appears in another title, "Get an array of int/NN from/IN a/DT string/ENTITY of/IN numbers/NNS". So we gather that both array and string have the same PoS sequence. This is how we discovered other entities. 
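A rough sketch of this entity discovery step is given below (using NLTK for PoS tagging); the context window of three tags before and two after the entity, the support threshold, and the collapsing of the 80/20 split into a single pass are our simplifications of the procedure described above.

import nltk
from collections import Counter, defaultdict

# Requires the NLTK 'punkt' and 'averaged_perceptron_tagger' resources.

def pos_window(tagged, i, before=3, after=2):
    # PoS pattern around position i, with the token at i replaced by ENTITY,
    # e.g. "... type of object in an array of objects" -> "NN IN DT ENTITY IN NNS".
    left = [tag for _, tag in tagged[max(0, i - before):i]]
    right = [tag for _, tag in tagged[i + 1:i + 1 + after]]
    return " ".join(left + ["ENTITY"] + right)

def discover_entities(titles, seed_entities, min_support=0.1, per_pattern=5):
    seeds = set(seed_entities)
    tagged_titles = [nltk.pos_tag(nltk.word_tokenize(t.lower())) for t in titles]
    # 1) Collect the PoS patterns around known entities and keep the frequent ones.
    counts = Counter()
    for tagged in tagged_titles:
        for i, (tok, _) in enumerate(tagged):
            if tok in seeds:
                counts[pos_window(tagged, i)] += 1
    total = sum(counts.values()) or 1
    frequent = {p for p, c in counts.items() if c / total >= min_support}
    # 2) Tokens that fill the ENTITY slot of a frequent pattern become candidate entities.
    candidates = defaultdict(list)
    for tagged in tagged_titles:
        for i, (tok, _) in enumerate(tagged):
            pat = pos_window(tagged, i)
            if pat in frequent and tok not in seeds and tok not in candidates[pat] \
                    and len(candidates[pat]) < per_pattern:
                candidates[pat].append(tok)
    return dict(candidates)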
We could extract about 2,000 more entities (200 of which are useful) after this step. Entity Profile Construction In this step, we create a profile for each entity that we have discovered by matching it to its respective syntactical patterns. If we are interested in the entity "conditional", then "conditional" has a syntactical structure composed of a few tokens; this structure is repeated across multiple code snippets, and we attach it to the keyword "conditional". So we want to discover the patterns in source code that are associated with specific entities (like array or conditional). For array we see that it is best matched with [ ], whereas conditional is best matched with if ( ). We need to identify the most appropriate n-grams that represent a specific entity in the dataset, so we use TF-IDF over n-grams to identify the syntactic patterns that are most associated with a given entity. To compute the term frequency tf(t, g) of an n-gram g, we use the SO posts containing the entity name in the title (Table 5.1). For the IDF computation, we use all SO posts. Thus, we use the TF-IDF weight = tf(t, g) x log(|D| / df(g)), where |D| is the total number of posts in SO and df(g) is the number of such posts containing the n-gram g. Table 5.2 shows the results of these steps for a few entities. Results for Entity Profile Construction: Table: Patterns and frequencies for loop in Java snippets. Table: Precision@4 along with the top pattern discovered for some of the entities. Entity Linking In this step, we annotate every line of a given source code with the entity names that appear in that line. Every line of code is cleaned by removing user-defined terms, to focus only on programming keywords. After the line is read and cleaned, we start by treating each term as a uni-gram and then switch to bi-grams, tri-grams, and so on, until all the n-grams are covered. We use the entity profiles created earlier: once the n-grams of a line are generated, we match them against the syntactical patterns of each entity (from its profile) and determine whether they match. Once an entity has been determined for a line of code, we annotate that line with the entity name as a comment. This further helps in searching within source code using regular keywords. A given line can have multiple entities, but we need to mark the most relevant ones; therefore, to keep it concise, we annotate no more than four entities per line. Results of Entity Linking: Figure: Annotated code after Step 3. Performing Search We used Apache Lucene for this. Lucene takes all the documents, splits them into words, and then builds an index for each word. The index contains the word id, the number of documents where the word is present, and the position of the word in those documents. So when given a single-word query, it simply searches the index and returns the result. For a multi-word query, it takes the intersection of the sets of documents where the words are present.
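Stepping back over the Entity Profile Construction and Entity Linking steps described above, a self-contained sketch of the pipeline might look as follows; the tokenization, the treatment of individual code lines as "documents" for the IDF term, and the comment format used for annotation are our simplifications.

import math
import re
from collections import Counter

def code_ngrams(line, n_max=3):
    # Token n-grams of a cleaned line of code, e.g. "for ( int" for a loop header.
    tokens = re.findall(r"[A-Za-z_]+|[^\sA-Za-z_]", line)
    grams = []
    for n in range(1, n_max + 1):
        grams += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return grams

def build_entity_profiles(entity_lines, all_lines, top_k=4):
    # entity_lines: {entity: [code lines from posts with the entity name in the title]};
    # all_lines: every code line in the corpus, used for the document-frequency term.
    df = Counter()
    for line in all_lines:
        df.update(set(code_ngrams(line)))
    profiles = {}
    for entity, lines in entity_lines.items():
        tf = Counter(g for line in lines for g in code_ngrams(line))
        scored = {g: c * math.log(len(all_lines) / (1 + df[g])) for g, c in tf.items()}
        profiles[entity] = [g for g, _ in sorted(scored.items(), key=lambda kv: -kv[1])[:top_k]]
    return profiles

def annotate(code_lines, profiles):
    # Entity linking: tag each line with up to four entities whose top patterns occur in it.
    annotated = []
    for line in code_lines:
        grams = set(code_ngrams(line))
        hits = [e for e, pats in profiles.items() if any(p in grams for p in pats)][:4]
        annotated.append(line + ("  // " + ", ".join(hits) if hits else ""))
    return annotated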
Unanswerable
5712a0b1e33484ebc6d71c70ae222109c08dede2
5712a0b1e33484ebc6d71c70ae222109c08dede2_0
Q: What benchmark datasets they use? Text: Introduction This paper presents a compositional, attentional model for answering questions about a variety of world representations, including images and structured knowledge bases. The model translates from questions to dynamically assembled neural networks, then applies these networks to world representations (images or knowledge bases) to produce answers. We take advantage of two largely independent lines of work: on one hand, an extensive literature on answering questions by mapping from strings to logical representations of meaning; on the other, a series of recent successes in deep neural models for image recognition and captioning. By constructing neural networks instead of logical forms, our model leverages the best aspects of both linguistic compositionality and continuous representations. Our model has two components, trained jointly: first, a collection of neural “modules” that can be freely composed (fig:teasera); second, a network layout predictor that assembles modules into complete deep networks tailored to each question (fig:teaserb). Previous work has used manually-specified modular structures for visual learning BIBREF0 . Here we: Training data consists of (world, question, answer) triples: our approach requires no supervision of network layouts. We achieve state-of-the-art performance on two markedly different question answering tasks: one with questions about natural images, and another with more compositional questions about United States geography. Deep networks as functional programs We begin with a high-level discussion of the kinds of composed networks we would like to learn. Andreas15NMN describe a heuristic approach for decomposing visual question answering tasks into sequence of modular sub-problems. For example, the question What color is the bird? might be answered in two steps: first, “where is the bird?” (fig:examplesa), second, “what color is that part of the image?” (fig:examplesc). This first step, a generic module called find, can be expressed as a fragment of a neural network that maps from image features and a lexical item (here bird) to a distribution over pixels. This operation is commonly referred to as the attention mechanism, and is a standard tool for manipulating images BIBREF1 and text representations BIBREF2 . The first contribution of this paper is an extension and generalization of this mechanism to enable fully-differentiable reasoning about more structured semantic representations. fig:examplesb shows how the same module can be used to focus on the entity Georgia in a non-visual grounding domain; more generally, by representing every entity in the universe of discourse as a feature vector, we can obtain a distribution over entities that corresponds roughly to a logical set-valued denotation. Having obtained such a distribution, existing neural approaches use it to immediately compute a weighted average of image features and project back into a labeling decision—a describe module (fig:examplesc). But the logical perspective suggests a number of novel modules that might operate on attentions: e.g. combining them (by analogy to conjunction or disjunction) or inspecting them directly without a return to feature space (by analogy to quantification, fig:examplesd). These modules are discussed in detail in sec:model. Unlike their formal counterparts, they are differentiable end-to-end, facilitating their integration into learned models. 
Building on previous work, we learn behavior for a collection of heterogeneous modules from (world, question, answer) triples. The second contribution of this paper is a model for learning to assemble such modules compositionally. Isolated modules are of limited use—to obtain expressive power comparable to either formal approaches or monolithic deep networks, they must be composed into larger structures. fig:examples shows simple examples of composed structures, but for realistic question-answering tasks, even larger networks are required. Thus our goal is to automatically induce variable-free, tree-structured computation descriptors. We can use a familiar functional notation from formal semantics (e.g. Liang et al., 2011) to represent these computations. We write the two examples in fig:examples as (describe[color] find[bird]) and (exists find[state]) respectively. These are network layouts: they specify a structure for arranging modules (and their lexical parameters) into a complete network. Andreas15NMN use hand-written rules to deterministically transform dependency trees into layouts, and are restricted to producing simple structures like the above for non-synthetic data. For full generality, we will need to solve harder problems, like transforming What cities are in Georgia? (fig:teaser) into (and find[city] (relate[in] lookup[Georgia])) In this paper, we present a model for learning to select such structures from a set of automatically generated candidates. We call this model a dynamic neural module network. Related work There is an extensive literature on database question answering, in which strings are mapped to logical forms, then evaluated by a black-box execution model to produce answers. Supervision may be provided either by annotated logical forms BIBREF3 , BIBREF4 , BIBREF5 or from (world, question, answer) triples alone BIBREF6 , BIBREF7 . In general the set of primitive functions from which these logical forms can be assembled is fixed, but one recent line of work focuses on inducing new predicates functions automatically, either from perceptual features BIBREF8 or the underlying schema BIBREF9 . The model we describe in this paper has a unified framework for handling both the perceptual and schema cases, and differs from existing work primarily in learning a differentiable execution model with continuous evaluation results. Neural models for question answering are also a subject of current interest. These include approaches that model the task directly as a multiclass classification problem BIBREF10 , models that attempt to embed questions and answers in a shared vector space BIBREF11 and attentional models that select words from documents sources BIBREF2 . Such approaches generally require that answers can be retrieved directly based on surface linguistic features, without requiring intermediate computation. A more structured approach described by Yin15NeuralTable learns a query execution model for database tables without any natural language component. Previous efforts toward unifying formal logic and representation learning include those of Grefenstette13Logic, Krishnamurthy13CompVector, Lewis13DistributionalLogical, and Beltagy13Markov. The visually-grounded component of this work relies on recent advances in convolutional networks for computer vision BIBREF12 , and in particular the fact that late convolutional layers in networks trained for image recognition contain rich features useful for other vision tasks while preserving spatial information. 
These features have been used for both image captioning BIBREF1 and visual QA BIBREF13 . Most previous approaches to visual question answering either apply a recurrent model to deep representations of both the image and the question BIBREF14 , BIBREF15 , or use the question to compute an attention over the input image, and then answer based on both the question and the image features attended to BIBREF13 , BIBREF16 . Other approaches include the simple classification model described by Zhou15ClassVQA and the dynamic parameter prediction network described by Noh15DPPVQA. All of these models assume that a fixed computation can be performed on the image and question to compute the answer, rather than adapting the structure of the computation to the question. As noted, Andreas15NMN previously considered a simple generalization of these attentional approaches in which small variations in the network structure per-question were permitted, with the structure chosen by (deterministic) syntactic processing of questions. Other approaches in this general family include the “universal parser” sketched by Bottou14Reasoning, the graph transformer networks of Bottou97GraphTransformers, the knowledge-based neural networks of Towell94KBNN and the recursive neural networks of Socher13CVG, which use a fixed tree structure to perform further linguistic analysis without any external world representation. We are unaware of previous work that simultaneously learns both parameters for and structures of instance-specific networks. Model Recall that our goal is to map from questions and world representations to answers. This process involves the following variables: Our model is built around two distributions: a layout model $p(z|x;\theta _\ell )$ which chooses a layout for a sentence, and a execution model $p_z(y|w;\theta _e)$ which applies the network specified by $z$ to $w$ . For ease of presentation, we introduce these models in reverse order. We first imagine that $z$ is always observed, and in sec:model:modules describe how to evaluate and learn modules parameterized by $\theta _e$ within fixed structures. In sec:model:assemblingNetworks, we move to the real scenario, where $z$ is unknown. We describe how to predict layouts from questions and learn $\theta _e$ and $\theta _\ell $ jointly without layout supervision. Evaluating modules Given a layout $z$ , we assemble the corresponding modules into a full neural network (fig:teaserc), and apply it to the knowledge representation. Intermediate results flow between modules until an answer is produced at the root. We denote the output of the network with layout $z$ on input world $w$ as $\llbracket z \rrbracket _w$ ; when explicitly referencing the substructure of $z$ , we can alternatively write $\llbracket m(h^1, h^2) \rrbracket $ for a top-level module $m$ with submodule outputs $h^1$ and $h^2$ . We then define the execution model: $$p_z(y|w) = (\llbracket z \rrbracket _w)_y$$ (Eq. 13) (This assumes that the root module of $z$ produces a distribution over labels $y$ .) The set of possible layouts $z$ is restricted by module type constraints: some modules (like find above) operate directly on the input representation, while others (like describe above) also depend on input from specific earlier modules. Two base types are considered in this paper are Attention (a distribution over pixels or entities) and Labels (a distribution over answers). 
Parameters are tied across multiple instances of the same module, so different instantiated networks may share some parameters but not others. Modules have both parameter arguments (shown in square brackets) and ordinary inputs (shown in parentheses). Parameter arguments, like the running bird example in sec:programs, are provided by the layout, and are used to specialize module behavior for particular lexical items. Ordinary inputs are the result of computation lower in the network. In addition to parameter-specific weights, modules have global weights shared across all instances of the module (but not shared with other modules). We write $A, a, B, b, \dots $ for global weights and $u^i, v^i$ for weights associated with the parameter argument $i$ . $\oplus $ and $\odot $ denote (possibly broadcasted) elementwise addition and multiplication respectively. The complete set of global weights and parameter-specific weights constitutes $\theta _e$ . Every module has access to the world representation, represented as a collection of vectors $w^1, w^2, \dots $ (or $W$ expressed as a matrix). The nonlinearity $\sigma $ denotes a rectified linear unit. The modules used in this paper are shown below, with names and type constraints in the first row and a description of the module's computation following. =3mm |p0.95| Lookup ( $\rightarrow $ Attention) lookup[ $i$ ] produces an attention focused entirely at the index $f(i)$ , where the relationship $f$ between words and positions in the input map is known ahead of time (e.g. string matches on database fields). $$\llbracket {\texttt {lookup[i]}} \rrbracket = e_{f(i)}$$ (Eq. 14) where $e_i$ is the basis vector that is 1 in the $i$ th position and 0 elsewhere. Find ( $\rightarrow $ Attention) find[ $i$ ] computes a distribution over indices by concatenating the parameter argument with each position of the input feature map, and passing the concatenated vector through a MLP: $$\llbracket {\texttt {find[i]}} \rrbracket = \textrm {softmax}(a \odot \sigma (B v^i \oplus C W \oplus d))$$ (Eq. 15) Relate (Attention $\rightarrow $ Attention) relate directs focus from one region of the input to another. It behaves much like the find module, but also conditions its behavior on the current region of attention $h$ . Let $\bar{w}(h) = \sum _k h_k w^k$ , where $h_k$ is the $k^{th}$ element of $h$ . Then, $$& \llbracket {\texttt {relate[i]}}(h) \rrbracket = \textrm {softmax}(a \ \odot \nonumber \\ &\qquad \sigma (B v^i \oplus C W \oplus D\bar{w}(h) \oplus e))$$ (Eq. 16) And (Attention * $\rightarrow $ Attention) and performs an operation analogous to set intersection for attentions. The analogy to probabilistic logic suggests multiplying probabilities: $$\llbracket {\texttt {and}}(h^1, h^2, \ldots ) \rrbracket = h^1 \odot h^2 \odot \cdots $$ (Eq. 17) Describe (Attention $\rightarrow $ Labels) describe[ $i$ ] computes a weighted average of $w$ under the input attention. This average is then used to predict an answer representation. With $\bar{w}$ as above, $$\llbracket {\texttt {describe[i]}}(h) \rrbracket = \textrm {softmax}(A \sigma (B \bar{w}(h) + v^i))$$ (Eq. 18) Exists (Attention $\rightarrow $ Labels) exists is the existential quantifier, and inspects the incoming attention directly to produce a label, rather than an intermediate feature vector like describe: $$\llbracket {\texttt {exists]}}(h) \rrbracket = \textrm {softmax}\Big (\big (\max _k h_k\big )a + b\Big )$$ (Eq. 
19) With $z$ observed, the model we have described so far corresponds largely to that of Andreas15NMN, though the module inventory is different—in particular, our new exists and relate modules do not depend on the two-dimensional spatial structure of the input. This enables generalization to non-visual world representations. Learning in this simplified setting is straightforward. Assuming the top-level module in each layout is a describe or exists module, the fully- instantiated network corresponds to a distribution over labels conditioned on layouts. To train, we maximize $ \sum _{(w,y,z)} \log p_z(y|w;\theta _e) $ directly. This can be understood as a parameter-tying scheme, where the decisions about which parameters to tie are governed by the observed layouts $z$ . Assembling networks Next we describe the layout model $p(z|x;\theta _\ell )$ . We first use a fixed syntactic parse to generate a small set of candidate layouts, analogously to the way a semantic grammar generates candidate semantic parses in previous work BIBREF17 . A semantic parse differs from a syntactic parse in two primary ways. First, lexical items must be mapped onto a (possibly smaller) set of semantic primitives. Second, these semantic primitives must be combined into a structure that closely, but not exactly, parallels the structure provided by syntax. For example, state and province might need to be identified with the same field in a database schema, while all states have a capital might need to be identified with the correct (in situ) quantifier scope. While we cannot avoid the structure selection problem, continuous representations simplify the lexical selection problem. For modules that accept a vector parameter, we associate these parameters with words rather than semantic tokens, and thus turn the combinatorial optimization problem associated with lexicon induction into a continuous one. Now, in order to learn that province and state have the same denotation, it is sufficient to learn that their associated parameters are close in some embedding space—a task amenable to gradient descent. (Note that this is easy only in an optimizability sense, and not an information-theoretic one—we must still learn to associate each independent lexical item with the correct vector.) The remaining combinatorial problem is to arrange the provided lexical items into the right computational structure. In this respect, layout prediction is more like syntactic parsing than ordinary semantic parsing, and we can rely on an off-the-shelf syntactic parser to get most of the way there. In this work, syntactic structure is provided by the Stanford dependency parser BIBREF18 . The construction of layout candidates is depicted in fig:layout, and proceeds as follows: Represent the input sentence as a dependency tree. Collect all nouns, verbs, and prepositional phrases that are attached directly to a wh-word or copula. Associate each of these with a layout fragment: Ordinary nouns and verbs are mapped to a single find module. Proper nouns to a single lookup module. Prepositional phrases are mapped to a depth-2 fragment, with a relate module for the preposition above a find module for the enclosed head noun. Form subsets of this set of layout fragments. For each subset, construct a layout candidate by joining all fragments with an and module, and inserting either a measure or describe module at the top (each subset thus results in two parse candidates.) 
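A small sketch of this candidate construction is shown below; it abstracts away the Stanford dependency parsing step and assumes the layout fragments have already been extracted, with the nested-tuple representation being our own notation.

from itertools import combinations

def candidate_layouts(fragments):
    # Every non-empty subset of fragments is joined with an 'and' module and capped
    # with either a 'measure' or a 'describe' module, giving two candidates per subset.
    candidates = []
    for k in range(1, len(fragments) + 1):
        for subset in combinations(fragments, k):
            body = subset[0] if len(subset) == 1 else ('and',) + subset
            candidates.append(('measure', body))
            candidates.append(('describe', body))
    return candidates

# For "What cities are in Georgia?" the extracted fragments might be
#   [('find', 'city'), ('relate[in]', ('lookup', 'Georgia'))],
# and one of the produced candidates is
#   ('describe', ('and', ('find', 'city'), ('relate[in]', ('lookup', 'Georgia')))),
# closely matching the layout discussed earlier.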
All layouts resulting from this process feature a relatively flat tree structure with at most one conjunction and one quantifier. This is a strong simplifying assumption, but appears sufficient to cover most of the examples that appear in both of our tasks. As our approach includes both categories, relations and simple quantification, the range of phenomena considered is generally broader than previous perceptually-grounded QA work BIBREF8 , BIBREF19 . Having generated a set of candidate parses, we need to score them. This is a ranking problem; as in the rest of our approach, we solve it using standard neural machinery. In particular, we produce an LSTM representation of the question, a feature-based representation of the query, and pass both representations through a multilayer perceptron (MLP). The query feature vector includes indicators on the number of modules of each type present, as well as their associated parameter arguments. While one can easily imagine a more sophisticated parse-scoring model, this simple approach works well for our tasks. Formally, for a question $x$ , let $h_q(x)$ be an LSTM encoding of the question (i.e. the last hidden layer of an LSTM applied word-by-word to the input question). Let $\lbrace z_1, z_2, \ldots \rbrace $ be the proposed layouts for $x$ , and let $f(z_i)$ be a feature vector representing the $i$ th layout. Then the score $s(z_i|x)$ for the layout $z_i$ is $$s(z_i|x) = a^\top \sigma (B h_q(x) + C f(z_i) + d)$$ (Eq. 26) i.e. the output of an MLP with inputs $h_q(x)$ and $f(z_i)$ , and parameters $\theta _\ell = \lbrace a, B, C, d\rbrace $ . Finally, we normalize these scores to obtain a distribution: $$p(z_i|x;\theta _\ell ) = e^{s(z_i|x)} \Big / \sum _{j=1}^n e^{s(z_j|x)}$$ (Eq. 27) Having defined a layout selection module $p(z|x;\theta _\ell )$ and a network execution model $p_z(y|w;\theta _e)$ , we are ready to define a model for predicting answers given only (world, question) pairs. The key constraint is that we want to minimize evaluations of $p_z(y|w;\theta _e)$ (which involves expensive application of a deep network to a large input representation), but can tractably evaluate $p(z|x;\theta _\ell )$ for all $z$ (which involves application of a shallow network to a relatively small set of candidates). This is the opposite of the situation usually encountered semantic parsing, where calls to the query execution model are fast but the set of candidate parses is too large to score exhaustively. In fact, the problem more closely resembles the scenario faced by agents in the reinforcement learning setting (where it is cheap to score actions, but potentially expensive to execute them and obtain rewards). We adopt a common approach from that literature, and express our model as a stochastic policy. Under this policy, we first sample a layout $z$ from a distribution $p(z|x;\theta _\ell )$ , and then apply $z$ to the knowledge source and obtain a distribution over answers $p(y|z,w;\theta _e)$ . After $z$ is chosen, we can train the execution model directly by maximizing $\log p(y|z,w;\theta _e)$ with respect to $\theta _e$ as before (this is ordinary backpropagation). Because the hard selection of $z$ is non-differentiable, we optimize $p(z|x;\theta _\ell )$ using a policy gradient method. The gradient of the reward surface $J$ with respect to the parameters of the policy is $$\nabla J(\theta _\ell ) = \mathbb {E}[ \nabla \log p(z|x;\theta _\ell ) \cdot r ]$$ (Eq. 28) (this is the reinforce rule BIBREF20 ). 
Here the expectation is taken with respect to rollouts of the policy, and $r$ is the reward. Because our goal is to select the network that makes the most accurate predictions, we take the reward to be identically the negative log-probability from the execution phase, i.e. $$\mathbb {E}[(\nabla \log p(z|x;\theta _\ell )) \cdot \log p(y|z,w;\theta _e)]$$ (Eq. 29) Thus the update to the layout-scoring model at each timestep is simply the gradient of the log-probability of the chosen layout, scaled by the accuracy of that layout's predictions. At training time, we approximate the expectation with a single rollout, so at each step we update $\theta _\ell $ in the direction $ (\nabla \log p(z|x;\theta _\ell )) \cdot \log p(y|z,w;\theta _e) $ for a single $z \sim p(z|x;\theta _\ell )$ . $\theta _e$ and $\theta _\ell $ are optimized using adadelta BIBREF21 with $\rho =0.95,$ $\varepsilon =1\mathrm {e}{-6}$ and gradient clipping at a norm of 10. Experiments The framework described in this paper is general, and we are interested in how well it performs on datasets of varying domain, size and linguistic complexity. To that end, we evaluate our model on tasks at opposite extremes of both these criteria: a large visual question answering dataset, and a small collection of more structured geography questions. Questions about images Our first task is the recently-introduced Visual Question Answering challenge (VQA) BIBREF22 . The VQA dataset consists of more than 200,000 images paired with human-annotated questions and answers, as in fig:vqa:qualitative-results. We use the VQA 1.0 release, employing the development set for model selection and hyperparameter tuning, and reporting final results from the evaluation server on the test-standard set. For the experiments described in this section, the input feature representations $w_i$ are computed by the the fifth convolutional layer of a 16-layer VGGNet after pooling BIBREF12 . Input images are scaled to 448 $\times $ 448 before computing their representations. We found that performance on this task was best if the candidate layouts were relatively simple: only describe, and and find modules are used, and layouts contain at most two conjuncts. One weakness of this basic framework is a difficulty modeling prior knowledge about answers (of the form most bears are brown). This kinds of linguistic “prior” is essential for the VQA task, and easily incorporated. We simply introduce an extra hidden layer for recombining the final module network output with the input sentence representation $h_q(x)$ (see eq:layout-score), replacing eq:simple-execution with: $$\log p_z(y|w,x) = (A h_q(x) + B \llbracket z \rrbracket _w)_y$$ (Eq. 32) (Now modules with output type Labels should be understood as producing an answer embedding rather than a distribution over answers.) This allows the question to influence the answer directly. Results are shown in tbl:vqa:quantitative-results. The use of dynamic networks provides a small gain, most noticeably on "other" questions. We achieve state-of-the-art results on this task, outperforming a highly effective visual bag-of-words model BIBREF23 , a model with dynamic network parameter prediction (but fixed network structure) BIBREF24 , a more conventional attentional model BIBREF13 , and a previous approach using neural module networks with no structure prediction BIBREF0 . Some examples are shown in fig:vqa:qualitative-results. 
In general, the model learns to focus on the correct region of the image, and tends to consider a broad window around the region. This facilitates answering questions like Where is the cat?, which requires knowledge of the surroundings as well as the object in question. Questions about geography The next set of experiments we consider focuses on GeoQA, a geographical question-answering task first introduced by Krish2013Grounded. This task was originally paired with a visual question answering task much simpler than the one just discussed, and is appealing for a number of reasons. In contrast to the VQA dataset, GeoQA is quite small, containing only 263 examples. Two baselines are available: one using a classical semantic parser backed by a database, and another which induces logical predicates using linear classifiers over both spatial and distributional features. This allows us to evaluate the quality of our model relative to other perceptually grounded logical semantics, as well as strictly logical approaches. The GeoQA domain consists of a set of entities (e.g. states, cities, parks) which participate in various relations (e.g. north-of, capital-of). Here we take the world representation to consist of two pieces: a set of category features (used by the find module) and a different set of relational features (used by the relate module). For our experiments, we use a subset of the features originally used by Krishnamurthy et al. The original dataset includes no quantifiers, and treats the questions What cities are in Texas? and Are there any cities in Texas? identically. Because we are interested in testing the parser's ability to predict a variety of different structures, we introduce a new version of the dataset, GeoQA+Q, which distinguishes these two cases, and expects a Boolean answer to questions of the second kind. Results are shown in tbl:geo:quantitative. As in the original work, we report the results of leave-one-environment-out cross-validation on the set of 10 environments. Our dynamic model (D-NMN) outperforms both the logical (LSP-F) and perceptual models (LSP-W) described by BIBREF8 , as well as a fixed-structure neural module net (NMN). This improvement is particularly notable on the dataset with quantifiers, where dynamic structure prediction produces a 20% relative improvement over the fixed baseline. A variety of predicted layouts are shown in fig:geo:qualitative. Conclusion We have introduced a new model, the dynamic neural module network, for answering queries about both structured and unstructured sources of information. Given only (question, world, answer) triples as training data, the model learns to assemble neural networks on the fly from an inventory of neural models, and simultaneously learns weights for these modules so that they can be composed into novel structures. Our approach achieves state-of-the-art results on two tasks. We believe that the success of this work derives from two factors: Continuous representations improve the expressiveness and learnability of semantic parsers: by replacing discrete predicates with differentiable neural network fragments, we bypass the challenging combinatorial optimization problem associated with induction of a semantic lexicon. In structured world representations, neural predicate representations allow the model to invent reusable attributes and relations not expressed in the schema. 
Perhaps more importantly, we can extend compositional question-answering machinery to complex, continuous world representations like images. Semantic structure prediction improves generalization in deep networks: by replacing a fixed network topology with a dynamic one, we can tailor the computation performed to each problem instance, using deeper networks for more complex questions and representing combinatorially many queries with comparatively few parameters. In practice, this results in considerable gains in speed and sample efficiency, even with very little training data. These observations are not limited to the question answering domain, and we expect that they can be applied similarly to tasks like instruction following, game playing, and language generation. Acknowledgments JA is supported by a National Science Foundation Graduate Fellowship. MR is supported by a fellowship within the FIT weltweit-Program of the German Academic Exchange Service (DAAD). This work was additionally supported by DARPA, AFRL, DoD MURI award N000141110688, NSF awards IIS-1427425 and IIS-1212798, and the Berkeley Vision and Learning Center.
VQA and GeoQA
aee1af55d39145f609da95116ab1b154adb5fa7e
aee1af55d39145f609da95116ab1b154adb5fa7e_0
Q: what is the proposed Progressive Dynamic Hurdles method? Text: Introduction Over the past few years, impressive advances have been made in the field of neural architecture search. Reinforcement learning and evolution have both proven their capacity to produce models that exceed the performance of those designed by humans BIBREF0 , BIBREF1 . These advances have mostly focused on improving image models, although some effort has also been invested in searching for sequence models BIBREF2 , BIBREF3 . In these cases, it has always been to find improved recurrent neural networks (RNNs), which were long established as the de facto neural model for sequence problems BIBREF4 , BIBREF5 . However, recent works have shown that there are better alternatives to RNNs for solving sequence problems. Due to the success of convolution-based networks, such as Convolution Seq2Seq BIBREF6 , and full attention networks, such as the Transformer BIBREF7 , feed-forward networks are now a viable option for solving sequence-to-sequence (seq2seq) tasks. The main strength of feed-forward networks is that they are faster, and easier to train than RNNs. The goal of this work is to examine the use of neural architecture search methods to design better feed-forward architectures for seq2seq tasks. Specifically, we apply tournament selection architecture search to evolve from the Transformer, considered to be the state-of-art and widely-used, into a better and more efficient architecture. To achieve this, we construct a search space that reflects the recent advances in feed-forward seq2seq models and develop a method called progressive dynamic hurdles (PDH) that allows us to perform our search directly on the computationally demanding WMT 2014 English-German (En-De) translation task. Our search produces a new architecture – called the Evolved Transformer (ET) – which demonstrates consistent improvement over the original Transformer on four well-established language tasks: WMT 2014 English-German, WMT 2014 English-French (En-Fr), WMT 2014 English-Czech (En-Cs) and the 1 Billion Word Language Model Benchmark (LM1B). In our experiments with big size models, the Evolved Transformer is twice as efficient as the Transformer in FLOPS without loss of quality. At a much smaller – mobile-friendly – model size of $\sim $ 7M parameters, the Evolved Transformer outperforms the Transformer by 0.7 BLEU. Related Work RNNs have long been used as the default option for applying neural networks to sequence modeling BIBREF4 , BIBREF5 , with LSTM BIBREF8 and GRU BIBREF9 architectures being the most popular. However, recent work has shown that RNNs are not necessary to build state-of-the-art sequence models. For example, many high performance convolutional models have been designed, such as WaveNet BIBREF10 , Gated Convolution Networks BIBREF11 , Conv Seq2Seq BIBREF6 and Dynamic Lightweight Convolution model BIBREF12 . Perhaps the most promising architecture in this direction is the Transformer architecture BIBREF7 , which relies only on multi-head attention to convey spatial information. In this work, we use both convolutions and attention in our search space to leverage the strengths of both layer types. The recent advances in sequential feed-forward networks are not limited to architecture design. Various methods, such as BERT BIBREF13 and Radford et. al's pre-training technique BIBREF14 , have demonstrated how models such as the Transformer can improve over RNN pre-training BIBREF15 , BIBREF16 . 
For translation specifically, work on scaling up batch size BIBREF17 , BIBREF12 , using relative position representations BIBREF18 , and weighting multi-head attention BIBREF19 have all pushed the state-of-the-art for WMT 2014 En-De and En-Fr. However, these methods are orthogonal to this work, as we are only concerned with improving the neural network architecture itself, and not the techniques used for improving overall performance. The field of neural architecture search has also seen significant recent progress. The best performing architecture search methods are those that are computationally intensive BIBREF2 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF1 , BIBREF0 . Other methods have been developed with speed in mind, such as DARTS BIBREF23 , ENAS BIBREF3 , SMASH BIBREF24 , and SNAS BIBREF25 . These methods radically reduce the amount of time needed to run each search by approximating the performance of each candidate model, instead of investing resources to fully train and evaluate each candidate separately. Unfortunately, these faster methods produce slightly less competitive results. Zela et al.'s zela18 utilization of Hyperband BIBREF26 and PNAS's BIBREF27 incorporation of a surrogate model are examples of approaches that try to both increase efficiency via candidate performance estimation and maximize search quality by training models to the end when necessary. The progressive dynamic hurdles method we introduce here is similar to these approaches in that we train our best models individually to the end, but optimize our procedure by discarding unpromising models early on. Methods We employ evolution-based architecture search because it is simple and has been shown to be more efficient than reinforcement learning when resources are limited BIBREF0 . We use the same tournament selection BIBREF28 algorithm as Real et al. real19, with the aging regularization omitted, and so encourage the reader to view their in-depth description of the method. In the interest of saving space, we will only give a brief overview of the algorithm here. Tournament selection evolutionary architecture search is conducted by first defining a gene encoding that describes a neural network architecture; we describe our encoding in the following Search Space subsection. An initial population is then created by randomly sampling from the space of gene encodings to create individuals. These individuals are assigned fitnesses based on training the neural networks they describe on the target task and then evaluating their performance on the task's validation set. The population is then repeatedly sampled from to produce subpopulations, from which the individual with the highest fitness is selected as a parent. Selected parents have their gene encodings mutated – encoding fields randomly changed to different values – to produce child models. These child models are then assigned a fitness via training and evaluation on the target task, as the initial population was. When this fitness evaluation concludes, the population is sampled from once again, and the individual in the subpopulation with the lowest fitness is killed, meaning it is removed from the population. The newly evaluated child model is then added to the population, taking the killed individual's place. This process is repeated and results in a population with high fitness individuals, which in our case represent well-performing architectures.
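To make the tournament selection loop described above concrete, here is a minimal Python sketch of a single evolution step. The train_and_evaluate and mutate helpers are hypothetical stand-ins for task training and encoding mutation, not the paper's actual implementation.

import random

def evolution_step(population, sample_size, train_and_evaluate, mutate):
    # population: list of (encoding, fitness) pairs that have already been evaluated.
    # Sample a subpopulation and select its fittest member as the parent.
    parent = max(random.sample(population, sample_size), key=lambda ind: ind[1])
    # Mutate the parent's gene encoding to produce a child model.
    child_encoding = mutate(parent[0])
    # Train the child on the target task and score it on the validation set.
    child_fitness = train_and_evaluate(child_encoding)
    # Sample again and remove ("kill") the least fit member of that subpopulation.
    victim = min(random.sample(population, sample_size), key=lambda ind: ind[1])
    population.remove(victim)
    # The newly evaluated child takes the killed individual's place.
    population.append((child_encoding, child_fitness))
    return population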
Search Space Our encoding search space is inspired by the NASNet search space BIBREF1 , but is altered to allow it to express architecture characteristics found in recent state-of-the-art feed-forward seq2seq networks. Crucially, we ensured that the search space can represent the Transformer, so that we can seed the search process with the Transformer itself. Our search space consists of two stackable cells, one for the model encoder and one for the decoder (see Figure 1 ). Each cell contains NASNet-style blocks, which receive two hidden state inputs and produce new hidden states as outputs; the encoder contains six blocks and the decoder contains eight blocks, so that the Transformer can be represented exactly. The blocks perform separate transformations on each input and then combine the transformation outputs together to produce a single block output; we will refer to the transformations applied to each input as a branch. Our search space contains five branch-level search fields (input, normalization, layer, output dimension and activation), one block-level search field (combiner function) and one cell-level search field (number of cells). In our search space, a child model's genetic encoding is expressed as: $[$ left input, left normalization, left layer, left relative output dimension, left activation, right input, right normalization, right layer, right relative output dimension, right activation, combiner function $]$ $\times $ 14 + $[$ number of cells $]$ $\times $ 2, with the first 6 blocks allocated to the encoder and the latter 8 allocated to the decoder. Given the vocabularies described in the Appendix, this yields a search space of $7.30 * 10^{115}$ models, although we do shrink this to some degree by introducing constraints (see the Appendix for more details). Seeding the Search Space with Transformer While previous neural architecture search works rely on well-formed hand-crafted search spaces BIBREF1 , we intentionally leave our search minimally tuned, in an effort to alleviate our manual burden and emphasize the role of the automated search method. To help navigate the large search space we create for ourselves, we find it easier to seed our initial population with a known strong model, in this case the Transformer. This anchors the search to a known good starting point and guarantees at least a single strong potential parent in the population as the generations progress. We offer empirical support for these claims in our Results section. Evolution with Progressive Dynamic Hurdles The evolution algorithm we employ is adapted from the tournament selection evolutionary architecture search proposed by Real et al. real19, described above. Unlike Real et al. real19, who conducted their search on CIFAR-10, our search is conducted on a task that takes much longer to train and evaluate on. Specifically, training a Transformer to peak performance on WMT'14 En-De requires $\sim $ 300k training steps, or 10 hours, in the base size when using a single Google TPU V.2 chip, as we do in our search. In contrast, Real et al. real19 used the less resource-intensive CIFAR-10 task BIBREF29 , which takes about two hours to train on, to assess their models during their search, as it was a good proxy for ImageNet BIBREF30 performance BIBREF1 .
However, in our preliminary experimentation we could not find a proxy task that gave adequate signal for how well each child model would perform on the full WMT'14 En-De task; we investigated using only a fraction of the data set and various forms of aggressive early stopping. To address this problem we formulated a method to dynamically allocate resources to more promising architectures according to their fitness. This method, which we refer to as progressive dynamic hurdles (PDH), allows models that are consistently performing well to train for more steps. It begins as ordinary tournament selection evolutionary architecture search with early stopping, with each child model training for a relatively small $s_0$ number of steps before being evaluated for fitness. However, after a predetermined number of child models, $m$ , have been evaluated, a hurdle, $h_0$ , is created by calculating the mean fitness of the current population. For the next $m$ child models produced, models that achieve a fitness greater than $h_0$ after $s_0$ train steps are granted an additional $s_1$ steps of training and then are evaluated again to determine their final fitness. Once another $m$ models have been considered this way, another hurdle, $h_1$ , is constructed by calculating the mean fitness of all members of the current population that were trained for the maximum number of steps. For the next $m$ child models, training and evaluation continues in the same fashion, except models with fitness greater than $h_1$ after $s_0 + s_1$ steps of training are granted an additional $s_2$ number of train steps, before being evaluated for their final fitness. This process is repeated until a satisfactory number of maximum training steps is reached. Algorithm 1 (Appendix) formalizes how the fitness of an individual model is calculated with hurdles and Algorithm 2 (Appendix) describes tournament selection augmented with progressive dynamic hurdles. Although different child models may train for different numbers of steps before being assigned their final fitness, this does not make their fitnesses incomparable. Tournament selection evolution is only concerned with relative fitness rank when selecting which subpopulation members will be killed and which will become parents; the margin by which one candidate is better or worse than the other members of the subpopulation does not matter. Assuming no model overfits during its training and that its fitness monotonically increases with respect to the number of train steps it is allocated, a comparison between two child models can be viewed as a comparison between their fitnesses at the lower of the two's cumulative train steps. Since the model that was allocated more train steps performed, by definition, above the fitness hurdle for the lower number of steps and the model that was allocated fewer steps performed, by definition, at or below that hurdle at the lower number of steps, it is guaranteed that the model with more train steps was better when it was evaluated at the lower number of train steps. The benefit of altering the fitness algorithm this way is that poor-performing child models will not consume as many resources when their fitness is being computed. As soon as a candidate's fitness falls below a tolerable amount, its evaluation immediately ends. This may also result in good candidates being labeled as bad models if they are only strong towards the latter part of training.
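As a rough illustration of the hurdle-based fitness evaluation just described, the sketch below assumes hypothetical train_n_steps and evaluate helpers; it mirrors the prose above rather than the paper's exact Algorithm 1.

def fitness_with_hurdles(model, step_increments, hurdles, train_n_steps, evaluate):
    # Train for the first, relatively small increment s_0 and measure fitness.
    train_n_steps(model, step_increments[0])
    fitness = evaluate(model)
    # Each cleared hurdle h_i unlocks the next training increment s_{i+1}.
    for hurdle, extra_steps in zip(hurdles, step_increments[1:]):
        if fitness <= hurdle:
            break  # Evaluation ends as soon as a hurdle is not cleared.
        train_n_steps(model, extra_steps)
        fitness = evaluate(model)
    return fitness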
However, the resources saved as a result of discarding many bad models improve the overall quality of the search enough to justify potentially also discarding some good ones; this is supported empirically in our Results section. Datasets We use three different machine translation datasets to perform our experiments, all of which were taken from their Tensor2Tensor implementations. The first is WMT English-German, for which we mimic Vaswani et al.'s vaswani17 setup, using WMT'18 En-De training data without ParaCrawl BIBREF31 , yielding 4.5 million sentence pairs. In the same fashion, we use newstest2013 for development and test on newstest2014. The second translation dataset is WMT En-Fr, for which we also replicate Vaswani et al.'s vaswani17 setup. We train on the 36 million sentence pairs of WMT'14 En-Fr, validate on newstest2013 and test on newstest2014. The final translation dataset is WMT English-Czech (En-Cs). We used the WMT'18 training dataset, again without ParaCrawl, and used newstest2013 and newstest2014 as validation and test sets. For all tasks, tokens were split using a shared source-target vocabulary of about 32k word-pieces BIBREF32 . All datasets were generated using Tensor2Tensor's “packed" scheme; sentences were shuffled and concatenated together with padding to form uniform length-256 inputs and targets, with examples longer than 256 being discarded. This yielded batch sizes of 4096 tokens per GPU or TPU chip; accordingly, 16 TPU chip configurations had $\sim $ 66K tokens per batch and 8 GPU chip configurations had $\sim $ 33K tokens per batch. For language modeling we used the 1 Billion Word Language Model Benchmark (LM1B) BIBREF33 , also using its “packed" Tensor2Tensor implementation. Again, the tokens are split into a vocabulary of approximately 32k word-pieces and the sentences are shuffled. Training Details and Hyperparameters All of our experiments used Tensor2Tensor's Transformer TPU hyperparameter settings. These are nearly identical to those used by Vaswani et al. vaswani17, but modified to use the memory-efficient Adafactor BIBREF34 optimizer. Aside from using the optimizer itself, these hyperparameters set the warmup to a constant learning rate of $10^{-2}$ over 10k steps and then use inverse-square-root learning-rate decay. For our experiments, we make only one change, which is to alter this decay so that it reaches 0 at the final step of training, which for our non-search experiments is uniformly 300k. We found that our search candidate models, the Transformer, and the Evolved Transformer all benefited from this, and so we experimented with using linear decay, single-cycle cosine decay BIBREF35 and a modified inverse-square-root decay to 0 at 300k steps: $lr = step^{-0.00303926} - .962392$ ; every decay was paired with the same constant $10^{-2}$ warmup. We used WMT En-De validation perplexity to gauge model performance and found that the Transformer preferred the modified inverse-square-root decay. Therefore, this is what we used for both our Transformer trainings and the architecture searches themselves. The Evolved Transformer performed best with cosine decay and so that is what we used for all of its trainings. Besides this one difference, the hyperparameter settings across models being compared are exactly the same. Because decaying to 0 resulted in only marginal weight changes towards the end of training, we did not use checkpoint averaging. Per task, there is one additional hyperparameter difference, which is dropout rate.
For ET and all search child models, dropout was applied uniformly after each layer, approximating the Transformer's more nuanced dropout scheme. For En-De and En-Cs, all “big" sized models were given a higher dropout rate of 0.3, keeping in line with Vaswani et al. vaswani17, and all models with an input embedding size of 768 were given a dropout rate of 0.2. Aside from this, hyperparameters are identical across all translation tasks. For decoding we used the same beam decoding configuration used by Vaswani et al. vaswani17. That is, a beam size of 4, length penalty ( $\alpha $ ) of 0.6, and maximum output length of input length + 50. All BLEU is calculated using case-sensitive tokenization and for WMT'14 En-De we also use the compound splitting that was used in Vaswani et al. vaswani17. Our language model training setup is identical to our machine translation setup except we remove label smoothing and lower the intra-attention dropout rate to 0. This was taken from the Tensor2Tensor hyperparameters for LM1B. Search Configurations All of the architecture searches we describe were run on WMT'14 En-De. They utilized the search space and tournament selection evolution algorithm described in our Methods section. Unless otherwise noted, each search used 200 workers, which were equipped with a single Google TPU V.2 chip for training and evaluation. We maintained a population of size 100 with subpopulation sizes for both killing and reproducing set to 30. Mutations were applied independently per encoding field at a rate of 2.5%. For fitness we used the negative log perplexity of the validation set instead of BLEU because, as demonstrated in our Results section, perplexity is more consistent, which reduced the noise of our fitness signal. Results In this section, we will first benchmark the performance of our search method, progressive dynamic hurdles, against other evolutionary search methods BIBREF21 , BIBREF0 . We will then benchmark the Evolved Transformer, the result of our search method, against the Transformer BIBREF7 . Search Techniques We tested our evolution algorithm enhancements – using PDH and seeding the initial population with the Transformer – against control searches that did not use these techniques; without our enhancements, these controls function the same way as Real et al.'s real19 searches, without aging regularization. Each search we describe was run 3 times and the top model from each run was retrained on a single TPU V.2 chip for 300k steps. The performance of the models after retraining is given in Table 1 . Our proposed search (Table 1 row 1), which used both PDH and Transformer seeding, was run first, with hurdles created every 1k models ( $m = 1000$ ) and six 30k train step (1 hour) increments ( $s=<30, 30, 30, 30, 30, 30>$ ). To test the effectiveness of seeding with the Transformer, we ran an identical search that was instead seeded with random valid encodings (Table 1 row 2). To test the effectiveness of PDH, we ran three controls (Table 1 rows 3-5) that each used a fixed number of train steps for each child model instead of hurdles (Table 1 column 2). For these we used the step increments (30k), the maximum number of steps our proposed search ultimately reaches (180k), and the total number of steps each top model receives when fully trained to gauge its final performance (300k).
To determine the number of child models each of these searches would be able to train, we selected the value that would make the total amount of resources used by each control search equal to the maximum amount of resources used for our proposed searches, which require various amounts of resources depending on how many models fail to overcome hurdles. In the three trials we ran, our proposed search's total number of train steps used was 422M $\pm $ 21M, with a maximum of 446M. Thus the number of child models allotted for each non-PDH control search was set so that the total number of child model train steps used would be 446M. As demonstrated in Table 1, the search we propose, with PDH and Transformer seeding, has the best performance on average. It also is the most consistent, having the lowest standard deviation. Of all the searches conducted, only a single control run – “30K no hurdles" (Table 1 row 3) – produced a model that was better than any of our proposed search's best models. At the same time, the “30K no hurdles" setup also produced models that were significantly worse, which explains its high standard deviation. This phenomenon was a chief motivator for our developing this method. Although aggressive early stopping has the potential to produce strong models for cheap, searches that utilize it can also venture into modalities in which top fitness child models are only strong early on. Without running models for longer, whether or not this is happening cannot be detected. The 180K and 300K no hurdles searches did have insight into long term performance, but in a resource-inefficient manner that hurt these searches by limiting the number of generations they produced; for the “180k no hurdles" run to train as many models as PDH would require 1.08B train steps, over double what PDH used in our worst case. Searching with random seeding also proved to be ineffective, performing considerably worse than every other configuration. Of the five searches run, random seeding was the only one that had a top model perplexity higher than the Transformer, which is 4.75 $\pm $ 0.01 in the same setup. After confirming the effectiveness of our search procedure, we launched a larger scale version of our search using 270 workers. We trained 5k models per hurdle ( $m=5000$ ) and used larger step increments to get a closer approximation to 300k step performance: $s = <60, 60, 120>$ . The setup was the same as the Search Techniques experiments, except after 11k models we lowered the mutation rate to 0.01 and introduced the NONE value to the normalization mutation vocabulary. The search ran for 15K child models, requiring a total of 979M train steps. Over 13K models did not make it past the first hurdle, drastically reducing the resources required to view the 240 thousandth train step for top models, which would have cost 3.6B train steps for the same number of models without hurdles. After the search concluded, we then selected the top 20 models and trained them for the full 300k steps, each on a single TPU V.2 chip. The model that ended with the best perplexity is what we refer to as the Evolved Transformer (ET). Figure 3 shows the ET architecture. The most notable aspect of the Evolved Transformer is the use of wide depth-wise separable convolutions in the lower layers of the encoder and decoder blocks. 
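As a rough, framework-agnostic illustration of what a depth-wise separable convolution computes, here is a NumPy sketch under assumed shapes; it is not ET's actual layer implementation.

import numpy as np

def depthwise_separable_conv1d(x, depthwise_kernel, pointwise_weights):
    # x: [length, channels]; depthwise_kernel: [kernel_size, channels];
    # pointwise_weights: [channels, out_channels].
    length, channels = x.shape
    kernel_size = depthwise_kernel.shape[0]
    pad = kernel_size // 2
    x_padded = np.pad(x, ((pad, pad), (0, 0)))
    # Depthwise step: each channel is convolved with its own kernel.
    depthwise_out = np.zeros((length, channels))
    for t in range(length):
        window = x_padded[t:t + kernel_size]  # [kernel_size, channels]
        depthwise_out[t] = (window * depthwise_kernel).sum(axis=0)
    # Pointwise step: a 1x1 convolution mixes channels, which is what makes the
    # layer far cheaper in parameters than a standard convolution of the same width.
    return depthwise_out @ pointwise_weights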
The use of depth-wise convolution and self-attention was previously described in QANet BIBREF37 , however the overall architectures of the Evolved Transformer and QANet are different in many ways: e.g., QANet has smaller kernel sizes and no branching structures. The performance and analysis of the Evolved Transformer will be shown in the next section. The Evolved Transformer: Performance and Analysis To test the effectiveness of the found architecture – the Evolved Transformer – we compared it to the Transformer in its Tensor2Tensor training regime on WMT'14 En-De. Table 2 shows the results of these experiments run on the same 8 NVIDIA P100 hardware setup that was used by Vaswani et al. vaswani17. Observing ET's improved performance at parameter-comparable “base" and “big" sizes, we were also interested in understanding how small ET could be shrunk while still achieving the same performance as the Transformer. To create a spectrum of model sizes for each architecture, we selected different input embedding sizes and shrank or grew the rest of the model embedding sizes with the same proportions. Aside from embedding depths, these models are identical at all sizes, except the “big" 1024 input embedding size, for which all 8 head attention layers are upgraded to 16 head attention layers, as was done in Vaswani et al. vaswani17. ET demonstrates stronger performance than the Transformer at all sizes, with the largest difference of 0.7 BLEU at the smallest, mobile-friendly, size of $\sim $ 7M parameters. Performance on par with the “base" Transformer was reached when ET used just 71.4% of its FLOPS and performance of the “big" Transformer was exceeded by the ET model that used 44.8% less FLOPS. Figure 4 shows the FLOPS vs. BLEU performance of both architectures. To test ET's generalizability, we also compared it to the Transformer on an additional three well-established language tasks: WMT'14 En-Fr, WMT'14 En-Cs, and LM1B. Upgrading to 16 TPU V.2 chips, we doubled the number of synchronous workers for these experiments, pushing both models to their higher potential BIBREF17 . We ran each configuration 3 times, except WMT En-De, which we ran 6 times; this was a matter of resource availability and we gave priority to the task we searched on. As shown in Table 3 , ET performs at least one standard deviation above the Transformer in each of these tasks. Note that the Transformer mean BLEU scores in both Tables 2 and 3 for WMT'14 En-Fr and WMT'14 En-De are higher than those originally reported by Vaswani et al. BIBREF7 . As can be seen in Tables 2 and 3 , the Evolved Transformer is much more effective than the Transformer when its model size is small. When the model size becomes large, its BLEU performance saturates and the gap between the Evolved Transformer and the Transformer becomes smaller. One explanation for this behavior is that overfitting starts to occur at big model sizes, but we expect that data augmentation BIBREF17 or hyperparameter tuning could improve performance. The improvement gains in perplexity by ET, however, are still significant for big model sizes; it is worth emphasizing that perplexity is also a reliable metric for measuring machine translation quality BIBREF32 . To understand what mutations contributed to ET's improved performance we conducted two rounds of ablation testing. In the first round, we began with the Transformer and applied each mutation to it individually to measure the performance change each mutation introduces in isolation. 
In the second round, we began with ET and removed each mutation individually to again measure the impact of each single mutation. In both cases, each model was trained 3 times on WMT En-De for 300k steps with identical hyperparameters, using the inverse-square-root decay to 0 that the Transformer prefers. Each training was conducted on a single TPU V.2 chip. The results of these experiments are presented in Table 4 of the Appendix; we use validation perplexity for comparison because it was our fitness metric. In all cases, the augmented ET models outperformed the augmented Transformer models, indicating that the gap in performance between ET and the Transformer cannot be attributed to any single mutation. The mutation with the seemingly strongest individual impact is the increase from 3 to 4 decoder blocks. However, even when this mutation is introduced to the Transformer and removed from ET, the resulting augmented ET model still has a higher fitness than the augmented Transformer model. Some mutations seem only to hurt model performance, such as converting the encoder's first multi-head attention into a Gated Linear Unit. However, given how we formulate the problem – finding an improved model with a comparable number of parameters to the Transformer – these mutations might have been necessary. For example, when the Transformer decoder block is repeated 4 times, the resulting model has 69.6M parameters, which is outside of our allowed parameter range. Thus, mutations that shrank ET's total number of parameters, even at a slight degradation of performance, were necessary so that other more impactful parameter-expensive mutations, such as adding an additional decoder block, could be used. If model size is not of concern, some of these mutations could potentially be reverted to further improve performance. Likewise, parameter-efficient layers, such as the depthwise-separable convolutions, could potentially be swapped for their less efficient counterparts, such as standard convolution. Conclusion We presented the first neural architecture search conducted to find improved feed-forward sequence models. We first constructed a large search space inspired by recent advances in seq2seq models and used it to search directly on the computationally intensive WMT En-De translation task. To mitigate the size of our space and the cost of training child models, we proposed using both our progressive dynamic hurdles method and seeding our initial population with a known strong model, the Transformer. When run at scale, our search found the Evolved Transformer. In a side-by-side comparison against the Transformer in an identical training regime, the Evolved Transformer showed consistently stronger performance on both translation and language modeling. On WMT En-De, the Evolved Transformer was twice as efficient FLOPS-wise as the Transformer big model, and at a mobile-friendly size, the Evolved Transformer demonstrated a 0.7 BLEU gain over the Transformer architecture. Acknowledgements We would like to thank Ashish Vaswani, Jakob Uszkoreit, Niki Parmar, Noam Shazeer, Lukasz Kaiser and Ryan Sepassi for their help with Tensor2Tensor and for sharing their understanding of the Transformer. We are also grateful to David Dohan, Esteban Real, Yanping Huang, Alok Aggarwal, Vijay Vasudevan, and Chris Ying for lending their expertise in architecture search and evolution.
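As a small companion to the appendix material that follows, here is a hypothetical sketch of the per-field mutation step used during the search; the 2.5% rate comes from the Search Configurations section, while the vocabulary structure and helper name are assumptions.

import random

def mutate_encoding(encoding, field_vocabularies, mutation_rate=0.025):
    # encoding: list of field values; field_vocabularies: allowed values per field.
    # Each encoding field is mutated independently at the given rate.
    child = list(encoding)
    for i, vocabulary in enumerate(field_vocabularies):
        if random.random() < mutation_rate:
            # Replace the field with a different randomly chosen value.
            alternatives = [value for value in vocabulary if value != child[i]]
            if alternatives:
                child[i] = random.choice(alternatives)
    return child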
Search Algorithms In the following, we describe the algorithm that we use to calculate child model fitness with hurdles (Algorithm 1) and evolutionary architecture search with progressive dynamic hurdles (Algorithm 2).
Algorithm 1: Calculate Model Fitness with Hurdles
inputs: $model$ : the child model; $s$ : vector of train step increments; $h$ : queue of hurdles
append $-\infty $ to $h$
TRAIN_N_STEPS( $model$ , $s_0$ )
$fitness \leftarrow $ EVALUATE( $model$ )
$i \leftarrow 0$
while $i < |s| - 1$ and $fitness > $ DEQUEUE( $h$ ) do
  $i \leftarrow i + 1$
  TRAIN_N_STEPS( $model$ , $s_i$ )
  $fitness \leftarrow $ EVALUATE( $model$ )
end while
return $fitness$
Search Space Information In our search space, a child model's genetic encoding is expressed as: $[$ left input, left normalization, left layer, left relative output dimension, left activation, right input, right normalization, right layer, right relative output dimension, right activation, combiner function $]$ $\times $ 14 + $[$ number of cells $]$ $\times $ 2, with the first 6 blocks allocated to the encoder and the latter 8 allocated to the decoder. In the following, we will describe each of the components. Ablation studies of the Evolved Transformer To understand what mutations contributed to ET's improved performance we conducted two rounds of ablation testing. In the first round, we began with the Transformer and applied each mutation to it individually to measure the performance change each mutation introduces in isolation. In the second round, we began with ET and removed each mutation individually to again measure the impact of each single mutation. In both cases, each model was trained 3 times on WMT En-De for 300k steps with identical hyperparameters, using the inverse-square-root decay to 0 that the Transformer prefers. Each training was conducted on a single TPU V.2 chip. The results of these experiments are presented in Table 4 ; we use validation perplexity for comparison because it was our fitness metric. To highlight the impact of each augmented model's mutation, we present not only their perplexities but also the difference between their mean perplexity and their unaugmented base model mean perplexity in the "Mean Diff" columns: base model mean perplexity $-$ augmented model mean perplexity. This delta estimates the change in performance each mutation creates in isolation. Red-highlighted cells contain evidence that their corresponding mutation hurt overall performance. Green-highlighted cells contain evidence that their corresponding mutation helped overall performance. In half of the cases, both the augmented Transformer's and the augmented Evolved Transformer's performances indicate that the mutation was helpful. Changing the number of attention heads from 8 to 16 was doubly indicated to be neutral and changing from 8-head self-attention to a GLU layer in the decoder was doubly indicated to have hurt performance. However, this and other mutations that seemingly hurt performance may have been necessary given how we formulate the problem: finding an improved model with a comparable number of parameters to the Transformer. For example, when the Transformer decoder block is repeated 4 times, the resulting model has 69.6M parameters, which is outside of our allowed parameter range. Thus, mutations that shrank ET's total number of parameters, even at a slight degradation of performance, were necessary so that other more impactful parameter-expensive mutations, such as adding an additional decoder block, could be used. Other mutations have inconsistent evidence about how useful they are.
This ablation study serves only to approximate what is useful, but how effective a mutation is also depends on the model it is being introduced to and how it interacts with other encoding field values.
allows models that are consistently performing well to train for more steps
feedddb7ae4998b6a3eaa2d6323017ba278748cc
feedddb7ae4998b6a3eaa2d6323017ba278748cc_0
Q: What is in the model search space? Text: Introduction Over the past few years, impressive advances have been made in the field of neural architecture search. Reinforcement learning and evolution have both proven their capacity to produce models that exceed the performance of those designed by humans BIBREF0 , BIBREF1 . These advances have mostly focused on improving image models, although some effort has also been invested in searching for sequence models BIBREF2 , BIBREF3 . In these cases, it has always been to find improved recurrent neural networks (RNNs), which were long established as the de facto neural model for sequence problems BIBREF4 , BIBREF5 . However, recent works have shown that there are better alternatives to RNNs for solving sequence problems. Due to the success of convolution-based networks, such as Convolution Seq2Seq BIBREF6 , and full attention networks, such as the Transformer BIBREF7 , feed-forward networks are now a viable option for solving sequence-to-sequence (seq2seq) tasks. The main strength of feed-forward networks is that they are faster, and easier to train than RNNs. The goal of this work is to examine the use of neural architecture search methods to design better feed-forward architectures for seq2seq tasks. Specifically, we apply tournament selection architecture search to evolve from the Transformer, considered to be the state-of-art and widely-used, into a better and more efficient architecture. To achieve this, we construct a search space that reflects the recent advances in feed-forward seq2seq models and develop a method called progressive dynamic hurdles (PDH) that allows us to perform our search directly on the computationally demanding WMT 2014 English-German (En-De) translation task. Our search produces a new architecture – called the Evolved Transformer (ET) – which demonstrates consistent improvement over the original Transformer on four well-established language tasks: WMT 2014 English-German, WMT 2014 English-French (En-Fr), WMT 2014 English-Czech (En-Cs) and the 1 Billion Word Language Model Benchmark (LM1B). In our experiments with big size models, the Evolved Transformer is twice as efficient as the Transformer in FLOPS without loss of quality. At a much smaller – mobile-friendly – model size of $\sim $ 7M parameters, the Evolved Transformer outperforms the Transformer by 0.7 BLEU. Related Work RNNs have long been used as the default option for applying neural networks to sequence modeling BIBREF4 , BIBREF5 , with LSTM BIBREF8 and GRU BIBREF9 architectures being the most popular. However, recent work has shown that RNNs are not necessary to build state-of-the-art sequence models. For example, many high performance convolutional models have been designed, such as WaveNet BIBREF10 , Gated Convolution Networks BIBREF11 , Conv Seq2Seq BIBREF6 and Dynamic Lightweight Convolution model BIBREF12 . Perhaps the most promising architecture in this direction is the Transformer architecture BIBREF7 , which relies only on multi-head attention to convey spatial information. In this work, we use both convolutions and attention in our search space to leverage the strengths of both layer types. The recent advances in sequential feed-forward networks are not limited to architecture design. Various methods, such as BERT BIBREF13 and Radford et. al's pre-training technique BIBREF14 , have demonstrated how models such as the Transformer can improve over RNN pre-training BIBREF15 , BIBREF16 . 
For translation specifically, work on scaling up batch size BIBREF17 , BIBREF12 , using relative position representations BIBREF18 , and weighting multi-head attention BIBREF19 have all pushed the state-of-the-art for WMT 2014 En-De and En-Fr. However, these methods are orthogonal to this work, as we are only concerned with improving the neural network architecture itself, and not the techniques used for improving overall performance. The field of neural architecture search has also seen significant recent progress. The best performing architecture search methods are those that are computationally intensive BIBREF2 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF1 , BIBREF0 . Other methods have been developed with speed in mind, such as DARTS BIBREF23 , ENAS BIBREF3 , SMASH BIBREF24 , and SNAS BIBREF25 . These methods radically reduce the amount of time needed to run each search by approximating the performance of each candidate model, instead of investing resources to fully train and evaluate each candidate separately. Unfortunately, these faster methods produce slightly less competitive results. Zela et al.'s zela18 utilization of Hyperband BIBREF26 and PNAS's BIBREF27 incorporation of a surrogate model are examples of approaches that try to both increase efficiency via candidate performance estimation and maximize search quality by training models to the end when necessary. The progressive dynamic hurdles method we introduce here is similar to these approaches in that we train our best models individually to the end, but optimize our procedure by discarding unpromising models early on. Methods We employ evolution-based architecture search because it is simple and has been shown to be more efficient than reinforcement learning when resources are limited BIBREF0 . We use the same tournament selection BIBREF28 algorithm as Real et al. real19, with the aging regularization omitted, and so encourage the reader to view their in-depth description of the method. In the interest of saving space, we will only give a brief overview of the algorithm here. Tournament selection evolutionary architecture search is conducted by first defining a gene encoding that describes a neural network architecture; we describe our encoding in the following Search Space subsection. An initial population is then created by randomly sampling from the space of gene encodings to create individuals. These individuals are assigned fitnesses based on training the neural networks they describe on the target task and then evaluating their performance on the task's validation set. The population is then repeatedly sampled from to produce subpopulations, from which the individual with the highest fitness is selected as a parent. Selected parents have their gene encodings mutated – encoding fields randomly changed to different values – to produce child models. These child models are then assigned a fitness via training and evaluation on the target task, as the initial population was. When this fitness evaluation concludes, the population is sampled from once again, and the individual in the subpopulation with the lowest fitness is killed, meaning it is removed from the population. The newly evaluated child model is then added to the population, taking the killed individual's place. This process is repeated and results in a population with high fitness individuals, which in our case represent well-performing architectures.
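As a minimal illustration of how an initial population might be sampled from the space of gene encodings, the sketch below uses hypothetical field vocabularies and a train_and_evaluate helper; it is not the paper's code.

import random

def random_encoding(field_vocabularies):
    # Draw one value per encoding field to form a candidate architecture.
    return [random.choice(vocabulary) for vocabulary in field_vocabularies]

def initial_population(field_vocabularies, population_size, train_and_evaluate):
    population = []
    for _ in range(population_size):
        encoding = random_encoding(field_vocabularies)
        # Each sampled individual is trained on the target task and scored
        # on its validation set to obtain a fitness.
        fitness = train_and_evaluate(encoding)
        population.append((encoding, fitness))
    return population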
Search Space Our encoding search space is inspired by the NASNet search space BIBREF1 , but is altered to allow it to express architecture characteristics found in recent state-of-the-art feed-forward seq2seq networks. Crucially, we ensured that the search space can represent the Transformer, so that we can seed the search process with the Transformer itself. Our search space consists of two stackable cells, one for the model encoder and one for the decoder (see Figure 1 ). Each cell contains NASNet-style blocks, which receive two hidden state inputs and produce new hidden states as outputs; the encoder contains six blocks and the decoder contains eight blocks, so that the Transformer can be represented exactly. The blocks perform separate transformations to each input and then combine the transformation outputs together to produce a single block output; we will refer to the transformations applied to each input as a branch. Our search space contains five branch-level search fields (input, normalization, layer, output dimension and activation), one block-level search field (combiner function) and one cell-level search field (number of cells). In our search space, a child model's genetic encoding is expressed as: $[$ left input, left normalization, left layer, left relative output dimension, left activation, right input, right normalization, right layer, right relative output dimension, right activation, combiner function $]$ $\times $ 14 + $[$ number of cells $]$ $\times $ 2, with the first 6 blocks allocated to the encoder and the latter 8 allocated to the decoder. Given the vocabularies described in the Appendix, this yields a search space of $7.30 * 10^{115}$ models, although we do shrink this to some degree by introducing constraints (see the Appendix for more details). Seeding the Search Space with Transformer While previous neural architecture search works rely on well-formed hand crafted search spaces BIBREF1 , we intentionally leave our search minimally tuned, in a effort to alleviate our manual burden and emphasize the role of the automated search method. To help navigate the large search space we create for ourselves, we find it easier to seed our initial population with a known strong model, in this case the Transformer. This anchors the search to a known good starting point and guarantees at least a single strong potential parent in the population as the generations progress. We offer empirical support for these claims in our Results section. Evolution with Progressive Dynamic Hurdles The evolution algorithm we employ is adapted from the tournament selection evolutionary architecture search proposed by Real et al. real19, described above. Unlike Real et al. real19 who conducted their search on CIFAR-10, our search is conducted on a task that takes much longer to train and evaluate on. Specifically, to train a Transformer to peak performance on WMT'14 En-De requires $\sim $ 300k training steps, or 10 hours, in the base size when using a single Google TPU V.2 chip, as we do in our search. In contrast, Real et al. real19 used the less resource-intensive CIFAR-10 task BIBREF29 , which takes about two hours to train on, to assess their models during their search, as it was a good proxy for ImageNet BIBREF30 performance BIBREF1 . 
However, in our preliminary experimentation we could not find a proxy task that gave adequate signal for how well each child model would perform on the full WMT'14 En-De task; we investigated using only a fraction of the data set and various forms of aggressive early stopping. To address this problem we formulated a method to dynamically allocate resources to more promising architectures according to their fitness. This method, which we refer to as progressive dynamic hurdles (PDH), allows models that are consistently performing well to train for more steps. It begins as ordinary tournament selection evolutionary architecture search with early stopping, with each child model training for a relatively small $s_0$ number of steps before being evaluated for fitness. However, after a predetermined number of child models, $m$ , have been evaluated, a hurdle, $h_0$ , is created by calculating the mean fitness of the current population. For the next $m$ child models produced, models that achieve a fitness greater than $h_0$ after $s_0$ train steps are granted an additional $s_1$ steps of training and then are evaluated again to determine their final fitness. Once another $m$ models have been considered this way, another hurdle, $h_1$ , is constructed by calculating the mean fitness of all members of the current population that were trained for the maximum number of steps. For the next $m$ child models, training and evaluation continues in the same fashion, except models with fitness greater than $h_1$ after $s_0 + s_1$ steps of training are granted an additional $s_2$ number of train steps, before being evaluated for their final fitness. This process is repeated until a satisfactory number of maximum training steps is reached. Algorithm 1 (Appendix) formalizes how the fitness of an individual model is calculated with hurdles and Algorithm 2 (Appendix) describes tournament selection augmented with progressive dynamic hurdles. Although different child models may train for different numbers of steps before being assigned their final fitness, this does not make their fitnesses incomparable. Tournament selection evolution is only concerned with relative fitness rank when selecting which subpopulation members will be killed and which will become parents; the margin by which one candidate is better or worse than the other members of the subpopulation does not matter. Assuming no model overfits during its training and that its fitness monotonically increases with respect to the number of train steps it is allocated, a comparison between two child models can be viewed as a comparison between their fitnesses at the lower of the two's cumulative train steps. Since the model that was allocated more train steps performed, by definition, above the fitness hurdle for the lower number of steps and the model that was allocated fewer steps performed, by definition, at or below that hurdle at the lower number of steps, it is guaranteed that the model with more train steps was better when it was evaluated at the lower number of train steps. The benefit of altering the fitness algorithm this way is that poor-performing child models will not consume as many resources when their fitness is being computed. As soon as a candidate's fitness falls below a tolerable amount, its evaluation immediately ends. This may also result in good candidates being labeled as bad models if they are only strong towards the latter part of training.
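A minimal sketch of how a new hurdle might be formed every m child models, following the description above; the (encoding, fitness, steps_trained) bookkeeping is an assumption, not the paper's data structure.

def maybe_add_hurdle(population, hurdles, step_increments, models_since_last_hurdle, m):
    # population: list of (encoding, fitness, steps_trained) triples.
    if models_since_last_hurdle < m or len(hurdles) >= len(step_increments) - 1:
        return hurdles
    # The new hurdle is the mean fitness of members trained for the current
    # maximum number of steps; for the first hurdle that is the whole population.
    max_steps = max(steps for _, _, steps in population)
    eligible = [fitness for _, fitness, steps in population if steps == max_steps]
    hurdles.append(sum(eligible) / len(eligible))
    return hurdles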
However, the resources saved as a result of discarding many bad models improves the overall quality of the search enough to justify potentially also discarding some good ones; this is supported empirically in our Results section. Datasets We use three different machine translation datasets to perform our experiments, all of which were taken from their Tensor2Tensor implementations. The first is WMT English-German, for which we mimic Vaswani et al.'s vaswani17 setup, using WMT'18 En-De training data without ParaCrawl BIBREF31 , yielding 4.5 million sentence pairs. In the same fashion, we use newstest2013 for development and test on newstest2014. The second translation dataset is WMT En-Fr, for which we also replicate Vaswani et.al's vaswani17 setup. We train on the 36 million sentence pairs of WMT'14 En-Fr, validate on newstest2013 and test on newstest2014. The final translation dataset is WMT English-Czech (En-Cs). We used the WMT'18 training dataset, again without ParaCrawl, and used newstest2013 and newstest2014 as validation and test sets. For all tasks, tokens were split using a shared source-target vocabulary of about 32k word-pieces BIBREF32 . All datasets were generated using Tensor2Tensor's “packed" scheme; sentences were shuffled and concatenated together with padding to form uniform 256 length inputs and targets, with examples longer than 256 being discarded. This yielded batch sizes of 4096 tokens per GPU or TPU chip; accordingly, 16 TPU chip configurations had $\sim $ 66K tokens per batch and 8 GPU chip configurations had $\sim $ 33K tokens per batch. For language modeling we used the 1 Billion Word Language Model Benchmark (LM1B) BIBREF33 , also using its “packed" Tensor2Tensor implementation. Again the tokens are split into a vocabulary of approximately 32k word-pieces and the sentences are shuffled. Training Details and Hyperparameters All of our experiments used Tensor2Tensor's Transformer TPU hyperparameter settings. These are nearly identical to those used by Vaswani et al. vaswani17, but modified to use the memory-efficient Adafactor BIBREF34 optimizer. Aside from using the optimizer itself, these hyperparameters set the warmup to a constant learning rate of $10^{-2}$ over 10k steps and then uses inverse-square-root learning-rate decay. For our experiments, we make only one change, which is to alter this decay so that it reaches 0 at the final step of training, which for our non-search experiments is uniformly 300k. We found that the our search candidate models, the Transformer, and the Evolved Transformer all benefited from this and so experimented with using linear decay, single-cycle cosine decay BIBREF35 and a modified inverse-square-root decay to 0 at 300k steps: $lr = step^{-0.00303926} - .962392$ ; every decay was paired with the same constant $10^{-2}$ warmup. We used WMT En-De validation perplexity to gauge model performance and found that the Transformer preferred the modified inverse-square-root decay. Therefore, this is what we used for both all our Transformer trainings and the architecture searches themselves. The Evolved Transformer performed best with cosine decay and so that is what we used for all of its trainings. Besides this one difference, the hyperparameter settings across models being compared are exactly the same. Because decaying to 0 resulted in only marginal weight changes towards the end of training, we did not use checkpoint averaging. Per-task there is one additional hyperparameter difference, which is dropout rate. 
For ET and all search child models, dropout was applied uniformly after each layer, approximating the Transformer's more nuanced dropout scheme. For En-De and En-Cs, all “big" sized models were given a higher dropout rate of 0.3, keeping in line with Vaswani et al. vaswani17, and all models with an input embedding size of 768 are given a dropout rate of 0.2. Aside from this, hyperparameters are identical across all translation tasks. For decoding we used the same beam decoding configuration used by Vaswani et al. vaswani17. That is a beam size of 4, length penalty ( $\alpha $ ) of 0.6, and maximum output length of input length + 50. All BLEU is calculated using case-sensitive tokenization and for WMT'14 En-De we also use the compound splitting that was used in Vaswani et al. vaswani17. Our language model training setup is identical to our machine translation setup except we remove label smoothing and lower the intra-attention dropout rate to 0. This was taken from the Tensor2Tensor hyperparameters for LM1B[2]. Search Configurations All of the architecture searches we describe were run on WMT'14 En-De. They utilized the search space and tournament selection evolution algorithm described in our Methods section. Unless otherwise noted, each search used 200 workers, which were equipped with a single Google TPU V.2 chip for training and evaluation. We maintained a population of size 100 with subpopulation sizes for both killing and reproducing set to 30. Mutations were applied independently per encoding field at a rate of 2.5%. For fitness we used the negative log perplexity of the validation set instead of BLEU because, as demonstrated in our Results section, perplexity is more consistent and that reduced the noise of our fitness signal. Results In this section, we will first benchmark the performance of our search method, progressive dynamic hurdles, against other evolutionary search methods BIBREF21 , BIBREF0 . We will then benchmark the Evolved Transformer, the result of our search method, against the Transformer BIBREF7 . Search Techniques We tested our evolution algorithm enhancements – using PDH and seeding the initial population with the Transformer – against control searches that did not use these techniques; without our enhancements, these controls function the same way as Real et. al's real19 searches, without aging regularization. Each search we describe was run 3 times and the top model from each run was retrained on a single TPU V.2 chip for 300k steps. The performance of the models after retraining is given in Table 1 . Our proposed search (Table 1 row 1), which used both PDH and Transformer seeding, was run first, with hurdles created every 1k models ( $m = 1000$ ) and six 30k train step (1 hour) increments ( $s=<30, 30, 30, 30, 30, 30>$ ). To test the effectiveness of seeding with the Transformer, we ran an identical search that was instead seeded with random valid encodings (Table 1 row 2). To test the effectiveness of PDH, we ran three controls (Table 1 rows 3-5) that each used a fixed number of train steps for each child model instead of hurdles (Table 1 column 2). For these we used the step increments (30k), the maximum number of steps our proposed search ultimately reaches (180k), and the total number of steps each top model receives when fully trained to gauge its final performance (300k). 
To determine the number of child models each of these searches would be able to train, we selected the value that would make the total amount of resources used by each control search equal to the maximum amount of resources used for our proposed searches, which require various amounts of resources depending on how many models fail to overcome hurdles. In the three trials we ran, our proposed search's total number of train steps used was 422M $\pm $ 21M, with a maximum of 446M. Thus the number of child models allotted for each non-PDH control search was set so that the total number of child model train steps used would be 446M. As demonstrated in Table 1, the search we propose, with PDH and Transformer seeding, has the best performance on average. It also is the most consistent, having the lowest standard deviation. Of all the searches conducted, only a single control run – “30K no hurdles" (Table 1 row 3) – produced a model that was better than any of our proposed search's best models. At the same time, the “30K no hurdles" setup also produced models that were significantly worse, which explains its high standard deviation. This phenomenon was a chief motivator for our developing this method. Although aggressive early stopping has the potential to produce strong models for cheap, searches that utilize it can also venture into modalities in which top fitness child models are only strong early on. Without running models for longer, whether or not this is happening cannot be detected. The 180K and 300K no hurdles searches did have insight into long term performance, but in a resource-inefficient manner that hurt these searches by limiting the number of generations they produced; for the “180k no hurdles" run to train as many models as PDH would require 1.08B train steps, over double what PDH used in our worst case. Searching with random seeding also proved to be ineffective, performing considerably worse than every other configuration. Of the five searches run, random seeding was the only one that had a top model perplexity higher than the Transformer, which is 4.75 $\pm $ 0.01 in the same setup. After confirming the effectiveness of our search procedure, we launched a larger scale version of our search using 270 workers. We trained 5k models per hurdle ( $m=5000$ ) and used larger step increments to get a closer approximation to 300k step performance: $s = <60, 60, 120>$ . The setup was the same as the Search Techniques experiments, except after 11k models we lowered the mutation rate to 0.01 and introduced the NONE value to the normalization mutation vocabulary. The search ran for 15K child models, requiring a total of 979M train steps. Over 13K models did not make it past the first hurdle, drastically reducing the resources required to view the 240 thousandth train step for top models, which would have cost 3.6B train steps for the same number of models without hurdles. After the search concluded, we then selected the top 20 models and trained them for the full 300k steps, each on a single TPU V.2 chip. The model that ended with the best perplexity is what we refer to as the Evolved Transformer (ET). Figure 3 shows the ET architecture. The most notable aspect of the Evolved Transformer is the use of wide depth-wise separable convolutions in the lower layers of the encoder and decoder blocks. 
The use of depth-wise convolution and self-attention was previously described in QANet BIBREF37 , however the overall architectures of the Evolved Transformer and QANet are different in many ways: e.g., QANet has smaller kernel sizes and no branching structures. The performance and analysis of the Evolved Transformer will be shown in the next section. The Evolved Transformer: Performance and Analysis To test the effectiveness of the found architecture – the Evolved Transformer – we compared it to the Transformer in its Tensor2Tensor training regime on WMT'14 En-De. Table 2 shows the results of these experiments run on the same 8 NVIDIA P100 hardware setup that was used by Vaswani et al. vaswani17. Observing ET's improved performance at parameter-comparable “base" and “big" sizes, we were also interested in understanding how small ET could be shrunk while still achieving the same performance as the Transformer. To create a spectrum of model sizes for each architecture, we selected different input embedding sizes and shrank or grew the rest of the model embedding sizes with the same proportions. Aside from embedding depths, these models are identical at all sizes, except the “big" 1024 input embedding size, for which all 8 head attention layers are upgraded to 16 head attention layers, as was done in Vaswani et al. vaswani17. ET demonstrates stronger performance than the Transformer at all sizes, with the largest difference of 0.7 BLEU at the smallest, mobile-friendly, size of $\sim $ 7M parameters. Performance on par with the “base" Transformer was reached when ET used just 71.4% of its FLOPS and performance of the “big" Transformer was exceeded by the ET model that used 44.8% less FLOPS. Figure 4 shows the FLOPS vs. BLEU performance of both architectures. To test ET's generalizability, we also compared it to the Transformer on an additional three well-established language tasks: WMT'14 En-Fr, WMT'14 En-Cs, and LM1B. Upgrading to 16 TPU V.2 chips, we doubled the number of synchronous workers for these experiments, pushing both models to their higher potential BIBREF17 . We ran each configuration 3 times, except WMT En-De, which we ran 6 times; this was a matter of resource availability and we gave priority to the task we searched on. As shown in Table 3 , ET performs at least one standard deviation above the Transformer in each of these tasks. Note that the Transformer mean BLEU scores in both Tables 2 and 3 for WMT'14 En-Fr and WMT'14 En-De are higher than those originally reported by Vaswani et al. BIBREF7 . As can be seen in Tables 2 and 3 , the Evolved Transformer is much more effective than the Transformer when its model size is small. When the model size becomes large, its BLEU performance saturates and the gap between the Evolved Transformer and the Transformer becomes smaller. One explanation for this behavior is that overfitting starts to occur at big model sizes, but we expect that data augmentation BIBREF17 or hyperparameter tuning could improve performance. The improvement gains in perplexity by ET, however, are still significant for big model sizes; it is worth emphasizing that perplexity is also a reliable metric for measuring machine translation quality BIBREF32 . To understand what mutations contributed to ET's improved performance we conducted two rounds of ablation testing. In the first round, we began with the Transformer and applied each mutation to it individually to measure the performance change each mutation introduces in isolation. 
In the second round, we began with ET and removed each mutation individually to again measure the impact of each single mutation. In both cases, each model was trained 3 times on WMT En-De for 300k steps with identical hyperparameters, using the inverse-square-root decay to 0 that the Transformer prefers. Each training was conducted on a single TPU V.2 chip. The results of these experiments are presented in Table 4 of the Appendix; we use validation perplexity for comparison because it was our fitness metric. In all cases, the augmented ET models outperformed the the augmented Transformer models, indicating that the gap in performance between ET and the Transformer cannot be attributed to any single mutation. The mutation with the seemingly strongest individual impact is the increase from 3 to 4 decoder blocks. However, even when this mutation is introduced to the Transformer and removed from ET, the resulting augmented ET model still has a higher fitness than the augmented Transformer model. Some mutations seem to only hurt model performance such as converting the encoder's first multi-head attention into a Gated Linear Unit. However, given how we formulate the problem – finding an improved model with a comparable number of parameters to the Transformer – these mutations might have been necessary. For example, when the Transformer decoder block is repeated 4 times, the resulting model has 69.6M parameters, which is outside of our allowed parameter range. Thus, mutations that shrank ET's total number of parameters, even at a slight degradation of performance, were necessary so that other more impactful parameter-expensive mutations, such as adding an additional decoder block, could be used. If model size is not of concern, some of these mutations could potentially be reverted to further improve performance. Likewise, parameter efficient layers, such as the depthwise-separable convolutions, could potentially be swapped for their less efficient counterparts, such as standard convolution. Conclusion We presented the first neural architecture search conducted to find improved feed-forward sequence models. We first constructed a large search space inspired by recent advances in seq2seq models and used it to search directly on the computationally intensive WMT En-De translation task. To mitigate the size of our space and the cost of training child models, we proposed using both our progressive dynamic hurdles method and seeding our initial population with a known strong model, the Transformer. When run at scale, our search found the Evolved Transformer. In a side by side comparison against the Transformer in an identical training regime, the Evolved Transformer showed consistent stronger performance on both translation and language modeling. On WMT En-De, the Evolved Transformer was twice as efficient FLOPS-wise as the Transformer big model and at a mobile-friendly size, the Evolved Transformer demonstrated a 0.7 BLEU gain over the Transformer architecture. Acknowledgements We would like to thank Ashish Vaswani, Jakob Uszkoreit, Niki Parmar, Noam Shazeer, Lukasz Kaiser and Ryan Sepassi for their help with Tensor2Tensor and for sharing their understanding of the Transformer. We are also grateful to David Dohan, Esteban Real, Yanping Huang, Alok Aggarwal, Vijay Vasudevan, and Chris Ying for lending their expertise in architecture search and evolution. 
Search Algorithms In the following, we describe the algorithm that we use to calculate child model fitness with hurdles (Algorithm 1) and evolution architecture search with progressive dynamic hurdles (Algorithm 2).

Algorithm 1: Calculate Model Fitness with Hurdles
inputs: $model$: the child model; $s$: vector of train step increments; $h$: queue of hurdles
append $-\infty$ to $h$
TRAIN_N_STEPS($model$, $s_0$)
$fitness \leftarrow$ EVALUATE($model$)
$i \leftarrow 0$
while $fitness > h_i$ and $i < |s| - 1$ do
  $i \leftarrow i + 1$
  TRAIN_N_STEPS($model$, $s_i$)
  $fitness \leftarrow$ EVALUATE($model$)
end while
return $fitness$

Search Space Information In our search space, a child model's genetic encoding is expressed as: $[$ left input, left normalization, left layer, left relative output dimension, left activation, right input, right normalization, right layer, right relative output dimension, right activation, combiner function $]$ $\times$ 14 + $[$ number of cells $]$ $\times$ 2, with the first 6 blocks allocated to the encoder and the latter 8 allocated to the decoder. In the following, we will describe each of the components. Ablation studies of the Evolved Transformer To understand what mutations contributed to ET's improved performance we conducted two rounds of ablation testing. In the first round, we began with the Transformer and applied each mutation to it individually to measure the performance change each mutation introduces in isolation. In the second round, we began with ET and removed each mutation individually to again measure the impact of each single mutation. In both cases, each model was trained 3 times on WMT En-De for 300k steps with identical hyperparameters, using the inverse-square-root decay to 0 that the Transformer prefers. Each training was conducted on a single TPU V.2 chip. The results of these experiments are presented in Table 4; we use validation perplexity for comparison because it was our fitness metric. To highlight the impact of each augmented model's mutation, we present not only their perplexities but also the difference between their mean perplexity and their unaugmented base model's mean perplexity in the "Mean Diff" columns: (base model mean perplexity) $-$ (augmented model mean perplexity). This delta estimates the change in performance each mutation creates in isolation. Red highlighted cells contain evidence that their corresponding mutation hurt overall performance; green highlighted cells contain evidence that their corresponding mutation helped overall performance. In half of the cases, both the augmented Transformer's and the augmented Evolved Transformer's performances indicate that the mutation was helpful. Changing the number of attention heads from 8 to 16 was doubly indicated to be neutral, and changing from 8-head self-attention to a GLU layer in the decoder was doubly indicated to have hurt performance. However, this and other mutations that seemingly hurt performance may have been necessary given how we formulate the problem: finding an improved model with a comparable number of parameters to the Transformer. For example, when the Transformer decoder block is repeated 4 times, the resulting model has 69.6M parameters, which is outside of our allowed parameter range. Thus, mutations that shrank ET's total number of parameters, even at a slight degradation of performance, were necessary so that other more impactful parameter-expensive mutations, such as adding an additional decoder block, could be used. Other mutations have inconsistent evidence about how useful they are.
This ablation study serves only to approximate what is useful, but how effective a mutation is also depends on the model it is being introduced to and how it interacts with other encoding field values.
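Returning to Algorithm 1 above, the following is a minimal Python rendering of the fitness-with-hurdles routine; train_n_steps and evaluate are placeholder hooks standing in for the actual training and validation-perplexity evaluation, and the loop condition mirrors the pseudocode above.

```python
import math
from typing import Callable, List

def fitness_with_hurdles(model,
                         step_increments: List[int],    # s: train step increments unlocked so far
                         hurdles: List[float],          # h: hurdles created so far
                         train_n_steps: Callable,       # trains `model` for n additional steps
                         evaluate: Callable) -> float:  # returns fitness (negative log perplexity)
    """Minimal rendering of Algorithm 1 (Calculate Model Fitness with Hurdles)."""
    hurdles = hurdles + [-math.inf]   # sentinel so h_i is defined on the final comparison
    train_n_steps(model, step_increments[0])
    fitness = evaluate(model)
    i = 0
    # Grant the next increment only while the model clears the current hurdle
    # and there are unlocked increments left.
    while fitness > hurdles[i] and i < len(step_increments) - 1:
        i += 1
        train_n_steps(model, step_increments[i])
        fitness = evaluate(model)
    return fitness
```

Under this reading, the appended $-\infty$ only keeps the hurdle lookup well-defined once a model has cleared every existing hurdle; termination comes from the bound on the number of unlocked increments.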
Our search space consists of two stackable cells, one for the model encoder and one for the decoder. Each cell contains NASNet-style blocks, which receive two hidden state inputs and produce new hidden states as outputs. Our search space contains five branch-level search fields (input, normalization, layer, output dimension and activation), one block-level search field (combiner function) and one cell-level search field (number of cells).
be1c0816793a4549c811480170f30fab52a7a157
be1c0816793a4549c811480170f30fab52a7a157_0
Q: How much energy did the NAS consume? Text: Introduction Over the past few years, impressive advances have been made in the field of neural architecture search. Reinforcement learning and evolution have both proven their capacity to produce models that exceed the performance of those designed by humans BIBREF0 , BIBREF1 . These advances have mostly focused on improving image models, although some effort has also been invested in searching for sequence models BIBREF2 , BIBREF3 . In these cases, it has always been to find improved recurrent neural networks (RNNs), which were long established as the de facto neural model for sequence problems BIBREF4 , BIBREF5 . However, recent works have shown that there are better alternatives to RNNs for solving sequence problems. Due to the success of convolution-based networks, such as Convolution Seq2Seq BIBREF6 , and full attention networks, such as the Transformer BIBREF7 , feed-forward networks are now a viable option for solving sequence-to-sequence (seq2seq) tasks. The main strength of feed-forward networks is that they are faster, and easier to train than RNNs. The goal of this work is to examine the use of neural architecture search methods to design better feed-forward architectures for seq2seq tasks. Specifically, we apply tournament selection architecture search to evolve from the Transformer, considered to be the state-of-art and widely-used, into a better and more efficient architecture. To achieve this, we construct a search space that reflects the recent advances in feed-forward seq2seq models and develop a method called progressive dynamic hurdles (PDH) that allows us to perform our search directly on the computationally demanding WMT 2014 English-German (En-De) translation task. Our search produces a new architecture – called the Evolved Transformer (ET) – which demonstrates consistent improvement over the original Transformer on four well-established language tasks: WMT 2014 English-German, WMT 2014 English-French (En-Fr), WMT 2014 English-Czech (En-Cs) and the 1 Billion Word Language Model Benchmark (LM1B). In our experiments with big size models, the Evolved Transformer is twice as efficient as the Transformer in FLOPS without loss of quality. At a much smaller – mobile-friendly – model size of $\sim $ 7M parameters, the Evolved Transformer outperforms the Transformer by 0.7 BLEU. Related Work RNNs have long been used as the default option for applying neural networks to sequence modeling BIBREF4 , BIBREF5 , with LSTM BIBREF8 and GRU BIBREF9 architectures being the most popular. However, recent work has shown that RNNs are not necessary to build state-of-the-art sequence models. For example, many high performance convolutional models have been designed, such as WaveNet BIBREF10 , Gated Convolution Networks BIBREF11 , Conv Seq2Seq BIBREF6 and Dynamic Lightweight Convolution model BIBREF12 . Perhaps the most promising architecture in this direction is the Transformer architecture BIBREF7 , which relies only on multi-head attention to convey spatial information. In this work, we use both convolutions and attention in our search space to leverage the strengths of both layer types. The recent advances in sequential feed-forward networks are not limited to architecture design. Various methods, such as BERT BIBREF13 and Radford et. al's pre-training technique BIBREF14 , have demonstrated how models such as the Transformer can improve over RNN pre-training BIBREF15 , BIBREF16 . 
For translation specifically, work on scaling up batch size BIBREF17 , BIBREF12 , using relative position representations BIBREF18 , and weighting multi-head attention BIBREF19 have all pushed the state-of-the-art for WMT 2014 En-De and En-Fr. However, these methods are orthogonal to this work, as we are only concerned with improving the neural network architecture itself, and not the techniques used for improving overall performance. The field of neural architecture search has also seen significant recent progress. The best performing architecture search methods are those that are computationally intensive BIBREF2 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF1 , BIBREF0 . Other methods have been developed with speed in mind, such as DARTS BIBREF23 , ENAS BIBREF3 , SMASH BIBREF24 , and SNAS BIBREF25 . These methods radically reduce the amount of time needed to run each search by approximating the performance of each candidate model, instead of investing resources to fully train and evaluate each candidate separately. Unfortunately, these faster methods produce slightly less competitive results. Zela et. al's zela18 utilization of Hyperband BIBREF26 and PNAS's BIBREF27 incorporation of a surrogate model are examples of approaches that try to both increase efficiency via candidate performance estimation and maximize search quality by training models to the end when necessary. The progressive dynamic hurdles method we introduce here is similar to these approaches in that we train our best models individually to the end, but optimize our procedure by discarding unpromising models early on. Methods We employ evolution-based architecture search because it is simple and has been shown to be more efficient than reinforcement learning when resources are limited BIBREF0 . We use the same tournament selection BIBREF28 algorithm as Real et al. real19, with the aging regularization omitted, and so encourage the reader to view their in-depth description of the method. In the interest of saving space, we will only give a brief overview of the algorithm here. Tournament selection evolutionary architecture search is conducted by first defining a gene encoding that describes a neural network architecture; we describe our encoding in the following Search Space subsection. An initial population is then created by randomly sampling from the space of gene encoding to create individuals. These individuals are assigned fitnesses based on training the the neural networks they describe on the target task and then evaluating their performance on the task's validation set. The population is then repeatedly sampled from to produce subpopulations, from which the individual with the highest fitness is selected as a parent. Selected parents have their gene encodings mutated – encoding fields randomly changed to different values – to produce child models. These child models are then assigned a fitness via training and evaluation on the target task, as the initial population was. When this fitness evaluation concludes, the population is sampled from once again, and the individual in the subpopulation with the lowest fitness is killed, meaning it is removed from the population. The newly evaluated child model is then added to the population, taking the killed individual's place. This process is repeated and results in a population with high fitness individuals, which in our case represent well-performing architectures. 
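The loop described above is simple enough to sketch directly; in the snippet below, mutate and compute_fitness are placeholder hooks for the encoding mutation and the train-and-evaluate step, individuals are (encoding, fitness) pairs, and, as in the text, aging regularization is omitted.

```python
import random

def tournament_search(population, subpopulation_size, num_children,
                      mutate, compute_fitness):
    """Sketch of tournament-selection evolution; `population` is a list of
    (encoding, fitness) pairs and is modified in place."""
    for _ in range(num_children):
        # Parent selection: fittest member of a randomly sampled subpopulation.
        parent = max(random.sample(population, subpopulation_size), key=lambda ind: ind[1])
        child_encoding = mutate(parent[0])
        child = (child_encoding, compute_fitness(child_encoding))
        # Replacement: sample again, kill the least fit member of that subpopulation,
        # then add the newly evaluated child in its place.
        loser = min(random.sample(population, subpopulation_size), key=lambda ind: ind[1])
        population.remove(loser)
        population.append(child)
    return max(population, key=lambda ind: ind[1])
```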
Search Space Our encoding search space is inspired by the NASNet search space BIBREF1 , but is altered to allow it to express architecture characteristics found in recent state-of-the-art feed-forward seq2seq networks. Crucially, we ensured that the search space can represent the Transformer, so that we can seed the search process with the Transformer itself. Our search space consists of two stackable cells, one for the model encoder and one for the decoder (see Figure 1 ). Each cell contains NASNet-style blocks, which receive two hidden state inputs and produce new hidden states as outputs; the encoder contains six blocks and the decoder contains eight blocks, so that the Transformer can be represented exactly. The blocks perform separate transformations to each input and then combine the transformation outputs together to produce a single block output; we will refer to the transformations applied to each input as a branch. Our search space contains five branch-level search fields (input, normalization, layer, output dimension and activation), one block-level search field (combiner function) and one cell-level search field (number of cells). In our search space, a child model's genetic encoding is expressed as: $[$ left input, left normalization, left layer, left relative output dimension, left activation, right input, right normalization, right layer, right relative output dimension, right activation, combiner function $]$ $\times $ 14 + $[$ number of cells $]$ $\times $ 2, with the first 6 blocks allocated to the encoder and the latter 8 allocated to the decoder. Given the vocabularies described in the Appendix, this yields a search space of $7.30 * 10^{115}$ models, although we do shrink this to some degree by introducing constraints (see the Appendix for more details). Seeding the Search Space with Transformer While previous neural architecture search works rely on well-formed hand crafted search spaces BIBREF1 , we intentionally leave our search minimally tuned, in a effort to alleviate our manual burden and emphasize the role of the automated search method. To help navigate the large search space we create for ourselves, we find it easier to seed our initial population with a known strong model, in this case the Transformer. This anchors the search to a known good starting point and guarantees at least a single strong potential parent in the population as the generations progress. We offer empirical support for these claims in our Results section. Evolution with Progressive Dynamic Hurdles The evolution algorithm we employ is adapted from the tournament selection evolutionary architecture search proposed by Real et al. real19, described above. Unlike Real et al. real19 who conducted their search on CIFAR-10, our search is conducted on a task that takes much longer to train and evaluate on. Specifically, to train a Transformer to peak performance on WMT'14 En-De requires $\sim $ 300k training steps, or 10 hours, in the base size when using a single Google TPU V.2 chip, as we do in our search. In contrast, Real et al. real19 used the less resource-intensive CIFAR-10 task BIBREF29 , which takes about two hours to train on, to assess their models during their search, as it was a good proxy for ImageNet BIBREF30 performance BIBREF1 . 
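To make the encoding concrete, one possible in-memory representation is sketched below; the field names and example values are illustrative stand-ins, since the exact per-field vocabularies are only listed in the paper's appendix.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Branch:
    # The five branch-level search fields.
    input_index: int            # which earlier hidden state this branch reads
    normalization: str          # e.g. "layer_norm" or "none" (illustrative values)
    layer: str                  # e.g. "self_attention_8_heads", "separable_conv_9x1"
    relative_output_dim: float  # output width relative to the model dimension
    activation: str             # e.g. "relu", "none"

@dataclass
class Block:
    left: Branch
    right: Branch
    combiner: str               # the block-level field, e.g. "addition"

@dataclass
class GeneEncoding:
    encoder_blocks: List[Block]  # 6 blocks
    decoder_blocks: List[Block]  # 8 blocks
    encoder_cells: int           # the two cell-level fields
    decoder_cells: int
```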
However, in our preliminary experimentation we could not find a proxy task that gave adequate signal for how well each child model would perform on the full WMT'14 En-De task; we investigated using only a fraction of the data set and various forms of aggressive early stopping. To address this problem we formulated a method to dynamically allocate resources to more promising architectures according to their fitness. This method, which we refer to as progressive dynamic hurdles (PDH), allows models that are consistently performing well to train for more steps. It begins as ordinary tournament selection evolutionary architecture search with early stopping, with each child model training for a relatively small $s_0$ number of steps before being evaluated for fitness. However, after a predetermined number of child models, $m$ , have been evaluated, a hurdle, $h_0$ , is created by calculating the the mean fitness of the current population. For the next $m$ child models produced, models that achieve a fitness greater than $h_0$ after $s_0$ train steps are granted an additional $s_1$ steps of training and then are evaluated again to determine their final fitness. Once another $m$ models have been considered this way, another hurdle, $h_1$ , is constructed by calculating the mean fitness of all members of the current population that were trained for the maximum number of steps. For the next $m$ child models, training and evaluation continues in the same fashion, except models with fitness greater than $m$0 after $m$1 steps of training are granted an additional $m$2 number of train steps, before being evaluated for their final fitness. This process is repeated until a satisfactory number of maximum training steps is reached. Algorithm 1 (Appendix) formalizes how the fitness of an individual model is calculated with hurdles and Algorithm 2 (Appendix) describes tournament selection augmented with progressive dynamic hurdles. Although different child models may train for different numbers of steps before being assigned their final fitness, this does not make their fitnesses incomparable. Tournament selection evolution is only concerned with relative fitness rank when selecting which subpopulation members will be killed and which will become parents; the margin by which one candidate is better or worse than the other members of the subpopulation does not matter. Assuming no model overfits during its training and that its fitness monotonically increases with respect to the number of train steps it is allocated, a comparison between two child models can be viewed as a comparison between their fitnesses at the lower of the two's cumulative train steps. Since the model that was allocated more train steps performed, by definition, above the fitness hurdle for the lower number of steps and the model that was allocated less steps performed, by definition, at or below that hurdle at the lower number of steps, it is guaranteed that the model with more train steps was better when it was evaluated at the lower number of train steps. The benefit of altering the fitness algorithm this way is that poor performing child models will not consume as many resources when their fitness is being computed. As soon as a candidate's fitness falls below a tolerable amount, its evaluation immediately ends. This may also result in good candidates being labeled as bad models if they are only strong towards the latter part of training. 
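Concretely, the hurdle schedule that Algorithm 2 wraps around ordinary tournament selection could look like the sketch below; produce_and_add_child stands in for one full tournament-selection step (parent selection, mutation, evaluation with a fitness routine such as Algorithm 1, and replacement), and each individual is assumed, purely for illustration, to be a dict recording its fitness and cumulative train steps.

```python
import statistics

def evolve_with_progressive_dynamic_hurdles(population, step_increments,
                                            models_per_hurdle, produce_and_add_child):
    """Sketch of the hurdle-creation schedule in PDH."""
    hurdles = []                                     # h_0, h_1, ... built up during the search
    while len(hurdles) < len(step_increments) - 1:   # stop once every increment is unlocked
        for _ in range(models_per_hurdle):           # evaluate m child models per hurdle
            produce_and_add_child(population, step_increments, hurdles)
        # New hurdle: mean fitness of the members trained for the current maximum
        # number of steps (for the very first hurdle, that is the whole population).
        max_steps = max(ind["steps_trained"] for ind in population)
        at_max = [ind["fitness"] for ind in population if ind["steps_trained"] == max_steps]
        hurdles.append(statistics.mean(at_max))
    return population, hurdles
```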
However, the resources saved as a result of discarding many bad models improves the overall quality of the search enough to justify potentially also discarding some good ones; this is supported empirically in our Results section. Datasets We use three different machine translation datasets to perform our experiments, all of which were taken from their Tensor2Tensor implementations. The first is WMT English-German, for which we mimic Vaswani et al.'s vaswani17 setup, using WMT'18 En-De training data without ParaCrawl BIBREF31 , yielding 4.5 million sentence pairs. In the same fashion, we use newstest2013 for development and test on newstest2014. The second translation dataset is WMT En-Fr, for which we also replicate Vaswani et.al's vaswani17 setup. We train on the 36 million sentence pairs of WMT'14 En-Fr, validate on newstest2013 and test on newstest2014. The final translation dataset is WMT English-Czech (En-Cs). We used the WMT'18 training dataset, again without ParaCrawl, and used newstest2013 and newstest2014 as validation and test sets. For all tasks, tokens were split using a shared source-target vocabulary of about 32k word-pieces BIBREF32 . All datasets were generated using Tensor2Tensor's “packed" scheme; sentences were shuffled and concatenated together with padding to form uniform 256 length inputs and targets, with examples longer than 256 being discarded. This yielded batch sizes of 4096 tokens per GPU or TPU chip; accordingly, 16 TPU chip configurations had $\sim $ 66K tokens per batch and 8 GPU chip configurations had $\sim $ 33K tokens per batch. For language modeling we used the 1 Billion Word Language Model Benchmark (LM1B) BIBREF33 , also using its “packed" Tensor2Tensor implementation. Again the tokens are split into a vocabulary of approximately 32k word-pieces and the sentences are shuffled. Training Details and Hyperparameters All of our experiments used Tensor2Tensor's Transformer TPU hyperparameter settings. These are nearly identical to those used by Vaswani et al. vaswani17, but modified to use the memory-efficient Adafactor BIBREF34 optimizer. Aside from using the optimizer itself, these hyperparameters set the warmup to a constant learning rate of $10^{-2}$ over 10k steps and then uses inverse-square-root learning-rate decay. For our experiments, we make only one change, which is to alter this decay so that it reaches 0 at the final step of training, which for our non-search experiments is uniformly 300k. We found that the our search candidate models, the Transformer, and the Evolved Transformer all benefited from this and so experimented with using linear decay, single-cycle cosine decay BIBREF35 and a modified inverse-square-root decay to 0 at 300k steps: $lr = step^{-0.00303926} - .962392$ ; every decay was paired with the same constant $10^{-2}$ warmup. We used WMT En-De validation perplexity to gauge model performance and found that the Transformer preferred the modified inverse-square-root decay. Therefore, this is what we used for both all our Transformer trainings and the architecture searches themselves. The Evolved Transformer performed best with cosine decay and so that is what we used for all of its trainings. Besides this one difference, the hyperparameter settings across models being compared are exactly the same. Because decaying to 0 resulted in only marginal weight changes towards the end of training, we did not use checkpoint averaging. Per-task there is one additional hyperparameter difference, which is dropout rate. 
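The modified decay quoted above can be written out directly; the sketch below assumes the schedule holds the constant $10^{-2}$ warmup for the first 10k steps and applies the closed-form decay afterwards, which lines up with the quoted constants at both ends.

```python
def modified_inverse_sqrt_decay(step: int,
                                warmup_steps: int = 10_000,
                                warmup_lr: float = 1e-2) -> float:
    """Constant warmup followed by lr = step**(-0.00303926) - 0.962392."""
    if step <= warmup_steps:
        return warmup_lr
    return step ** -0.00303926 - 0.962392

# Sanity checks against the numbers in the text:
print(round(modified_inverse_sqrt_decay(10_001), 4))   # ~0.01, continuous with the warmup level
print(round(modified_inverse_sqrt_decay(300_000), 4))  # ~0.0, reaches zero at the final step
```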
For ET and all search child models, dropout was applied uniformly after each layer, approximating the Transformer's more nuanced dropout scheme. For En-De and En-Cs, all “big" sized models were given a higher dropout rate of 0.3, keeping in line with Vaswani et al. vaswani17, and all models with an input embedding size of 768 are given a dropout rate of 0.2. Aside from this, hyperparameters are identical across all translation tasks. For decoding we used the same beam decoding configuration used by Vaswani et al. vaswani17. That is a beam size of 4, length penalty ( $\alpha $ ) of 0.6, and maximum output length of input length + 50. All BLEU is calculated using case-sensitive tokenization and for WMT'14 En-De we also use the compound splitting that was used in Vaswani et al. vaswani17. Our language model training setup is identical to our machine translation setup except we remove label smoothing and lower the intra-attention dropout rate to 0. This was taken from the Tensor2Tensor hyperparameters for LM1B[2]. Search Configurations All of the architecture searches we describe were run on WMT'14 En-De. They utilized the search space and tournament selection evolution algorithm described in our Methods section. Unless otherwise noted, each search used 200 workers, which were equipped with a single Google TPU V.2 chip for training and evaluation. We maintained a population of size 100 with subpopulation sizes for both killing and reproducing set to 30. Mutations were applied independently per encoding field at a rate of 2.5%. For fitness we used the negative log perplexity of the validation set instead of BLEU because, as demonstrated in our Results section, perplexity is more consistent and that reduced the noise of our fitness signal. Results In this section, we will first benchmark the performance of our search method, progressive dynamic hurdles, against other evolutionary search methods BIBREF21 , BIBREF0 . We will then benchmark the Evolved Transformer, the result of our search method, against the Transformer BIBREF7 . Search Techniques We tested our evolution algorithm enhancements – using PDH and seeding the initial population with the Transformer – against control searches that did not use these techniques; without our enhancements, these controls function the same way as Real et. al's real19 searches, without aging regularization. Each search we describe was run 3 times and the top model from each run was retrained on a single TPU V.2 chip for 300k steps. The performance of the models after retraining is given in Table 1 . Our proposed search (Table 1 row 1), which used both PDH and Transformer seeding, was run first, with hurdles created every 1k models ( $m = 1000$ ) and six 30k train step (1 hour) increments ( $s=<30, 30, 30, 30, 30, 30>$ ). To test the effectiveness of seeding with the Transformer, we ran an identical search that was instead seeded with random valid encodings (Table 1 row 2). To test the effectiveness of PDH, we ran three controls (Table 1 rows 3-5) that each used a fixed number of train steps for each child model instead of hurdles (Table 1 column 2). For these we used the step increments (30k), the maximum number of steps our proposed search ultimately reaches (180k), and the total number of steps each top model receives when fully trained to gauge its final performance (300k). 
To determine the number of child models each of these searches would be able to train, we selected the value that would make the total amount of resources used by each control search equal to the maximum amount of resources used for our proposed searches, which require various amounts of resources depending on how many models fail to overcome hurdles. In the three trials we ran, our proposed search's total number of train steps used was 422M $\pm $ 21M, with a maximum of 446M. Thus the number of child models allotted for each non-PDH control search was set so that the total number of child model train steps used would be 446M. As demonstrated in Table 1, the search we propose, with PDH and Transformer seeding, has the best performance on average. It also is the most consistent, having the lowest standard deviation. Of all the searches conducted, only a single control run – “30K no hurdles" (Table 1 row 3) – produced a model that was better than any of our proposed search's best models. At the same time, the “30K no hurdles" setup also produced models that were significantly worse, which explains its high standard deviation. This phenomenon was a chief motivator for our developing this method. Although aggressive early stopping has the potential to produce strong models for cheap, searches that utilize it can also venture into modalities in which top fitness child models are only strong early on. Without running models for longer, whether or not this is happening cannot be detected. The 180K and 300K no hurdles searches did have insight into long term performance, but in a resource-inefficient manner that hurt these searches by limiting the number of generations they produced; for the “180k no hurdles" run to train as many models as PDH would require 1.08B train steps, over double what PDH used in our worst case. Searching with random seeding also proved to be ineffective, performing considerably worse than every other configuration. Of the five searches run, random seeding was the only one that had a top model perplexity higher than the Transformer, which is 4.75 $\pm $ 0.01 in the same setup. After confirming the effectiveness of our search procedure, we launched a larger scale version of our search using 270 workers. We trained 5k models per hurdle ( $m=5000$ ) and used larger step increments to get a closer approximation to 300k step performance: $s = <60, 60, 120>$ . The setup was the same as the Search Techniques experiments, except after 11k models we lowered the mutation rate to 0.01 and introduced the NONE value to the normalization mutation vocabulary. The search ran for 15K child models, requiring a total of 979M train steps. Over 13K models did not make it past the first hurdle, drastically reducing the resources required to view the 240 thousandth train step for top models, which would have cost 3.6B train steps for the same number of models without hurdles. After the search concluded, we then selected the top 20 models and trained them for the full 300k steps, each on a single TPU V.2 chip. The model that ended with the best perplexity is what we refer to as the Evolved Transformer (ET). Figure 3 shows the ET architecture. The most notable aspect of the Evolved Transformer is the use of wide depth-wise separable convolutions in the lower layers of the encoder and decoder blocks. 
The use of depth-wise convolution and self-attention was previously described in QANet BIBREF37 , however the overall architectures of the Evolved Transformer and QANet are different in many ways: e.g., QANet has smaller kernel sizes and no branching structures. The performance and analysis of the Evolved Transformer will be shown in the next section. The Evolved Transformer: Performance and Analysis To test the effectiveness of the found architecture – the Evolved Transformer – we compared it to the Transformer in its Tensor2Tensor training regime on WMT'14 En-De. Table 2 shows the results of these experiments run on the same 8 NVIDIA P100 hardware setup that was used by Vaswani et al. vaswani17. Observing ET's improved performance at parameter-comparable “base" and “big" sizes, we were also interested in understanding how small ET could be shrunk while still achieving the same performance as the Transformer. To create a spectrum of model sizes for each architecture, we selected different input embedding sizes and shrank or grew the rest of the model embedding sizes with the same proportions. Aside from embedding depths, these models are identical at all sizes, except the “big" 1024 input embedding size, for which all 8 head attention layers are upgraded to 16 head attention layers, as was done in Vaswani et al. vaswani17. ET demonstrates stronger performance than the Transformer at all sizes, with the largest difference of 0.7 BLEU at the smallest, mobile-friendly, size of $\sim $ 7M parameters. Performance on par with the “base" Transformer was reached when ET used just 71.4% of its FLOPS and performance of the “big" Transformer was exceeded by the ET model that used 44.8% less FLOPS. Figure 4 shows the FLOPS vs. BLEU performance of both architectures. To test ET's generalizability, we also compared it to the Transformer on an additional three well-established language tasks: WMT'14 En-Fr, WMT'14 En-Cs, and LM1B. Upgrading to 16 TPU V.2 chips, we doubled the number of synchronous workers for these experiments, pushing both models to their higher potential BIBREF17 . We ran each configuration 3 times, except WMT En-De, which we ran 6 times; this was a matter of resource availability and we gave priority to the task we searched on. As shown in Table 3 , ET performs at least one standard deviation above the Transformer in each of these tasks. Note that the Transformer mean BLEU scores in both Tables 2 and 3 for WMT'14 En-Fr and WMT'14 En-De are higher than those originally reported by Vaswani et al. BIBREF7 . As can be seen in Tables 2 and 3 , the Evolved Transformer is much more effective than the Transformer when its model size is small. When the model size becomes large, its BLEU performance saturates and the gap between the Evolved Transformer and the Transformer becomes smaller. One explanation for this behavior is that overfitting starts to occur at big model sizes, but we expect that data augmentation BIBREF17 or hyperparameter tuning could improve performance. The improvement gains in perplexity by ET, however, are still significant for big model sizes; it is worth emphasizing that perplexity is also a reliable metric for measuring machine translation quality BIBREF32 . To understand what mutations contributed to ET's improved performance we conducted two rounds of ablation testing. In the first round, we began with the Transformer and applied each mutation to it individually to measure the performance change each mutation introduces in isolation. 
In the second round, we began with ET and removed each mutation individually to again measure the impact of each single mutation. In both cases, each model was trained 3 times on WMT En-De for 300k steps with identical hyperparameters, using the inverse-square-root decay to 0 that the Transformer prefers. Each training was conducted on a single TPU V.2 chip. The results of these experiments are presented in Table 4 of the Appendix; we use validation perplexity for comparison because it was our fitness metric. In all cases, the augmented ET models outperformed the the augmented Transformer models, indicating that the gap in performance between ET and the Transformer cannot be attributed to any single mutation. The mutation with the seemingly strongest individual impact is the increase from 3 to 4 decoder blocks. However, even when this mutation is introduced to the Transformer and removed from ET, the resulting augmented ET model still has a higher fitness than the augmented Transformer model. Some mutations seem to only hurt model performance such as converting the encoder's first multi-head attention into a Gated Linear Unit. However, given how we formulate the problem – finding an improved model with a comparable number of parameters to the Transformer – these mutations might have been necessary. For example, when the Transformer decoder block is repeated 4 times, the resulting model has 69.6M parameters, which is outside of our allowed parameter range. Thus, mutations that shrank ET's total number of parameters, even at a slight degradation of performance, were necessary so that other more impactful parameter-expensive mutations, such as adding an additional decoder block, could be used. If model size is not of concern, some of these mutations could potentially be reverted to further improve performance. Likewise, parameter efficient layers, such as the depthwise-separable convolutions, could potentially be swapped for their less efficient counterparts, such as standard convolution. Conclusion We presented the first neural architecture search conducted to find improved feed-forward sequence models. We first constructed a large search space inspired by recent advances in seq2seq models and used it to search directly on the computationally intensive WMT En-De translation task. To mitigate the size of our space and the cost of training child models, we proposed using both our progressive dynamic hurdles method and seeding our initial population with a known strong model, the Transformer. When run at scale, our search found the Evolved Transformer. In a side by side comparison against the Transformer in an identical training regime, the Evolved Transformer showed consistent stronger performance on both translation and language modeling. On WMT En-De, the Evolved Transformer was twice as efficient FLOPS-wise as the Transformer big model and at a mobile-friendly size, the Evolved Transformer demonstrated a 0.7 BLEU gain over the Transformer architecture. Acknowledgements We would like to thank Ashish Vaswani, Jakob Uszkoreit, Niki Parmar, Noam Shazeer, Lukasz Kaiser and Ryan Sepassi for their help with Tensor2Tensor and for sharing their understanding of the Transformer. We are also grateful to David Dohan, Esteban Real, Yanping Huang, Alok Aggarwal, Vijay Vasudevan, and Chris Ying for lending their expertise in architecture search and evolution. 
Search Algorithms In the following, we describe the algorithm that we use to calculate child model fitness with hurdles (Algorithm 1) and evolution architecture search with progressive dynamic hurdles (Algorithm 2).

Algorithm 1: Calculate Model Fitness with Hurdles
inputs: $model$: the child model; $s$: vector of train step increments; $h$: queue of hurdles
append $-\infty$ to $h$
TRAIN_N_STEPS($model$, $s_0$)
$fitness \leftarrow$ EVALUATE($model$)
$i \leftarrow 0$
while $fitness > h_i$ and $i < |s| - 1$ do
  $i \leftarrow i + 1$
  TRAIN_N_STEPS($model$, $s_i$)
  $fitness \leftarrow$ EVALUATE($model$)
end while
return $fitness$

Search Space Information In our search space, a child model's genetic encoding is expressed as: $[$ left input, left normalization, left layer, left relative output dimension, left activation, right input, right normalization, right layer, right relative output dimension, right activation, combiner function $]$ $\times$ 14 + $[$ number of cells $]$ $\times$ 2, with the first 6 blocks allocated to the encoder and the latter 8 allocated to the decoder. In the following, we will describe each of the components. Ablation studies of the Evolved Transformer To understand what mutations contributed to ET's improved performance we conducted two rounds of ablation testing. In the first round, we began with the Transformer and applied each mutation to it individually to measure the performance change each mutation introduces in isolation. In the second round, we began with ET and removed each mutation individually to again measure the impact of each single mutation. In both cases, each model was trained 3 times on WMT En-De for 300k steps with identical hyperparameters, using the inverse-square-root decay to 0 that the Transformer prefers. Each training was conducted on a single TPU V.2 chip. The results of these experiments are presented in Table 4; we use validation perplexity for comparison because it was our fitness metric. To highlight the impact of each augmented model's mutation, we present not only their perplexities but also the difference between their mean perplexity and their unaugmented base model's mean perplexity in the "Mean Diff" columns: (base model mean perplexity) $-$ (augmented model mean perplexity). This delta estimates the change in performance each mutation creates in isolation. Red highlighted cells contain evidence that their corresponding mutation hurt overall performance; green highlighted cells contain evidence that their corresponding mutation helped overall performance. In half of the cases, both the augmented Transformer's and the augmented Evolved Transformer's performances indicate that the mutation was helpful. Changing the number of attention heads from 8 to 16 was doubly indicated to be neutral, and changing from 8-head self-attention to a GLU layer in the decoder was doubly indicated to have hurt performance. However, this and other mutations that seemingly hurt performance may have been necessary given how we formulate the problem: finding an improved model with a comparable number of parameters to the Transformer. For example, when the Transformer decoder block is repeated 4 times, the resulting model has 69.6M parameters, which is outside of our allowed parameter range. Thus, mutations that shrank ET's total number of parameters, even at a slight degradation of performance, were necessary so that other more impactful parameter-expensive mutations, such as adding an additional decoder block, could be used. Other mutations have inconsistent evidence about how useful they are.
This ablation study serves only to approximate what is useful, but how effective a mutation is also depends on the model it is being introduced to and how it interacts with other encoding field values.
Unanswerable
2b5dc3595dfc3d52a1525783d943b3dd0cc62473
2b5dc3595dfc3d52a1525783d943b3dd0cc62473_0
Q: How does Progressive Dynamic Hurdles work? Text: Introduction Over the past few years, impressive advances have been made in the field of neural architecture search. Reinforcement learning and evolution have both proven their capacity to produce models that exceed the performance of those designed by humans BIBREF0 , BIBREF1 . These advances have mostly focused on improving image models, although some effort has also been invested in searching for sequence models BIBREF2 , BIBREF3 . In these cases, it has always been to find improved recurrent neural networks (RNNs), which were long established as the de facto neural model for sequence problems BIBREF4 , BIBREF5 . However, recent works have shown that there are better alternatives to RNNs for solving sequence problems. Due to the success of convolution-based networks, such as Convolution Seq2Seq BIBREF6 , and full attention networks, such as the Transformer BIBREF7 , feed-forward networks are now a viable option for solving sequence-to-sequence (seq2seq) tasks. The main strength of feed-forward networks is that they are faster, and easier to train than RNNs. The goal of this work is to examine the use of neural architecture search methods to design better feed-forward architectures for seq2seq tasks. Specifically, we apply tournament selection architecture search to evolve from the Transformer, considered to be the state-of-art and widely-used, into a better and more efficient architecture. To achieve this, we construct a search space that reflects the recent advances in feed-forward seq2seq models and develop a method called progressive dynamic hurdles (PDH) that allows us to perform our search directly on the computationally demanding WMT 2014 English-German (En-De) translation task. Our search produces a new architecture – called the Evolved Transformer (ET) – which demonstrates consistent improvement over the original Transformer on four well-established language tasks: WMT 2014 English-German, WMT 2014 English-French (En-Fr), WMT 2014 English-Czech (En-Cs) and the 1 Billion Word Language Model Benchmark (LM1B). In our experiments with big size models, the Evolved Transformer is twice as efficient as the Transformer in FLOPS without loss of quality. At a much smaller – mobile-friendly – model size of $\sim $ 7M parameters, the Evolved Transformer outperforms the Transformer by 0.7 BLEU. Related Work RNNs have long been used as the default option for applying neural networks to sequence modeling BIBREF4 , BIBREF5 , with LSTM BIBREF8 and GRU BIBREF9 architectures being the most popular. However, recent work has shown that RNNs are not necessary to build state-of-the-art sequence models. For example, many high performance convolutional models have been designed, such as WaveNet BIBREF10 , Gated Convolution Networks BIBREF11 , Conv Seq2Seq BIBREF6 and Dynamic Lightweight Convolution model BIBREF12 . Perhaps the most promising architecture in this direction is the Transformer architecture BIBREF7 , which relies only on multi-head attention to convey spatial information. In this work, we use both convolutions and attention in our search space to leverage the strengths of both layer types. The recent advances in sequential feed-forward networks are not limited to architecture design. Various methods, such as BERT BIBREF13 and Radford et. al's pre-training technique BIBREF14 , have demonstrated how models such as the Transformer can improve over RNN pre-training BIBREF15 , BIBREF16 . 
For translation specifically, work on scaling up batch size BIBREF17 , BIBREF12 , using relative position representations BIBREF18 , and weighting multi-head attention BIBREF19 have all pushed the state-of-the-art for WMT 2014 En-De and En-Fr. However, these methods are orthogonal to this work, as we are only concerned with improving the neural network architecture itself, and not the techniques used for improving overall performance. The field of neural architecture search has also seen significant recent progress. The best performing architecture search methods are those that are computationally intensive BIBREF2 , BIBREF20 , BIBREF21 , BIBREF22 , BIBREF1 , BIBREF0 . Other methods have been developed with speed in mind, such as DARTS BIBREF23 , ENAS BIBREF3 , SMASH BIBREF24 , and SNAS BIBREF25 . These methods radically reduce the amount of time needed to run each search by approximating the performance of each candidate model, instead of investing resources to fully train and evaluate each candidate separately. Unfortunately, these faster methods produce slightly less competitive results. Zela et. al's zela18 utilization of Hyperband BIBREF26 and PNAS's BIBREF27 incorporation of a surrogate model are examples of approaches that try to both increase efficiency via candidate performance estimation and maximize search quality by training models to the end when necessary. The progressive dynamic hurdles method we introduce here is similar to these approaches in that we train our best models individually to the end, but optimize our procedure by discarding unpromising models early on. Methods We employ evolution-based architecture search because it is simple and has been shown to be more efficient than reinforcement learning when resources are limited BIBREF0 . We use the same tournament selection BIBREF28 algorithm as Real et al. real19, with the aging regularization omitted, and so encourage the reader to view their in-depth description of the method. In the interest of saving space, we will only give a brief overview of the algorithm here. Tournament selection evolutionary architecture search is conducted by first defining a gene encoding that describes a neural network architecture; we describe our encoding in the following Search Space subsection. An initial population is then created by randomly sampling from the space of gene encoding to create individuals. These individuals are assigned fitnesses based on training the the neural networks they describe on the target task and then evaluating their performance on the task's validation set. The population is then repeatedly sampled from to produce subpopulations, from which the individual with the highest fitness is selected as a parent. Selected parents have their gene encodings mutated – encoding fields randomly changed to different values – to produce child models. These child models are then assigned a fitness via training and evaluation on the target task, as the initial population was. When this fitness evaluation concludes, the population is sampled from once again, and the individual in the subpopulation with the lowest fitness is killed, meaning it is removed from the population. The newly evaluated child model is then added to the population, taking the killed individual's place. This process is repeated and results in a population with high fitness individuals, which in our case represent well-performing architectures. 
Search Space Our encoding search space is inspired by the NASNet search space BIBREF1 , but is altered to allow it to express architecture characteristics found in recent state-of-the-art feed-forward seq2seq networks. Crucially, we ensured that the search space can represent the Transformer, so that we can seed the search process with the Transformer itself. Our search space consists of two stackable cells, one for the model encoder and one for the decoder (see Figure 1 ). Each cell contains NASNet-style blocks, which receive two hidden state inputs and produce new hidden states as outputs; the encoder contains six blocks and the decoder contains eight blocks, so that the Transformer can be represented exactly. The blocks perform separate transformations to each input and then combine the transformation outputs together to produce a single block output; we will refer to the transformations applied to each input as a branch. Our search space contains five branch-level search fields (input, normalization, layer, output dimension and activation), one block-level search field (combiner function) and one cell-level search field (number of cells). In our search space, a child model's genetic encoding is expressed as: $[$ left input, left normalization, left layer, left relative output dimension, left activation, right input, right normalization, right layer, right relative output dimension, right activation, combiner function $]$ $\times $ 14 + $[$ number of cells $]$ $\times $ 2, with the first 6 blocks allocated to the encoder and the latter 8 allocated to the decoder. Given the vocabularies described in the Appendix, this yields a search space of $7.30 * 10^{115}$ models, although we do shrink this to some degree by introducing constraints (see the Appendix for more details). Seeding the Search Space with Transformer While previous neural architecture search works rely on well-formed hand crafted search spaces BIBREF1 , we intentionally leave our search minimally tuned, in a effort to alleviate our manual burden and emphasize the role of the automated search method. To help navigate the large search space we create for ourselves, we find it easier to seed our initial population with a known strong model, in this case the Transformer. This anchors the search to a known good starting point and guarantees at least a single strong potential parent in the population as the generations progress. We offer empirical support for these claims in our Results section. Evolution with Progressive Dynamic Hurdles The evolution algorithm we employ is adapted from the tournament selection evolutionary architecture search proposed by Real et al. real19, described above. Unlike Real et al. real19 who conducted their search on CIFAR-10, our search is conducted on a task that takes much longer to train and evaluate on. Specifically, to train a Transformer to peak performance on WMT'14 En-De requires $\sim $ 300k training steps, or 10 hours, in the base size when using a single Google TPU V.2 chip, as we do in our search. In contrast, Real et al. real19 used the less resource-intensive CIFAR-10 task BIBREF29 , which takes about two hours to train on, to assess their models during their search, as it was a good proxy for ImageNet BIBREF30 performance BIBREF1 . 
However, in our preliminary experimentation we could not find a proxy task that gave adequate signal for how well each child model would perform on the full WMT'14 En-De task; we investigated using only a fraction of the data set and various forms of aggressive early stopping. To address this problem we formulated a method to dynamically allocate resources to more promising architectures according to their fitness. This method, which we refer to as progressive dynamic hurdles (PDH), allows models that are consistently performing well to train for more steps. It begins as ordinary tournament selection evolutionary architecture search with early stopping, with each child model training for a relatively small $s_0$ number of steps before being evaluated for fitness. However, after a predetermined number of child models, $m$ , have been evaluated, a hurdle, $h_0$ , is created by calculating the the mean fitness of the current population. For the next $m$ child models produced, models that achieve a fitness greater than $h_0$ after $s_0$ train steps are granted an additional $s_1$ steps of training and then are evaluated again to determine their final fitness. Once another $m$ models have been considered this way, another hurdle, $h_1$ , is constructed by calculating the mean fitness of all members of the current population that were trained for the maximum number of steps. For the next $m$ child models, training and evaluation continues in the same fashion, except models with fitness greater than $m$0 after $m$1 steps of training are granted an additional $m$2 number of train steps, before being evaluated for their final fitness. This process is repeated until a satisfactory number of maximum training steps is reached. Algorithm 1 (Appendix) formalizes how the fitness of an individual model is calculated with hurdles and Algorithm 2 (Appendix) describes tournament selection augmented with progressive dynamic hurdles. Although different child models may train for different numbers of steps before being assigned their final fitness, this does not make their fitnesses incomparable. Tournament selection evolution is only concerned with relative fitness rank when selecting which subpopulation members will be killed and which will become parents; the margin by which one candidate is better or worse than the other members of the subpopulation does not matter. Assuming no model overfits during its training and that its fitness monotonically increases with respect to the number of train steps it is allocated, a comparison between two child models can be viewed as a comparison between their fitnesses at the lower of the two's cumulative train steps. Since the model that was allocated more train steps performed, by definition, above the fitness hurdle for the lower number of steps and the model that was allocated less steps performed, by definition, at or below that hurdle at the lower number of steps, it is guaranteed that the model with more train steps was better when it was evaluated at the lower number of train steps. The benefit of altering the fitness algorithm this way is that poor performing child models will not consume as many resources when their fitness is being computed. As soon as a candidate's fitness falls below a tolerable amount, its evaluation immediately ends. This may also result in good candidates being labeled as bad models if they are only strong towards the latter part of training. 
However, the resources saved as a result of discarding many bad models improves the overall quality of the search enough to justify potentially also discarding some good ones; this is supported empirically in our Results section. Datasets We use three different machine translation datasets to perform our experiments, all of which were taken from their Tensor2Tensor implementations. The first is WMT English-German, for which we mimic Vaswani et al.'s vaswani17 setup, using WMT'18 En-De training data without ParaCrawl BIBREF31 , yielding 4.5 million sentence pairs. In the same fashion, we use newstest2013 for development and test on newstest2014. The second translation dataset is WMT En-Fr, for which we also replicate Vaswani et.al's vaswani17 setup. We train on the 36 million sentence pairs of WMT'14 En-Fr, validate on newstest2013 and test on newstest2014. The final translation dataset is WMT English-Czech (En-Cs). We used the WMT'18 training dataset, again without ParaCrawl, and used newstest2013 and newstest2014 as validation and test sets. For all tasks, tokens were split using a shared source-target vocabulary of about 32k word-pieces BIBREF32 . All datasets were generated using Tensor2Tensor's “packed" scheme; sentences were shuffled and concatenated together with padding to form uniform 256 length inputs and targets, with examples longer than 256 being discarded. This yielded batch sizes of 4096 tokens per GPU or TPU chip; accordingly, 16 TPU chip configurations had $\sim $ 66K tokens per batch and 8 GPU chip configurations had $\sim $ 33K tokens per batch. For language modeling we used the 1 Billion Word Language Model Benchmark (LM1B) BIBREF33 , also using its “packed" Tensor2Tensor implementation. Again the tokens are split into a vocabulary of approximately 32k word-pieces and the sentences are shuffled. Training Details and Hyperparameters All of our experiments used Tensor2Tensor's Transformer TPU hyperparameter settings. These are nearly identical to those used by Vaswani et al. vaswani17, but modified to use the memory-efficient Adafactor BIBREF34 optimizer. Aside from using the optimizer itself, these hyperparameters set the warmup to a constant learning rate of $10^{-2}$ over 10k steps and then uses inverse-square-root learning-rate decay. For our experiments, we make only one change, which is to alter this decay so that it reaches 0 at the final step of training, which for our non-search experiments is uniformly 300k. We found that the our search candidate models, the Transformer, and the Evolved Transformer all benefited from this and so experimented with using linear decay, single-cycle cosine decay BIBREF35 and a modified inverse-square-root decay to 0 at 300k steps: $lr = step^{-0.00303926} - .962392$ ; every decay was paired with the same constant $10^{-2}$ warmup. We used WMT En-De validation perplexity to gauge model performance and found that the Transformer preferred the modified inverse-square-root decay. Therefore, this is what we used for both all our Transformer trainings and the architecture searches themselves. The Evolved Transformer performed best with cosine decay and so that is what we used for all of its trainings. Besides this one difference, the hyperparameter settings across models being compared are exactly the same. Because decaying to 0 resulted in only marginal weight changes towards the end of training, we did not use checkpoint averaging. Per-task there is one additional hyperparameter difference, which is dropout rate. 
For ET and all search child models, dropout was applied uniformly after each layer, approximating the Transformer's more nuanced dropout scheme. For En-De and En-Cs, all “big” sized models were given a higher dropout rate of 0.3, keeping in line with Vaswani et al. vaswani17, and all models with an input embedding size of 768 were given a dropout rate of 0.2. Aside from this, hyperparameters are identical across all translation tasks. For decoding we used the same beam decoding configuration used by Vaswani et al. vaswani17: a beam size of 4, a length penalty ($\alpha$) of 0.6, and a maximum output length of input length + 50. All BLEU scores are calculated using case-sensitive tokenization, and for WMT'14 En-De we also use the compound splitting that was used in Vaswani et al. vaswani17. Our language model training setup is identical to our machine translation setup except that we remove label smoothing and lower the intra-attention dropout rate to 0. This was taken from the Tensor2Tensor hyperparameters for LM1B. Search Configurations All of the architecture searches we describe were run on WMT'14 En-De. They utilized the search space and tournament selection evolution algorithm described in our Methods section. Unless otherwise noted, each search used 200 workers, each equipped with a single Google TPU V.2 chip for training and evaluation. We maintained a population of size 100 with subpopulation sizes for both killing and reproducing set to 30. Mutations were applied independently per encoding field at a rate of 2.5%. For fitness we used the negative log perplexity of the validation set instead of BLEU because, as demonstrated in our Results section, perplexity is more consistent, which reduced the noise of our fitness signal. Results In this section, we will first benchmark the performance of our search method, progressive dynamic hurdles, against other evolutionary search methods BIBREF21, BIBREF0. We will then benchmark the Evolved Transformer, the result of our search method, against the Transformer BIBREF7. Search Techniques We tested our evolution algorithm enhancements – using PDH and seeding the initial population with the Transformer – against control searches that did not use these techniques; without our enhancements, these controls function the same way as Real et al.'s real19 searches, without aging regularization. Each search we describe was run 3 times and the top model from each run was retrained on a single TPU V.2 chip for 300k steps. The performance of the models after retraining is given in Table 1. Our proposed search (Table 1 row 1), which used both PDH and Transformer seeding, was run first, with hurdles created every 1k models ($m = 1000$) and six 30k train step (1 hour) increments ($s=<30, 30, 30, 30, 30, 30>$). To test the effectiveness of seeding with the Transformer, we ran an identical search that was instead seeded with random valid encodings (Table 1 row 2). To test the effectiveness of PDH, we ran three controls (Table 1 rows 3-5) that each used a fixed number of train steps for each child model instead of hurdles (Table 1 column 2). For these we used the step increment size (30k), the maximum number of steps our proposed search ultimately reaches (180k), and the total number of steps each top model receives when fully trained to gauge its final performance (300k).
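The following is a rough sketch of one reproduction step of the tournament selection evolution just configured (population 100, random subpopulations of 30 for reproducing and for killing, 2.5% per-field mutation). It is our own illustration under those stated settings, with random_field_value and evaluate_fitness as assumed helpers, not the authors' implementation.

    import random

    POPULATION_SIZE = 100
    SUBPOP_SIZE = 30
    MUTATION_RATE = 0.025   # applied independently to each encoding field

    def tournament_step(population, random_field_value, evaluate_fitness):
        # Reproduce: the fittest member of a random subpopulation becomes a parent.
        parent = max(random.sample(population, SUBPOP_SIZE),
                     key=lambda ind: ind["fitness"])
        child_encoding = list(parent["encoding"])
        for field_idx in range(len(child_encoding)):
            if random.random() < MUTATION_RATE:
                child_encoding[field_idx] = random_field_value(field_idx)
        child = {"encoding": child_encoding,
                 "fitness": evaluate_fitness(child_encoding)}  # e.g. via PDH
        # Kill: the least fit member of another random subpopulation is replaced.
        victim_idx = min(random.sample(range(len(population)), SUBPOP_SIZE),
                         key=lambda idx: population[idx]["fitness"])
        population[victim_idx] = child
        return child

Because fitness is the negative log perplexity of the validation set, higher values are better, which is why the parent is taken as the maximum and the victim as the minimum.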
To determine the number of child models each of these searches would be able to train, we selected the value that would make the total amount of resources used by each control search equal to the maximum amount of resources used for our proposed searches, which require various amounts of resources depending on how many models fail to overcome hurdles. In the three trials we ran, our proposed search's total number of train steps used was 422M $\pm $ 21M, with a maximum of 446M. Thus the number of child models allotted for each non-PDH control search was set so that the total number of child model train steps used would be 446M. As demonstrated in Table 1, the search we propose, with PDH and Transformer seeding, has the best performance on average. It also is the most consistent, having the lowest standard deviation. Of all the searches conducted, only a single control run – “30K no hurdles" (Table 1 row 3) – produced a model that was better than any of our proposed search's best models. At the same time, the “30K no hurdles" setup also produced models that were significantly worse, which explains its high standard deviation. This phenomenon was a chief motivator for our developing this method. Although aggressive early stopping has the potential to produce strong models for cheap, searches that utilize it can also venture into modalities in which top fitness child models are only strong early on. Without running models for longer, whether or not this is happening cannot be detected. The 180K and 300K no hurdles searches did have insight into long term performance, but in a resource-inefficient manner that hurt these searches by limiting the number of generations they produced; for the “180k no hurdles" run to train as many models as PDH would require 1.08B train steps, over double what PDH used in our worst case. Searching with random seeding also proved to be ineffective, performing considerably worse than every other configuration. Of the five searches run, random seeding was the only one that had a top model perplexity higher than the Transformer, which is 4.75 $\pm $ 0.01 in the same setup. After confirming the effectiveness of our search procedure, we launched a larger scale version of our search using 270 workers. We trained 5k models per hurdle ( $m=5000$ ) and used larger step increments to get a closer approximation to 300k step performance: $s = <60, 60, 120>$ . The setup was the same as the Search Techniques experiments, except after 11k models we lowered the mutation rate to 0.01 and introduced the NONE value to the normalization mutation vocabulary. The search ran for 15K child models, requiring a total of 979M train steps. Over 13K models did not make it past the first hurdle, drastically reducing the resources required to view the 240 thousandth train step for top models, which would have cost 3.6B train steps for the same number of models without hurdles. After the search concluded, we then selected the top 20 models and trained them for the full 300k steps, each on a single TPU V.2 chip. The model that ended with the best perplexity is what we refer to as the Evolved Transformer (ET). Figure 3 shows the ET architecture. The most notable aspect of the Evolved Transformer is the use of wide depth-wise separable convolutions in the lower layers of the encoder and decoder blocks. 
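As a rough illustration of why depth-wise separable convolutions are an attractive building block here, the following compares parameter counts for a standard 1-D convolution and a depth-wise separable one. This is our own example: the kernel size and widths are invented for illustration, not ET's actual layer dimensions, and biases are ignored.

    # Parameter counts for 1-D convolutions, ignoring biases.
    def standard_conv_1d_params(kernel_size, d_in, d_out):
        return kernel_size * d_in * d_out

    def separable_conv_1d_params(kernel_size, d_in, d_out):
        # Depthwise pass over each input channel, then a pointwise projection.
        return kernel_size * d_in + d_in * d_out

    k, d_in, d_out = 9, 512, 2048
    print(standard_conv_1d_params(k, d_in, d_out))   # 9,437,184
    print(separable_conv_1d_params(k, d_in, d_out))  # 1,053,184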
The use of depth-wise convolution and self-attention was previously described in QANet BIBREF37 , however the overall architectures of the Evolved Transformer and QANet are different in many ways: e.g., QANet has smaller kernel sizes and no branching structures. The performance and analysis of the Evolved Transformer will be shown in the next section. The Evolved Transformer: Performance and Analysis To test the effectiveness of the found architecture – the Evolved Transformer – we compared it to the Transformer in its Tensor2Tensor training regime on WMT'14 En-De. Table 2 shows the results of these experiments run on the same 8 NVIDIA P100 hardware setup that was used by Vaswani et al. vaswani17. Observing ET's improved performance at parameter-comparable “base" and “big" sizes, we were also interested in understanding how small ET could be shrunk while still achieving the same performance as the Transformer. To create a spectrum of model sizes for each architecture, we selected different input embedding sizes and shrank or grew the rest of the model embedding sizes with the same proportions. Aside from embedding depths, these models are identical at all sizes, except the “big" 1024 input embedding size, for which all 8 head attention layers are upgraded to 16 head attention layers, as was done in Vaswani et al. vaswani17. ET demonstrates stronger performance than the Transformer at all sizes, with the largest difference of 0.7 BLEU at the smallest, mobile-friendly, size of $\sim $ 7M parameters. Performance on par with the “base" Transformer was reached when ET used just 71.4% of its FLOPS and performance of the “big" Transformer was exceeded by the ET model that used 44.8% less FLOPS. Figure 4 shows the FLOPS vs. BLEU performance of both architectures. To test ET's generalizability, we also compared it to the Transformer on an additional three well-established language tasks: WMT'14 En-Fr, WMT'14 En-Cs, and LM1B. Upgrading to 16 TPU V.2 chips, we doubled the number of synchronous workers for these experiments, pushing both models to their higher potential BIBREF17 . We ran each configuration 3 times, except WMT En-De, which we ran 6 times; this was a matter of resource availability and we gave priority to the task we searched on. As shown in Table 3 , ET performs at least one standard deviation above the Transformer in each of these tasks. Note that the Transformer mean BLEU scores in both Tables 2 and 3 for WMT'14 En-Fr and WMT'14 En-De are higher than those originally reported by Vaswani et al. BIBREF7 . As can be seen in Tables 2 and 3 , the Evolved Transformer is much more effective than the Transformer when its model size is small. When the model size becomes large, its BLEU performance saturates and the gap between the Evolved Transformer and the Transformer becomes smaller. One explanation for this behavior is that overfitting starts to occur at big model sizes, but we expect that data augmentation BIBREF17 or hyperparameter tuning could improve performance. The improvement gains in perplexity by ET, however, are still significant for big model sizes; it is worth emphasizing that perplexity is also a reliable metric for measuring machine translation quality BIBREF32 . To understand what mutations contributed to ET's improved performance we conducted two rounds of ablation testing. In the first round, we began with the Transformer and applied each mutation to it individually to measure the performance change each mutation introduces in isolation. 
In the second round, we began with ET and removed each mutation individually to again measure the impact of each single mutation. In both cases, each model was trained 3 times on WMT En-De for 300k steps with identical hyperparameters, using the inverse-square-root decay to 0 that the Transformer prefers. Each training was conducted on a single TPU V.2 chip. The results of these experiments are presented in Table 4 of the Appendix; we use validation perplexity for comparison because it was our fitness metric. In all cases, the augmented ET models outperformed the augmented Transformer models, indicating that the gap in performance between ET and the Transformer cannot be attributed to any single mutation. The mutation with the seemingly strongest individual impact is the increase from 3 to 4 decoder blocks. However, even when this mutation is introduced to the Transformer and removed from ET, the resulting augmented ET model still has a higher fitness than the augmented Transformer model. Some mutations seem to only hurt model performance, such as converting the encoder's first multi-head attention into a Gated Linear Unit. However, given how we formulate the problem – finding an improved model with a comparable number of parameters to the Transformer – these mutations might have been necessary. For example, when the Transformer decoder block is repeated 4 times, the resulting model has 69.6M parameters, which is outside of our allowed parameter range. Thus, mutations that shrank ET's total number of parameters, even at a slight degradation of performance, were necessary so that other, more impactful parameter-expensive mutations, such as adding an additional decoder block, could be used. If model size is not of concern, some of these mutations could potentially be reverted to further improve performance. Likewise, parameter-efficient layers, such as the depthwise-separable convolutions, could potentially be swapped for their less efficient counterparts, such as standard convolution. Conclusion We presented the first neural architecture search conducted to find improved feed-forward sequence models. We first constructed a large search space inspired by recent advances in seq2seq models and used it to search directly on the computationally intensive WMT En-De translation task. To mitigate the size of our space and the cost of training child models, we proposed using both our progressive dynamic hurdles method and seeding our initial population with a known strong model, the Transformer. When run at scale, our search found the Evolved Transformer. In a side-by-side comparison against the Transformer in an identical training regime, the Evolved Transformer showed consistently stronger performance on both translation and language modeling. On WMT En-De, the Evolved Transformer was twice as efficient FLOPS-wise as the Transformer big model, and at a mobile-friendly size it demonstrated a 0.7 BLEU gain over the Transformer architecture. Acknowledgements We would like to thank Ashish Vaswani, Jakob Uszkoreit, Niki Parmar, Noam Shazeer, Lukasz Kaiser and Ryan Sepassi for their help with Tensor2Tensor and for sharing their understanding of the Transformer. We are also grateful to David Dohan, Esteban Real, Yanping Huang, Alok Aggarwal, Vijay Vasudevan, and Chris Ying for lending their expertise in architecture search and evolution.
Search Algorithms In the following, we describe the algorithm that we use to calculate child model fitness with hurdles (Algorithm 1) and evolution architecture search with progressive dynamic hurdles (Algorithm 2).

Algorithm 1: Calculate Model Fitness with Hurdles
  inputs:
    $model$: the child model
    $s$: vector of train step increments
    $h$: queue of hurdles
  append $-\infty$ to $h$
  TRAIN_N_STEPS($model$, $s_0$)
  $fitness \leftarrow$ EVALUATE($model$)
  $i \leftarrow 0$
  while $fitness > h_i$ do
    $i \leftarrow i + 1$
    TRAIN_N_STEPS($model$, $s_i$)
    $fitness \leftarrow$ EVALUATE($model$)
  end while
  return $fitness$

Search Space Information In our search space, a child model's genetic encoding is expressed as: [left input, left normalization, left layer, left relative output dimension, left activation, right input, right normalization, right layer, right relative output dimension, right activation, combiner function] $\times$ 14 + [number of cells] $\times$ 2, with the first 6 blocks allocated to the encoder and the latter 8 allocated to the decoder. In the following, we will describe each of the components. Ablation studies of the Evolved Transformer To understand what mutations contributed to ET's improved performance we conducted two rounds of ablation testing. In the first round, we began with the Transformer and applied each mutation to it individually to measure the performance change each mutation introduces in isolation. In the second round, we began with ET and removed each mutation individually to again measure the impact of each single mutation. In both cases, each model was trained 3 times on WMT En-De for 300k steps with identical hyperparameters, using the inverse-square-root decay to 0 that the Transformer prefers. Each training was conducted on a single TPU V.2 chip. The results of these experiments are presented in Table 4; we use validation perplexity for comparison because it was our fitness metric. To highlight the impact of each augmented model's mutation, we present not only their perplexities but also the difference between their mean perplexity and their unaugmented base model's mean perplexity in the "Mean Diff" columns: mean diff = base model mean perplexity - augmented model mean perplexity. This delta estimates the change in performance each mutation creates in isolation. Red highlighted cells contain evidence that their corresponding mutation hurt overall performance. Green highlighted cells contain evidence that their corresponding mutation helped overall performance. In half of the cases, both the augmented Transformer's and the augmented Evolved Transformer's performances indicate that the mutation was helpful. Changing the number of attention heads from 8 to 16 was doubly indicated to be neutral, and changing from 8-head self-attention to a GLU layer in the decoder was doubly indicated to have hurt performance. However, this and other mutations that seemingly hurt performance may have been necessary given how we formulate the problem: finding an improved model with a comparable number of parameters to the Transformer. For example, when the Transformer decoder block is repeated 4 times, the resulting model has 69.6M parameters, which is outside of our allowed parameter range. Thus, mutations that shrank ET's total number of parameters, even at a slight degradation of performance, were necessary so that other, more impactful parameter-expensive mutations, such as adding an additional decoder block, could be used. Other mutations have inconsistent evidence about how useful they are.
This ablation study serves only to approximate what is useful, but how effective a mutation is also depends on the model it is being introduced to and how it interacts with other encoding field values.
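To make the genetic encoding from the Search Space Information section above more concrete, here is one possible way to represent it. This is a sketch of our own: the field names follow the text, but the example values in the comments are placeholders rather than the actual search space vocabularies.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class BlockGene:
        # One of the 14 blocks (6 encoder, 8 decoder); 11 fields per block.
        left_input: int               # index of an earlier hidden state
        left_normalization: str       # e.g. "layer_norm" or "none"
        left_layer: str               # e.g. "self_attention", "sep_conv", ...
        left_relative_output_dim: float
        left_activation: str          # e.g. "relu", "swish", "none"
        right_input: int
        right_normalization: str
        right_layer: str
        right_relative_output_dim: float
        right_activation: str
        combiner_function: str        # e.g. "add", "concat", "multiply"

    @dataclass
    class ModelGene:
        encoder_blocks: List[BlockGene]   # 6 blocks
        decoder_blocks: List[BlockGene]   # 8 blocks
        encoder_cells: int                # first "number of cells" gene
        decoder_cells: int                # second "number of cells" gene

In this representation a mutation independently re-samples each field (including the two cell-count genes) from its vocabulary with the configured per-field probability.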
It begins as ordinary tournament selection evolutionary architecture search with early stopping, with each child model training for a relatively small number of steps, $s_0$, before being evaluated for fitness. However, after a predetermined number of child models, $m$, have been evaluated, a hurdle, $h_0$, is created by calculating the mean fitness of the current population. For the next $m$ child models produced, models that achieve a fitness greater than $h_0$ after $s_0$ train steps are granted an additional $s_1$ steps of training and then are evaluated again to determine their final fitness. Once another $m$ models have been considered this way, another hurdle, $h_1$, is constructed by calculating the mean fitness of all members of the current population that were trained for the maximum number of steps. For the next $m$ child models, training and evaluation continue in the same fashion, except models with fitness greater than $h_1$ after $s_0 + s_1$ train steps are granted an additional $s_2$ train steps before being evaluated for their final fitness. This process is repeated until a satisfactory maximum number of training steps is reached.
ee27e5b56e439546d710ce113c9be76e1bfa1a3d
ee27e5b56e439546d710ce113c9be76e1bfa1a3d_0
Q: Do they beat current state-of-the-art on SICK? Text: Introduction There has been rapid progress on natural language inference (NLI) in the last several years, due in large part to recent advances in neural modeling BIBREF0 and the introduction of several new large-scale inference datasets BIBREF1, BIBREF2, BIBREF3, BIBREF4. Given the high performance of current state-of-the-art models, there has also been interest in understanding the limitations of these models (given their uninterpretability) BIBREF5, BIBREF6, as well as finding systematic biases in benchmark datasets BIBREF7, BIBREF8. In parallel to these efforts, there have also been recent logic-based approaches to NLI BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, which take inspiration from linguistics. In contrast to early attempts at using logic BIBREF14, these approaches have proven to be more robust. However they tend to use many rules and their output can be hard to interpret. It is sometimes unclear whether the attendant complexity is justified, especially given that such models are currently far outpaced by data-driven models and are generally hard to hybridize with data-driven techniques. In this work, we introduce a new logical inference engine called MonaLog, which is based on natural logic and work on monotonicity stemming from vanBenthemEssays86. In contrast to the logical approaches cited above, our starting point is different in that we begin with the following two questions: 1) what is the simplest logical system that one can come up with to solve empirical NLI problems (i.e., the system with minimal amounts of primitives and background knowledge)?; and 2) what is the lower-bound performance of such a model? Like other approaches to natural logic BIBREF15, BIBREF16, our model works by reasoning over surface forms (as opposed to translating to symbolic representations) using a small inventory of monotonicity facts about quantifiers, lexical items and token-level polarity BIBREF17; proofs in the calculus are hence fully interpretable and expressible in ordinary language. Unlike existing work on natural logic, however, our model avoids the need for having expensive alignment and search sub-procedures BIBREF18, BIBREF19, and relies on a much smaller set of background knowledge and primitive relations than MacCartneyManning. To show the effectiveness of our approach, we show results on the SICK dataset BIBREF1, a common benchmark for logic-based NLI, and find MonaLog to be competitive with more complicated logic-based approaches (many of which require full semantic parsing and more complex logical machinery). We also introduce a supplementary version of SICK that corrects several common annotation mistakes (e.g., asymmetrical inference annotations) based on previous work by kalouli2017entail,kalouli2018. Positive results on both these datasets show the ability of lightweight monotonicity models to handle many of the inferences found in current NLI datasets, hence putting a more reliable lower-bound on what results the simplest logical approach is capable of achieving on this benchmark. Since our logic operates over surface forms, it is straightforward to hybridize our models. We investigate using MonaLog in combination with the language model BERT BIBREF20, including for compositional data augmentation, i.e, re-generating entailed versions of examples in our training sets. 
To our knowledge, our approach is the first attempt to use monotonicity for data augmentation, and we show that such augmentation can generate high-quality training data with which models like BERT can improve performance. Our System: MonaLog The goal of NLI is to determine, given a premise set $P$ and a hypothesis sentence $H$, whether $H$ follows from the meaning of $P$ BIBREF21. In this paper, we look at single-premise problems that involve making a standard 3-way classification decision (i.e., Entailment (H), Contradict (C) and Neutral (N)). Our general monotonicity reasoning system works according to the pipeline in Figure FIGREF1. Given a premise text, we first do Arrow Tagging by assigning polarity annotations (i.e., the arrows $\uparrow ,\downarrow $, which are the basic primitives of our logic) to tokens in text. These surface-level annotations, in turn, are associated with a set of natural logic inference rules that provide instructions for how to generate entailments and contradictions by span replacements over these arrows (which relies on a library of span replacement rules). For example, in the sentence All schoolgirls are on the train, the token schoolgirls is associated with a polarity annotation $\downarrow $, which indicates that in this sentential context, the span schoolgirls can be replaced with a semantically more specific concept (e.g., happy schoolgirls) in order to generate an entailment. A generation and search procedure is then applied to see if the hypothesis text can be generated from the premise using these inference rules. A proof in this model is finally a particular sequence of edits (e.g., see Figure FIGREF13) that derive the hypothesis text from the premise text rules and yield an entailment or contradiction. In the following sections, we provide the details of our particular implementation of these different components in MonaLog. Our System: MonaLog ::: Polarization (Arrow Tagging) Given an input premise $P$, MonaLog first polarizes each of its tokens and constituents, calling the system described by BIBREF17, which performs polarization on a CCG parse tree. For example, a polarized $P$ could be every$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$. Note that since we ignore morphology in the system, tokens are represented by lemmas. Our System: MonaLog ::: Knowledge Base @!START@${K}$@!END@ and Sentence Base @!START@${S}$@!END@ MonaLog utilizes two auxiliary sets. First, a knowledge base ${K}$ that stores the world knowledge needed for inference, e.g., semanticist $\le $ linguist and swim $\le $ move, which captures the facts that $[\![\mbox{\em semanticist}]\!]$ denotes a subset of $[\![\mbox{\em linguist}]\!]$, and that $[\![\mbox{\em swim}]\!]$ denotes a subset of $[\![\mbox{\em move}]\!]$, respectively. Such world knowledge can be created manually for the problem at hand, or derived easily from existing resources such as WordNet BIBREF22. Note that we do not blindly add all relations from WordNet to our knowledge base, since this would hinge heavily on word sense disambiguation (we need to know whether the “bank” is a financial institution or a river bank to extract its relations correctly). In the current implementation, we avoid this by adding x $\le $ y or x $\perp $ y relations only if both x and y are words in the premise-hypothesis pair. 
Additionally, some relations that involve quantifiers and prepositions need to be hard-coded, since WordNet does not include them: every $=$ all $=$ each $\le $ most $\le $ many $\le $ a few $=$ several $\le $ some $=$ a; the $\le $ some $=$ a; on $\perp $ off; up $\perp $ down; etc. We also need to keep track of relations that can potentially be derived from the $P$-$H$ sentence pair. For instance, for all adjectives and nouns that appear in the sentence pair, it is easy to obtain: adj + n $\le $ n (black cat $\le $ cat). Similarly, we have n + PP/relative clause $\le $ n (friend in need $\le $ friend, dog that bites $\le $ dog), VP + advP/PP $\le $ VP (dance happily/in the morning $\le $ dance), and so on. We also have rules that extract pieces of knowledge from $P$ directly, e.g.: n$_1$ $\le $ n$_2$ from sentences of the pattern every n$_1$ is a n$_2$. One can also connect MonaLog to bigger knowledge graphs or ontologies such as DBpedia. A sentence base ${S}$, on the other hand, stores the generated entailments and contradictions. Our System: MonaLog ::: Generation Once we have a polarized CCG tree, and some $\le $ relations in ${K}$, generating entailments and contradictions is fairly straightforward. A concrete example is given in Figure FIGREF13. Note that the generated $\le $ instances are capable of producing mostly monotonicity inferences, but MonaLog can be extended to include other more complex inferences in natural logic, hence the name MonaLog. This extension is addressed in more detail in HuChenMoss. Our System: MonaLog ::: Generation ::: Entailments/inferences The key operation for generating entailments is replacement, or substitution. It can be summarized as follows: 1) For upward-entailing (UE) words/constituents, replace them with words/constituents that denote bigger sets. 2) For downward-entailing (DE) words/constituents, either replace them with those denoting smaller sets, or add modifiers (adjectives, adverbs and/or relative clauses) to create a smaller set. Thus for every$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$, MonaLog can produce the following three entailments by replacing each word with the appropriate word from ${K}$: most$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$, every$^{\leavevmode {\color {red}\uparrow }}$ semanticist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$ and every$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ move$^{\leavevmode {\color {red}\uparrow }}$. These are results of one replacement. Performing replacement for multiple rounds/depths can easily produce many more entailments. Our System: MonaLog ::: Generation ::: Contradictory sentences To generate sentences contradictory to the input sentence, we do the following: 1) if the sentence starts with “no (some)”, replace the first word with “some (no)”. 2) If the object is quantified by “a/some/the/every”, change the quantifier to “no”, and vice versa. 3) Negate the main verb or remove the negation. See examples in Figure FIGREF13. Our System: MonaLog ::: Generation ::: Neutral sentences MonaLog returns Neutral if it cannot find the hypothesis $H$ in ${S}.entailments$ or ${S}.contradictions$. Thus, there is no need to generate neutral sentences. 
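Putting the knowledge base and the replacement operation together, the following is a minimal sketch of one-step entailment generation over a polarized sentence. It is our own illustration, not the authors' code: the tiny knowledge base mirrors the every/most and linguist/semanticist examples from the text, and polarities are assumed to come from the arrow-tagging step.

    # "x <= y" facts: the denotation of x is a subset of the denotation of y.
    K_LEQ = {("semanticist", "linguist"), ("swim", "move"),
             ("every", "most"), ("most", "some")}

    def supersets(word):
        return [y for (x, y) in K_LEQ if x == word]

    def subsets(word):
        return [x for (x, y) in K_LEQ if y == word]

    def one_step_entailments(tagged):
        """tagged: list of (token, polarity) pairs, polarity 'up' or 'down'."""
        results = []
        for i, (tok, pol) in enumerate(tagged):
            # Upward-entailing positions move to bigger sets,
            # downward-entailing positions to smaller sets.
            for replacement in (supersets(tok) if pol == "up" else subsets(tok)):
                words = [t for t, _ in tagged]
                words[i] = replacement
                results.append(" ".join(words))
        return results

    # one_step_entailments([("every", "up"), ("linguist", "down"), ("swim", "up")])
    # -> ["most linguist swim", "every semanticist swim", "every linguist move"]

Contradictions are produced by the small set of rules above (swapping "no" and "some", toggling negation on the main verb, and so on), and the search described next simply checks whether the hypothesis appears among the sentences generated this way.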
Our System: MonaLog ::: Search Now that we have a set of inferences and contradictions stored in ${S}$, we can simply see if the hypothesis is in either one of the sets by comparing the strings. If yes, then return Entailment or Contradiction; if not, return Neutral, as schematically shown in Figure FIGREF13. However, the exact-string-match method is too brittle. Therefore, we apply a heuristic. If the only difference between sentences $S_1$ and $S_2$ is in the set {“a”, “be”, “ing”}, then $S_1$ and $S_2$ are considered semantically equivalent. The search is implemented using depth first search, with a default depth of 2, i.e. at most 2 replacements for each input sentence. At each node, MonaLog “expands” the sentence (i.e., an entailment of its parent) by obtaining its entailments and contradictions, and checks whether $H$ is in either set. If so, the search is terminated; otherwise the systems keeps searching until all the possible entailments and contradictions up to depth 2 have been visited. MonaLog and SICK We perform two experiments to test MonaLog. We first use MonaLog to solve the problems in a commonly used natural language inference dataset, SICK BIBREF1, comparing our results with previous systems. Second, we test the quality of the data generated by MonaLog. To do this, we generate more training data (sentence pairs) from the SICK training data using our system, and performe fine-tuning on BERT BIBREF20, a language model based on the transformer architecture BIBREF23, with the expanded dataset. In all experiments, we use the Base, Uncased model of BERT. MonaLog and SICK ::: The SICK Dataset The SICK BIBREF1 dataset includes around 10,000 English sentence pairs that are annotated to have either “Entailment”, “Neutral” or “Contradictory” relations. We choose SICK as our testing ground for several reasons. First, we want to test on a large-scale dataset, since we have shown that a similar model BIBREF24 reaches good results on parts of the smaller FraCaS dataset BIBREF25. Second, we want to make our results comparable to those of previous logic-based models such as the ones described in BIBREF26, BIBREF27, BIBREF11, BIBREF13, which were also tested on SICK. We use the data split provided in the dataset: 4,439 training problems, 4,906 test problems and 495 trial problems, see Table TABREF16 for examples. MonaLog and SICK ::: Hand-corrected SICK There are numerous issues with the original SICK dataset, as illustrated by BIBREF28, BIBREF29. They first manually checked 1,513 pairs tagged as “A entails B but B is neutral to A” (AeBBnA) in the original SICK, correcting 178 pairs that they considered to be wrong BIBREF28. Later, BIBREF29 extracted pairs from SICK whose premise and hypothesis differ in only one word, and created a simple rule-based system that used WordNet information to solve the problem. Their WordNet-based method was able to solve 1,651 problems, whose original labels in SICK were then manually checked and corrected against their system's output. They concluded that 336 problems are wrongly labeled in the original SICK. Combining the above two corrected subsets of SICK, minus the overlap, results in their corrected SICK dataset, which has 3,016 problems (3/10 of the full SICK), with 409 labels different from the original SICK (see breakdown in Table TABREF19). 16 of the corrections are in the trial set, 197 of them in the training set and 196 in the test set. This suggests that more than one out of ten problems in SICK are potentially problematic. 
For this reason, two authors of the current paper checked the 409 changes. We found that only 246 problems are labeled the same by our team and by BIBREF29. For cases where there is disagreement, we adjudicated the differences after a discussion. We are aware that the partially checked SICK (by two teams) is far from ideal. We therefore present results for two versions of SICK for experiment 1 (section SECREF4): the original SICK and the version corrected by our team. For the data augmentation experiment in section SECREF5, we only performed fine-tuning on the corrected SICK. As shown in a recent SICK annotation experiment by kalouli2019explaining, annotation is a complicated issue influenced by linguistic and non-linguistic factors. We leave checking the full SICK dataset to future work. Experiment 1: Using MonaLog Directly ::: Setup and Preprocessing The goal of experiment 1 is to test how accurately MonaLog solves problems in a large-scale dataset. We first used the system to solve the 495 problems in the trial set and then manually identified the cases in which the system failed. Then we determined which syntactic transformations are needed for MonaLog. After improving the results on the trial data by introducing a preprocessing step to handle limited syntactic variation (see below), we applied MonaLog on the test set. This means that the rule base of the system was optimized on the trial data, and we can test its generalization capability on the test data. The main obstacle for MonaLog is the syntactic variations in the dataset, illustrated in some examples in Table TABREF16. There exist multiple ways of dealing with these variations: One approach is to `normalize' unknown syntactic structures to a known structure. For example, we can transform passive sentences into active ones and convert existential sentences into the base form (see ex. 8399 and 219 in Table TABREF16). Another approach is to use some more abstract syntactic/semantic representation so that the linear word order can largely be ignored, e.g., represent a sentence by its dependency parse, or use Abstract Meaning Representation. Here, we explore the first option and leave the second approach to future work. We believe that dealing with a wide range of syntactic variations requires tools designed specifically for that purpose. The goal of MonaLog is to generate entailments and contradictions based on a polarized sentence instead. Below, we list the most important syntactic transformations we perform in preprocessing. Convert all passive sentences to active using pass2act. If the passive does not contain a by phrase, we add by a person. Convert existential clauses into their base form (see ex. 219 in Table TABREF16). Other transformations: someone/anyone/no one $\rightarrow ~$some/any/no person; there is no man doing sth. $\rightarrow ~$no man is doing sth.; etc. Experiment 1: Using MonaLog Directly ::: Results The results of our system on uncorrected and corrected SICK are presented in Table TABREF27, along with comparisons with other systems. Our accuracy on the uncorrected SICK (77.19%) is much higher than the majority baseline (56.36%) or the hypothesis-only baseline (56.87%) reported by BIBREF8, and only several points lower than current logic-based systems. Since our system is based on natural logic, there is no need for translation into logical forms, which makes the reasoning steps transparent and much easier to interpret. 
I.e., with entailments and contradictions, we can generate a natural language trace of the system, see Fig. FIGREF13. Our results on the corrected SICK are even higher (see lower part of Table TABREF27), demonstrating the effect of data quality on the final results. Note that with some simple syntactic transformations we can gain 1-2 points in accuracy. Table TABREF28 shows MonaLog's performance on the individual relations. The system is clearly very good at identifying entailments and contradictions, as demonstrated by the high precision values, especially on the corrected SICK set (98.50 precision for E and 95.02 precision for C). The lower recall values are due to MonaLog's current inability to handle syntactic variation. Based on these results, we tested a hybrid model of MonaLog and BERT (see Table TABREF27) where we exploit MonaLog's strength: Since MonaLog has a very high precision on Entailment and Contradiction, we can always trust MonaLog if it predicts E or C; when it returns N, we then fall back to BERT. This hybrid model improves the accuracy of BERT by 1% absolute to 85.95% on the corrected SICK. On the uncorrected SICK dataset, the hybrid system performs worse than BERT. Since MonaLog is optimized for the corrected SICK, it may mislabel many E and C judgments in the uncorrected dataset. The stand-alone BERT system performs better on the uncorrected data (86.74%) than the corrected set (85.00%). The corrected set may be too inconsistent since only a part has been checked. Overall, these hybird results show that it is possible to combine our high-precision system with deep learning architectures. However, more work is necessary to optimize this combined system. Experiment 1: Using MonaLog Directly ::: Error Analysis Upon closer inspection, some of MonaLog's errors consist of difficult cases, as shown in Table TABREF29. For example, in ex. 359, if our knowledge base ${K}$ contains the background fact $\mbox{\em chasing} \le \mbox{\em running}$, then MonaLog's judgment of C would be correct. In ex. 1402, if crying means screaming, then the label should be E; however, if crying here means shedding tears, then the label should probably be N. Here we also see potentially problematic labels (ex. 1760, 3403) in the original SICK dataset. Another point of interest is that 19 of MonaLog's mistakes are related to the antonym pair man vs. woman (e.g., ex. 5793 in Table TABREF29). This points to inconsistency of the SICK dataset: Whereas there are at least 19 cases tagged as Neutral (e.g., ex. 5793), there are at least 17 such pairs that are annotated as Contradictions in the test set (e.g., ex. 3521), P: A man is dancing, H: A woman is dancing (ex. 9214), P: A shirtless man is jumping over a log, H: A shirtless woman is jumping over a log. If man and woman refer to the same entity, then clearly that entity cannot be man and woman at the same time, which makes the sentence pair a contradiction. If, however, they do not refer to the same entity, then they should be Neutral. Experiment 2: Data Generation Using MonaLog Our second experiment focuses on using MonaLog to generate additional training data for machine learning models such as BERT. To our knowledge, this is the first time that a rule-based NLI system has been successfully used to generate training data for a deep learning application. Experiment 2: Data Generation Using MonaLog ::: Setup As described above, MonaLog generates entailments and contradictions when solving problems. 
These can be used as additional training data for a machine learning model. I.e., we pair the newly generated sentences with their input sentence, creating new pairs for training. For example, we take all the sentences in the nodes in Figure FIGREF13 as inferences and all the sentences in rectangles as contradictions, and then form sentence pairs with the input sentence. The additional data can be used directly, almost without human intervention. Thus for experiment 2, the goal is to examine the quality of these generated sentence pairs. For this, we re-train a BERT model on these pairs. If BERT trained on the manually annotated SICK training data is improved by adding data generated by MonaLog, then we can conclude that the generated data is of high quality, even comparable to human annotated data, which is what we found. More specifically, we compare the performance of BERT models trained on a) SICK training data alone, and b) SICK training data plus the entailing and contradictory pairs generated by MonaLog. All experiments are carried out using our corrected version of the SICK data set. However, note that MonaLog is designed to only generate entailments and contradictions. Thus, we only have access to newly generated examples for those two cases, we do not acquire any additional neutral cases. Consequently, adding these examples to the training data will introduce a skewing that does not reflect the class distribution in the test set. Since this will bias the machine learner against neutral cases, we use the following strategy to counteract that tendency: We relabel all cases where BERT is not confident enough for either E or C into N. We set this threshold to 0.95 but leave further optimization of the threshold to future work. Experiment 2: Data Generation Using MonaLog ::: Data Filtering and Quality Control MonaLog is prone to over-generation. For example, it may wrongly add the same adjective before a noun (phrase) twice to create a more specific noun, e.g., young young man $\le $ young man $\le $ man. Since it is possible that such examples influence the machine learning model negatively, we look into filtering such examples to improve the quality of the additional training data. We manually inspected 100 sentence pairs generated by MonaLog to check the quality and naturalness of the new sentences (see Table TABREF32 for examples). All of the generated sentences are correct in the sense that the relation between the premise and the hypothesis is correctly labeled as entailment or contradiction (see Table TABREF34). While we did not find any sentence pairs with wrong labels, some generated sentences are unnatural, as shown in Table TABREF32. Both unnatural examples contain two successive copies of the same PP. Note that our data generation hinges on correct polarities on the words and constituents. For instance, in the last example of Table TABREF32, the polarization system needs to know that few is downward entailing on both of its arguments, and without flips the arrow of its argument, in order to produce the correct polarities, on which the replacement of MonaLog depends. In order to filter unnatural sentences, such as the examples in Table TABREF32, we use a rule-based filter and remove sentences that contain bigrams of repeated words. We experiment with using one quarter or one half randomly selected sentences in addition to a setting where we use the complete set of generated sentences. 
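A compact sketch of the decision and quality-control rules used in these experiments follows. This is our own code: monalog_predict and bert_predict are assumed stand-ins, and is_unnatural is one reading of the repeated-word/bigram criterion described above rather than the authors' exact filter.

    from collections import Counter

    def hybrid_predict(premise, hypothesis, monalog_predict, bert_predict):
        # Trust MonaLog's high-precision Entailment/Contradiction calls and
        # fall back to BERT whenever MonaLog returns Neutral.
        label = monalog_predict(premise, hypothesis)
        if label in ("Entailment", "Contradiction"):
            return label
        return bert_predict(premise, hypothesis)

    def is_unnatural(sentence):
        # Drop generated sentences with back-to-back duplicate words
        # ("young young man") or a repeated two-word sequence (doubled PPs).
        toks = sentence.split()
        if any(a == b for a, b in zip(toks, toks[1:])):
            return True
        return any(c > 1 for c in Counter(zip(toks, toks[1:])).values())

    def relabel_uncertain(label, confidence, threshold=0.95):
        # Counteract the E/C skew of the augmented training data: keep an
        # Entailment/Contradiction prediction only if BERT is confident enough.
        if label in ("Entailment", "Contradiction") and confidence < threshold:
            return "Neutral"
        return label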
Experiment 2: Data Generation Using MonaLog ::: Results Table TABREF37 shows the amount of additional sentence pairs per category along with the results of using the automatically generated sentences as additional training data. It is obvious that adding the additional training data results in gains in accuracy even though the training data becomes increasingly skewed towards E and C. When we add all additional sentence pairs, accuracy increases by more than 1.5 percent points. This demonstrates both the robustness of BERT in the current experiment and the usefulness of the generated data. The more data we add, the better the system performs. We also see that raising the threshold to relabel uncertain cases as neutral gives a small boost, from 86.51% to 86.71%. This translates into 10 cases where the relabeling corrected the answer. Finally, we also investigated whether the hybrid system, i.e., MonaLog followed by the re-trained BERT, can also profit from the additional training data. Intuitively, we would expect smaller gains since MonaLog already handles a fair amount of the entailments and contradictions, i.e., those cases where BERT profits from more examples. However the experiments show that the hybrid system reaches an even higher accuracy of 87.16%, more than 2 percent points above the baseline, equivalent to roughly 100 more problems correctly solved. Setting the high threshold for BERT to return E or C further improves accuracy to 87.49%. This brings us into the range of the state-of-the-art results, even though a direct comparison is not possible because of the differences between the corrected and uncorrected dataset. Conclusions and Future Work We have presented a working natural-logic-based system, MonaLog, which attains high accuracy on the SICK dataset and can be used to generated natural logic proofs. Considering how simple and straightforward our method is, we believe it can serve as a strong baseline or basis for other (much) more complicated systems, either logic-based or ML/DL-based. In addition, we have shown that MonaLog can generate high-quality training data, which improves the accuracy of a deep learning model when trained on the expanded dataset. As a minor point, we manually checked the corrected SICK dataset by BIBREF28, BIBREF29. There are several directions for future work. The first direction concerns the question how to handle syntactic variation from natural language input. That is, the computational process(es) for inference will usually be specified in terms of strict syntactic conditions, and naturally occurring sentences will typically not conform to those conditions. Among the strategies which allow their systems to better cope with premises and hypotheses with various syntactic structures are sophisticated versions of alignment used by e.g. MacCartney,YanakaMMB18. We will need to extend MonaLog to be able to handle such variation. In the future, we plan to use dependency relations as representations of natural language input and train a classifier that can determine which relations are crucial for inference. Second, as mentioned earlier, we are in need of a fully (rather than partially) checked SICK dataset to examine the impact of data quality on the results since the partially checked dataset may be inherently inconsistent between the checked and non-checked parts. 
Finally, with regard to the machine learning experiments, we plan to investigate other methods of addressing the imbalance in the training set created by additional entailments and contradictions. We will look into options for artificially creating neutral examples, e.g. by finding reverse entailments, as illustrated by richardson2019probing. Acknowledgements We thank the anonymous reviewers for their helpful comments. Hai Hu is supported by China Scholarship Council.
No
4688534a07a3cbd8afa738eea02cc6981a4fd285
4688534a07a3cbd8afa738eea02cc6981a4fd285_0
Q: How do they combine MonaLog with BERT? Text: Introduction There has been rapid progress on natural language inference (NLI) in the last several years, due in large part to recent advances in neural modeling BIBREF0 and the introduction of several new large-scale inference datasets BIBREF1, BIBREF2, BIBREF3, BIBREF4. Given the high performance of current state-of-the-art models, there has also been interest in understanding the limitations of these models (given their uninterpretability) BIBREF5, BIBREF6, as well as finding systematic biases in benchmark datasets BIBREF7, BIBREF8. In parallel to these efforts, there have also been recent logic-based approaches to NLI BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, which take inspiration from linguistics. In contrast to early attempts at using logic BIBREF14, these approaches have proven to be more robust. However they tend to use many rules and their output can be hard to interpret. It is sometimes unclear whether the attendant complexity is justified, especially given that such models are currently far outpaced by data-driven models and are generally hard to hybridize with data-driven techniques. In this work, we introduce a new logical inference engine called MonaLog, which is based on natural logic and work on monotonicity stemming from vanBenthemEssays86. In contrast to the logical approaches cited above, our starting point is different in that we begin with the following two questions: 1) what is the simplest logical system that one can come up with to solve empirical NLI problems (i.e., the system with minimal amounts of primitives and background knowledge)?; and 2) what is the lower-bound performance of such a model? Like other approaches to natural logic BIBREF15, BIBREF16, our model works by reasoning over surface forms (as opposed to translating to symbolic representations) using a small inventory of monotonicity facts about quantifiers, lexical items and token-level polarity BIBREF17; proofs in the calculus are hence fully interpretable and expressible in ordinary language. Unlike existing work on natural logic, however, our model avoids the need for having expensive alignment and search sub-procedures BIBREF18, BIBREF19, and relies on a much smaller set of background knowledge and primitive relations than MacCartneyManning. To show the effectiveness of our approach, we show results on the SICK dataset BIBREF1, a common benchmark for logic-based NLI, and find MonaLog to be competitive with more complicated logic-based approaches (many of which require full semantic parsing and more complex logical machinery). We also introduce a supplementary version of SICK that corrects several common annotation mistakes (e.g., asymmetrical inference annotations) based on previous work by kalouli2017entail,kalouli2018. Positive results on both these datasets show the ability of lightweight monotonicity models to handle many of the inferences found in current NLI datasets, hence putting a more reliable lower-bound on what results the simplest logical approach is capable of achieving on this benchmark. Since our logic operates over surface forms, it is straightforward to hybridize our models. We investigate using MonaLog in combination with the language model BERT BIBREF20, including for compositional data augmentation, i.e, re-generating entailed versions of examples in our training sets. 
To our knowledge, our approach is the first attempt to use monotonicity for data augmentation, and we show that such augmentation can generate high-quality training data with which models like BERT can improve performance. Our System: MonaLog The goal of NLI is to determine, given a premise set $P$ and a hypothesis sentence $H$, whether $H$ follows from the meaning of $P$ BIBREF21. In this paper, we look at single-premise problems that involve making a standard 3-way classification decision (i.e., Entailment (H), Contradict (C) and Neutral (N)). Our general monotonicity reasoning system works according to the pipeline in Figure FIGREF1. Given a premise text, we first do Arrow Tagging by assigning polarity annotations (i.e., the arrows $\uparrow ,\downarrow $, which are the basic primitives of our logic) to tokens in text. These surface-level annotations, in turn, are associated with a set of natural logic inference rules that provide instructions for how to generate entailments and contradictions by span replacements over these arrows (which relies on a library of span replacement rules). For example, in the sentence All schoolgirls are on the train, the token schoolgirls is associated with a polarity annotation $\downarrow $, which indicates that in this sentential context, the span schoolgirls can be replaced with a semantically more specific concept (e.g., happy schoolgirls) in order to generate an entailment. A generation and search procedure is then applied to see if the hypothesis text can be generated from the premise using these inference rules. A proof in this model is finally a particular sequence of edits (e.g., see Figure FIGREF13) that derive the hypothesis text from the premise text rules and yield an entailment or contradiction. In the following sections, we provide the details of our particular implementation of these different components in MonaLog. Our System: MonaLog ::: Polarization (Arrow Tagging) Given an input premise $P$, MonaLog first polarizes each of its tokens and constituents, calling the system described by BIBREF17, which performs polarization on a CCG parse tree. For example, a polarized $P$ could be every$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$. Note that since we ignore morphology in the system, tokens are represented by lemmas. Our System: MonaLog ::: Knowledge Base @!START@${K}$@!END@ and Sentence Base @!START@${S}$@!END@ MonaLog utilizes two auxiliary sets. First, a knowledge base ${K}$ that stores the world knowledge needed for inference, e.g., semanticist $\le $ linguist and swim $\le $ move, which captures the facts that $[\![\mbox{\em semanticist}]\!]$ denotes a subset of $[\![\mbox{\em linguist}]\!]$, and that $[\![\mbox{\em swim}]\!]$ denotes a subset of $[\![\mbox{\em move}]\!]$, respectively. Such world knowledge can be created manually for the problem at hand, or derived easily from existing resources such as WordNet BIBREF22. Note that we do not blindly add all relations from WordNet to our knowledge base, since this would hinge heavily on word sense disambiguation (we need to know whether the “bank” is a financial institution or a river bank to extract its relations correctly). In the current implementation, we avoid this by adding x $\le $ y or x $\perp $ y relations only if both x and y are words in the premise-hypothesis pair. 
Additionally, some relations that involve quantifiers and prepositions need to be hard-coded, since WordNet does not include them: every $=$ all $=$ each $\le $ most $\le $ many $\le $ a few $=$ several $\le $ some $=$ a; the $\le $ some $=$ a; on $\perp $ off; up $\perp $ down; etc. We also need to keep track of relations that can potentially be derived from the $P$-$H$ sentence pair. For instance, for all adjectives and nouns that appear in the sentence pair, it is easy to obtain: adj + n $\le $ n (black cat $\le $ cat). Similarly, we have n + PP/relative clause $\le $ n (friend in need $\le $ friend, dog that bites $\le $ dog), VP + advP/PP $\le $ VP (dance happily/in the morning $\le $ dance), and so on. We also have rules that extract pieces of knowledge from $P$ directly, e.g.: n$_1$ $\le $ n$_2$ from sentences of the pattern every n$_1$ is a n$_2$. One can also connect MonaLog to bigger knowledge graphs or ontologies such as DBpedia. A sentence base ${S}$, on the other hand, stores the generated entailments and contradictions. Our System: MonaLog ::: Generation Once we have a polarized CCG tree, and some $\le $ relations in ${K}$, generating entailments and contradictions is fairly straightforward. A concrete example is given in Figure FIGREF13. Note that the generated $\le $ instances are capable of producing mostly monotonicity inferences, but MonaLog can be extended to include other more complex inferences in natural logic, hence the name MonaLog. This extension is addressed in more detail in HuChenMoss. Our System: MonaLog ::: Generation ::: Entailments/inferences The key operation for generating entailments is replacement, or substitution. It can be summarized as follows: 1) For upward-entailing (UE) words/constituents, replace them with words/constituents that denote bigger sets. 2) For downward-entailing (DE) words/constituents, either replace them with those denoting smaller sets, or add modifiers (adjectives, adverbs and/or relative clauses) to create a smaller set. Thus for every$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$, MonaLog can produce the following three entailments by replacing each word with the appropriate word from ${K}$: most$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$, every$^{\leavevmode {\color {red}\uparrow }}$ semanticist$^{\leavevmode {\color {red}\downarrow }}$ swim$^{\leavevmode {\color {red}\uparrow }}$ and every$^{\leavevmode {\color {red}\uparrow }}$ linguist$^{\leavevmode {\color {red}\downarrow }}$ move$^{\leavevmode {\color {red}\uparrow }}$. These are results of one replacement. Performing replacement for multiple rounds/depths can easily produce many more entailments. Our System: MonaLog ::: Generation ::: Contradictory sentences To generate sentences contradictory to the input sentence, we do the following: 1) if the sentence starts with “no (some)”, replace the first word with “some (no)”. 2) If the object is quantified by “a/some/the/every”, change the quantifier to “no”, and vice versa. 3) Negate the main verb or remove the negation. See examples in Figure FIGREF13. Our System: MonaLog ::: Generation ::: Neutral sentences MonaLog returns Neutral if it cannot find the hypothesis $H$ in ${S}.entailments$ or ${S}.contradictions$. Thus, there is no need to generate neutral sentences. 
Our System: MonaLog ::: Search Now that we have a set of inferences and contradictions stored in ${S}$, we can simply see if the hypothesis is in either one of the sets by comparing the strings. If yes, then return Entailment or Contradiction; if not, return Neutral, as schematically shown in Figure FIGREF13. However, the exact-string-match method is too brittle. Therefore, we apply a heuristic. If the only difference between sentences $S_1$ and $S_2$ is in the set {“a”, “be”, “ing”}, then $S_1$ and $S_2$ are considered semantically equivalent. The search is implemented using depth first search, with a default depth of 2, i.e., at most 2 replacements for each input sentence. At each node, MonaLog “expands” the sentence (i.e., an entailment of its parent) by obtaining its entailments and contradictions, and checks whether $H$ is in either set. If so, the search is terminated; otherwise the system keeps searching until all the possible entailments and contradictions up to depth 2 have been visited. MonaLog and SICK We perform two experiments to test MonaLog. We first use MonaLog to solve the problems in a commonly used natural language inference dataset, SICK BIBREF1, comparing our results with previous systems. Second, we test the quality of the data generated by MonaLog. To do this, we generate more training data (sentence pairs) from the SICK training data using our system, and perform fine-tuning on BERT BIBREF20, a language model based on the transformer architecture BIBREF23, with the expanded dataset. In all experiments, we use the Base, Uncased model of BERT. MonaLog and SICK ::: The SICK Dataset The SICK BIBREF1 dataset includes around 10,000 English sentence pairs that are annotated to have either “Entailment”, “Neutral” or “Contradictory” relations. We choose SICK as our testing ground for several reasons. First, we want to test on a large-scale dataset, since we have shown that a similar model BIBREF24 reaches good results on parts of the smaller FraCaS dataset BIBREF25. Second, we want to make our results comparable to those of previous logic-based models such as the ones described in BIBREF26, BIBREF27, BIBREF11, BIBREF13, which were also tested on SICK. We use the data split provided in the dataset: 4,439 training problems, 4,906 test problems and 495 trial problems; see Table TABREF16 for examples. MonaLog and SICK ::: Hand-corrected SICK There are numerous issues with the original SICK dataset, as illustrated by BIBREF28, BIBREF29. They first manually checked 1,513 pairs tagged as “A entails B but B is neutral to A” (AeBBnA) in the original SICK, correcting 178 pairs that they considered to be wrong BIBREF28. Later, BIBREF29 extracted pairs from SICK whose premise and hypothesis differ in only one word, and created a simple rule-based system that used WordNet information to solve the problem. Their WordNet-based method was able to solve 1,651 problems, whose original labels in SICK were then manually checked and corrected against their system's output. They concluded that 336 problems are wrongly labeled in the original SICK. Combining the above two corrected subsets of SICK, minus the overlap, results in their corrected SICK dataset, which has 3,016 problems (3/10 of the full SICK), with 409 labels different from the original SICK (see breakdown in Table TABREF19). 16 of the corrections are in the trial set, 197 of them in the training set and 196 in the test set. This suggests that more than one out of ten problems in SICK are potentially problematic.
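Returning briefly to the search procedure described at the start of this passage, the sketch below gives one possible rendering of it. The `expand` argument is assumed to wrap MonaLog's generation step (returning one-step entailments and contradictions of a sentence); the normalization heuristic and the default depth of 2 follow the description above.

```python
# Sketch of MonaLog's label search (illustrative; `expand` is an assumed
# interface to the generation step described earlier).

IGNORED = {"a", "be", "ing"}

def normalize(sentence):
    """Heuristic equivalence: drop the tokens {"a", "be", "ing"}."""
    return tuple(t for t in sentence.lower().split() if t not in IGNORED)

def search(premise, hypothesis, expand, max_depth=2):
    target = normalize(hypothesis)
    frontier = [(premise, 0)]
    visited = set()
    while frontier:
        sentence, depth = frontier.pop()            # depth-first expansion
        key = normalize(sentence)
        if key in visited:
            continue
        visited.add(key)
        entailed, contradicted = expand(sentence)   # results of depth+1 edits
        if any(normalize(s) == target for s in entailed):
            return "Entailment"
        if any(normalize(s) == target for s in contradicted):
            return "Contradiction"
        if depth + 1 < max_depth:                   # expand up to max_depth edits
            frontier.extend((s, depth + 1) for s in entailed)
    return "Neutral"
```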
For this reason, two authors of the current paper checked the 409 changes. We found that only 246 problems are labeled the same by our team and by BIBREF29. For cases where there is disagreement, we adjudicated the differences after a discussion. We are aware that the partially checked SICK (by two teams) is far from ideal. We therefore present results for two versions of SICK for experiment 1 (section SECREF4): the original SICK and the version corrected by our team. For the data augmentation experiment in section SECREF5, we only performed fine-tuning on the corrected SICK. As shown in a recent SICK annotation experiment by kalouli2019explaining, annotation is a complicated issue influenced by linguistic and non-linguistic factors. We leave checking the full SICK dataset to future work. Experiment 1: Using MonaLog Directly ::: Setup and Preprocessing The goal of experiment 1 is to test how accurately MonaLog solves problems in a large-scale dataset. We first used the system to solve the 495 problems in the trial set and then manually identified the cases in which the system failed. Then we determined which syntactic transformations are needed for MonaLog. After improving the results on the trial data by introducing a preprocessing step to handle limited syntactic variation (see below), we applied MonaLog on the test set. This means that the rule base of the system was optimized on the trial data, and we can test its generalization capability on the test data. The main obstacle for MonaLog is the syntactic variations in the dataset, illustrated in some examples in Table TABREF16. There exist multiple ways of dealing with these variations: One approach is to `normalize' unknown syntactic structures to a known structure. For example, we can transform passive sentences into active ones and convert existential sentences into the base form (see ex. 8399 and 219 in Table TABREF16). Another approach is to use some more abstract syntactic/semantic representation so that the linear word order can largely be ignored, e.g., represent a sentence by its dependency parse, or use Abstract Meaning Representation. Here, we explore the first option and leave the second approach to future work. We believe that dealing with a wide range of syntactic variations requires tools designed specifically for that purpose. The goal of MonaLog is to generate entailments and contradictions based on a polarized sentence instead. Below, we list the most important syntactic transformations we perform in preprocessing. Convert all passive sentences to active using pass2act. If the passive does not contain a by phrase, we add by a person. Convert existential clauses into their base form (see ex. 219 in Table TABREF16). Other transformations: someone/anyone/no one $\rightarrow ~$some/any/no person; there is no man doing sth. $\rightarrow ~$no man is doing sth.; etc. Experiment 1: Using MonaLog Directly ::: Results The results of our system on uncorrected and corrected SICK are presented in Table TABREF27, along with comparisons with other systems. Our accuracy on the uncorrected SICK (77.19%) is much higher than the majority baseline (56.36%) or the hypothesis-only baseline (56.87%) reported by BIBREF8, and only several points lower than current logic-based systems. Since our system is based on natural logic, there is no need for translation into logical forms, which makes the reasoning steps transparent and much easier to interpret. 
I.e., with entailments and contradictions, we can generate a natural language trace of the system; see Fig. FIGREF13. Our results on the corrected SICK are even higher (see lower part of Table TABREF27), demonstrating the effect of data quality on the final results. Note that with some simple syntactic transformations we can gain 1-2 points in accuracy. Table TABREF28 shows MonaLog's performance on the individual relations. The system is clearly very good at identifying entailments and contradictions, as demonstrated by the high precision values, especially on the corrected SICK set (98.50 precision for E and 95.02 precision for C). The lower recall values are due to MonaLog's current inability to handle syntactic variation. Based on these results, we tested a hybrid model of MonaLog and BERT (see Table TABREF27) where we exploit MonaLog's strength: since MonaLog has a very high precision on Entailment and Contradiction, we can always trust MonaLog if it predicts E or C; when it returns N, we then fall back to BERT. This hybrid model improves the accuracy of BERT by 1% absolute to 85.95% on the corrected SICK. On the uncorrected SICK dataset, the hybrid system performs worse than BERT. Since MonaLog is optimized for the corrected SICK, it may mislabel many E and C judgments in the uncorrected dataset. The stand-alone BERT system performs better on the uncorrected data (86.74%) than on the corrected set (85.00%). The corrected set may be too inconsistent since only a part has been checked. Overall, these hybrid results show that it is possible to combine our high-precision system with deep learning architectures. However, more work is necessary to optimize this combined system. Experiment 1: Using MonaLog Directly ::: Error Analysis Upon closer inspection, some of MonaLog's errors consist of difficult cases, as shown in Table TABREF29. For example, in ex. 359, if our knowledge base ${K}$ contains the background fact $\mbox{\em chasing} \le \mbox{\em running}$, then MonaLog's judgment of C would be correct. In ex. 1402, if crying means screaming, then the label should be E; however, if crying here means shedding tears, then the label should probably be N. Here we also see potentially problematic labels (ex. 1760, 3403) in the original SICK dataset. Another point of interest is that 19 of MonaLog's mistakes are related to the antonym pair man vs. woman (e.g., ex. 5793 in Table TABREF29). This points to an inconsistency in the SICK dataset: whereas there are at least 19 cases tagged as Neutral (e.g., ex. 5793), there are at least 17 such pairs that are annotated as Contradictions in the test set (e.g., ex. 3521); P: A man is dancing, H: A woman is dancing (ex. 9214); P: A shirtless man is jumping over a log, H: A shirtless woman is jumping over a log. If man and woman refer to the same entity, then clearly that entity cannot be man and woman at the same time, which makes the sentence pair a contradiction. If, however, they do not refer to the same entity, then they should be Neutral. Experiment 2: Data Generation Using MonaLog Our second experiment focuses on using MonaLog to generate additional training data for machine learning models such as BERT. To our knowledge, this is the first time that a rule-based NLI system has been successfully used to generate training data for a deep learning application. Experiment 2: Data Generation Using MonaLog ::: Setup As described above, MonaLog generates entailments and contradictions when solving problems.
These can be used as additional training data for a machine learning model. I.e., we pair the newly generated sentences with their input sentence, creating new pairs for training. For example, we take all the sentences in the nodes in Figure FIGREF13 as inferences and all the sentences in rectangles as contradictions, and then form sentence pairs with the input sentence. The additional data can be used directly, almost without human intervention. Thus for experiment 2, the goal is to examine the quality of these generated sentence pairs. For this, we re-train a BERT model on these pairs. If BERT trained on the manually annotated SICK training data is improved by adding data generated by MonaLog, then we can conclude that the generated data is of high quality, even comparable to human-annotated data, which is what we found. More specifically, we compare the performance of BERT models trained on a) SICK training data alone, and b) SICK training data plus the entailing and contradictory pairs generated by MonaLog. All experiments are carried out using our corrected version of the SICK data set. However, note that MonaLog is designed to only generate entailments and contradictions. Thus, we only have access to newly generated examples for those two cases; we do not acquire any additional neutral cases. Consequently, adding these examples to the training data will introduce a skewing that does not reflect the class distribution in the test set. Since this will bias the machine learner against neutral cases, we use the following strategy to counteract that tendency: We relabel all cases where BERT is not confident enough for either E or C into N. We set this threshold to 0.95 but leave further optimization of the threshold to future work. Experiment 2: Data Generation Using MonaLog ::: Data Filtering and Quality Control MonaLog is prone to over-generation. For example, it may wrongly add the same adjective before a noun (phrase) twice to create a more specific noun, e.g., young young man $\le $ young man $\le $ man. Since it is possible that such examples influence the machine learning model negatively, we look into filtering such examples to improve the quality of the additional training data. We manually inspected 100 sentence pairs generated by MonaLog to check the quality and naturalness of the new sentences (see Table TABREF32 for examples). All of the generated sentences are correct in the sense that the relation between the premise and the hypothesis is correctly labeled as entailment or contradiction (see Table TABREF34). While we did not find any sentence pairs with wrong labels, some generated sentences are unnatural, as shown in Table TABREF32. Both unnatural examples contain two successive copies of the same PP. Note that our data generation hinges on correct polarities on the words and constituents. For instance, in the last example of Table TABREF32, the polarization system needs to know that few is downward entailing on both of its arguments, and that without flips the arrow of its argument, in order to produce the correct polarities, on which the replacement of MonaLog depends. In order to filter out unnatural sentences, such as the examples in Table TABREF32, we use a rule-based filter and remove sentences that contain bigrams of repeated words. We experiment with using one quarter or one half of the generated sentences, selected randomly, in addition to a setting where we use the complete set of generated sentences.
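Three of the mechanisms described in this and the previous experiment are simple enough to sketch directly: the hybrid fallback from MonaLog to BERT, the repeated-bigram filter, and the confidence-based relabeling of uncertain E/C predictions as Neutral. The code below is illustrative only; the two prediction interfaces are assumptions, while the 0.95 threshold is the one reported in the text.

```python
# Sketches of the hybrid decision rule, the repeated-bigram filter and the
# neutral relabeling (illustrative; the two predictors are assumed interfaces).

def hybrid_predict(premise, hypothesis, monalog_predict, bert_predict):
    label = monalog_predict(premise, hypothesis)
    if label in ("E", "C"):        # trust the high-precision symbolic system
        return label
    return bert_predict(premise, hypothesis)[0]   # otherwise fall back to BERT

def has_repeated_bigram(sentence):
    """True if any word immediately repeats, e.g. 'young young man'."""
    tokens = sentence.lower().split()
    return any(a == b for a, b in zip(tokens, tokens[1:]))

def filter_generated_pairs(pairs):
    """Drop generated (premise, hypothesis, label) triples with repeated words."""
    return [(p, h, y) for (p, h, y) in pairs
            if not has_repeated_bigram(p) and not has_repeated_bigram(h)]

def relabel_uncertain(label, probability, threshold=0.95):
    """Relabel low-confidence E/C predictions as Neutral to counter the skew."""
    if label in ("E", "C") and probability < threshold:
        return "N"
    return label

print(has_repeated_bigram("a young young man is walking"))   # True
```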
Experiment 2: Data Generation Using MonaLog ::: Results Table TABREF37 shows the number of additional sentence pairs per category along with the results of using the automatically generated sentences as additional training data. It is obvious that adding the additional training data results in gains in accuracy even though the training data becomes increasingly skewed towards E and C. When we add all additional sentence pairs, accuracy increases by more than 1.5 percentage points. This demonstrates both the robustness of BERT in the current experiment and the usefulness of the generated data. The more data we add, the better the system performs. We also see that raising the threshold to relabel uncertain cases as neutral gives a small boost, from 86.51% to 86.71%. This translates into 10 cases where the relabeling corrected the answer. Finally, we also investigated whether the hybrid system, i.e., MonaLog followed by the re-trained BERT, can also profit from the additional training data. Intuitively, we would expect smaller gains since MonaLog already handles a fair amount of the entailments and contradictions, i.e., those cases where BERT profits from more examples. However, the experiments show that the hybrid system reaches an even higher accuracy of 87.16%, more than 2 percentage points above the baseline, equivalent to roughly 100 more problems correctly solved. Setting the high threshold for BERT to return E or C further improves accuracy to 87.49%. This brings us into the range of the state-of-the-art results, even though a direct comparison is not possible because of the differences between the corrected and uncorrected dataset. Conclusions and Future Work We have presented a working natural-logic-based system, MonaLog, which attains high accuracy on the SICK dataset and can be used to generate natural logic proofs. Considering how simple and straightforward our method is, we believe it can serve as a strong baseline or basis for other (much) more complicated systems, either logic-based or ML/DL-based. In addition, we have shown that MonaLog can generate high-quality training data, which improves the accuracy of a deep learning model when trained on the expanded dataset. As a minor point, we manually checked the corrected SICK dataset by BIBREF28, BIBREF29. There are several directions for future work. The first direction concerns the question of how to handle syntactic variation from natural language input. That is, the computational process(es) for inference will usually be specified in terms of strict syntactic conditions, and naturally occurring sentences will typically not conform to those conditions. Among the strategies which allow their systems to better cope with premises and hypotheses with various syntactic structures are sophisticated versions of alignment used by e.g. MacCartney,YanakaMMB18. We will need to extend MonaLog to be able to handle such variation. In the future, we plan to use dependency relations as representations of natural language input and train a classifier that can determine which relations are crucial for inference. Second, as mentioned earlier, we are in need of a fully (rather than partially) checked SICK dataset to examine the impact of data quality on the results since the partially checked dataset may be inherently inconsistent between the checked and non-checked parts.
Finally, with regard to the machine learning experiments, we plan to investigate other methods of addressing the imbalance in the training set created by additional entailments and contradictions. We will look into options for artificially creating neutral examples, e.g. by finding reverse entailments, as illustrated by richardson2019probing. Acknowledgements We thank the anonymous reviewers for their helpful comments. Hai Hu is supported by China Scholarship Council.
They use MonaLog for data augmentation to fine-tune BERT on this task
45893f31ef07f0cca5783bd39c4e60630d6b93b3
45893f31ef07f0cca5783bd39c4e60630d6b93b3_0
Q: How do they select monotonicity facts? Text: Introduction There has been rapid progress on natural language inference (NLI) in the last several years, due in large part to recent advances in neural modeling BIBREF0 and the introduction of several new large-scale inference datasets BIBREF1, BIBREF2, BIBREF3, BIBREF4. Given the high performance of current state-of-the-art models, there has also been interest in understanding the limitations of these models (given their uninterpretability) BIBREF5, BIBREF6, as well as finding systematic biases in benchmark datasets BIBREF7, BIBREF8. In parallel to these efforts, there have also been recent logic-based approaches to NLI BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, which take inspiration from linguistics. In contrast to early attempts at using logic BIBREF14, these approaches have proven to be more robust. However, they tend to use many rules and their output can be hard to interpret. It is sometimes unclear whether the attendant complexity is justified, especially given that such models are currently far outpaced by data-driven models and are generally hard to hybridize with data-driven techniques. In this work, we introduce a new logical inference engine called MonaLog, which is based on natural logic and work on monotonicity stemming from vanBenthemEssays86. In contrast to the logical approaches cited above, our starting point is different in that we begin with the following two questions: 1) what is the simplest logical system that one can come up with to solve empirical NLI problems (i.e., the system with minimal amounts of primitives and background knowledge)?; and 2) what is the lower-bound performance of such a model? Like other approaches to natural logic BIBREF15, BIBREF16, our model works by reasoning over surface forms (as opposed to translating to symbolic representations) using a small inventory of monotonicity facts about quantifiers, lexical items and token-level polarity BIBREF17; proofs in the calculus are hence fully interpretable and expressible in ordinary language. Unlike existing work on natural logic, however, our model avoids the need for having expensive alignment and search sub-procedures BIBREF18, BIBREF19, and relies on a much smaller set of background knowledge and primitive relations than MacCartneyManning. To show the effectiveness of our approach, we show results on the SICK dataset BIBREF1, a common benchmark for logic-based NLI, and find MonaLog to be competitive with more complicated logic-based approaches (many of which require full semantic parsing and more complex logical machinery). We also introduce a supplementary version of SICK that corrects several common annotation mistakes (e.g., asymmetrical inference annotations) based on previous work by kalouli2017entail,kalouli2018. Positive results on both these datasets show the ability of lightweight monotonicity models to handle many of the inferences found in current NLI datasets, hence putting a more reliable lower bound on what results the simplest logical approach is capable of achieving on this benchmark. Since our logic operates over surface forms, it is straightforward to hybridize our models. We investigate using MonaLog in combination with the language model BERT BIBREF20, including for compositional data augmentation, i.e., re-generating entailed versions of examples in our training sets.
They derive it from WordNet
182c7919329bc5678cf0c79687a66c0f7782577e
182c7919329bc5678cf0c79687a66c0f7782577e_0
Q: How does the model recognize entities and their relation to answers at inference time when answers are not accessible? Text: Introduction Question Answering is a task that requires capabilities beyond simple NLP since it involves both linguistic techniques and inference abilities. Both the document sources and the questions are expressed in natural language, which is ambiguous and complex to understand. To perform such a task, a model in fact needs to understand the underlying meaning of the text. Achieving this ability is quite challenging for a machine since it requires a reasoning phase (chaining facts, basic deductions, etc.) over knowledge extracted from the plain input data. In this article, we focus on two Question Answering tasks: Reasoning Question Answering (RQA) and Reading Comprehension (RC). These tasks are tested by submitting questions to be answered directly after reading a piece of text (e.g., a document or a paragraph). Recent progress in the field has been possible thanks to machine learning algorithms which automatically learn from large collections of data. Deep Learning BIBREF0 algorithms achieve the current State-of-The-Art in our tasks of interest. A particularly promising approach is based on Memory Augmented Neural Networks. These networks are also known as Memory Networks BIBREF1 or Neural Turing Machines BIBREF2. In the literature, the RQA and RC tasks are typically solved by different models. However, the two tasks share a similar scope and structure. We propose to tackle both with a model called Question Dependent Recurrent Entity Network, which improves over the model called Recurrent Entity Network BIBREF3. Our major contributions are: 1) exploiting knowledge of the question for storing relevant facts in memory, 2) adding a tighter regularisation scheme, and 3) changing the activation functions. We test and compare our model on two datasets, bAbI BIBREF4 and CNN BIBREF5, which are standard benchmarks for both tasks. The rest of the paper is organised as follows: section Related outlines the models used in QA tasks, while section Model describes the proposed QDREN model. Section Experiments and Results shows training details and the performance achieved by our model. Section Analysis reports a visualisation that aims to explain the obtained results. Finally, section Conclusions summarises the work done. Reasoning Question Answering A set of synthetic tasks, called bAbI BIBREF4, has been proposed for testing the ability of a machine in chaining facts, performing simple inductions or deductions. The dataset is available in two sizes, 1K and 10K training samples, and in two settings, i.e., with and without supporting facts. The supporting facts setting allows knowing which facts in the input are needed for answering the question (i.e., a stronger supervision). Obviously, the 1K sample setting with no supporting facts is quite hard to handle, and it is still an open research problem. Memory Network BIBREF1 was one of the first models to provide the ability to explicitly store facts in memory, achieving good results on the bAbI dataset. An evolution of this model is the End to End Memory Network BIBREF6, which allows for end-to-end training. This model represents the State-of-The-Art in the bAbI task with 1K training samples. Several other models have been tested on the bAbI tasks achieving competitive results, such as the Neural Turing Machine BIBREF2, Differentiable Neural Computer BIBREF7 and Dynamic Memory Network BIBREF8, BIBREF9.
Several other baselines have also been proposed BIBREF4, such as an $n$-gram BIBREF10 model, an LSTM reader and an SVM model. However, some of them still required strong supervision by means of the supporting facts. Reading Comprehension Reading Comprehension is defined as the ability to read some text, process it, and understand its meaning. A pressing issue for tackling this task was to find suitably large datasets with human-annotated samples. This shortcoming has been addressed by collecting documents which contain easily recognizable short summaries, e.g., news articles that contain a number of bullet points summarizing aspects of the information contained in the article. Each of these short summaries is turned into a fill-in question template by selecting an entity and replacing it with an anonymized placeholder. Three datasets follow this style of annotation: Children's Book Test BIBREF11, CNN & Daily Mail news articles BIBREF5, and Who did What BIBREF12. It is also worth mentioning SQuAD BIBREF13, a human-annotated dataset from the Stanford NLP group. Memory Networks, described in the previous sub-section, have been tested BIBREF11 on both the CNN and CBT datasets, achieving good results. The Attentive and Impatient Reader BIBREF5 was the first model proposed for the CNN and Daily Mail dataset, and it is therefore often used as a baseline. While this model achieved good initial results, shortly afterwards a small variation of this model, called the Stanford Attentive Reader BIBREF14, increased its accuracy by 10%. Another group of models is based on an Artificial Neural Network architecture called Pointer Network BIBREF15. Attentive Sum Reader BIBREF16 and Attention over Attention BIBREF17 use a similar idea for solving different reading comprehension tasks. EpiReader BIBREF18 and Dynamic Entity Representation BIBREF19 partially follow the Pointer Network framework, but they also achieve impressive results on the RC tasks. Also for this task, several baselines, both learning and non-learning, have been proposed. The most commonly used are: Frame-Semantics, Word distance, and LSTM Reader BIBREF5 and its variations (windowing, etc.). Proposed Model Our model is based on the Recurrent Entity Network (REN) BIBREF3 model. The latter is the only model able to pass all the 20 bAbI tasks using the 10K sample size and without any supporting facts. However, this model fails many tasks with the 1K setting, and it has not been tried on more challenging RC datasets, like the CNN news articles. Thus, we propose a variant of the original model called Question Dependent Recurrent Entity Network ($QDREN$). This model tries to overcome the limitations of the previous approach. The model consists of three main components: Input Encoder, Dynamic Memory, and Output Module. The training data consists of tuples $\lbrace (x_i,y_i)\rbrace _{i=1}^n$, with $n$ equal to the sample size, where $x_i$ is composed of a tuple $(T, q)$, where $T$ is a set of sentences $\lbrace s_{1},\dots ,s_{t}\rbrace $, each of which has $m$ words, and $q$ is a single sentence with $k$ words representing the question. $y_i$ is a single word that represents the answer. The Input Encoder transforms the set of words of a sentence $s_{t}$ and the question $q$ into a single vector representation by using a multiplicative mask. Let us define $E\in \mathbb {R}^{|V|\times d}$ as the embedding matrix that is used to convert words to vectors, i.e., $E(w)=e \in \mathbb {R}^d$.
Hence, $\lbrace e_{1},\dots ,e_{m} \rbrace $ are the word embeddings of the words in the sentence $s_{t}$, and $\lbrace e_{1},\dots ,e_{k}\rbrace $ are the embeddings of the question's words. The multiplicative masks are defined as $f^{(s)} = \lbrace f_1^{(s)},\dots ,f_m^{(s)}\rbrace $ for the sentences and $f^{(q)} = \lbrace f_1^{(q)},\dots ,f_k^{(q)}\rbrace $ for the question, where each $f_i \in \mathbb {R}^d$. The encoded vectors of a sentence and of the question are defined as: $$s_{t} = \sum _{r=1}^m e_{r} \odot f_r^{(s)} \qquad \qquad q= \sum _{r=1}^k e_{r} \odot f_r^{(q)} \nonumber $$ (Eq. 4) Dynamic Memory stores information about the entities present in $T$. This module is very similar to a Gated Recurrent Unit (GRU) BIBREF20 with a hidden state divided into blocks. Moreover, each block ideally represents an entity (i.e., a person, location, etc.), and it stores relevant facts about it. Each block $i$ is made of a hidden state $h_i\in \mathbb {R}^d$ and a key $k_i\in \mathbb {R}^d$, where $d$ is the embedding size. The Dynamic Memory module is made of a set of blocks, which can be represented by a set of hidden states $\lbrace h_1,\dots ,h_z \rbrace $ and their corresponding set of keys $\lbrace k_1,\dots ,k_z \rbrace $. The equations used to update a generic block $i$ are the following: $$\begin{aligned} g_i^{(t)} &= \sigma (s_t^T h_i^{(t-1)} + s_t^T k_i^{(t-1)} + s_t^T q ) && \text{(Gating Function)}\\ \hat{h}_i^{(t)} &= \phi (U h_i^{(t-1)} + V k_i^{(t-1)} + W s_t ) && \text{(Candidate Memory)}\\ h_i^{(t)} &= h_i^{(t-1)} + g_i^{(t)} \odot \hat{h}_i^{(t)} && \text{(New Memory)}\\ h_i^{(t)} &= h_i^{(t)} / \Vert h_i^{(t)} \Vert && \text{(Reset Memory)} \end{aligned}$$ where $\sigma $ represents the sigmoid function and $\phi $ is a generic activation function which can be chosen among a set (e.g., sigmoid, ReLU, etc.). $g_i^{(t)}$ is the gating function which determines how much the $i$-th memory should be updated, and $\hat{h}_i^{(t)}$ is the new candidate value of the memory, to be combined with the existing one $h_i^{(t-1)}$. The matrices $U \in \mathbb {R}^{d \times d}$, $V \in \mathbb {R}^{d \times d}$, $W \in \mathbb {R}^{d \times d}$ are shared among different blocks and are trained together with the key vectors. The addition of the $s_t^T q$ term in the gating function is our main contribution. We add this term under the assumption that the question can be useful to focus the attention of the model while analyzing the input sentences.
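For concreteness, the following NumPy sketch shows one update step of a single question-dependent memory block as defined above. It is an illustrative re-implementation under stated assumptions (random toy parameters, sigmoid used as the candidate activation $\phi$), not the authors' TensorFlow code.

```python
import numpy as np

# Sketch of one QDREN memory block update (illustrative re-implementation).

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def qdren_block_update(h, k, s_t, q, U, V, W, phi=sigmoid):
    """Update block state h (with key k) for encoded sentence s_t and question q.

    The extra s_t . q term in the gate is the question-dependent addition.
    """
    g = sigmoid(s_t @ h + s_t @ k + s_t @ q)        # gating function (scalar)
    h_candidate = phi(U @ h + V @ k + W @ s_t)      # candidate memory
    h_new = h + g * h_candidate                     # gated new memory
    return h_new / (np.linalg.norm(h_new) + 1e-8)   # reset (normalize) memory

# Toy usage with random parameters (for illustration only).
d = 8
rng = np.random.default_rng(0)
U, V, W = (rng.normal(size=(d, d)) for _ in range(3))
h, k, s_t, q = (rng.normal(size=d) for _ in range(4))
print(qdren_block_update(h, k, s_t, q, U, V, W).shape)   # (8,)
```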
The Output Module creates a probability distribution over the memories' hidden states using the question $q$. The hidden states are then summed up, using the probabilities as weights, to obtain a single state representing all the input. Finally, the network output is obtained by combining the final state with the question. Let us define $R \in \mathbb {R}^{|V|\times d }$, $H \in \mathbb {R}^{d \times d}$, $\hat{y} \in \mathbb {R}^{|V|}$, let $z$ be the number of blocks, and let $\phi $ be an activation function chosen among different options. Then, we have: $$\begin{aligned} p_i &= \text{Softmax}(q^T h_i)\\ u &= \sum _{j=1}^{z} p_j h_j \\ \hat{y} &= R\, \phi (q + H u) \end{aligned}$$ The model is trained using a cross-entropy loss $H(\hat{y}, y)$ plus an L2 regularisation term, where $y$ is the one-hot encoding of the correct answer. The sigmoid function and the L2 term are two novelties added to the original REN. Overall, the trainable parameters are: $$\Theta = [E,f^{(s)},f^{(q)}, U, V, W, k_1,\dots ,k_z, R, H ] \nonumber $$ (Eq. 6) where $f^{(s)}$ refers to the sentence multiplicative masks, $f^{(q)}$ to the question multiplicative masks, and each $k_i$ to the key of a generic block $i$. The number of parameters is dominated by $E$ and $R$, since they depend on the vocabulary size. However, $R$ is normally much smaller than $E$, as in the CNN dataset, in which the prediction is made over a restricted number of entities. All the parameters are learned using the Backpropagation Through Time (BPTT) algorithm. A schematic representation of the model is shown in Figure 1. Experiments and Results Our model has been implemented using TensorFlow v1.1 BIBREF21 and the experiments have been run on a Linux server with 4 Nvidia P100 GPUs. As mentioned earlier, we tested our model on two datasets: the bAbI 1k sample and the CNN news articles. The first dataset has 20 separate tasks, each of which has 900/100/1000 training, validation, and test samples. The second one has 380298/3924/3198 training, validation and test samples. We kept the original splits to compare our results with the existing ones. Analysis To better understand how our proposed model (i.e., QDREN) works and how it improves the accuracy of the existing REN, we studied the behavior of the gating function. Indeed, the output of this function decides how much and what we store in each memory cell, and it is also where our model differs from the original one. Moreover, we trained the QDREN and the original REN using bAbI task number 1 (using 20 memory blocks). We picked this task since both models pass it, and it is one of the simplest, which also allows us to better understand and visualize the results. Indeed, we tried to visualize other tasks, but the results were difficult to interpret since there were too many input sentences and it was hard to see how the gates opened. The visualization result is shown in Figure 2, where we plotted the activation matrix for both models, using a sample of the validation set. In these plots, we can see how the two models learn which information to store. In Figure 2 (a), we notice that the QDREN opens the gates only when the entity named Mary appears in a sentence. This is because this entity is also present in the question (i.e., "where is Mary?"). Even though the model is focusing on the right entity, its gates all open at the same time. In fact, we conjecture that a sparser activation would be better, since it could model which other entities are relevant for the final answer. Instead, the gating activation of the original REN is sparse, which is good if we would like to learn all the relevant facts in the text. Indeed, the model effectively assigns a block to each entity and it opens the gates only once such an entity appears in the input sentences. For example, in Figure 2 (b) block number 13 supposedly represents the entity Sandra, since for each sentence in which this name appears, the gate function of the block fully opens (value almost 1). Further, we can notice the same phenomenon with the entities John (cell 10), Daniel (cell 4), and Mary (cell 14). Other entities (e.g., kitchen, bathroom, etc.) are more difficult to recognize in the plot since their activation is weaker and the model probably distributes this information among blocks. Conclusion In this paper we presented the Question Dependent Recurrent Entity Network, used for reasoning and reading comprehension tasks. This model uses a particular RNN cell in order to store only information relevant to the given question.
In this way, in combination with the original Recurrent Entity Network (keys and memory), we improved the state of the art on the bAbI 1k task and achieved promising results on the reading comprehension task on the CNN & Daily Mail dataset. However, we believe there is still room for improving the behavior of the proposed cell. In particular, the cell does not have enough expressive power to activate memory blocks selectively (note in Figure 2 (a) that the gates open for all the memories). This does not seem to be a serious problem, since we still outperform other models, but it could be the key to finally passing all the bAbI tasks. Acknowledgments This work has been supported in part by grant no. GA_2016_009 "Grandi Attrezzature 2016" of the University of Pisa.
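To make the question-dependent gating discussed in the analysis above concrete, the following is a minimal sketch of such a gate for one memory block. It assumes the gate is a sigmoid over content terms between the current sentence encoding, the question encoding, the block key, and the block memory; the exact combination used in QDREN may differ, and all names and dimensions here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def question_dependent_gate(s_t, q, key, memory):
    """Sketch of a question-dependent gate for one memory block.

    s_t    -- encoding of the current input sentence
    q      -- encoding of the question
    key    -- key vector of the block (e.g., tied to an entity embedding)
    memory -- current hidden state (memory) of the block

    Returns a scalar in (0, 1): how much of the candidate update is written.
    """
    sentence_terms = np.dot(s_t, key) + np.dot(s_t, memory)   # original REN-style terms
    question_terms = np.dot(q, key) + np.dot(q, memory)       # question-dependent terms
    return sigmoid(sentence_terms + question_terms)

# Toy usage: a block whose key is correlated with the question tends to open wider.
rng = np.random.default_rng(0)
d = 8
q = rng.normal(size=d)
s_t = rng.normal(size=d)
relevant_key = q + 0.1 * rng.normal(size=d)   # correlated with the question
irrelevant_key = rng.normal(size=d)
mem = np.zeros(d)
print(question_dependent_gate(s_t, q, relevant_key, mem))
print(question_dependent_gate(s_t, q, irrelevant_key, mem))
```

In this toy setup, the block whose key aligns with the question receives a larger gate value, which is the qualitative behavior visible in Figure 2 (a).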
gating function, Dynamic Memory
0747cecb3c72594c5d15ba18490566be1ffdbfad
0747cecb3c72594c5d15ba18490566be1ffdbfad_0
Q: What other solutions do they compare to? Text: Introduction Teaching machines to learn reading comprehension is one of the core tasks in NLP field. Recently machine comprehension task accumulates much concern among NLP researchers. We have witnessed significant progress since the release of large-scale datasets like SQuAD BIBREF0 , MS-MARCO BIBREF1 , TriviaQA BIBREF2 , CNN/Daily Mail BIBREF3 and Children's Book Test BIBREF4 . The essential problem of machine comprehension is to predict the correct answer referring to a given passage with relevant question. If a machine can obtain a good score from predicting the right answer, we can say the machine is capable of understanding the given context. Many previous approaches BIBREF5 BIBREF6 BIBREF7 adopt attention mechanisms along with deep neural network tactics and pointer network to establish interactions between the question and the passage. The superiority of these frameworks are to enable questions focus on more relevant targeted areas within passages. Although these works have achieved promising performance for MC task, most of them still suffer from the inefficiency in three perspectives: (1) Comprehensive understanding on the lexical and linguistic level. (2) Complex interactions among questions and passages in a scientific reading procedure. (3) Precise answer refining over the passage. For all the time through, we consider a philosophy question: What will people do when they are having a reading comprehension test? Recall how our teacher taught us may shed some light. As a student, we recite words with relevant properties such as part-of-speech tag, the synonyms, entity type and so on. In order to promote answer's accuracy, we iteratively and interactively read the question and the passage to locate the answer's boundary. Sometimes we will check the answer to ensure the refining accuracy. Here we draw a flow path to depict what on earth the scientific reading skills are in the Figure 1. As we see, basic word understanding, iterative reading interaction and attentive checking are crucial in order to guarantee the answer accuracy. In this paper, we propose the novel framework named Smarnet with the hope that it can become as smart as humans. We design the structure in the view point of imitating how humans take the reading comprehension test. Specifically, we first introduce the Smarnet framework that exploits fine-grained word understanding with various attribution discriminations, like humans recite words with corresponding properties. We then develop the interactive attention with memory network to mimic human reading procedure. We also add a checking layer on the answer refining in order to ensure the accuracy. The main contributions of this paper are as follows: Task Description The goal of open-domain MC task is to infer the proper answer from the given text. For notation, given a passage INLINEFORM0 and a question INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are the length of the passage and the question. Each token is denoted as INLINEFORM4 , where INLINEFORM5 is the word embedding extracts from pre-trained word embedding lookups, INLINEFORM6 is the char-level matrix representing one-hot encoding of characters. The model should read and comprehend the interactions between INLINEFORM7 and INLINEFORM8 , and predict an answer INLINEFORM9 based on a continuous sub-span of INLINEFORM10 . 
Smarnet Structure The general framework of MC task can be coarsely summarized as a three-layer hierarchical process: Input embedding layer, Interaction modeling layer, answer refining layer. We then introduce our model from these three perspectives. Input Embedding Layer Familiar with lexical and linguistic properties is crucial in text understanding. We try to enrich the lexical factors to enhance the word representation. Inspired by Yang et al. BIBREF8 BIBREF9 BIBREF10 and BIBREF11 , we adopt a more fine-grained dynamic gating mechanism to model the lexical properties related to the question and the passage. Here we indicate our embedding method in Figure 2. We design two gates adopted as valves to dynamically control the flowing amount of word-level and character-level representations. For the passage word INLINEFORM0 , we use the concatenation of part-of-speech tag, named entity tag, term frequency tag, exact match tag and the surprisal tag. The exact match denote as INLINEFORM1 in three binary forms: original, lower-case and lemma forms, which indicates whether token INLINEFORM2 in the passage can be exactly matched to a question word in INLINEFORM3 . The surprisal tag measures the amount of information conveyed by a particular word from INLINEFORM4 . The less occurrence of a word, the more information it carries. For the question word INLINEFORM0 , we take the question type in place of the exact match information and remain the other features. The type of a question provides significant clue for the answer selection process. For example, the answer for a "when" type question prefers tokens about time or dates while a "why" type question requires longer inference. Here we select the top 11 common question types as illustrated in the diagram. If the model recognize a question's type, then all the words in this question will be assigned with the same QType feature. The gates of the passage and the question are computed as follows: INLINEFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are the parameters and INLINEFORM4 denotes an element-wise sigmoid function. Using the fine-grained gating mechanism conditioned on the lexical features, we can accurately control the information flows between word-level and char-level. Intuitively, the formulation is as follows: INLINEFORM0 where INLINEFORM0 is the element-wise multiplication operator. when the gate has high value, more information flows from the word-level representation; otherwise, char-level will take the dominating place. It is practical in real scenarios. For example, for unfamiliar noun entities, the gates tend to bias towards char-level representation in order to care richer morphological structure. Besides, we not only utilize the lexical properties as the gating feature, we also concatenate them as a supplement of lexical information. Therefore, the final representation of words are computed as follows: INLINEFORM1 where INLINEFORM0 is the concatenation function. Interaction Modeling Layer Recall how people deal with reading comprehension test. When we get a reading test paper, we read the question first to have a preliminary focal point. Then we skim the passage to refine the answer. Sometimes we may not directly ensure the answer's boundary, we go back and confirm the question. After confirming, we scan the passage and refine the right answer we thought. We also check the answer for insurance. 
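As a concrete illustration of the fine-grained gating described in the Input Embedding Layer above, the sketch below mixes word-level and character-level vectors with a sigmoid gate computed from the lexical feature vector, and then concatenates the features themselves as a supplement. The exact parameterization (including whether the gate is scalar or element-wise) corresponds to the equations elided in the extraction, so treat the shapes and names here as assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_word_representation(word_emb, char_emb, lex_features, W, b):
    """Fine-grained gating sketch: a gate computed from lexical features decides
    how much of the word-level vs. character-level embedding is kept, and the
    lexical features are concatenated as additional input."""
    gate = sigmoid(W @ lex_features + b)                  # element-wise gate, shape (d,)
    mixed = gate * word_emb + (1.0 - gate) * char_emb     # high gate -> favor word level
    return np.concatenate([mixed, lex_features])

# Toy usage with random vectors (dimensions are illustrative).
rng = np.random.default_rng(0)
d, f = 100, 20                        # embedding dim, lexical feature dim
word_emb = rng.normal(size=d)
char_emb = rng.normal(size=d)
lex = rng.normal(size=f)              # POS / NER / TF / exact-match / surprisal features
W, b = rng.normal(size=(d, f)), np.zeros(d)
print(gated_word_representation(word_emb, char_emb, lex, W, b).shape)   # (120,)
```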
Inspired by such a scientific reading procedure, we design the Smarnet with three components: contextual encoding, interactive attention with memory network, answer refining with checking insurance. As is shown in figure 3. We use Gated Recurrent Unit BIBREF12 with bi-directions to model the contextual representations. Here, It is rather remarkable that we do not immediately put the Bi-GRU on the passage words. Instead, we first encode the question and then apply a gate to control the question influence on each passage word, as is shown in the structure (a) and (b). Theoretically, when human do the reading comprehension, they often first read the question to have a rough view and then skim the passage with the impression of the question. No one can simultaneously read both the question and the passage without any overlap. Vice versa, after skimming the passage to get the preliminary comprehension, the whole passage representation is also applied to attend the question again with another gating mechanism, as is shown in the structure (c). This can be explained that people often reread the question to confirm whether they have thoroughly understand it. The outputs of the three steps (a) (b) (c) are calculated as follows: INLINEFORM0 where INLINEFORM0 is the lexical representation from the input layer. INLINEFORM1 is the hidden state of GRU for the INLINEFORM2 th question word. INLINEFORM3 is the original question semantic embedding obtained from the concatenation of the last hidden states of two GRUs. INLINEFORM4 where INLINEFORM0 is a question gate controlling the question influence on the passage. The larger the INLINEFORM1 is, the more impact the question takes on the passage word. We reduce the INLINEFORM2 dimension through multi-layer perceptron INLINEFORM3 since INLINEFORM4 and INLINEFORM5 are not in the same dimension. We then put the bi-GRU on top of each passage word to get the semantic representation of the whole passage INLINEFORM6 . INLINEFORM7 where INLINEFORM0 is a passage gate similar to INLINEFORM1 . INLINEFORM2 is the multi-layer perceptron to reduce dimension. INLINEFORM3 represents the confirmed question with the knowledge of the context. The essential point in answer refining lies on comprehensive understanding of the passage content under the guidance of the question. We build the interactive attention module to capture the mutual information between the question and the passage. From human's perspective, people repeatedly and interactively read the question and the passage, narrow the answer's boundary and put more attention on some passage parts where are more relevant to the question. We construct a shared similarity matrix INLINEFORM0 to attend the relevance between the passage INLINEFORM1 and the question INLINEFORM2 . Each element INLINEFORM3 is computed by the similarity of the INLINEFORM4 th passage word and the INLINEFORM5 th question word. We signify relevant question words into an attended question vector to collaborate with each context word. Let INLINEFORM0 represent the normalized attention distribution on the question words by INLINEFORM1 th passage word. The attention weight is calculated by INLINEFORM2 . Hence the attend question vector for all passage words INLINEFORM3 is obtained by INLINEFORM4 , where INLINEFORM5 . We further utilize INLINEFORM0 to form the question-aware passage representation. 
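A compact sketch of the interactive attention step just described: a similarity matrix between passage and question words is normalized over the question dimension and used to build one attended question vector per passage word. The similarity function here is a plain dot product for illustration only; the paper's similarity may be parameterized differently.

```python
import numpy as np

def attend_question(passage, question):
    """passage: (n, d) passage word vectors; question: (m, d) question word vectors.
    Returns (n, d) attended question vectors, one per passage word."""
    S = passage @ question.T                                   # (n, m) similarity matrix
    S = S - S.max(axis=1, keepdims=True)                       # stabilize the softmax
    alpha = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)   # normalize over question words
    return alpha @ question                                    # (n, d) attended vectors

# Toy usage.
rng = np.random.default_rng(0)
P = rng.normal(size=(6, 8))   # 6 passage words
Q = rng.normal(size=(3, 8))   # 3 question words
print(attend_question(P, Q).shape)   # (6, 8)
```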
In order to comprehensively model the mutual information between the question and passage, we adopt a heuristic combining strategy to yield the extension as follows: INLINEFORM1 where INLINEFORM0 denotes the INLINEFORM1 th question-aware passage word under the INLINEFORM2 th hop, the INLINEFORM3 function is a concatenation function that fuses four input vectors. INLINEFORM4 denotes the hidden state of former INLINEFORM5 th passage word obtained from BiGRU. INLINEFORM6 denotes element-wise multiplication and INLINEFORM7 denotes the element-wise plus. Notice that after the concatenation of INLINEFORM8 , the dimension can reaches to INLINEFORM9 . We feed the concatenation vector into a BiGRU to reduce the hidden state dimension into INLINEFORM10 . Using BiGRU to reduce dimension provides an efficient way to facilitate the passage semantic information and enable later semantic parsing. Naive one-hop comprehension may fail to comprehensively understand the mutual question-passage information. Therefore, we propose a multi-hop memory network which allows to reread the question and answer. In our model, we totally apply two-hops memory network, as is depicted in the structure (c to e) and (f to h). In our experiment we found the two-hops can reach the best performance. In detail, the memory vector stores the question-aware passage representations, the old memory's output is updated through a repeated interaction attention. Answer Selection with Checking Mechanism The goal of open-domain MC task is to refine the sub-phrase from the passage as the final answer. Usually the answer span INLINEFORM0 is derived by predicting the start INLINEFORM1 and the end INLINEFORM2 indices of the phrase in the paragraph. In our model, we use two answer refining strategies from different levels of linguistic understanding: one is from original interaction output module and the other is from self-matching alignment. The two extracted answers are then applied into a checking component to final ensure the decision. For the original interaction output in structure (h), we directly aggregate the passage vectors through BiGRU. We compute the INLINEFORM0 and INLINEFORM1 probability distribution under the instruction of BIBREF13 and pointer network BIBREF14 by INLINEFORM2 where INLINEFORM0 is the output of the original interaction. INLINEFORM1 and INLINEFORM2 are trainable weight vectors. For the self-alignment in structure (j), we align the two hops outputs INLINEFORM0 with INLINEFORM1 in structure (e) and (h). The purpose of self-alignment aims to analysis the new insights on the passage as the comprehension gradually become clear after iterative hops. For each hop, the reader dynamically collects evidence from former passage representation and encodes the evidence to the new iteration. From human's perspective, each time we reread the passage, we get some new ideas or more solid comprehension on the basis of the former understanding. The self-alignment is computed by INLINEFORM2 where INLINEFORM0 is the first hop whole passage vector in the structure (e). We apply a gate mechanism with INLINEFORM1 to control the evidence flow to the next hop INLINEFORM2 . The output of self-alignment is computed by INLINEFORM3 where INLINEFORM0 and INLINEFORM1 are the predicted start and end indices after the self-alignment. For insurance, we obtain two groups of predicted answer span INLINEFORM0 and INLINEFORM1 . We then apply a checking strategy to compare the twice answer. 
This process is quite similar to human's behavior, people often reread the passage and may draw some different answers. Thus they need to compare the alternative answers and finally consider a best one. Here we employ the weighted sum of twice answer prediction indices to make the final decision: INLINEFORM2 where INLINEFORM0 is a weighted scalar controlling the proportion of the two predicted answers. We set the INLINEFORM1 as in most cases the latter predicted answer is more accurate comparing to the former one. The final INLINEFORM2 and INLINEFORM3 is then judged by the max value through the argmax operator. Training and Optimization We choose the training loss as the sum of the negative log probabilities of the true start and end position by the predicted distributions to train our model: DISPLAYFORM0 where INLINEFORM0 denotes all the model coefficients including the neural network parameters and the input gating function parameters, N is the number dataset examples, INLINEFORM1 and INLINEFORM2 are the predicted distributions of the output, INLINEFORM3 and INLINEFORM4 are the true start and end indices of the INLINEFORM5 th example. The objective function in our learning process is given by: DISPLAYFORM0 where INLINEFORM0 is the trade-off parameter between the training loss and regularization. To optimize the objective, we employ the stochastic gradient descent (SGD) with the diagonal variant of AdaDelta BIBREF15 . Datasets In this section we evaluate our model on the task of machine comprehension using the recently released large-scale datasets SQuAD BIBREF0 and TriviaQA BIBREF2 . SQuAD published by Stanford has obtained a huge attention over the past two years. It is composed of over 100K questions manually annotated by crowd workers on 536 Wikipedia articles. TriviaQA is a newly released open-domain MC dataset which consists of over 650K question-answer-evidence triples. It is derived by combining 95K Trivia enthusiast authored question-answer pairs with on average six supporting evidence documents per question. The length of contexts in TriviaQA is much longer than SQuAD and models trained on TriviaQA require more cross sentence reasoning to find answers. There are some similar settings between these two datasets. Each answer to the question is a segment of text from the corresponding reading context. Two metrics are used to evaluate models: Exact Match (EM) measures the percentage of predictions that match the ground truth answer exactly. F1 score measures the average overlap between the prediction and ground truth answer. Both datasets are randomly partitioned into training set (80%), dev set (10%) and test set (10%). Implemental Details We preprocess each passage and question using the library of nltk BIBREF16 and exploit the popular pre-trained word embedding GloVe with 100-dimensional vectors BIBREF17 for both questions and passages. The size of char-level embedding is also set as 100-dimensional and is obtained by CNN filters under the instruction of BIBREF18 . The Gated Recurrent Unit BIBREF12 which is variant from LSTM BIBREF19 is employed throughout our model. We adopt the AdaDelta BIBREF15 optimizer for training with an initial learning rate of 0.0005. The batch size is set to be 48 for both the SQuAD and TriviaQA datasets. We also apply dropout BIBREF20 between layers with a dropout rate of 0.2. For the multi-hop reasoning, we set the number of hops as 2 which is imitating human reading procedure on skimming and scanning. 
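Returning to the answer-checking step described at the beginning of this passage, it can be written as a weighted combination of the two predicted span distributions followed by an argmax. The sketch below assumes both predictions are probability distributions over token positions and that a single scalar weights the second, refined prediction (the value 1.5 matches the 2:3 proportion reported in the parameter-tuning section); the exact combination formula was elided in the extraction.

```python
import numpy as np

def check_and_select(p_start_1, p_end_1, p_start_2, p_end_2, weight_second=1.5):
    """Combine the two answer predictions (original interaction output and
    self-alignment output) by a weighted sum and pick the argmax positions.
    A weight > 1 trusts the second, refined prediction more."""
    p_start = p_start_1 + weight_second * p_start_2
    p_end = p_end_1 + weight_second * p_end_2
    return int(np.argmax(p_start)), int(np.argmax(p_end))

# Toy usage with two (slightly disagreeing) distributions over 10 positions.
rng = np.random.default_rng(0)
def random_dist(n=10):
    x = rng.random(n)
    return x / x.sum()

span = check_and_select(random_dist(), random_dist(), random_dist(), random_dist())
print(span)   # (start_index, end_index)
```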
During training, we set the moving averages of all weights as the exponential decay rate of 0.999 BIBREF21 . The whole training process takes approximately 14 hours on a single 1080Ti GPU. Furthermore, as the SQuAD and TriviaQA are competitive MC benchmark, we train an ensemble model consisting of 16 training runs with the same architecture but identical hyper-parameters. The answer with the highest sum of the confidence score is chosen for each query. Overall Results We evaluate the performance of our proposed method based on two evaluation criteria EM and F1 for the MC tasks. We compare our model with other strong competitive methods on the SQuAD leaderboard and TriviaQA leaderboard. Table 1 and Table 2 respectively show the performance of single and ensemble models on the SQuAD leaderboard. The SQuAD leaderboard is very competitive among top NLP researchers from all over the world. We can see the top record has been frequently broken in order to reach the human's level. Our model was submitted by July 14, 2017, thus we compare our model on the single and ensemble performance against other competitor at that time. From the tables 1 and 2 we can see our single model achieves an EM score of INLINEFORM0 and a F1 score of INLINEFORM1 and the ensemble model improves to EM INLINEFORM2 and F1 INLINEFORM3 , which are both only after the r-net method BIBREF7 at the time of submission. These results sufficiently prove the significant superiority of our proposed model. We also compare our models on the recently proposed dataset TriviaQA. Table 3 shows the performance comparison on the test set of TriviaQA. We can see our Smarnet model outperforms the other baselines on both wikipedia domain and web domain. Ablation Results We respectively evaluate the individual contribution of the proposed module in our model. We conduct thorough ablation experiments on the SQuAD dev set, which are recorded on the table 4 and table 5. Table 4 shows the different effect of the lexical features. We see the full features integration obtain the best performance, which demonstrates the necessity of combining all the features into consideration. Among all the feature ablations, the Part-Of-Speech, Exact Match, Qtype features drop much more than the other features, which shows the importance of these three features. As the POS tag provides the critical lexical information, Exact Match and Qtype help to guide the attention in the interaction procedure. As for the final ablation of POS and NER, we can see the performance decays over 3% point, which clearly proves the usefulness of the comprehensive lexical information. Table 5 shows the ablation results on the different levels of components. We first replace our input gate mechanism into simplified feature concatenation strategy, the performance drops nearly 2.3% on the EM score, which proves the effectiveness of our proposed dynamic input gating mechanism. We then compare two methods which directly encode the passage words or use the question influence. The result proves that our modification of employing question influence on the passage encoding can boost the result up to 1.3% on the EM score. In our model, we apply two-hops memory network to further comprehend the passage. In the ablation test, we remove the iterative hops of memory network and only remain one interaction round. The result drops 2.6% point on the EM score, which indicate the significance of using memory network mechanism. 
Finally, we compare the last module of our proposed self-alignment checking with original pointer network. The final result shows the superiority of our proposed method. Parameters Tuning We conduct two parameters tuning experiments in order to get the optimal performance. Figure 4 shows the results on different hops of memory network. We see the number of hops set to 2 can get the best performance comparing to other number of hops. In addition, as the number of hops enlarges, the model is easily to get overfitting on the training set, hence the performance is decrease rather than increase. In figure 5, we set different weight of INLINEFORM0 into five groups INLINEFORM1 . The final results show that the proportion of the first answer prediction and the last one reaches to 2:3 can get the most confident answer judgement. The value of INLINEFORM2 which is greater than 1, indicating that the latter answer refining takes more insurance on the prediction decision. Related Work Machine Comprehension Dataset. Benchmark datasets play a vital role in the research advance. Previous human-labeled datasets on MC task are too small to train data-intensive models BIBREF22 BIBREF23 BIBREF24 . Recently, Large-scale datasets become available. CNN/Daily Mail BIBREF3 and Children's Book Test BIBREF4 generated in cloze style offer the availability to train more expressive neural models. The SQuAD BIBREF0 , TriviaQA BIBREF2 and MS-MARCO BIBREF1 datasets provide large and high-quality datasets which extract answers from text spans instead of single entities in cloze style. The open-domain style of MC tasks are more challenging and require different levels of reasoning from multiple sentences. In this paper, we evaluate our Smarnet framework on SQuAD and TriviaQA datasets. Machine Comprehension Models Previous works in MC task adopt deep neural modeling strategies with attention mechanisms, both on cloze style and open domain tasks. Along with cloze style datasets, Chen et al. BIBREF25 prove that computing the attention weights with a bilinear term instead of simple dot-product significantly improves the accuracy. Kadlec et al. BIBREF26 sum attention over candidate answer words in the document. Dhingra et al. BIBREF27 iteratively interact between the query and document by a multiplicative gating function. Cui et al. BIBREF28 compute a similarity matrix with two-way attention between the query and passage mutually. Sordoni et al. BIBREF29 exploit an iterative alternating neural attention to model the connections between the question and the passage. Open-domain machine comprehension tasks are more challenging and have attracted plenty of teams to pursue for higher performance on the leaderboard. Wang et al. BIBREF13 present match-LSTM and use pointer network to generate answers from the passage. Chen et al. BIBREF11 tackle the problem by using wikipedia as the unique knowledge source. Shen BIBREF30 adopt memory networks with reinforcement learning so as to dynamically control the number of hops. Seo et al. BIBREF5 use bi-directional attention flow mechanism and a multi-stage hierarchical process to represent the context. Xiong et al. BIBREF31 propose dynamic coattention networks to iteratively infer the answer. Yang et al. BIBREF8 present a fine-grained gating mechanism to dynamically combine word-level and character-level representations. Wang et al. BIBREF7 introduce self-matching attention to refine the gated representation by aligning the passage against itself. 
Reasoning by Memory Network Multi-hop reasoning combines with Memory networks have shown powerful competence on MC task BIBREF30 BIBREF27 BIBREF29 BIBREF31 BIBREF32 BIBREF6 BIBREF33 . Theoretically, multi-hop memory networks can repeat computing the attention biases between the query and the context through multiple layers. The memory networks typically maintain memory states which incorporate the information of current reasoning with the previous storage in the memory. Hu et al. BIBREF32 utilize a multi-hop answer pointer which allows the network to continue refining the predicted answer span. Gong et al. BIBREF6 adapt the BIDAF BIBREF5 with multi-hop attention mechanisms and achieve substantial performance. Pan et al. BIBREF34 introduce multi-layer embedding with memory network for full orientation matching on MC task. In our paper, we also adopt the memory network to mimic human behaviors on increasing their understanding by reread the context and the query multi times. We also apply a multi-hop checking mechanism to better refine the true answer. Conclusions In this paper, we tackle the problem of machine comprehension from the viewpoint of imitating human's ways in having reading comprehension examinations. We propose the Smarnet framework with the hope that it can become as smart as human for the reading comprehension problem. We first introduce a novel gating method with detailed word attributions to fully exploit prior knowledge of word semantic understanding. We then adopt a scientific procedure to guide machines to read and comprehend by using interactive attention and matching mechanisms between questions and passages. Furthermore, we employ the self-alignment with checking strategy to ensure the answer is refined after careful consideration. We evaluate the performance of our method on two large-scale datasets SQuAD and TriviaQA. The extensive experiments demonstrate the superiority of our Smarnet framework.
strong competitive methods on the SQuAD leaderboard and TriviaQA leaderboard
0e9c08b635c1ebfd36472550d619095541bb5af1
0e9c08b635c1ebfd36472550d619095541bb5af1_0
Q: How does the gatint mechanism combine word and character information? Text: Introduction Teaching machines to learn reading comprehension is one of the core tasks in NLP field. Recently machine comprehension task accumulates much concern among NLP researchers. We have witnessed significant progress since the release of large-scale datasets like SQuAD BIBREF0 , MS-MARCO BIBREF1 , TriviaQA BIBREF2 , CNN/Daily Mail BIBREF3 and Children's Book Test BIBREF4 . The essential problem of machine comprehension is to predict the correct answer referring to a given passage with relevant question. If a machine can obtain a good score from predicting the right answer, we can say the machine is capable of understanding the given context. Many previous approaches BIBREF5 BIBREF6 BIBREF7 adopt attention mechanisms along with deep neural network tactics and pointer network to establish interactions between the question and the passage. The superiority of these frameworks are to enable questions focus on more relevant targeted areas within passages. Although these works have achieved promising performance for MC task, most of them still suffer from the inefficiency in three perspectives: (1) Comprehensive understanding on the lexical and linguistic level. (2) Complex interactions among questions and passages in a scientific reading procedure. (3) Precise answer refining over the passage. For all the time through, we consider a philosophy question: What will people do when they are having a reading comprehension test? Recall how our teacher taught us may shed some light. As a student, we recite words with relevant properties such as part-of-speech tag, the synonyms, entity type and so on. In order to promote answer's accuracy, we iteratively and interactively read the question and the passage to locate the answer's boundary. Sometimes we will check the answer to ensure the refining accuracy. Here we draw a flow path to depict what on earth the scientific reading skills are in the Figure 1. As we see, basic word understanding, iterative reading interaction and attentive checking are crucial in order to guarantee the answer accuracy. In this paper, we propose the novel framework named Smarnet with the hope that it can become as smart as humans. We design the structure in the view point of imitating how humans take the reading comprehension test. Specifically, we first introduce the Smarnet framework that exploits fine-grained word understanding with various attribution discriminations, like humans recite words with corresponding properties. We then develop the interactive attention with memory network to mimic human reading procedure. We also add a checking layer on the answer refining in order to ensure the accuracy. The main contributions of this paper are as follows: Task Description The goal of open-domain MC task is to infer the proper answer from the given text. For notation, given a passage INLINEFORM0 and a question INLINEFORM1 , where INLINEFORM2 and INLINEFORM3 are the length of the passage and the question. Each token is denoted as INLINEFORM4 , where INLINEFORM5 is the word embedding extracts from pre-trained word embedding lookups, INLINEFORM6 is the char-level matrix representing one-hot encoding of characters. The model should read and comprehend the interactions between INLINEFORM7 and INLINEFORM8 , and predict an answer INLINEFORM9 based on a continuous sub-span of INLINEFORM10 . 
Smarnet Structure The general framework of MC task can be coarsely summarized as a three-layer hierarchical process: Input embedding layer, Interaction modeling layer, answer refining layer. We then introduce our model from these three perspectives. Input Embedding Layer Familiar with lexical and linguistic properties is crucial in text understanding. We try to enrich the lexical factors to enhance the word representation. Inspired by Yang et al. BIBREF8 BIBREF9 BIBREF10 and BIBREF11 , we adopt a more fine-grained dynamic gating mechanism to model the lexical properties related to the question and the passage. Here we indicate our embedding method in Figure 2. We design two gates adopted as valves to dynamically control the flowing amount of word-level and character-level representations. For the passage word INLINEFORM0 , we use the concatenation of part-of-speech tag, named entity tag, term frequency tag, exact match tag and the surprisal tag. The exact match denote as INLINEFORM1 in three binary forms: original, lower-case and lemma forms, which indicates whether token INLINEFORM2 in the passage can be exactly matched to a question word in INLINEFORM3 . The surprisal tag measures the amount of information conveyed by a particular word from INLINEFORM4 . The less occurrence of a word, the more information it carries. For the question word INLINEFORM0 , we take the question type in place of the exact match information and remain the other features. The type of a question provides significant clue for the answer selection process. For example, the answer for a "when" type question prefers tokens about time or dates while a "why" type question requires longer inference. Here we select the top 11 common question types as illustrated in the diagram. If the model recognize a question's type, then all the words in this question will be assigned with the same QType feature. The gates of the passage and the question are computed as follows: INLINEFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are the parameters and INLINEFORM4 denotes an element-wise sigmoid function. Using the fine-grained gating mechanism conditioned on the lexical features, we can accurately control the information flows between word-level and char-level. Intuitively, the formulation is as follows: INLINEFORM0 where INLINEFORM0 is the element-wise multiplication operator. when the gate has high value, more information flows from the word-level representation; otherwise, char-level will take the dominating place. It is practical in real scenarios. For example, for unfamiliar noun entities, the gates tend to bias towards char-level representation in order to care richer morphological structure. Besides, we not only utilize the lexical properties as the gating feature, we also concatenate them as a supplement of lexical information. Therefore, the final representation of words are computed as follows: INLINEFORM1 where INLINEFORM0 is the concatenation function. Interaction Modeling Layer Recall how people deal with reading comprehension test. When we get a reading test paper, we read the question first to have a preliminary focal point. Then we skim the passage to refine the answer. Sometimes we may not directly ensure the answer's boundary, we go back and confirm the question. After confirming, we scan the passage and refine the right answer we thought. We also check the answer for insurance. 
Inspired by such a scientific reading procedure, we design the Smarnet with three components: contextual encoding, interactive attention with memory network, answer refining with checking insurance. As is shown in figure 3. We use Gated Recurrent Unit BIBREF12 with bi-directions to model the contextual representations. Here, It is rather remarkable that we do not immediately put the Bi-GRU on the passage words. Instead, we first encode the question and then apply a gate to control the question influence on each passage word, as is shown in the structure (a) and (b). Theoretically, when human do the reading comprehension, they often first read the question to have a rough view and then skim the passage with the impression of the question. No one can simultaneously read both the question and the passage without any overlap. Vice versa, after skimming the passage to get the preliminary comprehension, the whole passage representation is also applied to attend the question again with another gating mechanism, as is shown in the structure (c). This can be explained that people often reread the question to confirm whether they have thoroughly understand it. The outputs of the three steps (a) (b) (c) are calculated as follows: INLINEFORM0 where INLINEFORM0 is the lexical representation from the input layer. INLINEFORM1 is the hidden state of GRU for the INLINEFORM2 th question word. INLINEFORM3 is the original question semantic embedding obtained from the concatenation of the last hidden states of two GRUs. INLINEFORM4 where INLINEFORM0 is a question gate controlling the question influence on the passage. The larger the INLINEFORM1 is, the more impact the question takes on the passage word. We reduce the INLINEFORM2 dimension through multi-layer perceptron INLINEFORM3 since INLINEFORM4 and INLINEFORM5 are not in the same dimension. We then put the bi-GRU on top of each passage word to get the semantic representation of the whole passage INLINEFORM6 . INLINEFORM7 where INLINEFORM0 is a passage gate similar to INLINEFORM1 . INLINEFORM2 is the multi-layer perceptron to reduce dimension. INLINEFORM3 represents the confirmed question with the knowledge of the context. The essential point in answer refining lies on comprehensive understanding of the passage content under the guidance of the question. We build the interactive attention module to capture the mutual information between the question and the passage. From human's perspective, people repeatedly and interactively read the question and the passage, narrow the answer's boundary and put more attention on some passage parts where are more relevant to the question. We construct a shared similarity matrix INLINEFORM0 to attend the relevance between the passage INLINEFORM1 and the question INLINEFORM2 . Each element INLINEFORM3 is computed by the similarity of the INLINEFORM4 th passage word and the INLINEFORM5 th question word. We signify relevant question words into an attended question vector to collaborate with each context word. Let INLINEFORM0 represent the normalized attention distribution on the question words by INLINEFORM1 th passage word. The attention weight is calculated by INLINEFORM2 . Hence the attend question vector for all passage words INLINEFORM3 is obtained by INLINEFORM4 , where INLINEFORM5 . We further utilize INLINEFORM0 to form the question-aware passage representation. 
In order to comprehensively model the mutual information between the question and passage, we adopt a heuristic combining strategy to yield the extension as follows: INLINEFORM1 where INLINEFORM0 denotes the INLINEFORM1 th question-aware passage word under the INLINEFORM2 th hop, the INLINEFORM3 function is a concatenation function that fuses four input vectors. INLINEFORM4 denotes the hidden state of former INLINEFORM5 th passage word obtained from BiGRU. INLINEFORM6 denotes element-wise multiplication and INLINEFORM7 denotes the element-wise plus. Notice that after the concatenation of INLINEFORM8 , the dimension can reaches to INLINEFORM9 . We feed the concatenation vector into a BiGRU to reduce the hidden state dimension into INLINEFORM10 . Using BiGRU to reduce dimension provides an efficient way to facilitate the passage semantic information and enable later semantic parsing. Naive one-hop comprehension may fail to comprehensively understand the mutual question-passage information. Therefore, we propose a multi-hop memory network which allows to reread the question and answer. In our model, we totally apply two-hops memory network, as is depicted in the structure (c to e) and (f to h). In our experiment we found the two-hops can reach the best performance. In detail, the memory vector stores the question-aware passage representations, the old memory's output is updated through a repeated interaction attention. Answer Selection with Checking Mechanism The goal of open-domain MC task is to refine the sub-phrase from the passage as the final answer. Usually the answer span INLINEFORM0 is derived by predicting the start INLINEFORM1 and the end INLINEFORM2 indices of the phrase in the paragraph. In our model, we use two answer refining strategies from different levels of linguistic understanding: one is from original interaction output module and the other is from self-matching alignment. The two extracted answers are then applied into a checking component to final ensure the decision. For the original interaction output in structure (h), we directly aggregate the passage vectors through BiGRU. We compute the INLINEFORM0 and INLINEFORM1 probability distribution under the instruction of BIBREF13 and pointer network BIBREF14 by INLINEFORM2 where INLINEFORM0 is the output of the original interaction. INLINEFORM1 and INLINEFORM2 are trainable weight vectors. For the self-alignment in structure (j), we align the two hops outputs INLINEFORM0 with INLINEFORM1 in structure (e) and (h). The purpose of self-alignment aims to analysis the new insights on the passage as the comprehension gradually become clear after iterative hops. For each hop, the reader dynamically collects evidence from former passage representation and encodes the evidence to the new iteration. From human's perspective, each time we reread the passage, we get some new ideas or more solid comprehension on the basis of the former understanding. The self-alignment is computed by INLINEFORM2 where INLINEFORM0 is the first hop whole passage vector in the structure (e). We apply a gate mechanism with INLINEFORM1 to control the evidence flow to the next hop INLINEFORM2 . The output of self-alignment is computed by INLINEFORM3 where INLINEFORM0 and INLINEFORM1 are the predicted start and end indices after the self-alignment. For insurance, we obtain two groups of predicted answer span INLINEFORM0 and INLINEFORM1 . We then apply a checking strategy to compare the twice answer. 
This process is quite similar to human's behavior, people often reread the passage and may draw some different answers. Thus they need to compare the alternative answers and finally consider a best one. Here we employ the weighted sum of twice answer prediction indices to make the final decision: INLINEFORM2 where INLINEFORM0 is a weighted scalar controlling the proportion of the two predicted answers. We set the INLINEFORM1 as in most cases the latter predicted answer is more accurate comparing to the former one. The final INLINEFORM2 and INLINEFORM3 is then judged by the max value through the argmax operator. Training and Optimization We choose the training loss as the sum of the negative log probabilities of the true start and end position by the predicted distributions to train our model: DISPLAYFORM0 where INLINEFORM0 denotes all the model coefficients including the neural network parameters and the input gating function parameters, N is the number dataset examples, INLINEFORM1 and INLINEFORM2 are the predicted distributions of the output, INLINEFORM3 and INLINEFORM4 are the true start and end indices of the INLINEFORM5 th example. The objective function in our learning process is given by: DISPLAYFORM0 where INLINEFORM0 is the trade-off parameter between the training loss and regularization. To optimize the objective, we employ the stochastic gradient descent (SGD) with the diagonal variant of AdaDelta BIBREF15 . Datasets In this section we evaluate our model on the task of machine comprehension using the recently released large-scale datasets SQuAD BIBREF0 and TriviaQA BIBREF2 . SQuAD published by Stanford has obtained a huge attention over the past two years. It is composed of over 100K questions manually annotated by crowd workers on 536 Wikipedia articles. TriviaQA is a newly released open-domain MC dataset which consists of over 650K question-answer-evidence triples. It is derived by combining 95K Trivia enthusiast authored question-answer pairs with on average six supporting evidence documents per question. The length of contexts in TriviaQA is much longer than SQuAD and models trained on TriviaQA require more cross sentence reasoning to find answers. There are some similar settings between these two datasets. Each answer to the question is a segment of text from the corresponding reading context. Two metrics are used to evaluate models: Exact Match (EM) measures the percentage of predictions that match the ground truth answer exactly. F1 score measures the average overlap between the prediction and ground truth answer. Both datasets are randomly partitioned into training set (80%), dev set (10%) and test set (10%). Implemental Details We preprocess each passage and question using the library of nltk BIBREF16 and exploit the popular pre-trained word embedding GloVe with 100-dimensional vectors BIBREF17 for both questions and passages. The size of char-level embedding is also set as 100-dimensional and is obtained by CNN filters under the instruction of BIBREF18 . The Gated Recurrent Unit BIBREF12 which is variant from LSTM BIBREF19 is employed throughout our model. We adopt the AdaDelta BIBREF15 optimizer for training with an initial learning rate of 0.0005. The batch size is set to be 48 for both the SQuAD and TriviaQA datasets. We also apply dropout BIBREF20 between layers with a dropout rate of 0.2. For the multi-hop reasoning, we set the number of hops as 2 which is imitating human reading procedure on skimming and scanning. 
During training, we set the moving averages of all weights as the exponential decay rate of 0.999 BIBREF21 . The whole training process takes approximately 14 hours on a single 1080Ti GPU. Furthermore, as the SQuAD and TriviaQA are competitive MC benchmark, we train an ensemble model consisting of 16 training runs with the same architecture but identical hyper-parameters. The answer with the highest sum of the confidence score is chosen for each query. Overall Results We evaluate the performance of our proposed method based on two evaluation criteria EM and F1 for the MC tasks. We compare our model with other strong competitive methods on the SQuAD leaderboard and TriviaQA leaderboard. Table 1 and Table 2 respectively show the performance of single and ensemble models on the SQuAD leaderboard. The SQuAD leaderboard is very competitive among top NLP researchers from all over the world. We can see the top record has been frequently broken in order to reach the human's level. Our model was submitted by July 14, 2017, thus we compare our model on the single and ensemble performance against other competitor at that time. From the tables 1 and 2 we can see our single model achieves an EM score of INLINEFORM0 and a F1 score of INLINEFORM1 and the ensemble model improves to EM INLINEFORM2 and F1 INLINEFORM3 , which are both only after the r-net method BIBREF7 at the time of submission. These results sufficiently prove the significant superiority of our proposed model. We also compare our models on the recently proposed dataset TriviaQA. Table 3 shows the performance comparison on the test set of TriviaQA. We can see our Smarnet model outperforms the other baselines on both wikipedia domain and web domain. Ablation Results We respectively evaluate the individual contribution of the proposed module in our model. We conduct thorough ablation experiments on the SQuAD dev set, which are recorded on the table 4 and table 5. Table 4 shows the different effect of the lexical features. We see the full features integration obtain the best performance, which demonstrates the necessity of combining all the features into consideration. Among all the feature ablations, the Part-Of-Speech, Exact Match, Qtype features drop much more than the other features, which shows the importance of these three features. As the POS tag provides the critical lexical information, Exact Match and Qtype help to guide the attention in the interaction procedure. As for the final ablation of POS and NER, we can see the performance decays over 3% point, which clearly proves the usefulness of the comprehensive lexical information. Table 5 shows the ablation results on the different levels of components. We first replace our input gate mechanism into simplified feature concatenation strategy, the performance drops nearly 2.3% on the EM score, which proves the effectiveness of our proposed dynamic input gating mechanism. We then compare two methods which directly encode the passage words or use the question influence. The result proves that our modification of employing question influence on the passage encoding can boost the result up to 1.3% on the EM score. In our model, we apply two-hops memory network to further comprehend the passage. In the ablation test, we remove the iterative hops of memory network and only remain one interaction round. The result drops 2.6% point on the EM score, which indicate the significance of using memory network mechanism. 
Finally, we compare the last module of our proposed self-alignment checking with original pointer network. The final result shows the superiority of our proposed method. Parameters Tuning We conduct two parameters tuning experiments in order to get the optimal performance. Figure 4 shows the results on different hops of memory network. We see the number of hops set to 2 can get the best performance comparing to other number of hops. In addition, as the number of hops enlarges, the model is easily to get overfitting on the training set, hence the performance is decrease rather than increase. In figure 5, we set different weight of INLINEFORM0 into five groups INLINEFORM1 . The final results show that the proportion of the first answer prediction and the last one reaches to 2:3 can get the most confident answer judgement. The value of INLINEFORM2 which is greater than 1, indicating that the latter answer refining takes more insurance on the prediction decision. Related Work Machine Comprehension Dataset. Benchmark datasets play a vital role in the research advance. Previous human-labeled datasets on MC task are too small to train data-intensive models BIBREF22 BIBREF23 BIBREF24 . Recently, Large-scale datasets become available. CNN/Daily Mail BIBREF3 and Children's Book Test BIBREF4 generated in cloze style offer the availability to train more expressive neural models. The SQuAD BIBREF0 , TriviaQA BIBREF2 and MS-MARCO BIBREF1 datasets provide large and high-quality datasets which extract answers from text spans instead of single entities in cloze style. The open-domain style of MC tasks are more challenging and require different levels of reasoning from multiple sentences. In this paper, we evaluate our Smarnet framework on SQuAD and TriviaQA datasets. Machine Comprehension Models Previous works in MC task adopt deep neural modeling strategies with attention mechanisms, both on cloze style and open domain tasks. Along with cloze style datasets, Chen et al. BIBREF25 prove that computing the attention weights with a bilinear term instead of simple dot-product significantly improves the accuracy. Kadlec et al. BIBREF26 sum attention over candidate answer words in the document. Dhingra et al. BIBREF27 iteratively interact between the query and document by a multiplicative gating function. Cui et al. BIBREF28 compute a similarity matrix with two-way attention between the query and passage mutually. Sordoni et al. BIBREF29 exploit an iterative alternating neural attention to model the connections between the question and the passage. Open-domain machine comprehension tasks are more challenging and have attracted plenty of teams to pursue for higher performance on the leaderboard. Wang et al. BIBREF13 present match-LSTM and use pointer network to generate answers from the passage. Chen et al. BIBREF11 tackle the problem by using wikipedia as the unique knowledge source. Shen BIBREF30 adopt memory networks with reinforcement learning so as to dynamically control the number of hops. Seo et al. BIBREF5 use bi-directional attention flow mechanism and a multi-stage hierarchical process to represent the context. Xiong et al. BIBREF31 propose dynamic coattention networks to iteratively infer the answer. Yang et al. BIBREF8 present a fine-grained gating mechanism to dynamically combine word-level and character-level representations. Wang et al. BIBREF7 introduce self-matching attention to refine the gated representation by aligning the passage against itself. 
Reasoning by Memory Network Multi-hop reasoning combines with Memory networks have shown powerful competence on MC task BIBREF30 BIBREF27 BIBREF29 BIBREF31 BIBREF32 BIBREF6 BIBREF33 . Theoretically, multi-hop memory networks can repeat computing the attention biases between the query and the context through multiple layers. The memory networks typically maintain memory states which incorporate the information of current reasoning with the previous storage in the memory. Hu et al. BIBREF32 utilize a multi-hop answer pointer which allows the network to continue refining the predicted answer span. Gong et al. BIBREF6 adapt the BIDAF BIBREF5 with multi-hop attention mechanisms and achieve substantial performance. Pan et al. BIBREF34 introduce multi-layer embedding with memory network for full orientation matching on MC task. In our paper, we also adopt the memory network to mimic human behaviors on increasing their understanding by reread the context and the query multi times. We also apply a multi-hop checking mechanism to better refine the true answer. Conclusions In this paper, we tackle the problem of machine comprehension from the viewpoint of imitating human's ways in having reading comprehension examinations. We propose the Smarnet framework with the hope that it can become as smart as human for the reading comprehension problem. We first introduce a novel gating method with detailed word attributions to fully exploit prior knowledge of word semantic understanding. We then adopt a scientific procedure to guide machines to read and comprehend by using interactive attention and matching mechanisms between questions and passages. Furthermore, we employ the self-alignment with checking strategy to ensure the answer is refined after careful consideration. We evaluate the performance of our method on two large-scale datasets SQuAD and TriviaQA. The extensive experiments demonstrate the superiority of our Smarnet framework.
when the gate has high value, more information flows from the word-level representation; otherwise, char-level will take the dominating place, for unfamiliar noun entities, the gates tend to bias towards char-level representation in order to care richer morphological structure
b13902af1bcf0e199a3ea42bbc8fcd8e696a381a
b13902af1bcf0e199a3ea42bbc8fcd8e696a381a_0
Q: Which dataset do they use? Text: Introduction In recent years, statistical machine translation (SMT) systems have achieved state-of-the-art performance for most language pairs. Recently, systems using neural machine translation (NMT) were able to outperform SMT systems in several evaluations. These models are able to generate more fluent and accurate translations for most sentences. A weakness of NMT systems, however, is that they sometimes lose the original meaning of the source words during translation. One example from the test set of the first conference on machine translation (WMT16) is the segment in Table TABREF1 . The English word goalie is not translated to the correct German word Torwart, but to the German word Gott, which means god. One reason could be that we need to limit the vocabulary size in order to train the model efficiently. We used Byte Pair Encoding (BPE) BIBREF0 to represent the text with a fixed-size vocabulary. In our case the word goalie is split into the three parts go, al and ie, which makes it harder to carry the meaning over into the translation. In contrast, in phrase-based machine translation (PBMT) we do not need to limit the vocabulary and are often able to translate words even if we have seen them only very rarely in training. In the example mentioned above, for instance, the PBMT system had no problem translating the expression correctly. On the other hand, official evaluation campaigns BIBREF1 have shown that NMT systems often produce grammatically correct sentences and model morphological agreement in German much better. The goal of this work is to combine the advantages of neural and phrase-based machine translation systems. Handling rare words is an essential aspect to consider in real-world applications, and the pre-translation framework provides a straightforward way to support such applications. In our approach, we first translate the input using a PBMT system, which can handle rare words well. In a second step, we generate the final translation using an NMT system, which is able to produce a more fluent and grammatically correct translation. Since the rare words are already handled by the PBMT system, there should be fewer problems in translating them. Using this approach naturally introduces the necessity to handle potential errors made by the PBMT system. The remainder of the paper is structured as follows: In the next section we review related work. In Section SECREF3 we briefly review the phrase-based and neural approaches to machine translation. Section SECREF4 introduces the approach presented in this paper to pre-translate the input using a PBMT system. In the following section, we evaluate the approach and analyze the errors. Finally, we finish with a conclusion. Related Work The idea of linearly combining machine translation systems using different paradigms has already been used successfully for SMT and rule-based machine translation (RBMT) BIBREF2 , BIBREF3 . These works build an SMT system that post-edits the output of an RBMT system and, using the combination of SMT and RBMT, could outperform both single systems. Those experiments promoted the area of automatic post-editing BIBREF4 . Recently, it was shown that models based on neural MT are very successful in this task BIBREF5 .
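To make the subword issue from the introduction concrete, below is a toy sketch of greedy BPE-style segmentation. The merge table is hypothetical and chosen only to reproduce a goalie → go|al|ie style split; the real segmentation depends on the merges learned from the training corpus, and this is not the segmentation code used by the authors.

```python
def bpe_segment(word, merges):
    """Greedy BPE-style segmentation: start from characters and repeatedly
    apply the highest-priority merge rule that matches an adjacent pair."""
    symbols = list(word)
    while True:
        # Find the mergeable adjacent pair with the best (lowest) rank.
        candidates = [
            (merges[(a, b)], i)
            for i, (a, b) in enumerate(zip(symbols, symbols[1:]))
            if (a, b) in merges
        ]
        if not candidates:
            return symbols
        _, i = min(candidates)
        symbols = symbols[:i] + [symbols[i] + symbols[i + 1]] + symbols[i + 2:]

# Hypothetical merge table (rank = priority); not learned from real data.
merges = {("g", "o"): 0, ("a", "l"): 1, ("i", "e"): 2}
print(bpe_segment("goalie", merges))   # ['go', 'al', 'ie']
```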
For PBMT, there have been several attempts to apply preprocessing in order to improve the performance of the translation system. A commonly used preprocessing step is morphological splitting, like compound splitting in German BIBREF6. Another example is pre-reordering, used to achieve a more monotone translation BIBREF7. In addition, the usefulness of using a PBMT system's translations of its own training data has been shown. The translations have been used to re-train the translation model BIBREF8 or to train additional discriminative translation models BIBREF9. In order to improve the translation of rare words in NMT, words that are not in the vocabulary have been translated in a post-processing step BIBREF10. In BIBREF0, a method to split words into sub-word units was presented to limit the vocabulary size. The integration of lexical probabilities into NMT has also been successfully investigated BIBREF11. Phrase-based and Neural Machine Translation Starting with the initial work on word-based translation systems BIBREF12, phrase-based machine translation BIBREF13, BIBREF14 segments the sentence into contiguous phrases that are used as the basic translation units. This allows for many-to-many alignments. Based on this segmentation, the probability of the translation is calculated using a log-linear combination of different features: p(t | s) ∝ exp( Σ_i λ_i h_i(s, t) ), where the h_i(s, t) are feature functions over the source sentence s and the target sentence t, and the λ_i are their weights. In the initial model, the features are based on language and translation model probabilities as well as a few count-based features. In advanced PBMT systems, several additional features have been developed to better model the translation process. Especially models using neural networks were able to increase the translation performance. Recently, state-of-the-art performance in machine translation was significantly improved by using neural machine translation. In this approach, a recurrent neural network (RNN)-based encoder-decoder architecture is used to transform the source sentence into the target sentence. In the encoder, an RNN encodes the source sentence into a fixed-size continuous-space representation by feeding the source sentence word by word into the network. In a second step, the decoder is initialized with the representation of the source sentence and then generates the target sequence one word at a time, using the last generated word as input to the RNN BIBREF15. One main drawback of this approach is that the whole source sentence has to be stored in a fixed-size context vector. To overcome this problem, BIBREF16 introduced the soft attention mechanism. Instead of only considering the last state of the encoder RNN, they use a weighted sum of all hidden states. Using these weights, the model is able to put attention on different parts of the source sentence depending on the current state of the decoder RNN. In addition, they extended the encoder RNN to a bi-directional one, so that information from the whole sentence is available at every position of the encoder RNN. A detailed description of the NMT framework can be found in BIBREF16. PBMT Pre-translation for NMT (PreMT) In this work, we want to combine the advantages of PBMT and NMT. Using the combined system, we should be able to generate a translation for all words that occur at least once in the training data, while maintaining the high-quality translations that the NMT system produces for most sentences. 
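The soft attention mechanism described above computes a weighted sum over all encoder hidden states at every decoder step. Below is a minimal numpy sketch of that computation for a single decoder step; the dot-product scoring function and the dimensions are illustrative simplifications of the model in BIBREF16.

```python
import numpy as np


def attention_context(encoder_states, decoder_state):
    """One decoder step of soft attention: score every encoder state,
    normalize with a softmax, and return the weighted sum (context vector)."""
    # encoder_states: (source_len, hidden_dim); decoder_state: (hidden_dim,)
    scores = encoder_states @ decoder_state        # dot-product scoring (illustrative)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over source positions
    context = weights @ encoder_states             # weighted sum of encoder states
    return context, weights


rng = np.random.default_rng(0)
enc = rng.normal(size=(6, 8))   # 6 source positions, hidden size 8
dec = rng.normal(size=(8,))
ctx, attn = attention_context(enc, dec)
print(attn.round(2), ctx.shape)  # weights sum to 1; context has shape (8,)
```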
Motivated by several approaches that simplify the translation process for PBMT using preprocessing, we translate the source as a preprocessing step using the phrase-based machine translation system. The main translation task is done by the neural machine translation model, which can choose between using the output of the PBMT system or the original input when generating the translation. Pipeline In our first attempt, we combined the phrase-based MT and the neural MT in one pipeline, as shown in Figure FIGREF3. The input is first processed by the phrase-based machine translation system, translating from the source language into the target language. Since the machine translation system is not perfect, its output may not be a correct translation and may contain errors. We therefore treat the output of the PBMT system as its own, noisy variant of the target language. In a second step, we train a neural monolingual translation system that translates from the output of the PBMT system to a better target sentence. Mixed Input One drawback of the pipelined approach is that the PBMT system might introduce errors in the translation that the NMT system cannot recover from. For example, some information from the source sentence may get lost because a word is deleted entirely during the PBMT translation. We try to overcome this problem by building an NMT system that does not only take the output of the PBMT system, but also the original source sentence. One advantage of NMT systems is that we can easily encode different input information. The architecture of our system is shown in Figure FIGREF3. The implementation of the mixed input for the NMT system is straightforward. Given the source sentence s and the output of the PBMT system s', we generate the input for the NMT system as follows. First, we ensure non-overlapping vocabularies of s and s' by marking each token of s with one character and each token of s' with a different one. Then both input sequences are concatenated to form the input of the NMT system. Using this representation, the NMT system can learn to focus on the source words of s and the pre-translated words of s' when generating each target word. Training In both cases, we can no longer train the NMT system on the source language and target language data, but on the output of the PBMT system and the target language data. Therefore, we need to generate translations of the whole parallel training data using the PBMT system. Due to its ability to use very long phrases, a PBMT system normally performs significantly better on the training data than on unseen test data. This of course harms the performance of our approach, because the NMT system will underestimate the amount of improvement it has to perform on the test data. In order to limit this effect, we did not use the whole phrase tables when translating the training data. If a phrase pair only occurs once, we cannot learn it from a different sentence pair. Following BIBREF9, we therefore removed all phrase pairs that occur only once for the translation of the corpus. Experiments We analyze the approach on the English-to-German news translation task of the Conference on Statistical Machine Translation (WMT). First, we describe the system and analyze the translation quality measured in BLEU. Afterwards, we analyze the performance depending on the frequency of the words and finally show some example translations. 
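Before turning to the experiments, the following is a minimal sketch of two steps described in the preceding subsections: the mixed-input construction (marking the tokens of the pre-translation and of the original source with distinct prefixes and concatenating them) and the removal of singleton phrase pairs before translating the training corpus. The D_/E_ prefixes are borrowed from the attention-plot labels discussed later in the paper; the concatenation order shown here is an assumption, since the paper does not state which sequence comes first.

```python
def build_mixed_input(source_tokens, pbmt_tokens):
    """Concatenate the PBMT pre-translation and the original source into one
    NMT input, using distinct prefixes so the two vocabularies cannot overlap."""
    marked_pbmt = ["D_" + tok for tok in pbmt_tokens]   # pre-translation side
    marked_src = ["E_" + tok for tok in source_tokens]  # original source side
    # Assumption: pre-translation first, then the source.
    return marked_pbmt + marked_src


def filter_singleton_phrase_pairs(phrase_table_counts):
    """Drop phrase pairs seen only once before translating the training data,
    as described in the Training section (following BIBREF9)."""
    return {pair: c for pair, c in phrase_table_counts.items() if c > 1}


src = "the goalie parried the shot".split()
pre = "der Torwart parierte den Schuss".split()
print(" ".join(build_mixed_input(src, pre)))
```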
System description For the pre-translation, we used a PBMT system. In order to analyze the influence of the quality of the PBMT system, we use two different systems, a baseline system and a system with advanced models. The systems were trained on all parallel data available for the WMT 2016. The news commentary corpus, the European Parliament proceedings and the Common Crawl corpus sum up to 3.7M sentences and around 90M words. In the baseline system, we use three language models: a word-based one, a bilingual one BIBREF17 and a cluster-based one over 100 word clusters generated automatically with MKCLS BIBREF18. The advanced system uses pre-reordering BIBREF19 and lexicalized reordering. In addition, it uses a discriminative word lexicon BIBREF9 and a language model trained on the large monolingual data. Both systems were optimized on the tst2014 using Minimum error rate training BIBREF20. A detailed description of the systems can be found in BIBREF21. The neural machine translation was trained using Nematus. For the NMT system as well as for the PreMT system, we used the default configuration. In order to limit the vocabulary size, we use BPE as described in BIBREF0 with 40K operations. We ran the NMT system for 420K iterations and stored a model every 30K iterations. We selected the model that performed best on the development data. For the ensemble system we took the last four models. We did not perform any additional fine-tuning. The PreMT system was trained on the PBMT translations of the corpus and the target side of the corpus. For this translation, we only used the baseline PBMT system. English - German Machine Translation The results of all systems are summarized in Table TABREF13. It has to be noted that the first set, tst2014, was used as development data for the PBMT system and as validation set for the NMT-based systems. Using the neural MT system, we reach BLEU scores of 23.34 and 27.65 on tst2015 and tst2016, respectively. Using an ensemble system, we can improve the performance to 24.03 and 28.89, respectively. The baseline PBMT system performs 1.5 to 1.2 BLEU points worse than the single NMT system. Using the PBMT system with advanced models, we get the same performance on tst2015 and 0.5 BLEU points better on tst2016 compared to the NMT system. First, we build a PreMT system using the pipeline method as described in Section SECREF6. The system reaches BLEU scores of 22.04 and 26.75 on the two test sets. While the PreMT system improves over the baseline PBMT system, its performance is worse than that of the pure NMT system. So this first approach to combining neural and statistical machine translation is not able to combine the strengths of both systems. Instead, the NMT system seems not to be able to recover from the errors made by the SMT-based system. In a second experiment, we use the advanced PBMT system to generate the translation of the test data. We did not use it to generate a new training corpus, since the translation is computationally very expensive. So the PreMT system stays the same, being trained on the translations of the baseline PBMT system, but it receives better-quality pre-translations at test time. This leads to an improvement of 0.9 BLEU points on both test sets. Although this is smaller than the difference of around 1.5 BLEU points between the two initial phrase-based translation systems, we are able to improve the translation quality by using a better pre-translation system. 
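The training setup above stores a model every 30K iterations, picks the checkpoint that scores best on the development data, and builds an ensemble from the last four stored models. The sketch below illustrates both selection steps in generic Python; it is not Nematus code, the development scores are toy placeholder values, and the probability averaging shown for one decoding step is only a common simplification of how NMT ensembling is typically done.

```python
import numpy as np


def select_best_checkpoint(checkpoints, dev_bleu):
    """Pick the stored model with the highest validation BLEU."""
    return max(checkpoints, key=lambda ckpt: dev_bleu[ckpt])


def ensemble_step(step_distributions):
    """Average the next-word distributions of several models for one decoding step."""
    return np.mean(np.stack(step_distributions), axis=0)


checkpoints = [f"model_iter{it}k" for it in range(30, 421, 30)]  # every 30K up to 420K
dev_bleu = dict(zip(checkpoints, np.linspace(20.0, 23.3, len(checkpoints))))  # toy scores
print(select_best_checkpoint(checkpoints, dev_bleu))
ensemble_members = checkpoints[-4:]  # the last four models, as in the paper
print(ensemble_members)
print(ensemble_step([np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])]))
```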
It is interesting to see that we can improve the quality of the PreMT system by improving one component (the SMT pre-translation), even if we do so only at evaluation time and not in training. However, this pipelined system does not improve over the pure NMT system, and the post-editing by the NMT system even lowers the performance compared to the initial PBMT system used for pre-translation. After evaluating the pipelined system, we performed experiments using the mixed input system. This leads to an improvement in translation quality. Using the baseline PBMT system for pre-translation, we perform 0.8 BLEU points better than the pure NMT system on tst2015 and 0.4 BLEU points better on tst2016. The system also shows better performance than both PBMT systems on tst2015 and comparable performance with the advanced PBMT system on tst2016. So by looking at both the original input and the pre-translation, the NMT system is able to recover some of the errors made by the PBMT system and also to prevent errors the NMT system makes when it translates the source sentence directly. Using the advanced PBMT system for the input, we get additional gains of 0.3 and 1.6 BLEU points. The system even outperforms the ensemble system on tst2016. The experiments showed that deploying a pre-translation PBMT system of better quality improves the NMT quality in the mixed input scheme, even when it is used only in testing, not in training. By using an ensemble of four models, we improve the results by one BLEU point on both test sets, leading to the best results of 25.35 and 30.67 BLEU points. This is 1.3 and 1.8 BLEU points better than the pure NMT ensemble system. System Comparison After evaluating the approach, we further analyze the different techniques for machine translation. For this, we compared the single NMT system, the advanced PBMT system and the mixed system using the advanced PBMT system as input. Our initial idea was that PBMT systems are better at translating rare words, while the NMT system generates more fluent translations. To confirm this assumption, we edited the output of all systems. For all analyzed systems, we replaced all target words that occur in the training data fewer than n times with the UNK token. For large n, only the most frequent words therefore remain in the reference, while for lower n more and more words are used. The results for different values of n are shown in Figure FIGREF15. Of course, with lower n we will have fewer UNK tokens in the output. Therefore, we normalized the BLEU scores by the performance of the PreMT system. We can see in the figure that for large n, where only the common words are used, the NMT system performs best. The PreMT system performs similarly and the PBMT system performs clearly worse. If we now decrease n, more and more infrequent words are considered in the evaluation of the translation quality. Although the absolute BLEU scores rise for all systems, on these less frequent words the PBMT system performs better than the NMT system and therefore eventually even achieves a better score. In contrast, the PreMT system is able to benefit from the pre-translation of the PBMT system and therefore stays better than the PBMT system. Examples In Table TABREF17 we show the output of the PBMT, NMT and PreMT systems. First, for the PBMT system, we see a typical error when translating from and to German. 
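The frequency analysis above replaces every target word seen fewer than n times in the training data with the UNK token before scoring. Below is a minimal sketch of that preprocessing step, assuming whitespace-tokenized text; the BLEU computation itself and the normalization by the PreMT score are left to the evaluation tooling.

```python
from collections import Counter


def replace_rare_words(sentences, train_counts, n):
    """Replace every token that occurs fewer than n times in the training
    data with UNK, as done for the frequency analysis in the paper."""
    out = []
    for sent in sentences:
        out.append(" ".join(tok if train_counts[tok] >= n else "UNK"
                            for tok in sent.split()))
    return out


# Toy data for illustration only.
train_counts = Counter("der Torwart parierte den Schuss der der den".split())
hyp = ["der Torwart parierte den Schuss"]
print(replace_rare_words(hyp, train_counts, n=2))
# -> ['der UNK UNK den UNK']  (only 'der' and 'den' occur at least twice)
```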
The verb of the subclause, parried, is located at the second position in English, but in the German sentence it has to be located at the end of the sentence. The PBMT system is often not able to perform this long-range reordering. For the NMT system, we see two other errors. Both words, goalie and parried, are quite rare in the training data and are therefore split into several parts by the BPE algorithm. In this case, the NMT system makes more errors. For the first word, the NMT system generates a completely wrong translation, Gott (engl. god), instead of Torwart. The second word is simply dropped and does not appear in the translation. The example shows that the pre-translation system prevents both errors: it generates the correct words Torwart and pariert and puts them at the correct positions in the German sentence. To better understand how the pre-translation system is able to generate this translation, we also generated the alignment matrix of the attention model, as shown in Figure FIGREF18. The x-axis shows the input, where the words from the pre-translation are marked by D_ and the words from the original source by E_. The y-axis carries the translation. The symbol @@ marks subword units generated by the BPE algorithm. First, as indicated by the two diagonal lines, the model considers both inputs, the original source and the pre-translation. Secondly, we see that the attention model mainly focuses on the pre-translation for words that are not common and therefore got split into several parts by BPE, such as shoot, goalie and parried. A second example, which shows what happens when rare words occur in the source sentence, is shown in Table TABREF19. In this case, the word riot is not translated but simply passed through to the target language. This behaviour is helpful for rare words like named entities, but the NMT system also uses it for many words that are not named entities. Other examples of words that were just passed through and not translated are crossbar and vigil. Conclusion In this paper, we presented a technique to combine phrase-based and neural machine translation. Motivated by successes in statistical machine translation, we use phrase-based machine translation to pre-translate the input and then generate the final translation using neural machine translation. While a simple serial combination of both models could not generate better translations than the neural machine translation system alone, we are able to improve over neural machine translation using a mixed input. By simply concatenating the phrase-based translation and the original source as input to the neural machine translation system, we can increase the machine translation quality measured in BLEU. The single pre-translated system could even outperform the ensemble NMT system. For the ensemble system, the PreMT system could outperform the NMT system by up to 1.8 BLEU points. Using the combined approach, we can generate the more fluent translations typical of the NMT system, but also translate rare words, which are often more easily translated by PBMT. Furthermore, we are able to improve the overall system performance by improving the individual components. Acknowledgments The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement n° 645452. This work was supported by the Carl-Zeiss-Stiftung.
parallel data available for the WMT 2016
b84bce289c6c81d0a7507ae183b94982533576b3
b84bce289c6c81d0a7507ae183b94982533576b3_0
Q: How is the PBMT system trained?
systems were optimized on the tst2014 using Minimum error rate training BIBREF20
9fd137bf7eabaf8bc234a18b6ea34471cf4a3b95
9fd137bf7eabaf8bc234a18b6ea34471cf4a3b95_0
Q: Which NMT architecture do they use?
trained using Nematus, default configuration
249f2a9bd9d59679cbe82b3fa01572fc7a04f81b
249f2a9bd9d59679cbe82b3fa01572fc7a04f81b_0
Q: Do they train the NMT model on PBMT outputs? Text: Introduction In the last years, statistical machine translation (SMT) system generated state-of-the-art performance for most language pairs. Recently, systems using neural machine translation (NMT) were able to outperform SMT systems in several evaluations. These models are able to generate more fluent and accurate translation for most of sentences. Neural machine translation systems provide the output with high fluency. A weakness of NMT systems, however, is that they sometimes lose the original meaning of the source words during translation. One example from the first conference on machine translation (WMT16) test set is the segment in Table TABREF1 . The English word goalie is not translated to the correct German word Torwart, but to the German word Gott, which means god. One problem could be that we need to limit the vocabulary size in order to train the model efficiently. We used Byte Pair Encoding (BPE) BIBREF0 to represent the text using a fixed size vocabulary. In our case the word goali is splitted into three parts go, al and ie. Then it is more difficult to transport the meaning to the translation. In contrast to this, in phrase-based machine translation (PBMT), we do not need to limit the vocabulary and are often able to translate words even if we have seen them only very rarely in the training. In the example mentioned before, for instance, the PBMT system had no problems translating the expression correctly. On the other hand, official evaluation campaigns BIBREF1 have shown that NMT system often create grammatically correct sentence and are able to model the morphologically agreement much better in German. The goal of this work is to combine the advantages of neural and phrase-based machine translation systems. Handling of rare words is an essential aspect to consider when it comes to real-world applications. The pre-translation framework provides a straightforward way to support such applications. In our approach, we will first translate the input using a PBMT system, which can handle the rare words well. In a second step, we will generate the final translation using an NMT system. This NMT system is able to generate a more fluent and grammatically correct translation. Since the rare words are already handled by the PBMT system, there should be less problems to generate the translation of these words. Using this approach naturally introduces a necessity to handle the potential errors by the PBMT systems. The remaining of the paper is structured as follows: In the next section we will review the related work. In Section SECREF3 , we will briefly review the phrase-based and neural approach to machine translation. Section SECREF4 will introduce the approach presented in this paper to pre-translate the input using a PBMT system. In the following section, we will evaluate the approach and analyze the errors. Finally, we will finish with a conclusion. Related Work The idea of linear combining of machine translation systems using different paradigms has already been used successfully for SMT and rule-based machine translation (RBMT) BIBREF2 , BIBREF3 . They build an SMT system that is post-editing the output of an RBMT system. Using the combination of SMT and RBMT, they could outperform both single systems. Those experiments promote the area of automatic post-editing BIBREF4 . Recently, it was shown that models based on neural MT are very successful in this task BIBREF5 . 
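To make the subword splitting issue from the introduction above concrete, the following toy sketch applies an ordered list of BPE merge operations to a single word. It is a simplified stand-in for the method of BIBREF0 (as implemented, for example, in the subword-nmt toolkit); the example merge list is invented for illustration and is not the 40K-operation model used in the experiments.

def apply_bpe(word, merges):
    # merges: ordered list of (left, right) symbol pairs learned on the training data.
    symbols = list(word) + ["</w>"]  # end-of-word marker, following BIBREF0
    for left, right in merges:
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == left and symbols[i + 1] == right:
                symbols[i:i + 2] = [left + right]
            else:
                i += 1
    return symbols

# Hypothetical merges under which a rare word falls apart into frequent pieces:
merges = [("g", "o"), ("a", "l"), ("i", "e"), ("ie", "</w>")]
print(apply_bpe("goalie", merges))  # ['go', 'al', 'ie</w>']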
For PBMT, there has been several attempts to apply preprocessing in order to improve the performance of the translation system. A commonly used preprocessing step is morphological splitting, like compound splitting in German BIBREF6 . Another example would be to use pre-reordering in order to achieve more monotone translation BIBREF7 . In addition, the usefulness of using the translations of the training data of a PBMT system has been shown. The translations have been used to re-train the translation model BIBREF8 or to train additional discriminative translation models BIBREF9 . In order to improve the translation of rare words in NMT, authors try to translate words that are not in the vocabulary in a post-processing step BIBREF10 . In BIBREF0 , a method to split words into sub-word units was presented to limit the vocabulary size. Also the integration of lexical probabilities into NMT was successfully investigated BIBREF11 . Phrase-based and Neural Machine Translation Starting with the initial work on word-based translation system BIBREF12 , phrase-based machine translation BIBREF13 , BIBREF14 segments the sentence into continuous phrases that are used as basic translation units. This allows for many-to-many alignments. Based on this segmentation, the probability of the translation is calculated using a log-linear combination of different features: DISPLAYFORM0 In the initial model, the features are based on language and translation model probabilities as well as a few count based features. In advanced PBMT systems, several additional features to better model the translation process have been developed. Especially models using neural networks were able to increase the translation performance. Recently, state-of-the art performance in machine translation was significantly improved by using neural machine translation. In this approach to machine translation, a recurrent neural network (RNN)-based encoder-decoder architecture is used to transform the source sentence into the target sentence. In the encoder, an RNN is used to encode the source sentence into a fixed size continuous space representation by inserting the source sentence word-by-word into the network. In a second step, the decoder is initialized by the representation of the source sentence and is then generating the target sequence one word after the other using the last generated word as input for the RNN BIBREF15 . One main drawback of this approach is that the whole source sentence has to be stored in a fixed-size context vector. To overcome this problem, BIBREF16 introduced the soft attention mechanism. Instead of only considering the last state of the encoder RNN, they use a weighted sum of all hidden states. Using these weights, the model is able to put attention on different parts of the source sentence depending on the current status of the decoder RNN. In addition, they extended the encoder RNN to a bi-directional one to be able to get information from the whole sentence at every position of the encoder RNN. A detailed description of the NMT framework can be found in BIBREF16 . PBMT Pre-translation for NMT (PreMT) In this work, we want to combine the advantages of PBMT and NMT. Using the combined system we should be able to generate a translation for all words that occur at least once in the training data, while maintaining high quality translations for most sentences from NMT. 
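The log-linear combination referred to in the phrase-based overview above (its elided DISPLAYFORM0 placeholder) has the standard form shown below. This is a generic reconstruction in LaTeX rather than the paper's exact equation: the feature functions h_i (translation model, language model, count-based features, and so on) and their weights lambda_i stand in for the system-specific feature set.

\begin{equation}
  \hat{e} = \operatorname*{argmax}_{e} \; p(e \mid f)
          = \operatorname*{argmax}_{e} \;
            \frac{\exp\!\big(\sum_{i=1}^{n} \lambda_i \, h_i(e, f)\big)}
                 {\sum_{e^{\prime}} \exp\!\big(\sum_{i=1}^{n} \lambda_i \, h_i(e^{\prime}, f)\big)}
\end{equation}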
Motivated by several approaches to simplify the translation process for PBMT using preprocessing, we will translate the source as a preprocessing step using the phrase-based machine translation system. The main translation task is done by the neural machine translation model, which can choose between using the output of the PBMT system or the original input when generating the translation. Pipeline In our first attempt, we combined the phrase-based MT and the neural MT in one pipeline as shown in Figure FIGREF3. The input is first processed by the phrase-based machine translation system from the input language INLINEFORM0 to the target language INLINEFORM1. Since the machine translation system is not perfect, the output of the system may not be a correct translation and may contain errors. Therefore, we will call the output language of the PBMT system INLINEFORM2. In a second step, we will train a neural monolingual translation system that translates from the output of the PBMT system INLINEFORM0 to a better target sentence INLINEFORM1. Mixed Input One drawback of the pipelined approach is that the PBMT system might introduce some errors in the translation that the NMT cannot recover from. For example, it is possible that some information from the source sentence gets lost, since a word is entirely deleted during the translation by the PBMT system. We try to overcome this problem by building an NMT system that takes not only the output of the PBMT system, but also the original source sentence. One advantage of NMT systems is that we can easily encode different input information. The architecture of our system is shown in Figure FIGREF3. The implementation of the mixed input for the NMT system is straightforward. Given the source input INLINEFORM0 and the output of the PBMT system INLINEFORM1, we generate the input for the NMT system. First, we ensure non-overlapping vocabularies of INLINEFORM2 and INLINEFORM3 by marking each token in INLINEFORM4 with one character and each token in INLINEFORM5 with a different one. Then both input sequences are concatenated to the input INLINEFORM6 of the NMT system. Using this representation, the NMT system can learn to focus on source word INLINEFORM0 and words INLINEFORM1 when generating a word INLINEFORM2. Training In both cases, we can no longer train the NMT system on the source language and target language data, but instead train it on the output of the PBMT system and the target language data. Therefore, we need to generate translations of the whole parallel training data using the PBMT system. Due to its ability to use very long phrases, a PBMT system normally performs significantly better on the training data than on unseen test data. This, of course, will harm the performance of our approach, because the NMT system will underestimate the number of improvements it has to perform on the test data. In order to limit this effect, we did not use the whole phrase tables when translating the training data. If a phrase pair only occurs once, we cannot learn it from a different sentence pair. Following BIBREF9, we removed all phrase pairs that occur only once when translating the corpus. Experiments We analyze the approach on the English to German news translation task of the Conference on Statistical Machine Translation (WMT). First, we will describe the system and analyze the translation quality measured in BLEU. Afterwards, we will analyze the performance depending on the frequency of the words and finally show some example translations.
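The mixed-input representation described above is easy to reproduce. The sketch below is only an illustration under stated assumptions: the marker prefixes D_ (for the pre-translation) and E_ (for the original source) are taken from the attention-matrix description later in the paper, the order of concatenation is an assumption, and tokenization (including BPE) is assumed to have been applied beforehand.

def build_mixed_input(source_tokens, pbmt_tokens):
    # Keep the two vocabularies disjoint by prefixing each token with a marker,
    # then concatenate pre-translation and original source into one input sequence.
    marked_pbmt = ["D_" + tok for tok in pbmt_tokens]
    marked_source = ["E_" + tok for tok in source_tokens]
    return marked_pbmt + marked_source

# Hypothetical example sentence pair (not taken from the actual test data):
src = "the goalie parried the shot".split()
pre = "der Torwart parierte den Schuss".split()  # output of the PBMT system
print(" ".join(build_mixed_input(src, pre)))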
System description For the pre-translation, we used a PBMT system. In order to analyze the influence of the quality of the PBMT system, we use two different systems, a baseline system and a system with advanced models. The systems were trained on all parallel data available for the WMT 2016. The news commentary corpus, the European parliament proceedings and the common crawl corpus sum up to 3.7M sentences and around 90M words. In the baseline system, we use three language models: a word-based one, a bilingual one BIBREF17 and a cluster-based one, using 100 clusters automatically generated with MKCLS BIBREF18. The advanced system uses pre-reordering BIBREF19 and lexicalized reordering. In addition, it uses a discriminative word lexicon BIBREF9 and a language model trained on the large monolingual data. Both systems were optimized on tst2014 using minimum error rate training BIBREF20. A detailed description of the systems can be found in BIBREF21. The neural machine translation system was trained using Nematus. For the NMT system as well as for the PreMT system, we used the default configuration. In order to limit the vocabulary size, we use BPE as described in BIBREF0 with 40K operations. We ran the NMT system for 420K iterations and stored a model every 30K iterations. We selected the model that performed best on the development data. For the ensemble system we took the last four models. We did not perform additional fine-tuning. The PreMT system was trained on the PBMT system's translations of the corpus and the target side of the corpus. For this translation, we only used the baseline PBMT system. English - German Machine Translation The results of all systems are summarized in Table TABREF13. It has to be noted that the first set, tst2014, was used as development data for the PBMT system and as the validation set for the NMT-based systems. Using the neural MT system, we reach BLEU scores of 23.34 and 27.65 on tst2015 and tst2016. Using an ensemble system, we can improve the performance to 24.03 and 28.89, respectively. The baseline PBMT system performs 1.5 to 1.2 BLEU points worse than the single NMT system. Using the PBMT system with advanced models, we get the same performance on tst2015 and 0.5 BLEU points better on tst2016 compared to the NMT system. First, we built a PreMT system using the pipeline method as described in Section SECREF6. The system reaches BLEU scores of 22.04 and 26.75 on the two test sets. While the PreMT system improves over the baseline PBMT system, its performance is worse than that of the pure NMT system. So the first approach to combining neural and statistical machine translation is not able to combine the strengths of both systems. Instead, the NMT system seems unable to recover from the errors made by the SMT-based system. In a second experiment, we use the advanced PBMT system to generate the translation of the test data. We did not use it to generate a new training corpus, since the translation is computationally very expensive. So the PreMT system stays the same, being trained on the translations of the baseline PBMT system. However, it receives better-quality translations at test time. This also leads to an improvement of 0.9 BLEU points on both test sets. Although this is smaller than the difference of around 1.5 BLEU points between the two initial phrase-based translation systems, we are able to improve the translation quality by using a better pre-translation system.
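The checkpoint handling described in the system description (a model stored every 30K iterations, the single model chosen by validation performance, the last four models used as the ensemble) can be summarised by a small helper. This is only a sketch of the selection logic under the assumption that a dev-set BLEU score is available per checkpoint; it is not Nematus code, and the function and variable names are invented.

def select_models(checkpoints, dev_bleu, ensemble_size=4):
    # checkpoints: list of checkpoint paths in training order (one every 30K iterations)
    # dev_bleu: dict mapping checkpoint path -> BLEU on the validation set (tst2014)
    best_single = max(checkpoints, key=lambda c: dev_bleu[c])
    ensemble = checkpoints[-ensemble_size:]  # the last four models, no additional fine-tuning
    return best_single, ensemble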
It is interesting to see that we can improve the quality of the PreMT system by improving one component (the SMT pre-translation), even if we do so only in evaluation and not in training. However, the system does not improve over the pure NMT system, and the post-editing by the NMT system even lowers the performance compared to the initial PBMT system used for pre-translation. After evaluating the pipelined system, we performed experiments using the mixed input system. This leads to an improvement in translation quality. Using the baseline PBMT system for pre-translation, we perform 0.8 BLEU points better than the purely NMT system on tst2015 and 0.4 BLEU points better on tst2016. It also showed better performance than both PBMT systems on tst2015 and comparable performance to the advanced PBMT system on tst2016. So by looking at the original input and the pre-translation, the NMT system is able to recover some of the errors made by the PBMT system and also to avoid errors the NMT system makes when translating the source sentence directly. Using the advanced PBMT system for input, we can get additional gains of 0.3 and 1.6 BLEU points. The system even outperforms the ensemble system on tst2016. The experiments showed that deploying a pre-translation PBMT system with a better quality improves the NMT quality in the mixed input scheme, even when it is used only in testing, not in training. By using an ensemble of four models, we improve the performance by one BLEU point on both test sets, leading to the best results of 25.35 and 30.67 BLEU points. This is 1.3 and 1.8 BLEU points better than the pure NMT ensemble system. System Comparison After evaluating the approach, we further analyze the different techniques for machine translation. For this, we compared the single NMT system, the advanced PBMT system and the mixed system using the advanced PBMT system as input. Our initial idea was that PBMT systems are better at translating rare words, while NMT generates more fluent translations. To confirm this assumption, we edited the output of all systems. For all analyzed systems, we replaced all target words that occur in the training data fewer than INLINEFORM0 times with the UNK token. For large INLINEFORM1, only the most frequent words therefore remain in the reference, while for lower INLINEFORM2 more and more words are used. The results for INLINEFORM0 are shown in Figure FIGREF15. Of course, with lower INLINEFORM1 we will have fewer UNK tokens in the output. Therefore, we normalized the BLEU scores by the performance of the PreMT system. We can see in the figure that when INLINEFORM0, where only the common words are used, we perform best using the NMT system. The PreMT system performs similarly and the PBMT system performs clearly worse. If we now decrease INLINEFORM1, more and more infrequent words are considered in the evaluation of translation quality. Although the absolute BLEU scores rise for all systems, the PBMT system performs better than the NMT system on these less frequent words and therefore eventually even achieves a better performance. In contrast to this, the PreMT system is able to benefit from the pre-translation of the PBMT system and therefore stays better than the PBMT system. Examples In Table TABREF17 we show the output of the PBMT, NMT and PreMT systems. First, for the PBMT system, we see a typical error when translating from and to German.
The verb of the subclause parried is located at the second position in English, but in the German sentence it has to be located at the end of the sentence. The PBMT system is often not able to perform this long-range reordering. For the NMT system, we see two other errors. Both the words goalie and parried are quite rare in the training data and are therefore split into several parts by the BPE algorithm. In this case, the NMT system makes more errors. For the first word, the NMT system generates a completely wrong translation, Gott (engl. god), instead of Torwart. The second word is just dropped and does not appear in the translation. The example shows that the pre-translation system prevents both errors. It generates the correct words Torwart and pariert and puts them in the correct position in the German sentence. To better understand how the pre-translation system is able to generate this translation, we also generated the alignment matrix of the attention model as shown in Figure FIGREF18. The x-axis shows the input, where the words from the pre-translation are marked by D_ and the words from the original source by E_. The y-axis carries the translation. The symbol @@ marks subword units generated by the BPE algorithm. First, as indicated by the two diagonal lines, the model considers both inputs, the original source and the pre-translation. Secondly, we see that the attention model is mainly focusing on the pre-translation for words that are not common and therefore got split into several parts by the BPE algorithm, such as shoot, goalie and parried. A second example, which shows what happens when rare words occur in the source sentence, is shown in Table TABREF19. In this case, the word riot is not translated but just passed to the target language. This behaviour is helpful for rare words like named entities, but the NMT system also uses it for many words that are not named entities. Other examples of words that were just passed through and not translated are crossbar and vigil. Conclusion In this paper, we presented a technique to combine phrase-based and neural machine translation. Motivated by success in statistical machine translation, we used phrase-based machine translation to pre-translate the input and then generated the final translation using neural machine translation. While a simple serial combination of both models could not generate better translations than the neural machine translation system, we are able to improve over neural machine translation using a mixed input. By simply concatenating the phrase-based translation and the original source as input for the neural machine translation system, we can increase the machine translation quality measured in BLEU. The single pre-translated system could even outperform the ensemble NMT system. For the ensemble system, the PreMT system could outperform the NMT system by up to 1.8 BLEU points. Using the combined approach, we can generate the more fluent translations typical of the NMT system, but also translate rare words, which are often more easily translated by PBMT. Furthermore, we are able to improve the overall system performance by improving the individual components. Acknowledgments The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement n INLINEFORM0 645452. This work was supported by the Carl-Zeiss-Stiftung.
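The alignment matrix discussed above (Figure FIGREF18) can be plotted directly from the attention weights of the decoder. The sketch below assumes the weights have already been extracted as a NumPy array with one row per output token and one column per marked input token; the matplotlib calls are standard, but the variable names and the dummy data are invented for illustration and do not correspond to the actual figure.

import numpy as np
import matplotlib.pyplot as plt

def plot_attention(weights, input_tokens, output_tokens):
    # weights: array of shape (len(output_tokens), len(input_tokens))
    fig, ax = plt.subplots()
    ax.imshow(weights, cmap="Greys", aspect="auto")
    ax.set_xticks(range(len(input_tokens)))
    ax.set_xticklabels(input_tokens, rotation=90)
    ax.set_yticks(range(len(output_tokens)))
    ax.set_yticklabels(output_tokens)
    ax.set_xlabel("input (D_ = pre-translation, E_ = original source)")
    ax.set_ylabel("translation")
    fig.tight_layout()
    return fig

# Dummy example with marked input tokens and @@-separated subword units.
inp = ["D_Tor@@", "D_wart", "D_pari@@", "D_ert", "E_goal@@", "E_ie", "E_par@@", "E_ried"]
out = ["Tor@@", "wart", "pari@@", "ert"]
plot_attention(np.random.rand(len(out), len(inp)), inp, out)
plt.show()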
Yes
626873982852ec83c59193dd2cf73769bf77b3ed
626873982852ec83c59193dd2cf73769bf77b3ed_0
Q: what evaluation methods are discussed? Text: Introduction Language identification (“”) is the task of determining the natural language that a document or part thereof is written in. Recognizing text in a specific language comes naturally to a human reader familiar with the language. intro:langid presents excerpts from Wikipedia articles in different languages on the topic of Natural Language Processing (“NLP”), labeled according to the language they are written in. Without referring to the labels, readers of this article will certainly have recognized at least one language in intro:langid, and many are likely to be able to identify all the languages therein. Research into aims to mimic this human ability to recognize specific languages. Over the years, a number of computational approaches have been developed that, through the use of specially-designed algorithms and indexing structures, are able to infer the language being used without the need for human intervention. The capability of such systems could be described as super-human: an average person may be able to identify a handful of languages, and a trained linguist or translator may be familiar with many dozens, but most of us will have, at some point, encountered written texts in languages they cannot place. However, research aims to develop systems that are able to identify any human language, a set which numbers in the thousands BIBREF0 . In a broad sense, applies to any modality of language, including speech, sign language, and handwritten text, and is relevant for all means of information storage that involve language, digital or otherwise. However, in this survey we limit the scope of our discussion to of written text stored in a digitally-encoded form. Research to date on has traditionally focused on monolingual documents BIBREF1 (we discuss for multilingual documents in openissues:multilingual). In monolingual , the task is to assign each document a unique language label. Some work has reported near-perfect accuracy for of large documents in a small number of languages, prompting some researchers to label it a “solved task” BIBREF2 . However, in order to attain such accuracy, simplifying assumptions have to be made, such as the aforementioned monolinguality of each document, as well as assumptions about the type and quantity of data, and the number of languages considered. The ability to accurately detect the language that a document is written in is an enabling technology that increases accessibility of data and has a wide variety of applications. For example, presenting information in a user's native language has been found to be a critical factor in attracting website visitors BIBREF3 . Text processing techniques developed in natural language processing and Information Retrieval (“IR”) generally presuppose that the language of the input text is known, and many techniques assume that all documents are in the same language. In order to apply text processing techniques to real-world data, automatic is used to ensure that only documents in relevant languages are subjected to further processing. In information storage and retrieval, it is common to index documents in a multilingual collection by the language that they are written in, and is necessary for document collections where the languages of documents are not known a-priori, such as for data crawled from the World Wide Web. Another application of that predates computational methods is the detection of the language of a document for routing to a suitable translator. 
This application has become even more prominent due to the advent of Machine Translation (“MT”) methods: in order for MT to be applied to translate a document to a target language, it is generally necessary to determine the source language of the document, and this is the task of . also plays a part in providing support for the documentation and use of low-resource languages. One area where is frequently used in this regard is in linguistic corpus creation, where is used to process targeted web crawls to collect text resources for low-resource languages. A large part of the motivation for this article is the observation that lacks a “home discipline”, and as such, the literature is fragmented across a number of fields, including NLP, IR, machine learning, data mining, social medial analysis, computer science education, and systems science. This has hampered the field, in that there have been many instances of research being carried out with only partial knowledge of other work on the topic, and the myriad of published systems and datasets. Finally, it should be noted that this survey does not make a distinction between languages, language varieties, and dialects. Whatever demarcation is made between languages, varieties and dialects, a system is trained to identify the associated document classes. Of course, the more similar two classes are, the more challenging it is for a system to discriminate between them. Training a system to discriminate between similar languages such as Croatian and Serbian BIBREF4 , language varieties like Brazilian and European Portuguese BIBREF5 , or a set of Arabic dialects BIBREF6 is more challenging than training systems to discriminate between, for example, Japanese and Finnish. Even so, as evidenced in this article, from a computational perspective, the algorithms and features used to discriminate between languages, language varieties, and dialects are identical. as Text Categorization is in some ways a special case of text categorization, and previous research has examined applying standard text categorization methods to BIBREF7 , BIBREF8 . BIBREF9 provides a definition of text categorization, which can be summarized as the task of mapping a document onto a pre-determined set of classes. This is a very broad definition, and indeed one that is applicable to a wide variety of tasks, amongst which falls modern-day . The archetypal text categorization task is perhaps the classification of newswire articles according to the topics that they discuss, exemplified by the Reuters-21578 dataset BIBREF10 . However, has particular characteristics that make it different from typical text categorization tasks: These distinguishing characteristics present unique challenges and offer particular opportunities, so much so that research in has generally proceeded independently of text categorization research. In this survey, we will examine the common themes and ideas that underpin research in . We begin with a brief history of research that has led to modern (history), and then proceed to review the literature, first introducing the mathematical notation used in the article (notation), and then providing synthesis and analysis of existing research, focusing specifically on the representation of text (features) and the learning algorithms used (methods). We examine the methods for evaluating the quality of the systems (evaluation) as well as the areas where has been applied (applications), and then provide an overview of “off-the-shelf” systems (ots). 
We conclude the survey with a discussion of the open issues in (openissues), enumerating issues and existing efforts to address them, as well as charting the main directions where further research in is required. Previous Surveys Although there are some dedicated survey articles, these tend to be relatively short; there have not been any comprehensive surveys of research in automated LI of text to date. The largest survey so far can be found in the literature review of Marco Lui's PhD thesis BIBREF11 , which served as an early draft and starting point for the current article. BIBREF12 provides a historical overview of language identification focusing on the use of language models. BIBREF13 gives a brief overview of some of the methods used for , and BIBREF14 provide a review of some of the techniques and applications used previously. BIBREF15 gives a short overview of some of the challenges, algorithms and available tools for . BIBREF16 provides a brief summary of , how it relates to other research areas, and some outstanding challenges, but only does so in general terms and does not go into any detail about existing work in the area. Another brief article about is BIBREF17 , which covers both of spoken language as well as of written documents, and also discusses of documents stored as images rather than digitally-encoded text. A Brief History of as a task predates computational methods – the earliest interest in the area was motivated by the needs of translators, and simple manual methods were developed to quickly identify documents in specific languages. The earliest known work to describe a functional program for text is by BIBREF18 , a statistician, who used multiple discriminant analysis to teach a computer how to distinguish, at the word level, between English, Swedish and Finnish. Mustonen compiled a list of linguistically-motivated character-based features, and trained his language identifier on 300 words for each of the three target languages. The training procedure created two discriminant functions, which were tested with 100 words for each language. The experiment resulted in 76% of the words being correctly classified; even by current standards this percentage would be seen as acceptable given the small amount of training material, although the composition of training and test data is not clear, making the experiment unreproducible. In the early 1970s, BIBREF19 considered the problem of automatic . According to BIBREF20 and the available abstract of Nakamura's article, his language identifier was able to distinguish between 25 languages written with the Latin alphabet. As features, the method used the occurrence rates of characters and words in each language. From the abstract it seems that, in addition to the frequencies, he used some binary presence/absence features of particular characters or words, based on manual . BIBREF20 wrote his master's thesis “Language Identification by Statistical Analysis” for the Naval Postgraduate School at Monterey, California. The continued interest and the need to use of text in military intelligence settings is evidenced by the recent articles of, for example, BIBREF21 , BIBREF22 , BIBREF23 , and BIBREF24 . As features for , BIBREF20 used, e.g., the relative frequencies of characters and character bigrams. 
With a majority vote classifier ensemble of seven classifiers using Kolmogor-Smirnov's Test of Goodness of Fit and Yule's characteristic ( INLINEFORM0 ), he managed to achieve 89% accuracy over 53 characters when distinguishing between English and Spanish. His thesis actually includes the identifier program code (for the IBM System/360 Model 67 mainframe), and even the language models in printed form. Much of the earliest work on automatic was focused on identification of spoken language, or did not make a distinction between written and spoken language. For example, the work of BIBREF25 is primarily focused on of spoken utterances, but makes a broader contribution in demonstrating the feasibility of on the basis of a statistical model of broad phonetic information. However, their experiments do not use actual speech data, but rather “synthetic” data in the form of phonetic transcriptions derived from written text. Another subfield of speech technology, speech synthesis, has also generated a considerable amount of research in the of text, starting from the 1980s. In speech synthesis, the need to know the source language of individual words is crucial in determining how they should be pronounced. BIBREF26 uses the relative frequencies of character trigrams as probabilities and determines the language of words using a Bayesian model. Church explains the method – that has since been widely used in LI – as a small part of an article concentrating on many aspects of letter stress assignment in speech synthesis, which is probably why BIBREF27 is usually attributed to being the one to have introduced the aforementioned method to of text. As Beesley's article concentrated solely on the problem of LI, this single focus probably enabled his research to have greater visibility. The role of the program implementing his method was to route documents to MT systems, and Beesley's paper more clearly describes what has later come to be known as a character model. The fact that the distribution of characters is relatively consistent for a given language was already well known. The highest-cited early work on automatic is BIBREF7 . Cavnar and Trenkle's method (which we describe in detail in outofplace) builds up per-document and per-language profiles, and classifies a document according to which language profile it is most similar to, using a rank-order similarity metric. They evaluate their system on 3478 documents in eight languages obtained from USENET newsgroups, reporting a best overall accuracy of 99.8%. Gertjan van Noord produced an implementation of the method of Cavnar and Trenkle named , which has become eponymous with the method itself. is packaged with pre-trained models for a number of languages, and so it is likely that the strong results reported by Cavnar and Trenkle, combined with the ready availability of an “off-the-shelf” implementation, has resulted in the exceptional popularity of this particular method. BIBREF7 can be considered a milestone in automatic , as it popularized the use of automatic methods on character models for , and to date the method is still considered a benchmark for automatic . On Notation This section introduces the notation used throughout this article to describe methods. We have translated the notation in the original papers to our notation, to make it easier to see the similarities and differences between the methods presented in the literature. 
The formulas presented could be used to implement language identifiers and re-evaluate the studies they were originally presented in. A corpus INLINEFORM0 consists of individual tokens INLINEFORM1 which may be bytes, characters or words. INLINEFORM2 is comprised of a finite sequence of individual tokens, INLINEFORM3 . The total count of individual tokens INLINEFORM4 in INLINEFORM5 is denoted by INLINEFORM6 . In a corpus INLINEFORM7 with non-overlapping segments INLINEFORM8 , each segment is referred to as INLINEFORM9 , which may be a short document or a word or some other way of segmenting the corpus. The number of segments is denoted as INLINEFORM10 . A feature INLINEFORM0 is some countable characteristic of the corpus INLINEFORM1 . When referring to the set of all features INLINEFORM2 in a corpus INLINEFORM3 , we use INLINEFORM4 , and the number of features is denoted by INLINEFORM5 . A set of unique features in a corpus INLINEFORM6 is denoted by INLINEFORM7 . The number of unique features is referred to as INLINEFORM8 . The count of a feature INLINEFORM9 in the corpus INLINEFORM10 is referred to as INLINEFORM11 . If a corpus is divided into segments INLINEFORM12 , the count of a feature INLINEFORM13 in INLINEFORM14 is defined as the sum of counts over the segments of the corpus, i.e. INLINEFORM15 . Note that the segmentation may affect the count of a feature in INLINEFORM16 as features do not cross segment borders. A frequently-used feature is an , which consists of a contiguous sequence of INLINEFORM0 individual tokens. An starting at position INLINEFORM1 in a corpus segment is denoted INLINEFORM2 , where positions INLINEFORM3 remain within the same segment of the corpus as INLINEFORM4 . If INLINEFORM5 , INLINEFORM6 is an individual token. When referring to all of length INLINEFORM7 in a corpus INLINEFORM8 , we use INLINEFORM9 and the count of all such is denoted by INLINEFORM10 . The count of an INLINEFORM11 in a corpus segment INLINEFORM12 is referred to as INLINEFORM13 and is defined by count: DISPLAYFORM0 The set of languages is INLINEFORM0 , and INLINEFORM1 denotes the number of languages. A corpus INLINEFORM2 in language INLINEFORM3 is denoted by INLINEFORM4 . A language model INLINEFORM5 based on INLINEFORM6 is denoted by INLINEFORM7 . The features given values by the model INLINEFORM8 are the domain INLINEFORM9 of the model. In a language model, a value INLINEFORM10 for the feature INLINEFORM11 is denoted by INLINEFORM12 . For each potential language INLINEFORM13 of a corpus INLINEFORM14 in an unknown language, a resulting score INLINEFORM15 is calculated. A corpus in an unknown language is also referred to as a test document. An Archetypal Language Identifier The design of a supervised language identifier can generally be deconstructed into four key steps: A representation of text is selected A model for each language is derived from a training corpus of labelled documents A function is defined that determines the similarity between a document and each language The language of a document is predicted based on the highest-scoring model On the Equivalence of Methods The theoretical description of some of the methods leaves room for interpretation on how to implement them. BIBREF28 define an algorithm to be any well-defined computational procedure. BIBREF29 introduces a three-tiered classification where programs implement algorithms and algorithms implement functions. 
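The four steps of the archetypal identifier listed above can be made concrete in a few lines. The sketch below uses character n-gram relative frequencies as the representation and a sum of log-probabilities as the scoring function; these particular choices, the n-gram length and the smoothing floor for unseen n-grams are illustrative assumptions, since the survey deliberately leaves them open.

import math
from collections import Counter

def ngrams(text, n=3):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def train(corpora, n=3):
    # corpora: dict mapping language -> training text; each model maps n-gram -> relative frequency
    models = {}
    for lang, text in corpora.items():
        counts = Counter(ngrams(text, n))
        total = sum(counts.values())
        models[lang] = {g: c / total for g, c in counts.items()}
    return models

def identify(mystery_text, models, n=3, floor=1e-8):
    # Score each language by the sum of log values; unseen n-grams receive a small floor value.
    scores = {lang: sum(math.log(model.get(g, floor)) for g in ngrams(mystery_text, n))
              for lang, model in models.items()}
    return max(scores, key=scores.get)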
The examples of functions given by BIBREF29 , sort and find max differ from our identify language as they are always solvable and produce the same results. In this survey, we have considered two methods to be the same if they always produce exactly the same results from exactly the same inputs. This would not be in line with the definition of an algorithm by BIBREF29 , as in his example there are two different algorithms mergesort and quicksort that implement the function sort, always producing identical results with the same input. What we in this survey call a method, is actually a function in the tiers presented by BIBREF29 . Features In this section, we present an extensive list of features used in , some of which are not self-evident. The equations written in the unified notation defined earlier show how the values INLINEFORM0 used in the language models are calculated from the tokens INLINEFORM1 . For each feature type, we generally introduce the first published article that used that feature type, as well as more recent articles where the feature type has been considered. Bytes and Encodings In , text is typically modeled as a stream of characters. However, there is a slight mismatch between this view and how text is actually stored: documents are digitized using a particular encoding, which is a mapping from characters (e.g. a character in an alphabet), onto the actual sequence of bytes that is stored and transmitted by computers. Encodings vary in how many bytes they use to represent each character. Some encodings use a fixed number of bytes for each character (e.g. ASCII), whereas others use a variable-length encoding (e.g. UTF-8). Some encodings are specific to a given language (e.g. GuoBiao 18030 or Big5 for Chinese), whereas others are specifically designed to represent as many languages as possible (e.g. the Unicode family of encodings). Languages can often be represented in a number of different encodings (e.g. UTF-8 and Shift-JIS for Japanese), and sometimes encodings are specifically designed to share certain codepoints (e.g. all single-byte UTF-8 codepoints are exactly the same as ASCII). Most troubling for , isomorphic encodings can be used to encode different languages, meaning that the determination of the encoding often doesn't help in honing in on the language. Infamous examples of this are the ISO-8859 and EUC encoding families. Encodings pose unique challenges for practical applications: a given language can often be encoded in different forms, and a given encoding can often map onto multiple languages. Some research has included an explicit encoding detection step to resolve bytes to the characters they represent BIBREF30 , effectively transcoding the document into a standardized encoding before attempting to identify the language. However, transcoding is computationally expensive, and other research suggests that it may be possible to ignore encoding and build a single per-language model covering multiple encodings simultaneously BIBREF31 , BIBREF32 . Another solution is to treat each language-encoding pair as a separate category BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . The disadvantage of this is that it increases the computational cost by modeling a larger number of classes. Most of the research has avoided issues of encoding entirely by assuming that all documents use the same encoding BIBREF37 . This may be a reasonable assumption in some settings, such as when processing data from a single source (e.g. 
all data from Twitter and Wikipedia is UTF-8 encoded). In practice, a disadvantage of this approach may be that some encodings are only applicable to certain languages (e.g. S-JIS for Japanese and Big5 for Chinese), so knowing that a document is in a particular encoding can provide information that would be lost if the document is transcoded to a universal encoding such as UTF-8. BIBREF38 used a parallel state machine to detect which encoding scheme a file could potentially have been encoded with. The knowledge of the encoding, if detected, is then used to narrow down the possible languages. Most features and methods do not make a distinction between bytes or characters, and because of this we will present feature and method descriptions in terms of characters, even if byte tokenization was actually used in the original research. Characters In this section, we review how individual character tokens have been used as features in . BIBREF39 used the formatting of numbers when distinguishing between Malay and Indonesian. BIBREF40 used the presence of non-alphabetic characters between the current word and the words before and after as features. BIBREF41 used emoticons (or emojis) in Arabic dialect identification with Naive Bayes (“NB”; see product). Non-alphabetic characters have also been used by BIBREF42 , BIBREF43 , BIBREF44 , and BIBREF45 . BIBREF46 used knowledge of alphabets to exclude languages where a language-unique character in a test document did not appear. BIBREF47 used alphabets collected from dictionaries to check if a word might belong to a language. BIBREF48 used the Unicode database to get the possible languages of individual Unicode characters. Lately, the knowledge of relevant alphabets has been used for also by BIBREF49 and BIBREF44 . Capitalization is mostly preserved when calculating character frequencies, but in contexts where it is possible to identify the orthography of a given document and where capitalization exists in the orthography, lowercasing can be used to reduce sparseness. In recent work, capitalization was used as a special feature by BIBREF42 , BIBREF43 , and BIBREF45 . BIBREF50 was the first to use the length of words in . BIBREF51 used the length of full person names comprising several words. Lately, the number of characters in words has been used for by BIBREF52 , BIBREF53 , BIBREF44 , and BIBREF45 . BIBREF52 also used the length of the two preceding words. BIBREF54 used character frequencies as feature vectors. In a feature vector, each feature INLINEFORM0 has its own integer value. The raw frequency – also called term frequency (TF) – is calculated for each language INLINEFORM1 as: DISPLAYFORM0 BIBREF20 was the first to use the probability of characters. He calculated the probabilities as relative frequencies, by dividing the frequency of a feature found in the corpus by the total count of features of the same type in the corpus. When the relative frequency of a feature INLINEFORM0 is used as a value, it is calculated for each language INLINEFORM1 as: DISPLAYFORM0 BIBREF55 calculated the relative frequencies of one character prefixes, and BIBREF56 did the same for one character suffixes. BIBREF57 calculated character frequency document frequency (“LFDF”) values. BIBREF58 compared their own Inverse Class Frequency (“ICF”) method with the Arithmetic Average Centroid (“AAC”) and the Class Feature Centroid (“CFC”) feature vector updating methods. In ICF a character appearing frequently only in some language gets more positive weight for that language. 
The values differ from Inverse Document Frequency (“IDF”, artemenko1), as they are calculated using also the frequencies of characters in other languages. Their ICF-based vectors generally performed better than those based on AAC or CFC. BIBREF59 explored using the relative frequencies of characters with similar discriminating weights. BIBREF58 also used Mutual Information (“MI”) and chi-square weighting schemes with characters. BIBREF32 compared the identification results of single characters with the use of character bigrams and trigrams when classifying over 67 languages. Both bigrams and trigrams generally performed better than unigrams. BIBREF60 also found that the identification results from identifiers using just characters are generally worse than those using character sequences. Character Combinations In this section we consider the different combinations of characters used in the literature. Character mostly consist of all possible characters in a given encoding, but can also consist of only alphabetic or ideographic characters. BIBREF56 calculated the co-occurrence ratios of any two characters, as well as the ratio of consonant clusters of different sizes to the total number of consonants. BIBREF61 used the combination of every bigram and their counts in words. BIBREF53 used the proportions of question and exclamation marks to the total number of the end of sentence punctuation as features with several machine learning algorithms. BIBREF62 used FastText to generate character n-gram embeddings BIBREF63 . Neural network generated embeddings are explained in cooccurrencesofwords. BIBREF20 used the relative frequencies of vowels following vowels, consonants following vowels, vowels following consonants and consonants following consonants. BIBREF52 used vowel-consonant ratios as one of the features with Support Vector Machines (“SVMs”, supportvectormachines), Decision Trees (“DTs”, decisiontrees), and Conditional Random Fields (“CRFs”, openissues:short). BIBREF41 used the existence of word lengthening effects and repeated punctuation as features. BIBREF64 used the presence of characters repeating more than twice in a row as a feature with simple scoring (simple1). BIBREF65 used more complicated repetitions identified by regular expressions. BIBREF66 used letter and character bigram repetition with a CRF. BIBREF67 used the count of character sequences with three or more identical characters, using several machine learning algorithms. Character are continuous sequences of characters of length INLINEFORM0 . They can be either consecutive or overlapping. Consecutive character bigrams created from the four character sequence door are do and or, whereas the overlapping bigrams are do, oo, and or. Overlapping are most often used in the literature. Overlapping produces a greater number and variety of from the same amount of text. BIBREF20 was the first to use combinations of any two characters. He calculated the relative frequency of each bigram. RFTable2 lists more recent articles where relative frequencies of of characters have been used. BIBREF20 also used the relative frequencies of two character combinations which had one unknown character between them, also known as gapped bigrams. BIBREF68 used a modified relative frequency of character unigrams and bigrams. Character trigram frequencies relative to the word count were used by BIBREF92 , who calculated the values INLINEFORM0 as in vega1. 
Let INLINEFORM1 be the word-tokenized segmentation of the corpus INLINEFORM2 of character tokens, then: DISPLAYFORM0 where INLINEFORM0 is the count of character trigrams INLINEFORM1 in INLINEFORM2 , and INLINEFORM3 is the total word count in the corpus. Later frequencies relative to the word count were used by BIBREF93 for character bigrams and trigrams. BIBREF25 divided characters into five phonetic groups and used a Markovian method to calculate the probability of each bigram consisting of these phonetic groups. In Markovian methods, the probability of a given character INLINEFORM0 is calculated relative to a fixed-size character context INLINEFORM1 in corpus INLINEFORM2 , as follows: DISPLAYFORM0 where INLINEFORM0 is an prefix of INLINEFORM1 of length INLINEFORM2 . In this case, the probability INLINEFORM3 is the value INLINEFORM4 , where INLINEFORM5 , in the model INLINEFORM6 . BIBREF94 used 4-grams with recognition weights which were derived from Markovian probabilities. MarkovianTable lists some of the more recent articles where Markovian character have been used. BIBREF110 was the first author to propose a full-fledged probabilistic language identifier. He defines the probability of a trigram INLINEFORM0 being written in the language INLINEFORM1 to be: DISPLAYFORM0 He considers the prior probabilities of each language INLINEFORM0 to be equal, which leads to: DISPLAYFORM0 BIBREF110 used the probabilities INLINEFORM0 as the values INLINEFORM1 in the language models. BIBREF111 used a list of the most frequent bigrams and trigrams with logarithmic weighting. BIBREF112 was the first to use direct frequencies of character as feature vectors. BIBREF113 used Principal Component Analysis (“PCA”) to select only the most discriminating bigrams in the feature vectors representing languages. BIBREF114 used the most frequent and discriminating byte unigrams, bigrams, and trigrams among their feature functions. They define the most discriminating features as those which have the most differing relative frequencies between the models of the different languages. BIBREF115 tested from two to five using frequencies as feature vectors, frequency ordered lists, relative frequencies, and Markovian probabilities. FrequencyVectorTable lists the more recent articles where the frequency of character have been used as features. In the method column, “RF” refers to Random Forest (cf. decisiontrees), “LR” to Logistic Regression (discriminantfunctions), “KRR” to Kernel Ridge Regression (vectors), “KDA” to Kernel Discriminant Analysis (vectors), and “NN” to Neural Networks (neuralnetworks). BIBREF47 used the last two and three characters of open class words. BIBREF34 used an unordered list of distinct trigrams with the simple scoring method (Simplescoring). BIBREF132 used Fisher's discriminant function to choose the 1000 most discriminating trigrams. BIBREF133 used unique 4-grams of characters with positive Decision Rules (Decisionrule). BIBREF134 used the frequencies of bi- and trigrams in words unique to a language. BIBREF135 used lists of the most frequent trigrams. BIBREF38 divided possible character bigrams into those that are commonly used in a language and to those that are not. They used the ratio of the commonly used bigrams to all observed bigrams to give a confidence score for each language. BIBREF136 used the difference between the ISO Latin-1 code values of two consecutive characters as well as two characters separated by another character, also known as gapped character bigrams. 
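The Markovian estimate and the Bayesian trigram formulation referred to above (the elided DISPLAYFORM0 placeholders) follow standard maximum-likelihood and Bayes-rule forms. The LaTeX block below is a generic reconstruction consistent with the surrounding prose rather than a verbatim copy of the survey's equations; u denotes an individual token and u_{i-n+1}^{i-1} the fixed-size context preceding u_i.

\begin{equation}
  P(u_i \mid u_{i-n+1}^{\,i-1}) \;=\; \frac{c(C,\, u_{i-n+1}^{\,i})}{c(C,\, u_{i-n+1}^{\,i-1})}
\end{equation}
\begin{equation}
  P(g \mid u_1^{N}) \;\propto\; P(g)\, \prod_{i=1}^{N} P_g(u_i \mid u_{i-n+1}^{\,i-1})
\end{equation}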
BIBREF137 used the IDF and the transition probability of trigrams. They calculated the IDF values INLINEFORM0 of trigrams INLINEFORM1 for each language INLINEFORM2 , as in artemenko1, where INLINEFORM3 is the number of trigrams INLINEFORM4 in the corpus of the language INLINEFORM5 and INLINEFORM6 is the number of languages in which the trigram INLINEFORM7 is found, where INLINEFORM8 is the language-segmented training corpus with each language as a single segment. DISPLAYFORM0 INLINEFORM0 is defined as: DISPLAYFORM0 BIBREF138 used from one to four, which were weighted with “TF-IDF” (Term Frequency–Inverse Document Frequency). TF-IDF was calculated as: DISPLAYFORM0 TF-IDF weighting or close variants have been widely used for . BIBREF139 used “CF-IOF” (Class Frequency-Inverse Overall Frequency) weighted 3- and 4-grams. BIBREF140 used the logarithm of the ratio of the counts of character bigrams and trigrams in the English and Hindi dictionaries. BIBREF141 used a feature weighting scheme based on mutual information (“MI”). They also tried weighting schemes based on the “GSS” (Galavotti, Sebastiani, and Simi) and “NGL” (Ng, Goh, and Low) coefficients, but using the MI-based weighting scheme proved the best in their evaluations when they used the sum of values method (sumvalues1). BIBREF67 used punctuation trigrams, where the first character has to be a punctuation mark (but not the other two characters). BIBREF142 used consonant bi- and trigrams which were generated from words after the vowels had been removed. The language models mentioned earlier consisted only of of the same size INLINEFORM0 . If from one to four were used, then there were four separate language models. BIBREF7 created ordered lists of the most frequent for each language. BIBREF143 used similar lists with symmetric cross-entropy. BIBREF144 used a Markovian method to calculate the probability of byte trigrams interpolated with byte unigrams. BIBREF145 created a language identifier based on character of different sizes over 281 languages, and obtained an identification accuracy of 62.8% for extremely short samples (5–9 characters). Their language identifier was used or evaluated by BIBREF146 , BIBREF147 , and BIBREF148 . BIBREF146 managed to improve the identification results by feeding the raw language distance calculations into an SVM. DifferingNgramTable3 lists recent articles where character of differing sizes have been used. “LR” in the methods column refer to Logistic Regression (maxent), “LSTM RNN” to Long Short-Term Memory Recurrent Neural Networks (neuralnetworks), and “DAN” to Deep Averaging Networks (neuralnetworks). BIBREF30 used up to the four last characters of words and calculated their relative frequencies. BIBREF149 used frequencies of 2–7-grams, normalized relative to the total number of in all the language models as well as the current language model. BIBREF60 compared the use of different sizes of in differing combinations, and found that combining of differing sizes resulted in better identification scores. BIBREF150 , BIBREF151 , BIBREF152 used mixed length domain-independent language models of byte from one to three or four. Mixed length language models were also generated by BIBREF36 and later by BIBREF153 , BIBREF101 , who used the most frequent and discriminating longer than two bytes, up to a maximum of 12 bytes, based on their weighted relative frequencies. INLINEFORM0 of the most frequent were extracted from training corpora for each language, and their relative frequencies were calculated. 
In the tests reported in BIBREF153 , INLINEFORM1 varied from 200 to 3,500 . Later BIBREF154 also evaluated different combinations of character as well as their combinations with words. BIBREF155 used mixed-order frequencies relative to the total number of in the language model. BIBREF61 used frequencies of from one to five and gapped 3- and 4-grams as features with an SVM. As an example, some gapped 4-grams from the word Sterneberg would be Senb, tree, enbr, and reeg. BIBREF156 used character as a backoff from Markovian word . BIBREF157 used the frequencies of word initial ranging from 3 to the length of the word minus 1. BIBREF158 used the most relevant selected using the absolute value of the Pearson correlation. BIBREF159 used only the first 10 characters from a longer word to generate the , while the rest were ignored. BIBREF160 used only those which had the highest TF-IDF scores. BIBREF43 used character weighted by means of the “BM25” (Best Match 25) weighting scheme. BIBREF161 used byte up to length 25. BIBREF61 used consonant sequences generated from words. BIBREF189 used the presence of vowel sequences as a feature with a NB classifier (see naivebayes) when distinguishing between English and transliterated Indian languages. BIBREF190 used a basic dictionary (basicdictionary) composed of the 400 most common character 4-grams. BIBREF46 and BIBREF110 used character combinations (of different sizes) that either existed in only one language or did not exist in one or more languages. Morphemes, Syllables and Chunks BIBREF191 used the suffixes of lexical words derived from untagged corpora. BIBREF192 used prefixes and suffixes determined using linguistic knowledge of the Arabic language. BIBREF193 used suffixes and prefixes in rule-based . BIBREF134 used morphemes and morpheme trigrams (morphotactics) constructed by Creutz's algorithm BIBREF194 . BIBREF195 used prefixes and suffixes constructed by his own algorithm, which was later also used by BIBREF196 . BIBREF197 used morpheme lexicons in . BIBREF196 compared the use of morphological features with the use of variable sized character . When choosing between ten European languages, the morphological features obtained only 26.0% accuracy while the reached 82.7%. BIBREF198 lemmatized Malay words in order to get the base forms. BIBREF199 used a morphological analyzer of Arabic. BIBREF70 used morphological information from a part-of-speech (POS) tagger. BIBREF189 and BIBREF64 used manually selected suffixes as features. BIBREF200 created morphological grammars to distinguish between Croatian and Serbian. BIBREF201 used morphemes created by Morfessor, but they also used manually created morphological rules. BIBREF102 used a suffix module containing the most frequent suffixes. BIBREF202 and BIBREF159 used word suffixes as features with CRFs. BIBREF119 used an unsupervised method to learn morphological features from training data. The method collects candidate affixes from a dictionary built using the training data. If the remaining part of a word is found from the dictionary after removing a candidate affix, the candidate affix is considered to be a morpheme. BIBREF119 used 5% of the most frequent affixes in language identification. BIBREF183 used character classified into different types, which included prefixes and suffixes. PrefixSuffixTable lists some of the more recent articles where prefixes and suffixes collected from a training corpus has been used for . BIBREF206 used trigrams composed of syllables. 
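The unsupervised affix collection of BIBREF119 described above can be sketched as follows. This is only an approximation of the procedure as summarised in the text (a candidate affix is kept when the remaining part of the word is itself found in the dictionary); the maximum affix length and the handling of the 5% cut-off are assumptions, and prefixes would be collected analogously.

from collections import Counter

def collect_suffixes(dictionary, max_len=4, keep_fraction=0.05):
    # dictionary: set of word types built from the training data of one language.
    candidates = Counter()
    for word in dictionary:
        for k in range(1, min(max_len, len(word) - 1) + 1):
            stem, suffix = word[:-k], word[-k:]
            if stem in dictionary:  # remaining part is itself a word, so keep the candidate affix
                candidates[suffix] += 1
    n_keep = max(1, int(keep_fraction * len(candidates)))
    return [s for s, _ in candidates.most_common(n_keep)]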
BIBREF198 used Markovian syllable bigrams for between Malay and English. Later BIBREF207 also experimented with syllable uni- and trigrams. BIBREF114 used the most frequent as well as the most discriminating Indian script syllables, called aksharas. They used single aksharas, akshara bigrams, and akshara trigrams. Syllables would seem to be especially apt in situations where distinction needs to be made between two closely-related languages. BIBREF96 used the trigrams of non-syllable chunks that were based on MI. BIBREF198 experimented also with Markovian bigrams using both character and grapheme bigrams, but the syllable bigrams proved to work better. Graphemes in this case are the minimal units of the writing system, where a single character may be composed of several graphemes (e.g. in the case of the Hangul or Thai writing systems). Later, BIBREF207 also used grapheme uni- and trigrams. BIBREF207 achieved their best results combining word unigrams and syllable bigrams with a grapheme back-off. BIBREF208 used the MADAMIRA toolkit for D3 decliticization and then used D3-token 5-grams. D3 decliticization is a way to preprocess Arabic words presented by BIBREF209 . Graphones are sequences of characters linked to sequences of corresponding phonemes. They are automatically deduced from a bilingual corpus which consists of words and their correct pronunciations using Joint Sequence Models (“JSM”). BIBREF210 used language tags instead of phonemes when generating the graphones and then used Markovian graphone from 1 to 8 in . Words BIBREF211 used the position of the current word in word-level . The position of words in sentences has also been used as a feature in code-switching detection by BIBREF52 . It had predictive power greater than the language label or length of the previous word. BIBREF18 used the characteristics of words as parts of discriminating functions. BIBREF212 used the string edit distance and overlap between the word to be identified and words in dictionaries. Similarly BIBREF140 used a modified edit distance, which considers the common spelling substitutions when Hindi is written using latin characters. BIBREF213 used the Minimum Edit Distance (“MED”). Basic dictionaries are unordered lists of words belonging to a language. Basic dictionaries do not include information about word frequency, and are independent of the dictionaries of other languages. BIBREF110 used a dictionary for as a part of his speech synthesizer. Each word in a dictionary had only one possible “language”, or pronunciation category. More recently, a basic dictionary has been used for by BIBREF214 , BIBREF52 , and BIBREF90 . Unique word dictionaries include only those words of the language, that do not belong to the other languages targeted by the language identifier. BIBREF215 used unique short words (from one to three characters) to differentiate between languages. Recently, a dictionary of unique words was used for by BIBREF116 , BIBREF216 , and BIBREF67 . BIBREF47 used exhaustive lists of function words collected from dictionaries. BIBREF217 used stop words – that is non-content or closed-class words – as a training corpus. Similarly, BIBREF218 used words from closed word classes, and BIBREF97 used lists of function words. BIBREF219 used a lexicon of Arabic words and phrases that convey modality. Common to these features is that they are determined based on linguistic knowledge. BIBREF220 used the most relevant words for each language. BIBREF221 used unique or nearly unique words. 
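A minimal sketch of the unique-word and basic-dictionary lookups described above is given below; it is a generic illustration under the assumption that tokenized training data is available per language, not a reproduction of any cited system:

def build_dictionaries(corpora):
    """corpora: dict mapping language -> iterable of tokens.
    Returns (basic, unique): per-language word sets and the subset of
    words found in exactly one language."""
    basic = {lang: set(tokens) for lang, tokens in corpora.items()}
    unique = {}
    for lang, words in basic.items():
        others = set().union(*(w for l, w in basic.items() if l != lang))
        unique[lang] = words - others
    return basic, unique

def identify_word(word, basic, unique):
    """Unique-word lookup first, then basic dictionary membership."""
    for lang, words in unique.items():
        if word in words:
            return lang
    hits = [lang for lang, words in basic.items() if word in words]
    return hits[0] if len(hits) == 1 else None  # ambiguous or unknown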
BIBREF80 used Information Gain Word-Patterns (“IG-WP”) to select the words with the highest information gain. BIBREF222 made an (unordered) list of the most common words for each language, as, more recently, did BIBREF223 , BIBREF83 , and BIBREF85 . BIBREF224 encoded the most common words to root forms with the Soundex algorithm. BIBREF225 collected the frequencies of words into feature vectors. BIBREF112 compared the use of character from 2 to 5 with the use of words. Using words resulted in better identification results than using character bigrams (test document sizes of 20, 50, 100 or 200 characters), but always worse than character 3-, 4- or 5-grams. However, the combined use of words and character 4-grams gave the best results of all tested combinations, obtaining 95.6% accuracy for 50 character sequences when choosing between 13 languages. BIBREF158 used TF-IDF scores of words to distinguish between language groups. Recently, the frequency of words has also been used for by BIBREF180 , BIBREF183 , BIBREF129 , and BIBREF142 . BIBREF226 and BIBREF227 were the first to use relative frequencies of words in . As did BIBREF112 for word frequencies, also BIBREF60 found that combining the use of character with the use of words provided the best results. His language identifier obtained 99.8% average recall for 50 character sequences for the 10 evaluated languages (choosing between the 13 languages known by the language identifier) when using character from 1 to 6 combined with words. BIBREF98 calculated the relative frequency of words over all the languages. BIBREF137 calculated the IDF of words, following the approach outlined in artemenko1. BIBREF177 calculated the Pointwise Mutual Information (“PMI”) for words and used it to group words to Chinese dialects or dialect groups. Recently, the relative frequency of words has also been used for by BIBREF184 , BIBREF148 and BIBREF91 BIBREF228 used the relative frequency of words with less than six characters. Recently, BIBREF83 also used short words, as did BIBREF45 . BIBREF229 used the relative frequency calculated from Google searches. Google was later also used by BIBREF96 and BIBREF230 . BIBREF231 created probability maps for words for German dialect identification between six dialects. In a word probability map, each predetermined geographic point has a probability for each word form. Probabilities were derived using a linguistic atlas and automatically-induced dialect lexicons. BIBREF232 used commercial spelling checkers, which utilized lexicons and morphological analyzers. The language identifier of BIBREF232 obtained 97.9% accuracy when classifying one-line texts between 11 official South African languages. BIBREF233 used the ALMORGEANA analyzer to check if the word had an analysis in modern standard Arabic. They also used sound change rules to use possible phonological variants with the analyzer. BIBREF234 used spellchecking and morphological analyzers to detect English words from Hindi–English mixed search queries. BIBREF235 used spelling checkers to distinguish between 15 languages, extending the work of BIBREF232 with dynamic model selection in order to gain better performance. BIBREF157 used a similarity count to find if mystery words were misspelled versions of words in a dictionary. BIBREF236 used an “LBG-VQ” (Linde, Buzo & Gray algorithm for Vector Quantization) approach to design a codebook for each language BIBREF237 . The codebook contained a predetermined number of codevectors. 
Each codeword represented the word it was generated from as well as zero or more words close to it in the vector space. Word Combinations BIBREF41 used the number of words in a sentence with NB. BIBREF53 and BIBREF45 used the sentence length calculated in both words and characters with several machine learning algorithms. BIBREF53 used the ratio to the total number of words of: once-occurring words, twice-occurring words, short words, long words, function words, adjectives and adverbs, personal pronouns, and question words. They also used the word-length distribution for words of 1–20 characters. BIBREF193 used at least the preceding and proceeding words with manual rules in word-level for text-to-speech synthesis. BIBREF238 used Markovian word with a Hidden Markov Model (“HMM”) tagger (othermethods). WordNgramTable lists more recent articles where word or similar constructs have been used. “PPM” in the methods column refers to Prediction by Partial Matching (smoothing), and “kNN” to INLINEFORM0 Nearest Neighbor classification (ensemble). BIBREF239 used word trigrams simultaneously with character 4-grams. He concluded that word-based models can be used to augment the results from character when they are not providing reliable identification results. WordCharacterNgramTable lists articles where both character and word have been used together. “CBOW” in the methods column refer to Continuous Bag of Words neural network (neuralnetworks), and “MIRA” to Margin Infused Relaxed Algorithm (supportvectormachines). BIBREF154 evaluated different combinations of word and character with SVMs. The best combination for language variety identification was using all the features simultaneously. BIBREF187 used normal and gapped word and character simultaneously. BIBREF240 uses word embeddings consisting of Positive Pointwise Mutual Information (“PPMI”) counts to represent each word type. Then they use Truncated Singular Value Decomposition (“TSVD”) to reduce the dimension of the word vectors to 100. BIBREF241 used INLINEFORM0 -means clustering when building dialectal Arabic corpora. BIBREF242 used features provided by Latent Semantic Analysis (“LSA”) with SVMs and NB. BIBREF243 present two models, the CBOW model and the continuous skip-gram model. The CBOW model can be used to generate a word given it's context and the skip-gram model can generate the context given a word. The projection matrix, which is the weight matrix between the input layer and the hidden layer, can be divided into vectors, one vector for each word in the vocabulary. These word-vectors are also referred to as word embeddings. The embeddings can be used as features in other tasks after the neural network has been trained. BIBREF244 , BIBREF245 , BIBREF80 , BIBREF246 , BIBREF247 , BIBREF248 , BIBREF62 , and BIBREF130 used word embeddings generated by the word2vec skip-gram model BIBREF243 as features in . BIBREF249 used word2vec word embeddings and INLINEFORM0 -means clustering. BIBREF250 , BIBREF251 , and BIBREF44 also used word embeddings created with word2vec. BIBREF167 trained both character and word embeddings using FastText text classification method BIBREF63 on the Discriminating between Similar Languages (“DSL”) 2016 shared task, where it reached low accuracy when compared with the other methods. BIBREF205 used FastText to train word vectors including subword information. Then he used these word vectors together with some additional word features to train a CRF-model which was used for codeswitching detection. 
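One common way of turning word embeddings into document-level features is simply to average them before classification; the sketch below illustrates that generic pattern (it is not the setup of any particular cited system) under the assumption that a pre-trained embedding table mapping tokens to vectors is available:

import numpy as np

def doc_vector(tokens, embeddings, dim=100):
    """Average the available word vectors; zero vector if none are found."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def nearest_centroid(doc_vec, centroids):
    """centroids: dict mapping language -> mean training document vector."""
    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom else 0.0
    return max(centroids, key=lambda lang: cos(doc_vec, centroids[lang]))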
BIBREF212 extracted features from the hidden layer of a Recurrent Neural Network (“RNN”) that had been trained to predict the next character in a string. They used the features with a SVM classifier. BIBREF229 evaluated methods for detecting foreign language inclusions and experimented with a Conditional Markov Model (“CMM”) tagger, which had performed well on Named Entity Recognition (“NER”). BIBREF229 was able to produce the best results by incorporating her own English inclusion classifier's decision as a feature for the tagger, and not using the taggers POS tags. BIBREF197 used syntactic parsers together with dictionaries and morpheme lexicons. BIBREF278 used composed of POS tags and function words. BIBREF173 used labels from a NER system, cluster prefixes, and Brown clusters BIBREF279 . BIBREF214 used POS tag from one to three and BIBREF43 from one to five, and BIBREF67 used POS tag trigrams with TF-IDF weighting. BIBREF203 , BIBREF42 , BIBREF53 , and BIBREF45 have also recently used POS tags. BIBREF80 used POS tags with emotion-labeled graphs in Spanish variety identification. In emotion-labeled graphs, each POS-tag was connected to one or more emotion nodes if a relationship between the original word and the emotion was found from the Spanish Emotion Lexicon. They also used POS-tags with IG-WP. BIBREF208 used the MADAMIRA tool for morphological analysis disambiguation. The polySVOX text analysis module described by BIBREF197 uses two-level rules and morpheme lexicons on sub-word level and separate definite clause grammars (DCGs) on word, sentence, and paragraph levels. The language of sub-word units, words, sentences, and paragraphs in multilingual documents is identified at the same time as performing syntactic analysis for the document. BIBREF280 converted sentences into POS-tag patterns using a word-POS dictionary for Malay. The POS-tag patterns were then used by a neural network to indicate whether the sentences were written in Malay or not. BIBREF281 used Jspell to detect differences in the grammar of Portuguese variants. BIBREF200 used a syntactic grammar to recognize verb-da-verb constructions, which are characteristic of the Serbian language. The syntactic grammar was used together with several morphological grammars to distinguish between Croatian and Serbian. BIBREF193 used the weighted scores of the words to the left and right of the word to be classified. BIBREF238 used language labels within an HMM. BIBREF282 used the language labels of other words in the same sentence to determine the language of the ambiguous word. The languages of the other words had been determined by the positive Decision Rules (Decisionrule), using dictionaries of unique words when possible. BIBREF213 , BIBREF71 used the language tags of the previous three words with an SVM. BIBREF283 used language labels of surrounding words with NB. BIBREF82 used the language probabilities of the previous word to determining weights for languages. BIBREF156 used unigram, bigram and trigram language label transition probabilities. BIBREF284 used the language labels for the two previous words as well as knowledge of whether code-switching had already been detected or not. BIBREF285 used the language label of the previous word to determine the language of an ambiguous word. BIBREF286 also used the language label of the previous word. BIBREF287 used the language identifications of 2–4 surrounding words for post-identification correction in word-level . BIBREF109 used language labels with a CRF. 
BIBREF52 used language labels of the current and two previous words in code-switching point prediction. Their predictive strength was lower than the count of code-switches, but better than the length or position of the word. All of the features were used together with NB, DT and SVM. BIBREF288 used language label bigrams with an HMM. BIBREF41 used the word-level language labels obtained with the approach of BIBREF289 for sentence-level dialect identification. Feature Smoothing Feature smoothing is required in order to handle the cases where not all features $f$ in a test document have been attested in the training corpora. Thus, it is used especially when the count of features is high, or when the amount of training data is low. Smoothing is usually handled as part of the method, and not pre-calculated into the language models. Most of the smoothing methods evaluated by BIBREF290 have been used in language identification, and we follow the order of methods in that article. In Laplace smoothing, an extra number of occurrences is added to every possible feature in the language model. BIBREF291 used Laplace's sample size correction (add-one smoothing) with the product of Markovian probabilities. BIBREF292 experimented with additive smoothing of 0.5, and noted that it was almost as good as Good-Turing smoothing. BIBREF290 calculate the smoothed value for each feature $f$ as: $$P(f) = \frac{c(f) + \lambda }{N + \lambda B},$$ where $P(f)$ is the probability estimate of $f$ in the model and $c(f)$ its frequency in the training corpus. $N$ is the total number of n-grams of length $n$ and $B$ the number of distinct n-grams in the training corpus. $\lambda $ is the Lidstone smoothing parameter. When using Laplace smoothing, $\lambda $ is equal to 1, and with Lidstone smoothing, $\lambda $ is usually set to a value between 0 and 1. The penalty values used by BIBREF170 with the HeLI method function as a form of additive smoothing. BIBREF145 evaluated additive, Katz, absolute discounting, and Kneser-Ney smoothing methods. Additive smoothing produced the least accurate results of the four methods. BIBREF293 and BIBREF258 evaluated NB with several different Lidstone smoothing values. BIBREF107 used additive smoothing with character n-grams as a baseline classifier, which they were unable to beat with Convolutional Neural Networks (“CNNs”). BIBREF292 used Good-Turing smoothing with the product of Markovian probabilities. BIBREF290 define the Good-Turing smoothed count $c^*(f)$ as: $$c^*(f) = (c(f)+1)\,\frac{n_{c(f)+1}}{n_{c(f)}},$$ where $n_r$ is the number of features occurring exactly $r$ times in the corpus $C$. Lately Good-Turing smoothing has been used by BIBREF294 and BIBREF88. BIBREF220 used Jelinek-Mercer smoothing over the relative frequencies of words, calculated as a linear interpolation of the language-specific relative frequency and a background relative frequency computed over the whole training corpus: $$P_{JM}(f \mid C_g) = (1-\lambda )\,P(f \mid C_g) + \lambda \,P(f \mid C),$$ where $\lambda $ is a smoothing parameter, which is usually some small value like 0.1. BIBREF105 used character 1–8 grams with Jelinek-Mercer smoothing. Their language identifier using character 5-grams achieved 3rd place (out of 12) in the TweetLID shared task constrained track. BIBREF95 and BIBREF145 used Katz back-off smoothing BIBREF295 from the SRILM toolkit, with perplexity. Katz smoothing is an extension of Good-Turing discounting. The probability mass left over from the discounted n-grams is then distributed over unseen n-grams via a smoothing factor. In the smoothing evaluations by BIBREF145, Katz smoothing performed almost as well as absolute discounting, which produced the best results.
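The additive (Lidstone) estimate above translates directly into code; the following is a generic sketch with $\lambda$ as a free parameter ($\lambda = 1$ gives Laplace smoothing), and the vocabulary size is an assumption supplied by the caller:

def lidstone_probability(counts, feature, lam=0.5, vocab_size=None):
    """counts: dict of n-gram frequencies for one language model.
    vocab_size: number of distinct n-grams assumed possible; defaults to
    the number of distinct n-grams seen in training."""
    total = sum(counts.values())                       # N: total n-gram count
    b = vocab_size if vocab_size is not None else len(counts)  # B: distinct n-grams
    return (counts.get(feature, 0) + lam) / (total + lam * b)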
BIBREF296 evaluated Witten-Bell, Katz, and absolute discounting smoothing methods. Witten-Bell got 87.7%, Katz 87.5%, and absolute discounting 87.4% accuracy with character 4-grams. BIBREF297 used the PPM-C algorithm for . PPM-C is basically a product of Markovian probabilities with an escape scheme. If an unseen context is encountered for the character being processed, the escape probability is used together with a lower-order model probability. In PPM-C, the escape probability is the sum of the seen contexts in the language model. PPM-C was lately used by BIBREF165 . The PPM-D+ algorithm was used by BIBREF298 . BIBREF299 and BIBREF300 used a PPM-A variant. BIBREF301 also used PPM. The language identifier of BIBREF301 obtained 91.4% accuracy when classifying 100 character texts between 277 languages. BIBREF302 used Witten-Bell smoothing with perplexity. BIBREF303 used a Chunk-Based Language Model (“CBLM”), which is similar to PPM models. BIBREF145 used several smoothing techniques with Markovian probabilities. Absolute discounting from the VariKN toolkit performed the best. BIBREF145 define the smoothing as follows: a constant INLINEFORM0 is subtracted from the counts INLINEFORM1 of all observed INLINEFORM2 and the held-out probability mass is distributed between the unseen in relation to the probabilities of lower order INLINEFORM3 , as follows: DISPLAYFORM0 where INLINEFORM0 is a scaling factor that makes the conditional distribution sum to one. Absolute discounting with Markovian probabilities from the VariKN toolkit was later also used by BIBREF146 , BIBREF147 , and BIBREF148 . The original Kneser-Ney smoothing is based on absolute discounting with an added back-off function to lower-order models BIBREF145 . BIBREF290 introduced a modified version of the Kneser-Ney smoothing using interpolation instead of back-off. BIBREF304 used the Markovian probabilities with Witten-Bell and modified Kneser-Ney smoothing. BIBREF88 , BIBREF166 , and BIBREF261 also recently used modified Kneser-Ney discounting. BIBREF119 used both original and modified Kneser-Ney smoothings. In the evaluations of BIBREF145 , Kneser-Ney smoothing fared better than additive, but somewhat worse than the Katz and absolute discounting smoothing. Lately BIBREF109 also used Kneser-Ney smoothing. BIBREF86 , BIBREF87 evaluated several smoothing techniques with character and word : Laplace/Lidstone, Witten-Bell, Good-Turing, and Kneser-Ney. In their evaluations, additive smoothing with 0.1 provided the best results. Good-Turing was not as good as additive smoothing, but better than Witten-Bell and Kneser-Ney smoothing. Witten-Bell proved to be clearly better than Kneser-Ney. Methods In recent years there has been a tendency towards attempting to combine several different types of features into one classifier or classifier ensemble. Many recent studies use readily available classifier implementations and simply report how well they worked with the feature set used in the context of their study. There are many methods presented in this article that are still not available as out of the box implementations, however. There are many studies which have not been re-evaluated at all, going as far back as BIBREF18 . Our hope is that this article will inspire new studies and many previously unseen ways of combining features and methods. In the following sections, the reviewed articles are grouped by the methods used for . 
Decision Rules BIBREF46 used a positive Decision Rules with unique characters and character , that is, if a unique character or character was found, the language was identified. The positive Decision Rule (unique features) for the test document INLINEFORM0 and the training corpus INLINEFORM1 can be formulated as follows: DISPLAYFORM0 where INLINEFORM0 is the set of unique features in INLINEFORM1 , INLINEFORM2 is the corpus for language INLINEFORM3 , and INLINEFORM4 is a corpus of any other language INLINEFORM5 . Positive decision rules can also be used with non-unique features when the decisions are made in a certain order. For example, BIBREF52 presents the pseudo code for her dictionary lookup tool, where these kind of decisions are part of an if-then-else statement block. Her (manual) rule-based dictionary lookup tool works better for Dutch–English code-switching detection than the SVM, DT, or CRF methods she experiments with. The positive Decision Rule has also been used recently by BIBREF85 , BIBREF190 , BIBREF287 , BIBREF216 , BIBREF305 , BIBREF169 , and BIBREF214 . In the negative Decision Rule, if a character or character combination that was found in INLINEFORM0 does not exist in a particular language, that language is omitted from further identification. The negative Decision Rule can be expressed as: DISPLAYFORM0 where INLINEFORM0 is the corpus for language INLINEFORM1 . The negative Decision Rule was first used by BIBREF47 in . BIBREF118 evaluated the JRIP classifier from the Waikato Environment for Knowledge Analysis (“WEKA”). JRIP is an implementation of the propositional rule learner. It was found to be inferior to the SVM, NB and DT algorithms. In isolation the desicion rules tend not to scale well to larger numbers of languages (or very short test documents), and are thus mostly used in combination with other methods or as a Decision Tree. Decision Trees BIBREF306 were the earliest users of Decision Trees (“DT”) in . They used DT based on characters and their context without any frequency information. In training the DT, each node is split into child nodes according to an information theoretic optimization criterion. For each node a feature is chosen, which maximizes the information gain at that node. The information gain is calculated for each feature and the feature with the highest gain is selected for the node. In the identification phase, the nodes are traversed until only one language is left (leaf node). Later, BIBREF196 , BIBREF307 , and BIBREF308 have been especially successful in using DTs. Random Forest (RF) is an ensemble classifier generating many DTs. It has been succesfully used in by BIBREF140 , BIBREF201 , BIBREF309 , and BIBREF185 , BIBREF172 . Simple Scoring In simple scoring, each feature in the test document is checked against the language model for each language, and languages which contain that feature are given a point, as follows: DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 th feature found in the test document INLINEFORM2 . The language scoring the most points is the winner. Simple scoring is still a good alternative when facing an easy problem such as preliminary language group identification. It was recently used for this purpose by BIBREF246 with a basic dictionary. They achieved 99.8% accuracy when identifying between 6 language groups. BIBREF310 use a version of simple scoring as a distance measure, assigning a penalty value to features not found in a model. In this version, the language scoring the least amount of points is the winner. 
Their language identifier obtained 100% success rate with character 4-grams when classifying relatively large documents (from 1 to 3 kilobytes), between 10 languages. Simple scoring was also used lately by BIBREF166 , BIBREF311 , and BIBREF90 . Sum or Average of Values The sum of values can be expressed as: DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 th feature found in the test document INLINEFORM2 , and INLINEFORM3 is the value for the feature in the language model of the language INLINEFORM4 . The language with the highest score is the winner. The simplest case of sumvalues1 is when the text to be identified contains only one feature. An example of this is BIBREF157 who used the frequencies of short words as values in word-level identification. For longer words, he summed up the frequencies of different-sized found in the word to be identified. BIBREF210 first calculated the language corresponding to each graphone. They then summed up the predicted languages, and the language scoring the highest was the winner. When a tie occurred, they used the product of the Markovian graphone . Their method managed to outperform SVMs in their tests. BIBREF46 used the average of all the relative frequencies of the in the text to be identified. BIBREF312 evaluated several variations of the LIGA algorithm introduced by BIBREF313 . BIBREF308 and BIBREF148 also used LIGA and logLIGA methods. The average or sum of relative frequencies was also used recently by BIBREF85 and BIBREF108 . BIBREF57 summed up LFDF values (see characters), obtaining 99.75% accuracy when classifying document sized texts between four languages using Arabic script. BIBREF110 calculates the score of the language for the test document INLINEFORM0 as the average of the probability estimates of the features, as follows: DISPLAYFORM0 where INLINEFORM0 is the number of features in the test document INLINEFORM1 . BIBREF153 summed weighted relative frequencies of character , and normalized the score by dividing by the length (in characters) of the test document. Taking the average of the terms in the sums does not change the order of the scored languages, but it gives comparable results between different lengths of test documents. BIBREF92 , BIBREF314 summed up the feature weights and divided them by the number of words in the test document in order to set a threshold to detect unknown languages. Their language identifier obtained 89% precision and 94% recall when classifying documents between five languages. BIBREF192 used a weighting method combining alphabets, prefixes, suffixes and words. BIBREF233 summed up values from a word trigram ranking, basic dictionary and morphological analyzer lookup. BIBREF282 summed up language labels of the surrounding words to identify the language of the current word. BIBREF200 summed up points awarded by the presence of morphological and syntactic features. BIBREF102 used inverse rank positions as values. BIBREF158 computed the sum of keywords weighted with TF-IDF. BIBREF315 summed up the TF-IDF derived probabilities of words. Product of Values The product of values can be expressed as follows: DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 th feature found in test document INLINEFORM2 , and INLINEFORM3 is the value for the feature in the language model of language INLINEFORM4 . The language with the highest score is the winner. Some form of feature smoothing is usually required with the product of values method to avoid multiplying by zero. 
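To make the scoring schemes of this and the preceding subsections concrete, the following minimal sketches implement simple scoring, the sum of values, and the product of values computed as a sum of logarithms; the flooring of unseen features is only a crude stand-in for the proper feature smoothing discussed above, and the relative-frequency model format is an assumption of the example:

import math

def simple_score(features, model):
    """One point for every feature of the test document found in the model."""
    return sum(1 for f in features if f in model)

def sum_of_values(features, model):
    """Sum of the model values (e.g. relative frequencies) of the features."""
    return sum(model.get(f, 0.0) for f in features)

def log_product_of_values(features, model, floor=1e-9):
    """Product of values computed as a sum of logarithms; the floor keeps
    unseen features from zeroing out the product."""
    return sum(math.log(max(model.get(f, 0.0), floor)) for f in features)

def identify(features, models, score_fn):
    """models: {language: {feature: value}}; the highest-scoring language wins."""
    return max(models, key=lambda lang: score_fn(features, models[lang]))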
BIBREF26 was the first to use the product of relative frequencies and it has been widely used ever since; recent examples include BIBREF86 , BIBREF87 , BIBREF161 , and BIBREF148 . Some of the authors use a sum of log frequencies rather than a product of frequencies to avoid underflow issues over large numbers of features, but the two methods yield the same relative ordering, with the proviso that the maximum of multiplying numbers between 0 and 1 becomes the minimum of summing their negative logarithms, as can be inferred from: DISPLAYFORM0 When (multinomial) NB is used in , each feature used has a probability to indicate each language. The probabilities of all features found in the test document are multiplied for each language, and the language with the highest probability is selected, as in productvalues1. Theoretically the features are assumed to be independent of each other, but in practice using features that are functionally dependent can improve classification accuracy BIBREF316 . NB implementations have been widely used for , usually with a more varied set of features than simple character or word of the same type and length. The features are typically represented as feature vectors given to a NB classifier. BIBREF283 trained a NB classifier with language labels of surrounding words to help predict the language of ambiguous words first identified using an SVM. The language identifier used by BIBREF77 obtained 99.97% accuracy with 5-grams of characters when classifying sentence-sized texts between six language groups. BIBREF265 used a probabilistic model similar to NB. BIBREF252 used NB and naive Bayes EM, which uses the Expectation–Maximization (“EM”) algorithm in a semi-supervised setting to improve accuracy. BIBREF4 used Gaussian naive Bayes (“GNB”, i.e. NB with Gaussian estimation over continuous variables) from scikit-learn. In contrast to NB, in Bayesian networks the features are not assumed to be independent of each other. The network learns the dependencies between features in a training phase. BIBREF315 used a Bayesian Net classifier in two-staged (group first) over the open track of the DSL 2015 shared task. BIBREF130 similarly evaluated Bayesian Nets, but found them to perform worse than the other 11 algorithms they tested. BIBREF25 used the product of the Markovian probabilities of character bigrams. The language identifier created by BIBREF153 , BIBREF101 , “whatlang”, obtains 99.2% classification accuracy with smoothing for 65 character test strings, when distinguishing between 1,100 languages. The product of Markovian probabilities has recently also been used by BIBREF109 and BIBREF260 . BIBREF170 use a word-based backoff method called HeLI. Here, each language is represented by several different language models, only one of which is used for each word found in the test document. The language models for each language are: a word-level language model, and one or more models based on character of order 1– INLINEFORM0 . When a word that is not included in the word-level model is encountered in a test document, the method backs off to using character of the size INLINEFORM1 . If there is not even a partial coverage here, the method backs off to lower order and continues backing off until at least a partial coverage is obtained (potentially all the way to character unigrams). 
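A rough sketch of the word-to-character-n-gram backoff idea just described is given below; it is a deliberate simplification for illustration only and omits the penalty values and other details of the actual HeLI method:

def backoff_score(word, word_model, char_models, max_n=6):
    """word_model: {word: negative log probability} for one language.
    char_models: {n: {ngram: negative log probability}} for the same language.
    Returns a simplified, penalty-free score for a single word."""
    if word in word_model:
        return word_model[word]
    for n in range(max_n, 0, -1):                      # back off to shorter n-grams
        grams = [word[i:i + n] for i in range(len(word) - n + 1)]
        known = [char_models[n][g] for g in grams if g in char_models.get(n, {})]
        if known:                                      # at least partial coverage
            return sum(known) / len(known)
    return float("inf")                                # nothing matched at all

def identify_document(words, models):
    """models: {language: (word_model, char_models)}; lowest total score wins."""
    totals = {lang: sum(backoff_score(w, wm, cm) for w in words)
              for lang, (wm, cm) in models.items()}
    return min(totals, key=totals.get)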
The system of BIBREF170 implementing the HeLI method attained shared first place in the closed track of the DSL 2016 shared task BIBREF317, and was the best method tested by BIBREF148 for test documents longer than 30 characters. Similarity Measures The well-known method of BIBREF7 uses overlapping character n-grams of varying sizes based on words. The language models are created by tokenizing the training texts for each language $g$ into words, and then padding each word with spaces, one before and four after. Each padded word is then divided into overlapping character n-grams of sizes 1–5, and the counts of every unique n-gram are calculated over the training corpus. The n-grams are ordered by frequency, and the $K$ most frequent n-grams, $\mathrm{dom}(O(C_g))$, are used as the domain of the language model $O(C_g)$ for the language $g$. The rank of an n-gram $f$ in language $g$ is determined by its frequency in the training corpus $C_g$ and denoted $\mathrm{rank}_{C_g}(f)$. During identification, the test document $D$ is treated in a similar way and a corresponding model $O(D)$ of its $K$ most frequent n-grams is created. Then a distance score is calculated between the model of the test document and each of the language models. The value $v_{C_g}(f)$ is calculated as the difference in ranks between $\mathrm{rank}_{C_g}(f)$ and $\mathrm{rank}_{D}(f)$ of the n-gram $f$ in the domain $\mathrm{dom}(O(D))$ of the model of the test document. If an n-gram is not found in a language model, a special penalty value $p$ is added to the total score of the language for each missing n-gram. The penalty value should be higher than the maximum possible distance between ranks: $$v_{C_g}(f) = \begin{cases} \left|\mathrm{rank}_{C_g}(f) - \mathrm{rank}_{D}(f)\right| & \text{if } f \in \mathrm{dom}(O(C_g)) \\ p & \text{otherwise} \end{cases}$$ The score $R_g(D)$ for each language $g$ is the sum of the values, as in sumvalues1. The language with the lowest score $R_g(D)$ is selected as the identified language. The method is equivalent to Spearman's measure of disarray BIBREF318. The out-of-place method has been widely used in the literature as a baseline. In the evaluations of BIBREF148 for 285 languages, the out-of-place method achieved an F-score of 95% for 35-character test documents. It was the fourth best of the seven evaluated methods for test document lengths over 20 characters.
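The rank-order (“out-of-place”) distance described above can be sketched as follows; the profile size and the penalty rule are illustrative defaults, not the settings of any particular cited system:

from collections import Counter

def rank_profile(text, k=400, n_max=5):
    """Rank the k most frequent character n-grams (sizes 1..n_max) of
    space-padded words, as in the method described above."""
    grams = Counter()
    for word in text.split():
        padded = " " + word + "    "            # one space before, four after
        for n in range(1, n_max + 1):
            for i in range(len(padded) - n + 1):
                grams[padded[i:i + n]] += 1
    return {g: rank for rank, (g, _) in enumerate(grams.most_common(k))}

def out_of_place(doc_profile, lang_profile):
    """Sum of rank differences; n-grams missing from the language model
    receive a penalty larger than any possible rank difference."""
    penalty = max(len(doc_profile), len(lang_profile)) + 1
    return sum(abs(rank - lang_profile[g]) if g in lang_profile else penalty
               for g, rank in doc_profile.items())

# The language whose training profile yields the smallest distance is selected.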
Local Rank Distance (“LRD”) BIBREF319 is a measure of the difference between two strings. LRD is calculated by adding together the distances identical units (for example character n-grams) are from each other between the two strings. The distance is only calculated within a local window of predetermined length. BIBREF122 and BIBREF320 used LRD with a Radial Basis Function (“RBF”) kernel (see RBF). For learning they experimented with both Kernel Discriminant Analysis (“KDA”) and Kernel Ridge Regression (“KRR”). BIBREF248 also used KDA. BIBREF224 calculated the Levenshtein distance between the language models and each word in the mystery text. The similarity score for each language was the inverse of the sum of the Levenshtein distances. Their language identifier obtained 97.7% precision when classifying texts of two to four words between five languages. Later BIBREF216 used Levenshtein distance for Algerian dialect identification and BIBREF305 for query word identification. BIBREF321, BIBREF322, BIBREF323, and BIBREF324 calculated the difference between probabilities as in Equation EQREF109: $$R_g(D) = \sum _{f \in D} \left| P_D(f) - P_{C_g}(f) \right|,$$ where $P_D(f)$ is the probability for the feature $f$ in the mystery text and $P_{C_g}(f)$ the corresponding probability in the language model of the language $g$. The language with the lowest score $R_g(D)$ is selected as the most likely language for the mystery text. BIBREF239, BIBREF262 used the log probability difference and the absolute log probability difference. The log probability difference proved slightly better, obtaining a precision of 94.31% using both character and word n-grams when classifying 100 character texts between 53 language-encoding pairs. Depending on the algorithm, it can be easier to view language models as vectors of weights over the target features. In the following methods, each language is represented by one or more feature vectors. Methods where each feature type is represented by only one feature vector are also sometimes referred to as centroid-based BIBREF58 or nearest prototype methods. Distance measures are generally applied to all features included in the feature vectors. BIBREF31 calculated the squared Euclidean distance between feature vectors, which for a test-document vector $v$ and a language vector $u$ can be calculated as: $$d_{SE}(v,u) = \sum _{i} (v_i - u_i)^2$$ BIBREF93 used the simQ similarity measure, which is closely related to the squared Euclidean distance. BIBREF155 investigated the language identification of multilingual documents using a Stochastic Learning Weak Estimator (“SLWE”) method. In SLWE, the document is processed one word at a time and the language of each word is identified using a feature vector representing the current word as well as the words processed so far. This feature vector includes all possible units from the language models – in their case mixed-order character n-grams from one to four. The vector is updated using the SLWE updating scheme to increase the probabilities of units found in the current word. The probabilities of units that have been found in previous words, but not in the current one, are on the other hand decreased. After processing each word, the distance of the feature vector to the probability distribution of each language is calculated, and the best-matching language is chosen as the language of the current word. Their language identifier obtained 96.0% accuracy when classifying sentences with ten words between three languages. They used the Euclidean distance as the distance measure, as follows: $$d_{E}(v,u) = \sqrt{\sum _{i} (v_i - u_i)^2}$$ BIBREF325 compared the use of Euclidean distance with their own similarity functions. BIBREF112 calculated the cosine angle between the feature vector of the test document and the feature vectors acting as language models. This is also called the cosine similarity and is calculated as follows: $$\cos (v,u) = \frac{\sum _{i} v_i u_i}{\sqrt{\sum _{i} v_i^2}\,\sqrt{\sum _{i} u_i^2}}$$ The method of BIBREF112 was evaluated by BIBREF326 in the context of the language identification of multilingual documents. The cosine similarity was used recently by BIBREF131. One common trick with cosine similarity is to pre-normalise the feature vectors to unit length (e.g. BIBREF36), in which case the calculation takes the form of the simple dot product: $$v \cdot u = \sum _{i} v_i u_i$$ BIBREF60 used the chi-squared distance, calculated as follows: $$d_{\chi ^2}(v,u) = \sum _{i} \frac{(v_i - u_i)^2}{u_i}$$ BIBREF85 compared Manhattan, Bhattacharyya, chi-squared, Canberra, Bray Curtis, histogram intersection, correlation distances, and out-of-place distances, and found the out-of-place method to be the most accurate.
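The vector-space view above reduces identification to a nearest-prototype search; the sketch below shows cosine similarity between a test-document vector and per-language centroid vectors, under the assumption that the feature vocabulary is fixed in advance:

import numpy as np

def vectorize(counts, vocabulary):
    """Map a {feature: weight} dict onto a fixed-order feature vector."""
    return np.array([counts.get(f, 0.0) for f in vocabulary], dtype=float)

def cosine(v, u):
    denom = np.linalg.norm(v) * np.linalg.norm(u)
    return float(v @ u) / denom if denom else 0.0

def nearest_prototype(doc_counts, centroids, vocabulary):
    """centroids: {language: {feature: weight}}; highest cosine similarity wins.
    Pre-normalising all vectors to unit length reduces this to a dot product."""
    v = vectorize(doc_counts, vocabulary)
    return max(centroids,
               key=lambda lang: cosine(v, vectorize(centroids[lang], vocabulary)))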
BIBREF239, BIBREF262 used cross-entropy and symmetric cross-entropy. Cross-entropy is calculated as follows, where $P_D(f)$ and $P_{C_g}(f)$ are the probabilities of the feature $f$ in the test document $D$ and the corpus $C_g$: $$H(D, C_g) = -\sum _{f} P_D(f) \log P_{C_g}(f)$$ Symmetric cross-entropy is calculated as: $$H_{sym}(D, C_g) = -\sum _{f} \left( P_D(f) \log P_{C_g}(f) + P_{C_g}(f) \log P_D(f) \right)$$ For cross-entropy, the distribution $P_{C_g}$ must be smoothed, and for symmetric cross-entropy, both probability distributions must be smoothed. Cross-entropy was used recently by BIBREF161. BIBREF301 used a cross-entropy estimating method they call the Mean of Matching Statistics (“MMS”). In MMS every possible suffix of the mystery text $D$ is compared to the language model of each language, and the average of the lengths of the longest possible units in the language model matching the beginning of each suffix is calculated. BIBREF327 and BIBREF32 calculated the relative entropy between the language models and the test document, as follows: $$D_{KL}(P_D \,\Vert \, P_{C_g}) = \sum _{f} P_D(f) \log \frac{P_D(f)}{P_{C_g}(f)}$$ This method is also commonly referred to as Kullback-Leibler (“KL”) distance or skew divergence. BIBREF60 compared relative entropy with the product of the relative frequencies for different-sized character n-grams, and found that relative entropy was only competitive when used with character bigrams. The product of relative frequencies gained clearly higher recall with higher-order n-grams when compared with relative entropy. BIBREF239, BIBREF262 also used the RE and MRE measures, which are based on relative entropy; MRE is the symmetric version of the same measure. In the tests performed by BIBREF239, BIBREF262, the RE measure with character n-grams outperformed the other tested methods, obtaining 98.51% precision when classifying 100 character texts between 53 language-encoding pairs. BIBREF304 used a logistic regression (“LR”) model (also commonly referred to as “maximum entropy” within NLP), smoothed with a Gaussian prior. BIBREF328 gave a formulation of LR for character-based features that involves a normalization factor and the word count of the word-tokenized test document. BIBREF158 used an LR classifier and found it to be considerably faster than an SVM, with comparable results. Their LR classifier ranked 6 out of 9 on the closed submission track of the DSL 2015 shared task. BIBREF199 used Adaptive Logistic Regression, which automatically optimizes its parameters. In recent years LR has been widely used for language identification. BIBREF95 was the first to use perplexity for language identification, in the manner of a language model. He calculated the perplexity for the test document $D$ as: $$H(D) = -\frac{1}{m}\sum _{i=1}^{m} \log _2 P(f_i)$$ $$PP(D) = 2^{H(D)},$$ where the $P(f_i)$ were the Katz-smoothed relative frequencies of the $m$ word n-grams $f_i$ of length $n$ in $D$. BIBREF146 and BIBREF148 evaluated the best performing method used by BIBREF145. Character n-gram based perplexity was the best method for extremely short texts in the evaluations of BIBREF148, but for longer sequences the methods of BIBREF36 and BIBREF60 proved to be better. Lately, BIBREF182 also used perplexity. BIBREF20 used Yule's characteristic K and the Kolmogorov-Smirnov goodness of fit test to categorize languages. Kolmogorov-Smirnov proved to be the better of the two, obtaining 89% recall for 53 characters (one punch card) of text when choosing between two languages. In the goodness of fit test, the ranks of features in the models of the languages and the test document are compared.
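As an illustration of perplexity-based scoring, the sketch below computes the average negative log probability and the perplexity of a test document under one language model; simple additive smoothing stands in here for the Katz smoothing used in the cited work, and the vocabulary size is an assumed parameter:

import math

def perplexity(features, counts, lam=0.1, vocab_size=10**6):
    """features: n-grams of the test document; counts: n-gram counts of one
    language model. Lower perplexity indicates a better-matching language."""
    total = sum(counts.values())
    log_prob = 0.0
    for f in features:
        p = (counts.get(f, 0) + lam) / (total + lam * vocab_size)
        log_prob += math.log2(p)
    return 2 ** (-log_prob / max(len(features), 1))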
BIBREF329 experimented with Jiang and Conrath's (JC) distance BIBREF330 and Lin's similarity measure BIBREF331 , as well as the out-of-place method. They conclude that Lin's similarity measure was consistently the most accurate of the three. JC-distance measure was later evaluated by BIBREF239 , BIBREF262 , and was outperformed by the RE measure. BIBREF39 and BIBREF332 calculated special ratios from the number of trigrams in the language models when compared with the text to be identified. BIBREF333 , BIBREF334 , BIBREF335 used the quadratic discrimination score to create the feature vectors representing the languages and the test document. They then calculated the Mahalanobis distance between the languages and the test document. Their language identifier obtained 98.9% precision when classifying texts of four “screen lines” between 19 languages. BIBREF336 used odds ratio to identify the language of parts of words when identifying between two languages. Odds ratio for language INLINEFORM0 when compared with language INLINEFORM1 for morph INLINEFORM2 is calculated as in Equation EQREF127 . DISPLAYFORM0 Discriminant Functions The differences between languages can be stored in discriminant functions. The functions are then used to map the test document into an INLINEFORM0 -dimensional space. The distance of the test document to the languages known by the language identifier is calculated, and the nearest language is selected (in the manner of a nearest prototype classifier). BIBREF114 used multiple linear regression to calculate discriminant functions for two-way for Indian languages. BIBREF337 compared linear regression, NB, and LR. The precision for the three methods was very similar, with linear regression coming second in terms of precision after LR. Multiple discriminant analysis was used for by BIBREF18 . He used two functions, the first separated Finnish from English and Swedish, and the second separated English and Swedish from each other. He used Mahalanobis' INLINEFORM0 as a distance measure. BIBREF113 used Multivariate Analysis (“MVA”) with Principal Component Analysis (“PCA”) for dimensionality reduction and . BIBREF59 compared discriminant analysis with SVM and NN using characters as features, and concluded that the SVM was the best method. BIBREF40 experimented with the Winnow 2 algorithm BIBREF338 , but the method was outperformed by other methods they tested. Support Vector Machines (“SVMs”) With support vector machines (“SVMs”), a binary classifier is learned by learning a separating hyperplane between the two classes of instances which maximizes the margin between them. The simplest way to extend the basic SVM model into a multiclass classifier is via a suite of one-vs-rest classifiers, where the classifier with the highest score determines the language of the test document. One feature of SVMs that has made them particularly popular is their compatibility with kernels, whereby the separating hyperplane can be calculated via a non-linear projection of the original instance space. In the following paragraphs, we list the different kernels that have been used with SVMs for . For with SVMs, the predominant approach has been a simple linear kernel SVM model. The linear kernel model has a weight vector INLINEFORM0 and the classification of a feature vector INLINEFORM1 , representing the test document INLINEFORM2 , is calculated as follows: DISPLAYFORM0 where INLINEFORM0 is a scalar bias term. 
If INLINEFORM1 is equal to or greater than zero, INLINEFORM2 is categorized as INLINEFORM3 . The first to use a linear kernel SVM were BIBREF339 , and generally speaking, linear-kernel SVMs have been widely used for , with great success across a range of shared tasks. BIBREF100 were the first to apply polynomial kernel SVMs to . With a polynomial kernel INLINEFORM0 can be calculated as: DISPLAYFORM0 where INLINEFORM0 is the polynomial degree, and a hyperparameter of the model. Another popular kernel is the RBF function, also known as a Gaussian or squared exponential kernel. With an RBF kernel INLINEFORM0 is calculated as: DISPLAYFORM0 where INLINEFORM0 is a hyperparameter. BIBREF321 were the first to use an RBF kernel SVM for . With sigmoid kernel SVMs, also known as hyperbolic tangent SVMs, INLINEFORM0 can be calculated as: DISPLAYFORM0 BIBREF340 were the first to use a sigmoid kernel SVM for , followed by BIBREF341 , who found the SVM to perform better than NB, Classification And Regression Tree (“CART”), or the sum of relative frequencies. Other kernels that have been used with SVMs for include exponential kernels BIBREF178 and rational kernels BIBREF342 . BIBREF31 were the first to use SVMs for , in the form of string kernels using Ukkonen's algorithm. They used same string kernels with Euclidean distance, which did not perform as well as SVM. BIBREF87 compared SVMs with linear and on-line passive–aggressive kernels for , and found passive–aggressive kernels to perform better, but both SVMs to be inferior to NB and Log-Likelihood Ratio (sum of log-probabilities). BIBREF339 experimented with the Sequential Minimal Optimization (“SMO”) algorithm, but found a simple linear kernel SVM to perform better. BIBREF118 achieved the best results using the SMO algorithm, whereas BIBREF123 found CRFs to work better than SMO. BIBREF178 found that SMO was better than linear, exponential and polynomial kernel SVMs for Arabic tweet gender and dialect prediction. MultipleKernelSVMarticlesTable lists articles where SVMs with different kernels have been compared. BIBREF343 evaluated three different SVM approaches using datasets from different DSL shared tasks. SVM-based approaches were the top performing systems in the 2014 and 2015 shared tasks. BIBREF277 used SVMs with the Margin Infused Relaxed Algorithm, which is an incremental version of SVM training. In their evaluation, this method achieved better results than off-the-shelf . Neural Networks (“NN”) BIBREF344 was the first to use Neural Networks (“NN”) for , in the form of a simple BackPropagation Neural Network (“BPNN”) BIBREF345 with a single layer of hidden units, which is also called a multi-layer perceptron (“MLP”) model. She used words as the input features for the neural network. BIBREF346 and BIBREF347 succesfully applied MLP to . BIBREF348 , BIBREF349 and BIBREF350 used radial basis function (RBF) networks for . BIBREF351 were the first to use adaptive resonance learning (“ART”) neural networks for . BIBREF85 used Neural Text Categorizer (“NTC”: BIBREF352 ) as a baseline. NTC is an MLP-like NN using string vectors instead of number vectors. BIBREF111 were the first to use a RNN for . They concluded that RNNs are less accurate than the simple sum of logarithms of counts of character bi- or trigrams, possibly due to the relatively modestly-sized dataset they experimented with. BIBREF221 compared NNs with the out-of-place method (see sec. UID104 ). 
Their results show that the latter, used with bigrams and trigrams of characters, obtains clearly higher identification accuracy when dealing with test documents shorter than 400 characters. RNNs were more successfully used later by BIBREF245 who also incorporated character n-gram features in to the network architecture. BIBREF223 were the first to use a Long Short-Term Memory (“LSTM”) for BIBREF353 , and BIBREF354 was the first to use Gated Recurrent Unit networks (“GRUs”), both of which are RNN variants. BIBREF354 used byte-level representations of sentences as input for the networks. Recently, BIBREF89 and BIBREF176 also used LSTMs. Later, GRUs were successfully used for by BIBREF355 and BIBREF356 . In addition to GRUs, BIBREF354 also experimented with deep residual networks (“ResNets”) at DSL 2016. During 2016 and 2017, there was a spike in the use of convolutional neural networks (CNNs) for , most successfully by BIBREF302 and BIBREF357 . Recently, BIBREF358 combined a CNN with adversarial learning to better generalize to unseen domains, surpassing the results of BIBREF151 based on the same training regime as . BIBREF275 used CBOW NN, achieving better results over the development set of DSL 2017 than RNN-based neural networks. BIBREF62 used deep averaging networks (DANs) based on word embeddings in language variety identification. Other Methods BIBREF45 used the decision table majority classifier algorithm from the WEKA toolkit in English variety detection. The bagging algorithm using DTs was the best method they tested (73.86% accuracy), followed closely by the decision table with 73.07% accuracy. BIBREF359 were the first to apply hidden Markov models (HMM) to . More recently HMMs have been used by BIBREF214 , BIBREF288 , and BIBREF261 . BIBREF360 generated aggregate Markov models, which resulted in the best results when distinguishing between six languages, obtaining 74% accuracy with text length of ten characters. BIBREF156 used an extended Markov Model (“eMM”), which is essentially a standard HMM with modified emission probabilities. Their eMM used manually optimized weights to combine four scores (products of relative frequencies) into one score. BIBREF361 used Markov logic networks BIBREF362 to predict the language used in interlinear glossed text examples contained in linguistic papers. BIBREF363 evaluated the use of unsupervised Fuzzy C Means algorithm (“FCM”) in language identification. The unsupervised algorithm was used on the training data to create document clusters. Each cluster was tagged with the language having the most documents in the cluster. Then in the identification phase, the mystery text was mapped to the closest cluster and identified with its language. A supervised centroid classifier based on cosine similarity obtained clearly better results in their experiments (93% vs. 77% accuracy). BIBREF119 and BIBREF67 evaluated the extreme gradient boosting (“XGBoost”) method BIBREF364 . BIBREF119 found that gradient boosting gave better results than RFs, while conversely, BIBREF67 found that LR gave better results than gradient boosting. BIBREF365 used compression methods for , whereby a single test document is added to the training text of each language in turn, and the language with the smallest difference (after compression) between the sizes of the original training text file and the combined training and test document files is selected as the prediction. 
This has obvious disadvantages in terms of real-time computational cost for prediction, but is closely related to language modeling approaches to (with the obvious difference that the language model doesn't need to be retrained multiply for each test document). In terms of compression methods, BIBREF366 experimented with Maximal Tree Machines (“MTMs”), and BIBREF367 used LZW-based compression. Very popular in text categorization and topic modeling, BIBREF368 , BIBREF23 , and BIBREF24 used Latent Dirichlet Allocation (“LDA”: BIBREF369 ) based features in classifying tweets between Arabic dialects, English, and French. Each tweet was assigned with an LDA topic, which was used as one of the features of an LR classifier. BIBREF249 used a Gaussian Process classifier with an RBF kernel in an ensemble with an LR classifier. Their ensemble achieved only ninth place in the “PAN” (Plagiarism Analysis, Authorship Identification, and Near-Duplicate Detection workshop) Author Profiling language variety shared task BIBREF370 and did not reach the results of the baseline for the task. BIBREF181 , BIBREF188 used a Passive Aggressive classifier, which proved to be almost as good as the SVMs in their evaluations between five different machine learning algorithms from the same package. Ensemble Methods Ensemble methods are meta-classification methods capable of combining several base classifiers into a combined model via a “meta-classifier” over the outputs of the base classifiers, either explicitly trained or some heuristic. It is a simple and effective approach that is used widely in machine learning to boost results beyond those of the individual base classifiers, and particularly effective when applied to large numbers of individually uncorrelated base classifiers. BIBREF20 used simple majority voting to combine classifiers using different features and methods. In majority voting, the language of the test document is identified if a majority ( INLINEFORM0 ) of the classifiers in the ensemble vote for the same language. In plurality voting, the language with most votes is chosen as in the simple scoring method (simple1). Some authors also refer to plurality voting as majority voting. BIBREF371 used majority voting in tweet . BIBREF210 used majority voting with JSM classifiers. BIBREF265 and BIBREF269 used majority voting between SVM classifiers trained with different features. BIBREF266 used majority voting to combine four classifiers: RF, random tree, SVM, and DT. BIBREF372 and BIBREF152 used majority voting between three off-the-shelf language identifiers. BIBREF104 used majority voting between perplexity-based and other classifiers. BIBREF141 used majority voting between three sum of relative frequencies-based classifiers where values were weighted with different weighting schemes. BIBREF270 , BIBREF125 , BIBREF171 , BIBREF185 , BIBREF172 , and BIBREF260 used plurality voting with SVMs. BIBREF182 used voting between several perplexity-based classifiers with different features at the 2017 DSL shared task. A voting ensemble gave better results on the closed track than a singular word-based perplexity classifier (0.9025 weighted F1-score over 0.9013), but worse results on the open track (0.9016 with ensemble and 0.9065 without). In a highest probability ensemble, the winner is simply the language which is given the highest probability by any of the individual classifiers in the ensemble. 
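The voting and highest-probability ensembles described above amount to only a few lines of code; the sketch below shows them over the outputs of arbitrary base classifiers, whose interface (label lists and per-language probability dicts) is an assumption of the example:

from collections import Counter

def plurality_vote(predictions):
    """predictions: list of language labels, one per base classifier."""
    return Counter(predictions).most_common(1)[0][0]

def majority_vote(predictions):
    """Return a language only if more than half of the base classifiers agree."""
    label, votes = Counter(predictions).most_common(1)[0]
    return label if votes > len(predictions) / 2 else None

def highest_probability(prob_dicts):
    """prob_dicts: list of {language: probability} dicts, one per classifier.
    The winner is the language given the single highest probability anywhere."""
    return max(((lang, p) for probs in prob_dicts for lang, p in probs.items()),
               key=lambda x: x[1])[0]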
BIBREF96 used Gaussian Mixture Models (“GMM”) to give probabilities to the outputs of classifiers using different features. BIBREF372 used higher confidence between two off-the-shelf language identifiers. BIBREF265 used GMM to transform SVM prediction scores into probabilities. BIBREF270 , BIBREF125 used highest confidence over a range of base SVMs. BIBREF125 used an ensemble composed of low-dimension hash-based classifiers. According to their experiments, hashing provided up to 86% dimensionality reduction without negatively affecting performance. Their probability-based ensemble obtained 89.2% accuracy, while the voting ensemble got 88.7%. BIBREF166 combined an SVM and a LR classifier. A mean probability ensemble can be used to combine classifiers that produce probabilities (or other mutually comparable values) for languages. The average of values for each language over the classifier results is used to determine the winner and the results are equal to the sum of values method (sumvalues1). BIBREF270 evaluated several ensemble methods and found that the mean probability ensemble attained better results than plurality voting, median probability, product, highest confidence, or Borda count ensembles. In a median probability ensemble, the medians over the probabilities given by the individual classifiers are calculated for each language. BIBREF270 and BIBREF171 used a median probability rule ensemble over SVM classifiers. Consistent with the results of BIBREF270 , BIBREF171 found that a mean ensemble was better than a median ensemble, attaining 68% accuracy vs. 67% for the median ensemble. A product rule ensemble takes the probabilities for the base classifiers and calculates their product (or sum of the log probabilities), with the effect of penalising any language where there is a particularly low probability from any of the base classifiers. BIBREF210 used log probability voting with JSM classifiers. BIBREF210 observed a small increase in average accuracy using the product ensemble over a majority voting ensemble. In a INLINEFORM0 -best ensemble, several models are created for each language INLINEFORM1 by partitioning the corpus INLINEFORM2 into separate samples. The score INLINEFORM3 is calculated for each model. For each language, plurality voting is then applied to the INLINEFORM4 models with the best scores to predict the language of the test document INLINEFORM5 . BIBREF349 evaluated INLINEFORM6 -best with INLINEFORM7 based on several similarity measures. BIBREF54 compared INLINEFORM8 and INLINEFORM9 and concluded that there was no major difference in accuracy when distinguishing between six languages (100 character test set). BIBREF373 experimented with INLINEFORM10 -best classifiers, but they gave clearly worse results than the other classifiers they evaluated. BIBREF212 used INLINEFORM11 -best in two phases, first selecting INLINEFORM12 closest neighbors with simple similarity, and then using INLINEFORM13 with a more advanced similarity ranking. In bagging, independent samples of the training data are generated by random sampling with replacement, individual classifiers are trained over each such training data sample, and the final classification is determined by plurality voting. BIBREF67 evaluated the use of bagging with an LR classifier in PAN 2017 language variety identification shared task, however, bagging did not improve the accuracy in the 10-fold cross-validation experiments on the training set. BIBREF374 used bagging with word convolutional neural networks (“W-CNN”). 
In a k-best ensemble, several models are created for each language by partitioning the corpus into separate samples. A score is calculated for each model and, for each language, plurality voting is then applied to the k models with the best scores to predict the language of the test document. BIBREF349 evaluated a k-best approach based on several similarity measures. BIBREF54 compared two settings of k and concluded that there was no major difference in accuracy when distinguishing between six languages (100 character test set). BIBREF373 experimented with k-best classifiers, but they gave clearly worse results than the other classifiers they evaluated. BIBREF212 used a k-best approach in two phases, first selecting the closest neighbors with a simple similarity measure, and then ranking them with a more advanced similarity measure. In bagging, independent samples of the training data are generated by random sampling with replacement, individual classifiers are trained over each such sample, and the final classification is determined by plurality voting. BIBREF67 evaluated the use of bagging with an LR classifier in the PAN 2017 language variety identification shared task; however, bagging did not improve the accuracy in the 10-fold cross-validation experiments on the training set. BIBREF374 used bagging with word convolutional neural networks (“W-CNN”). BIBREF45 used bagging with DTs in English national variety detection and found DT-based bagging to be the best evaluated method when all 60 different features (a wide selection of formal, POS, lexicon-based, and data-based features) were used, attaining 73.86% accuracy. BIBREF45 continued the experiments using the ReliefF feature selection algorithm from the WEKA toolkit to select the most effective features, and achieved 77.32% accuracy over the reduced feature set using a NB classifier. BIBREF130 evaluated the Rotation Forest meta-classifier for DTs. The method randomly splits the features into a pre-determined number of subsets and then applies PCA to each subset. It obtained 66.6% accuracy, attaining fifth place among the twelve methods evaluated. The AdaBoost algorithm BIBREF375 examines the performance of the base classifiers on the evaluation set and iteratively boosts the significance of misclassified training instances, with a restart mechanism to avoid local minima. AdaBoost was the best of the five machine learning techniques evaluated by BIBREF53, faring better than C4.5, NB, RF, and a linear SVM. BIBREF130 used the LogitBoost variation of AdaBoost. It obtained 67.0% accuracy, attaining third place among the twelve methods evaluated. In stacking, a higher-level classifier is explicitly trained on the output of several base classifiers. BIBREF96 used AdaBoost.ECC and CART to combine classifiers using different features. More recently, BIBREF127 used LR to combine the results of five RNNs. As an ensemble, they produced better results than NB and LR, which were in turn better than the individual RNNs. Also in 2017, BIBREF185 and BIBREF172 used an RF to combine several linear SVMs with different features. The system used by BIBREF172 ranked first in the German dialect identification shared task, and the system by BIBREF185 came second (71.65% accuracy) in the Arabic dialect identification shared task.
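A minimal sketch of stacking as described above, using scikit-learn purely for illustration (none of the cited systems necessarily used this library, and the tiny training set is invented): an LR meta-classifier is trained on the outputs of two base classifiers with different feature sets.

from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = ["this is a short test", "another english sentence",
        "dies ist ein kurzer test", "noch ein deutscher satz",
        "dit is een korte test", "nog een nederlandse zin"]
langs = ["en", "en", "de", "de", "nl", "nl"]

base = [
    ("char_nb", make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 3)), MultinomialNB())),
    ("word_svm", make_pipeline(CountVectorizer(analyzer="word"), LinearSVC())),
]
# The meta-classifier (LR) is trained on cross-validated outputs of the base classifiers.
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression(max_iter=1000), cv=2)
stack.fit(docs, langs)
print(stack.predict(["ein weiterer kurzer satz"]))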
Empirical Evaluation

In the previous two sections, we have alluded to issues of evaluation in LI research to date. In this section, we examine the literature more closely, providing a broad overview of the evaluation metrics that have been used, as well as the experimental settings in which LI research has been evaluated.

Standardized Evaluation for LI

The most common approach is to treat LI as a document-level classification problem. Given a set of evaluation documents, each having a known correct label from a closed set of labels (often referred to as the “gold standard”), and a predicted label for each document from the same set, the document-level accuracy is the proportion of documents that are correctly labeled over the entire evaluation collection. This is the most frequently reported metric and conveys the same information as the error rate, which is simply the proportion of documents that are incorrectly labeled (i.e. 1 − accuracy). Authors sometimes provide a per-language breakdown of results. There are two distinct ways in which results are generally summarized per language: (1) precision, in which documents are grouped according to their predicted language; and (2) recall, in which documents are grouped according to the language they are actually written in. Earlier work tended to only provide a breakdown based on the correct label (i.e. only reporting per-language recall). This gives us a sense of how likely a document in any given language is to be classified correctly, but does not indicate how likely a prediction for a given language is to be correct. Under the monolingual assumption (i.e. each document is written in exactly one language), this is not too much of a problem, as a false negative for one language must also be a false positive for another language, so precision and recall are closely linked. Nonetheless, authors have recently tended to explicitly provide both precision and recall for clarity. It is also common practice to report an F-score, which is the harmonic mean of precision and recall. The F-score (also sometimes called F1-score or F-measure) was developed in IR to measure the effectiveness of retrieval with respect to a user who attaches different relative importance to precision and recall BIBREF376. When used as an evaluation metric for classification tasks, it is common to place equal weight on precision and recall (hence “F1”-score, in reference to the β hyper-parameter, which equally weights precision and recall when β = 1). In addition to evaluating performance for each individual language, authors have also sought to convey the relationship between classification errors and specific sets of languages. Errors in LI systems are generally not random; rather, certain sets of languages are much more likely to be confused. The typical method of conveying this information is through the use of a confusion matrix, a tabulation of the distribution of (predicted language, actual language) pairs. Presenting full confusion matrices becomes problematic as the number of languages considered increases, and as a result has become relatively uncommon in work that covers a broader range of languages. Per-language results are also harder to interpret as the number of languages increases, and so it is common to present only collection-level summary statistics. There are two conventional methods for summarizing across a whole collection: (1) giving each document equal weight; and (2) giving each class (i.e. language) equal weight. (1) is referred to as a micro-average, and (2) as a macro-average. For LI under the monolingual assumption, micro-averaged precision and recall are the same, since each instance of a false positive for one language must also be a false negative for another language. In other words, micro-averaged precision and recall are both simply the collection-level accuracy. On the other hand, macro-averaged precision and recall give equal weight to each language. In datasets where the number of documents per language is the same, this again works out to being the collection-level average. However, LI research has frequently dealt with datasets where there is a substantial skew between classes. In such cases, the collection-level accuracy is strongly biased towards more heavily-represented languages. To address this issue, in work on skewed document collections, authors tend to report both the collection-level accuracy and the macro-averaged precision/recall/F-score, in order to give a more complete picture of the characteristics of the method being studied. Whereas the notions of macro-averaged precision and recall are clearly defined, there are two possible methods to calculate the macro-averaged F-score. The first is to calculate it as the harmonic mean of the macro-averaged precision and recall, and the second is to calculate it as the arithmetic mean of the per-class F-scores.
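To make these averaging conventions concrete, the following sketch computes per-language precision and recall from gold and predicted labels, and then derives both variants of the macro-averaged F-score described above. The toy labels are invented purely for illustration.

from collections import Counter

gold = ["en", "en", "de", "de", "nl", "nl", "nl"]
pred = ["en", "de", "de", "de", "nl", "en", "nl"]

langs = sorted(set(gold))
tp, fp, fn = Counter(), Counter(), Counter()
for g, p in zip(gold, pred):
    if g == p:
        tp[g] += 1
    else:
        fp[p] += 1
        fn[g] += 1

prec = {l: tp[l] / (tp[l] + fp[l]) if tp[l] + fp[l] else 0.0 for l in langs}
rec  = {l: tp[l] / (tp[l] + fn[l]) if tp[l] + fn[l] else 0.0 for l in langs}
f1   = {l: 2 * prec[l] * rec[l] / (prec[l] + rec[l]) if prec[l] + rec[l] else 0.0 for l in langs}

# Micro-average: under the monolingual assumption this equals plain accuracy.
micro = sum(tp.values()) / len(gold)

# Macro-averages give each language equal weight.
macro_p = sum(prec.values()) / len(langs)
macro_r = sum(rec.values()) / len(langs)
macro_f1_harmonic = 2 * macro_p * macro_r / (macro_p + macro_r)   # variant 1
macro_f1_mean_of_f1 = sum(f1.values()) / len(langs)               # variant 2

print(micro, macro_f1_harmonic, macro_f1_mean_of_f1)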
The comparability of published results is also limited by the variation in the size and source of the data used for evaluation. In work to date, authors have used data from a variety of different sources to evaluate the performance of proposed solutions. Typically, data for a number of languages is collected from a single source, and the number of languages considered varies widely. Earlier work tended to focus on a smaller number of Western European languages. Later work has shifted focus to supporting larger numbers of languages simultaneously, with the work of BIBREF101 pushing the upper bound, reporting a language identifier that supports over 1300 languages. The increased size of the language set considered is partly due to the increased availability of language-labeled documents from novel sources such as Wikipedia and Twitter. This supplements existing data from translations of the Universal Declaration of Human Rights, Bible translations, parallel texts from MT datasets such as OPUS and SETimes, and European government data such as JRC-Acquis. These factors have led to a shift away from proprietary datasets such as the ECI multilingual corpus that were commonly used in earlier research. As more languages are considered simultaneously, the accuracy of LI systems decreases. A particularly striking illustration of this is the evaluation results by BIBREF148 for the logLIGA method BIBREF312. BIBREF312 report an accuracy of 99.8% over tweets (averaging 80 characters) in six European languages, as opposed to 97.9% from the original LIGA method. The LIGA and logLIGA implementations by BIBREF148 have comparable accuracy for six languages, but the accuracy for 285 languages (with 70 character test length) is only slightly over 60% for logLIGA, while the original LIGA method is at almost 85%. Many evaluations are not directly comparable, as the test sizes, language sets, and hyper-parameters differ. A particularly good example is the method of BIBREF7. The original paper reports an accuracy of 99.8% over eight European languages (>300 bytes test size). BIBREF150 report an accuracy of 68.6% for the method over a dataset of 67 languages (500 byte test size), and BIBREF148 report an accuracy of over 90% for 285 languages (25 character test size). Separate from the question of the number and variety of languages included are issues regarding the quantity of training data used. A number of studies have examined the relationship between accuracy and the quantity of training data through the use of learning curves. The general finding is that accuracy increases with more training data, though some authors report an optimal amount of training data, beyond which adding more data decreases accuracy BIBREF377. Overall, it is not clear whether there is a universal quantity of data that is “enough” for any language; rather, this amount appears to be affected by the particular set of languages as well as the domain of the data. The breakdown presented by BIBREF32 shows that with less than 100KB per language, there are some languages where classification accuracy is near perfect, whereas there are others where it is very poor. Another aspect that is frequently reported on is how long a sample of text needs to be before its language can be correctly detected. Unsurprisingly, the general consensus is that longer samples are easier to classify correctly. There is a strong interest in classifying short segments of text, as certain applications naturally involve short text documents, such as LI of microblog messages or search engine queries.
Another area where LI of texts as short as one word has been investigated is in the context of dealing with documents that contain text in more than one language, where word-level LI has been proposed as a possible solution (see the discussion of multilingual documents below). These outstanding challenges have led to research focused specifically on LI of shorter segments of text, which we discuss in more detail in the section on short texts below. From a practical perspective, knowing the rate at which an LI system can process and classify documents is useful, as it allows a practitioner to predict the time required to process a document collection given certain computational resources. However, so many factors influence the rate at which documents are processed that comparison of absolute values across publications is largely meaningless. Instead, it is more valuable to consider publications that compare multiple systems under controlled conditions (same computer hardware, same evaluation data, etc.). The most common observations are that classification times between different algorithms can differ by orders of magnitude, and that the fastest methods are not always the most accurate. Beyond that, the diversity of the systems tested and the variety in the test data make it difficult to draw further conclusions about the relative speed of algorithms. Where explicit feature selection is used, the number of features retained is a parameter of interest, as it affects both the memory requirements of the system and its classification rate. In general, a smaller feature set results in a faster and more lightweight identifier. Relatively few authors give specific details of the relationship between the number of features selected and accuracy. A potential reason for this is that the improvement in accuracy plateaus with increasing feature count, though the exact number of features required varies substantially with the method and the data used. At the lower end of the scale, BIBREF7 report that 300–400 features per language is sufficient. Conversely, BIBREF148 found that, for the same method, the best results for the evaluation set were attained with 20,000 features per language.

Corpora Used for Evaluation

As discussed in the previous section, the objective comparison of different methods for LI is difficult due to the variation in the data that different authors have used to evaluate methods. BIBREF32 emphasize this by demonstrating how the performance of an LI system can vary according to the data used for evaluation. This implies that comparisons of results reported by different authors may not be meaningful, as a strong result in one paper may not translate into a strong result on the dataset used in a different paper. In other areas of research, authors have proposed standardized corpora to allow for the objective comparison of different methods. Some authors have released datasets to accompany their work, to allow for direct replication of their experiments and to encourage comparison and standardization. Several datasets have been released to accompany specific publications; among these, we only consider corpora that were prepared specifically for LI research and that include the full text of documents. Corpora of language-labelled Twitter messages that only provide document identifiers are also available, but reproducing the full original corpus is always an issue, as the original Twitter messages are deleted or otherwise made unavailable.
One challenge in standardizing datasets for LI is that the codes used to label languages are not fully standardized, and a large proportion of labeling systems only cover a minor portion of the languages used in the world today BIBREF381. BIBREF382 discuss this problem in detail, listing different language code sets, as well as the internal structure exhibited by some of the code sets. Some standards consider certain groups of “languages” as varieties of a single macro-language, whereas others consider them to be discrete languages. An example of this is found in the South Slavic languages, where some language code sets refer to Serbo-Croatian, whereas others make distinctions between Bosnian, Serbian, and Croatian BIBREF98. The unclear boundaries between such languages make it difficult to build a reference corpus of documents for each language, or to compare language-specific results across datasets. Another challenge in standardizing datasets for LI is the great deal of variation that can exist between data in the same language. We examine this in greater detail below, where we discuss how the same language can use a number of different orthographies, can be digitized using a number of different encodings, and may also exist in transliterated forms. The issue of variation within a language complicates the development of standardized datasets, due to challenges in determining which variants of a language should be included. Since we have seen that the performance of LI systems can vary per domain BIBREF32, that LI research is often motivated by target applications (see the section on application areas below), and that domain-specific information can be used to improve accuracy (as discussed later in this article), it is often unsound to use a generic dataset to develop a language identifier for a particular domain. A third challenge in standardizing datasets for LI is the cost of obtaining correctly-labeled data. Manual labeling of data is usually prohibitively expensive, as it requires access to native speakers of all languages that the dataset aims to include. Large quantities of raw text data are available from sources such as web crawls or Wikipedia, but this data is frequently mislabeled (e.g. most non-English Wikipedias still include some English-language documents). In constructing corpora from such resources, it is common to use some form of automatic LI, but this makes such corpora unsuitable for evaluation purposes, as they are biased towards documents that can be correctly identified by automatic LI systems BIBREF152. Future work in this area could investigate other means of ensuring correct gold-standard labels while minimizing the annotation cost. Despite these challenges, standardized datasets are critical for replicable and comparable research in LI. Where a subset of data is used from a larger collection, researchers should include details of the specific subset, including any breakdown into training and test data, or partitions for cross-validation. Where data from a new source is used, justification should be given for its inclusion, as well as some means for other researchers to replicate experiments on the same dataset.

Shared Tasks

To address specific sub-problems in LI, a number of shared tasks have been organized on problems such as LI in multilingual documents BIBREF378, LI in code-switched data BIBREF383, discriminating between closely related languages BIBREF384, and dialect and language variety identification in various languages BIBREF385, BIBREF386, BIBREF370, BIBREF387.
Shared tasks are important for LI because they provide datasets and standardized evaluation methods that serve as benchmarks for the community. We summarize the shared tasks organized to date below. Generally, datasets for shared tasks have been made publicly available after the conclusion of the task, and are a good source of standardized evaluation data. However, the shared tasks to date have tended to target specific sub-problems in LI, and no general, broad-coverage datasets have been compiled. Widespread interest in LI over closely-related languages has resulted in a number of shared tasks that specifically tackle the issue. Some tasks have focused on varieties of a specific language. For example, the DEFT2010 shared task BIBREF385 examined varieties of French, requiring participants to classify French documents with respect to their geographical source, in addition to the decade in which they were published. Other examples are the Arabic Dialect Identification (“ADI”) shared task at the VarDial workshop BIBREF126, BIBREF386, and the Arabic Multi-Genre Broadcast (“MGB”) Challenge BIBREF387. Two shared tasks focused on a narrow group of languages using Twitter data. The first was TweetLID, a shared task on LI of Twitter messages written in the six languages in common use in Spain, namely: Spanish, Portuguese, Catalan, English, Galician, and Basque (in order of the number of documents in the dataset) BIBREF388, BIBREF389. The organizers provided almost 35,000 Twitter messages, and in addition to the six monolingual tags, supported four additional categories: undetermined, multilingual (i.e. the message contains more than one language, without requiring the system to specify the component languages), ambiguous (i.e. the message is ambiguous between two or more of the six target languages), and other (i.e. the message is in a language other than the six target languages). The second shared task was the PAN lab on authorship profiling 2017 BIBREF370. The PAN lab on authorship profiling is held annually and has historically focused on age, gender, and personality trait prediction in social media. In 2017, the competition introduced language varieties and dialects of Arabic, English, Spanish, and Portuguese. More ambitiously, the four editions of the Discriminating between Similar Languages (DSL) shared task BIBREF384, BIBREF6, BIBREF317, BIBREF386 required participants to discriminate between a set of languages in several language groups, each consisting of highly-similar languages or national varieties of a language. The dataset used in these shared tasks is the DSL Corpus Collection (“DSLCC”) BIBREF77. Historically, the best-performing systems BIBREF265, BIBREF390, BIBREF43 have approached the task via hierarchical classification, first predicting the language group, then the language within that group.
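A minimal sketch of this two-stage (hierarchical) approach: one classifier first predicts the language group, and a separate per-group classifier then predicts the language within that group. The scikit-learn pipelines and the tiny toy training set are illustrative assumptions, not a reconstruction of any particular shared task submission.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy data: (text, group, language) triples; real systems use the DSLCC.
data = [
    ("ovo je jedna recenica", "south-slavic", "hr"),
    ("ovo je jedna rečenica za primer", "south-slavic", "sr"),
    ("isto é uma frase de exemplo", "portuguese", "pt-PT"),
    ("isso é uma frase de exemplo", "portuguese", "pt-BR"),
] * 2  # duplicate so every class has more than one example

texts = [t for t, _, _ in data]

def svm():
    return make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(2, 4)), LinearSVC())

# Stage 1: predict the language group.
group_clf = svm().fit(texts, [g for _, g, _ in data])

# Stage 2: one classifier per group, predicting the language within that group.
lang_clf = {}
for group in {g for _, g, _ in data}:
    subset = [(t, l) for t, g, l in data if g == group]
    lang_clf[group] = svm().fit([t for t, _ in subset], [l for _, l in subset])

def identify(text):
    group = group_clf.predict([text])[0]
    return lang_clf[group].predict([text])[0]

print(identify("isso é um exemplo"))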
Application Areas

There are various reasons to investigate LI. Studies in LI approach the task from different perspectives, and with different motivations and application goals in mind. In this section, we briefly summarize what these motivations are, and how their specific needs differ. The oldest motivation for automatic LI is perhaps in conjunction with translation BIBREF27. Automatic LI is used as a pre-processing step to determine which translation model to apply to an input text, whether by routing it to a specific human translator or by applying MT. Such a use case is still very common, and can be seen in the Google Chrome web browser, where a built-in LI module is used to offer MT services to the user when the detected language of the web page being visited differs from the user's language settings. NLP components such as POS taggers and parsers tend to make a strong assumption that the input text is monolingual in a given language. Similarly to the translation case, LI can play an obvious role in routing documents written in different languages to NLP components tailored to those languages. More subtle is the case of documents with mixed multilingual content, the most commonly-occurring instance of which is foreign inclusion, where a document is predominantly in a single language (e.g. German or Japanese) but is interspersed with words and phrases (often technical terms) from a language such as English. For example, BIBREF391 found that around 6% of word tokens in German text sourced from the Internet are English inclusions. In the context of POS tagging, one strategy for dealing with inclusions is to have a dedicated POS tag for all foreign words, and to force the POS tagger to perform both foreign inclusion detection and POS tagging of these words in the target language; this is the approach taken in the Penn POS tagset, for example BIBREF392. An alternative strategy is to have an explicit foreign inclusion detection pre-processor, and some special handling of foreign inclusions. For example, in the context of German parsing, BIBREF391 used foreign inclusion predictions to restrict the set of (German) POS tags used to form a parse tree, and found that this approach substantially improved parser accuracy. Another commonly-mentioned use case is multilingual document storage and retrieval. A document retrieval system (such as, but not limited to, a web search engine) may be required to index documents in multiple languages. In such a setting, it is common to apply LI at two points: (1) to the documents being indexed; and (2) to the queries being executed on the collection. Simple keyword matching techniques can be problematic in text-based document retrieval, because the same word can be valid in multiple languages. A classic example of such words (known as “false friends”) is gift, which in German means “poison”. Performing LI on both the document and the query helps to avoid confusion between such terms, by taking advantage of the context in which a word appears in order to infer the language. This has resulted in specific work on LI of web pages, as well as of search engine queries. BIBREF393 and BIBREF394 give overviews of shared tasks specifically concentrating on language labeling of individual search query words. Having said this, in many cases the search query itself does a sufficiently good job of selecting documents in a particular language, and overt LI is often not performed in mixed multilingual search contexts. Automatic LI has also been used to facilitate linguistic and other text-based research. BIBREF34 report that their motivation for developing a language identifier was “to find out how many web pages are written in a particular language”. Automatic LI has also been used in constructing web-based corpora. The Crúbadán project BIBREF395 and the Finno-Ugric Languages and the Internet project BIBREF396 make use of automated LI techniques to gather linguistic resources for under-resourced languages.
Similarly, the Online Database of INterlinear text (“ODIN”: BIBREF397) uses automated LI as one of the steps in collecting interlinear glossed text from the web for purposes of linguistic search and bootstrapping NLP tools. One challenge in collecting linguistic resources from the web is that documents can be multilingual (i.e. contain text in more than one language). This is problematic for standard LI methods, which assume that a document is written in a single language, and has prompted research into segmenting text by language, as well as word-level LI, to enable the extraction of linguistic resources from multilingual documents. A number of the shared tasks discussed in detail earlier included data from social media. Examples are the TweetLID shared task on tweet LI held at SEPLN 2014 BIBREF388, BIBREF389, the datasets used in the first and second shared tasks on LI in code-switched data, which were partially taken from Twitter BIBREF383, BIBREF398, and the third edition of the DSL shared task, which contained two out-of-domain test sets consisting of tweets BIBREF317. The 5th edition of the PAN at CLEF author profiling task included language variety identification for tweets BIBREF370. There has also been research on identifying the language of private messages between eBay users BIBREF399, presumably as a filtering step prior to more in-depth data analysis.

Off-the-Shelf Language Identifiers

An “off-the-shelf” language identifier is software that is distributed with pre-trained models for a number of languages, so that a user is not required to provide training data before using the system. Such a setup is highly attractive to many end-users of automatic LI whose main interest is in utilizing the output of a language identifier rather than implementing and developing the technique. To this end, a number of off-the-shelf language identifiers have been released over time. Many authors have evaluated these off-the-shelf identifiers, including a recent evaluation involving 13 language identifiers carried out by BIBREF400. In this section, we provide a brief summary of open-source or otherwise free systems that are available, as well as the key characteristics of each system. We have also noted when each piece of software was last updated, as of October 2018. TextCat is the best-known Perl implementation of the out-of-place method; it lists models for 76 languages in its off-the-shelf configuration, but the program is not actively maintained. TextCat is not the only off-the-shelf implementation of the out-of-place method: other implementations include libtextcat with 76 language models, JTCL with 15 languages, and mguesser with 104 models for different language–encoding pairs. The main issue addressed by the later implementations is classification speed: TextCat is implemented in Perl and is not optimized for speed, whereas implementations such as libtextcat and mguesser have been specifically written to be fast and efficient. whatlang-rs uses an algorithm based on character trigrams and refers the user to the BIBREF7 article. It comes pre-trained with 83 languages. CLD2, the language identifier embedded in the Google Chrome web browser, uses a NB classifier and script-specific classification strategies. It assumes that all input is in UTF-8, and assigns the responsibility of encoding detection and transcoding to the user. CLD2 uses Unicode information to determine the script of the input.
It also implements a number of pre-processing heuristics to help boost performance on its target domain (web pages), such as stripping character sequences like .jpg. The standard implementation supports 83 languages, and an extended model is also available that supports 160 languages. langdetect is a Java library that implements a language identifier based on a NB classifier trained over character n-grams. The software comes with pre-trained models for 53 languages, using data from Wikipedia. langdetect makes use of a range of normalization heuristics to improve the performance on particular languages, including: (1) clustering of Chinese/Japanese/Korean characters to reduce sparseness; (2) removal of “language-independent” characters, and other text normalization; and (3) normalization of Arabic characters. langid.py is a Python implementation of the method described by BIBREF150, which exploits training data for the same language across multiple different sources of text to identify sequences of characters that are strongly predictive of a given language, regardless of the source of the text. This feature set is combined with a NB classifier, and the tool is distributed with a pre-trained model for 97 languages prepared using data from 5 different text sources. BIBREF151 provide an empirical comparison of langid.py to other off-the-shelf identifiers and find that it compares favorably both in terms of accuracy and classification speed. There are also implementations of the classifier component (but not the training portion) of langid.py in Java, C, and JavaScript. BIBREF153 uses a vector-space model with per-feature weighting on character sequences. One particular feature of this system is that it uses discriminative training in selecting features, i.e. it specifically makes use of features that are strong evidence against a particular language, which is generally not captured by NB models. Another feature is that it uses inter-string smoothing to exploit sentence-level locality in making language predictions, under the assumption that adjacent sentences are likely to be in the same language. BIBREF153 reports that this substantially improves the accuracy of the identifier. A further distinguishing feature is that it comes pre-trained with data for 1400 languages, which is by a large margin the highest number of any off-the-shelf system. whatthelang is a more recent language identifier written in Python, which utilizes the FastText NN-based text classification algorithm and supports 176 languages. Another off-the-shelf classifier is trained using Wikipedia data and covers 122 languages; although not described as such, the actual classification algorithm used is a linear model, and it is thus closely related to both NB and a cosine-based vector space model. In addition to the above-mentioned general-purpose language identifiers, there have also been efforts to produce pre-trained language identifiers targeted specifically at Twitter messages. One such Twitter-specific tool comes with built-in models for 19 languages. It uses a document representation based on tries BIBREF401, and its algorithm is a LR classifier using all possible substrings of the data, which is important in order to maximize the available information from relatively short Twitter messages.
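As an illustration of how such off-the-shelf identifiers are typically used in practice, the following shows langid.py through its Python package (assuming the langid package is installed; the output values shown in the comments are indicative only).

import langid

# Classify a single string; returns a (language code, score) pair,
# where the score is an unnormalized value by default.
print(langid.classify("Questa è una frase di esempio."))   # e.g. ('it', ...)

# The candidate set can be constrained when the application only
# expects a handful of languages.
langid.set_languages(["en", "it", "fr"])
print(langid.classify("This is an example sentence."))      # e.g. ('en', ...)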
BIBREF152 provides a comparison of 8 off-the-shelf language identifiers applied without re-training to Twitter messages. One issue they report is that comparing the accuracy of off-the-shelf systems is difficult because of the different subset of languages supported by each system, which may also not fully cover the languages present in the target data. The authors choose to compare accuracy over the full set of languages, arguing that this best reflects the likely use case of applying an off-the-shelf system to new data. They find that three systems stand out as the best individual performers, and that slightly higher accuracy can be attained by a simple voting-based ensemble classifier involving these three systems. In addition to this, commercial or other closed-source language identifiers and language identification services exist, of which we name a few. Polyglot 3000 and the Lextek Language Identifier are standalone language identifiers for Windows. The Open Xerox Language Identifier is a web service with REST and SOAP APIs available.

Research Directions and Open Issues in LI

Several papers have catalogued open issues in LI BIBREF327, BIBREF382, BIBREF1, BIBREF334, BIBREF32, BIBREF324, BIBREF317. Some of the issues, such as text representation (features) and choice of algorithm (methods), have already been covered in detail in this survey. In this section, we synthesize the remaining issues, and also add new issues that have not been discussed in previous work. For each issue, we review related work and suggest promising directions for future work.

Text Preprocessing

Text preprocessing (also known as normalization) is an umbrella term for techniques in which an automatic transformation is applied to text before it is presented to a classifier. The aim of such a process is to eliminate sources of variation that are expected to be confounding factors with respect to the target task. Text preprocessing is slightly different from data cleaning, as data cleaning is a transformation applied only to training data, whereas normalization is applied to both training and test data. BIBREF1 raise text preprocessing as an outstanding issue in LI, arguing that its effects on the task have not been sufficiently investigated. In this section, we summarize the normalization strategies that have been proposed in the literature. Case folding is the elimination of capitalization, replacing characters in a text with either their lower-case or upper-case forms. Basic approaches generally map between [a-z] and [A-Z] in the ASCII encoding, but this is insufficient for extended Latin encodings, where diacritics must also be appropriately handled. A resource that makes this possible is the Unicode Character Database (UCD), which defines uppercase, lowercase, and titlecase properties for each character, enabling automatic case folding for documents in a Unicode encoding such as UTF-8. Range compression is the grouping of a range of characters into a single logical set for counting purposes, and is commonly used to deal with the sparsity that results from the character sets of ideographic languages, such as Chinese, which may have thousands of unique “characters”, each of which is observed with relatively low frequency. BIBREF402 use such a technique, where all characters in a given range are mapped into a single “bucket”, and the frequency of items in each bucket is used as a feature to represent the document. Byte-level representations of encodings that use multi-byte sequences to represent codepoints achieve a similar effect by “splitting” codepoints. In encodings such as UTF-8, the codepoints used by a single language are usually grouped together in “code planes”, where each codepoint in a given code plane shares the same upper byte.
Thus, even though the distribution over codepoints may be quite sparse, when the byte-level representation uses byte sequences that are shorter than the multi-byte sequence of a codepoint, the shared upper byte will be predictive of specific languages. Cleaning may also be applied, where heuristic rules are used to remove some data that is perceived to hinder the accuracy of the language identifier. For example, BIBREF34 identify HTML entities as a candidate for removal in document cleaning, on the basis that classifiers trained on data which does not include such entities may drop in accuracy when applied to raw HTML documents. One of the off-the-shelf identifiers described earlier includes heuristics such as expanding HTML entities, deleting digits and punctuation, and removing SGML-like tags. Similarly, another removes “language-independent characters” such as numbers, symbols, URLs, and email addresses; it also removes words that are all in capitals and tries to remove other acronyms and proper names using heuristics. In the domain of Twitter messages, BIBREF313 remove links, usernames, smilies, and hashtags (a Twitter-specific “tagging” feature), arguing that these entities are language independent and thus should not feature in the model. BIBREF136 address LI of web pages, and report removing HTML formatting and applying stopping using a small stopword list. BIBREF59 carry out experiments on the ECI multilingual corpus and report removing punctuation, space characters, and digits. The idea of preprocessing text to eliminate domain-specific “noise” is closely related to the idea of learning domain-independent characteristics of a language BIBREF150. One difference is that normalization is normally heuristic-driven, where a manually-specified set of rules is used to eliminate unwanted elements of the text, whereas domain-independent text representations are data-driven, where text from different sources is used to identify the characteristics that a language shares between different sources. Both approaches share conceptual similarities with problems such as content extraction for web pages. In essence, the aim is to isolate the components of the text that actually represent language, and suppress the components that carry other information. One application is the language-aware extraction of text strings embedded in binary files, which has been shown to perform better than conventional heuristic approaches BIBREF36. Future work in this area could focus specifically on the application of language-aware techniques to content extraction, using models of language to segment documents into textual and non-textual components. Such methods could also be used to iteratively improve LI itself by improving the quality of training data.
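The preprocessing steps discussed in this section can be sketched as follows. The particular regular expressions and the use of Python's built-in Unicode case folding are illustrative choices, not a description of any specific system.

import re
import unicodedata

def normalize(text):
    # Unicode-aware case folding (handles diacritics beyond ASCII [A-Z]).
    text = unicodedata.normalize("NFC", text).casefold()
    # Cleaning heuristics of the kind described above: strip markup
    # remnants, URLs, e-mail addresses, digits, and punctuation.
    text = re.sub(r"<[^>]+>|&[a-z]+;", " ", text)       # SGML/HTML tags and entities
    text = re.sub(r"https?://\S+|\S+@\S+", " ", text)   # URLs and e-mail addresses
    text = re.sub(r"\d+", " ", text)                    # digits
    text = re.sub(r"[^\w\s]", " ", text)                # punctuation and symbols
    return re.sub(r"\s+", " ", text).strip()

print(normalize("Visit <b>http://example.com</b> &amp; mail info@example.com — Größe 42!"))
# -> "visit mail größe"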
Orthography and Transliteration

LI is further complicated when we consider that some languages can be written in different orthographies (e.g. Bosnian and Serbian can be written in both Latin and Cyrillic script). Transliteration is another phenomenon with a similar effect, whereby phonetic transcriptions in another script are produced for particular languages. These transcriptions can either be standardized and officially sanctioned, such as the use of Hanyu Pinyin for Chinese, or may emerge irregularly and organically, as in the case of arabizi for Arabic BIBREF403. BIBREF1 identify variation in the encodings and scripts used by a given language as an open issue in LI, pointing out that early work tended to focus on languages written using a romanized script, and suggesting that dealing with issues of encoding and orthography adds substantial complexity to the task. BIBREF34 discuss the relative difficulties of discriminating between languages that vary in any combination of encoding, script, and language family, and give examples of pairs of languages that fall into each category. LI across orthographies and transliteration is an area that has not received much attention in work to date, but it presents unique and interesting challenges that are suitable targets for future research. An interesting and unexplored question is whether it is possible to detect that documents in different encodings or scripts are written in the same language, or what language a text is transliterated from, without any a priori knowledge of the encodings or scripts used. One possible approach could be to take advantage of standard orderings of alphabets in a language – the pattern of differences between adjacent characters should be consistent across encodings, though whether this is characteristic of any given language requires exploration.

Supporting Low-Resource Languages

BIBREF1 paint a fairly bleak picture of the support for low-resource languages in automatic LI. This is supported by the arguments of BIBREF382, who detail specific issues in building hugely multilingual datasets. BIBREF404 also specifically called for research into automatic LI for low-density languages. Ethnologue BIBREF0 lists a total of 7099 languages. BIBREF382 describe the Ethnologue in more detail, and discuss the role that LI plays in other aspects of supporting minority languages, including detecting and cataloging resources. The problem is circular: LI methods are typically supervised, and need training data for each language to be covered, but the most efficient way to recover such data is through automatic LI. A number of projects are ongoing with the specific aim of gathering linguistic data from the web, targeting as broad a set of languages as possible. One such project is the aforementioned ODIN BIBREF361, BIBREF397, which aims to collect parallel snippets of text from Linguistics articles published on the web. ODIN specifically targets articles containing Interlinear Glossed Text (IGT), a semi-structured format for presenting text and a corresponding gloss that is commonly used in Linguistics. Other projects that aim to create text corpora for under-resourced languages by crawling the web are the Crúbadán project BIBREF395 and SeedLing BIBREF405. The Crúbadán crawler uses seed data in a target language to generate word lists that in turn are used as queries for a search engine. The returned documents are then compared with the seed resource via an automatic language identifier, which is used to eliminate false positives. BIBREF395 reports that corpora for over 400 languages have been built using this method. The SeedLing project crawls texts from several web sources, which has resulted in a total of 1451 languages from 105 language families. According to the authors, this represents 19% of the world's languages. Much recent work on multilingual documents (discussed below) has been done with support for minority languages as a key goal.
One of the common problems with gathering linguistic data from the web is that the data in the target language is often embedded in a document containing data in another language. This has spurred recent developments in text segmentation by language and word-level LI. BIBREF326 present a method to detect documents that contain text in more than one language and to identify the languages present with their relative proportions in the document. The method is evaluated on real-world data from a web crawl targeted at collecting documents for specific low-density languages. LI for low-resource languages is a promising area for future work. One of the key questions that has not been clearly answered is how much data is needed to accurately model a language for purposes of LI. Work to date suggests that there may not be a simple answer to this question, as accuracy varies according to the number and variety of languages modeled BIBREF32, as well as the diversity of data available to model a specific language BIBREF150.

Number of Languages

Early research in LI tended to focus on a very limited number of languages (sometimes as few as 2). This situation has improved somewhat, with many current off-the-shelf language identifiers supporting on the order of 50–100 languages (see the section on off-the-shelf language identifiers above). The standout in this regard is BIBREF101, supporting 1311 languages in its default configuration. However, evaluation of the identifier of BIBREF153 on a different domain found that the system suffered in terms of accuracy because it detected many languages that were not present in the test data BIBREF152. BIBREF397 describe the construction of web crawlers specifically targeting IGT, as well as the identification of the languages represented in the IGT snippets. LI for thousands of languages from very small quantities of text is one of the issues that they have had to tackle. They list four specific challenges for LI in ODIN: (1) the large number of languages; (2) “unseen” languages that appear in the test data but not in the training data; (3) short target sentences; and (4) (sometimes inconsistent) transliteration into Latin text. Their solution to this task is to take advantage of a domain-specific feature: they assume that the name of the language that they are extracting must appear in the document containing the IGT, and hence treat this as a co-reference resolution problem. They report that this approach significantly outperforms the text-based approach in this particular problem setting. An interesting area to explore is the trade-off between the number of languages supported and the per-language accuracy. From existing results it is not clear whether it is possible to continue increasing the number of languages supported without adversely affecting the average accuracy, but it would be useful to quantify whether this is actually the case across a broad range of text sources. A number of articles have investigated LI with more than 30 languages.

“Unseen” Languages and Unsupervised LI

“Unseen” languages are languages that we do not have training data for but that may nonetheless be encountered by an LI system when applied to real-world data. Dealing with languages for which we do not have training data has been identified as an issue by BIBREF1 and has also been mentioned by BIBREF361 as a specific challenge in harvesting linguistic data from the web. BIBREF233 use an unlabeled training set with a labeled evaluation set for token-level code-switching identification between Modern Standard Arabic (MSA) and dialectal Arabic.
They utilize existing dictionaries and also a morphological analyzer for MSA, so the system is supported by extensive external knowledge sources. The possibility of using unannotated training material is nonetheless a very useful feature. Some authors have attempted to tackle the unseen language problem through unsupervised labeling of text by language. BIBREF225 uses an unsupervised clustering algorithm to separate a multilingual corpus into groups corresponding to languages. She uses singular value decomposition (SVD) to first identify the words that discriminate between documents and then to separate the terms into highly correlating groups. The documents grouped together by these discriminating terms are merged and the process is repeated until the desired number of groups (corresponding to languages) is reached. BIBREF412 also presents an approach to the unseen language problem, building graphs of co-occurrences of words in sentences, and then partitioning the graph using a custom graph-clustering algorithm which labels each word in a cluster with a single label. The number of labels is initialized to be the same as the number of words, and decreases as the algorithm is recursively applied. After a small number of iterations (the authors report 20), the labels become relatively stable and can be interpreted as cluster labels. Smaller clusters are then discarded, and the remaining clusters are interpreted as groups of words for each language. BIBREF413 compared the Chinese Whispers algorithm of BIBREF412 and Graclus clustering on unsupervised tweet LI. They conclude that Chinese Whispers is better suited to LI. BIBREF414 used Fuzzy ART NNs for unsupervised language clustering of documents in Arabic, Persian, and Urdu. In Fuzzy ART, the clusters are also dynamically updated during the identification process. BIBREF415 also tackle the unseen language problem through clustering. They use a character representation of the text, and a clustering algorithm that consists of an initial k-means phase, followed by particle-swarm optimization. This produces a large number of small clusters, which are then labeled by language in a separate step. BIBREF240 used co-occurrences of words with k-means clustering in word-level unsupervised LI. They used a Dirichlet process Gaussian mixture model (“DPGMM”), a non-parametric variant of a GMM, to automatically determine the number of clusters, and manually labeled the language of each cluster. BIBREF249 also used k-means clustering, and BIBREF416 used the k-means clustering algorithm in a custom framework. BIBREF244 utilized unlabeled data to improve their system by using a CRF autoencoder, unsupervised word embeddings, and word lists.
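As a simplified illustration of these clustering-based approaches (and not a reconstruction of any of the cited methods), the following groups unlabeled sentences by language using character n-gram features and k-means; as in most of the work discussed, the number of clusters is fixed in advance and each cluster still has to be labeled with a language manually.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "this is an english sentence", "another english example",
    "dies ist ein deutscher satz", "noch ein deutsches beispiel",
    "ceci est une phrase française", "encore un exemple en français",
]

# Character n-gram profiles are a common language-agnostic representation.
X = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit_transform(sentences)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for sentence, cluster in zip(sentences, kmeans.labels_):
    print(cluster, sentence)
# Each cluster id then has to be mapped to a language by manual inspection.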
A different partial solution to the issue of unseen languages is to design the classifier to be able to output “unknown” as a prediction for the language. This helps to alleviate one of the problems commonly associated with the presence of unseen languages – classifiers without an “unknown” facility are forced to pick a language for each document, and in the case of unseen languages, the choice may be arbitrary and unpredictable BIBREF412. When LI is used for filtering purposes, i.e. to select documents in a single language, this mislabeling can introduce substantial noise into the data extracted; furthermore, it does not matter what or how many unseen languages there are, as long as they are consistently rejected. Therefore the “unknown” output provides an adequate solution to the unseen language problem for purposes of filtering. The easiest way to implement unknown language detection is through thresholding. Most LI systems internally compute a score for each language for an unknown text, so thresholding can be applied either with a global threshold BIBREF33, a per-language threshold BIBREF34, or by comparing the scores of the top-scoring k languages. The problem of unseen languages and open-set recognition was also considered by BIBREF270, BIBREF84, and BIBREF126. BIBREF126 experiments with one-class classification (“OCC”) and reaches an F-score of 98.9 using OC-SVMs (SVMs trained only with data from one language) to discriminate between 10 languages. Another possible method for unknown language detection that has not been explored extensively in the literature is the use of non-parametric mixture models based on Hierarchical Dirichlet Processes (“HDP”). Such models have been successful in topic modeling, where an outstanding issue with the popular LDA model is the need to specify the number of topics in advance. BIBREF326 introduced an approach to detecting multilingual documents that uses a model very similar to LDA, where languages are analogous to topics in the LDA model. Using a similar analogy, an HDP-based model may be able to detect documents that are written in a language that is not currently modeled by the system. BIBREF24 used LDA to cluster unannotated tweets. Recently, BIBREF417 used LDA in unsupervised sentence-level LI. They manually identified the languages of the topics created with LDA; if there were more topics than languages, the topics in the same language were merged. Filtering, a task that we mentioned earlier in this section, is a very common application of LI, and it is therefore surprising that there is little research on filtering for specific languages. Filtering is a limiting case of LI with unseen languages, where all languages but one can be considered unknown. Future work could examine how useful different types of negative evidence are for filtering – if we want to detect English documents, for example, are there empirical advantages in having distinct models of Italian and German (even if we don't care about the distinction between the two languages), or can we group them all together in a single “negative” class? Are we better off including as many languages as possible in the negative class, or can we safely exclude some?
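A minimal sketch of the thresholding approach to unknown-language detection described above, applied to the filtering setting: the per-language scores are assumed to come from some underlying identifier, and the threshold values are arbitrary illustrative choices.

def filter_for_language(scores, target, min_score=0.8, min_margin=0.2, unknown="und"):
    # scores: dict mapping language codes to scores (e.g. probabilities)
    # produced by an underlying identifier for one document.
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best_lang, best_score = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    if best_lang == target and best_score >= min_score and best_score - runner_up >= min_margin:
        return target
    return unknown   # reject: unseen language, wrong language, or too uncertain

print(filter_for_language({"en": 0.95, "de": 0.03, "nl": 0.02}, "en"))  # en
print(filter_for_language({"en": 0.55, "de": 0.40, "nl": 0.05}, "en"))  # und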
Multilingual Documents

Multilingual documents are documents that contain text in more than one language. In constructing the hrWac corpus, BIBREF97 found that 4% of the documents they collected contained text in more than one language. BIBREF329 report that web pages in many languages contain formulaic strings in English that do not actually contribute to the content of the page, but may nonetheless confound attempts to identify multilingual documents. Recent research has investigated how to make use of multilingual documents from sources such as web crawls BIBREF40, forum posts BIBREF263, and microblog messages BIBREF418. However, most LI methods assume that a document contains text from a single language, and so are not directly applicable to multilingual documents. The handling of multilingual documents has been named as an open research question BIBREF1. Most NLP techniques presuppose monolingual input data, so the inclusion of data in foreign languages introduces noise, and can degrade the performance of NLP systems. Automatic detection of multilingual documents can be used as a pre-filtering step to improve the quality of input data. Detecting multilingual documents is also important for acquiring linguistic data from the web, and has applications in mining bilingual texts for statistical MT from online resources BIBREF418, and in studying code-switching phenomena in online communications. There has also been interest in extracting text resources for low-density languages from multilingual web pages containing both the low-density language and another language such as English. The need to handle multilingual documents has prompted researchers to revisit the granularity of LI. Many researchers consider document-level LI to be relatively easy, and sentence-level and word-level LI to be more suitable targets for further research. However, word-level and sentence-level tokenization are not language-independent tasks, and for some languages they are substantially harder than for others BIBREF419. BIBREF112 describe a language identifier that supports the identification of multilingual documents. The system is based on a vector space model using cosine similarity. LI for multilingual documents is performed through the use of virtual mixed languages. BIBREF112 shows how to construct vectors representative of particular combinations of languages independent of the relative proportions, and proposes a method for choosing combinations of languages to consider for any given document. One weakness of this approach is that, for exhaustive coverage, the method is factorial in the number of languages, and as such intractable for a large set of languages. Furthermore, calculating the parameters for the virtual mixed languages becomes infeasibly complex for mixtures of more than 3 languages. As mentioned previously, BIBREF326 propose an LDA-inspired method for multilingual documents that is able to identify that a document is multilingual, identify the languages present, and estimate the relative proportions of the document written in each language. To remove the need to specify the number of topics (or in this case, languages) in advance, BIBREF326 use a greedy heuristic that attempts to find the subset of languages that maximizes the posterior probability of a target document. One advantage of this approach is that it is not constrained to 3-language combinations like the method of BIBREF112. Language set identification has also been considered by BIBREF34, BIBREF407, BIBREF420, and BIBREF276. To encourage further research on LI for multilingual documents, in the aforementioned shared task hosted by the Australasian Language Technology Workshop 2010, participants were required to predict the language(s) present in a held-out test set containing monolingual and bilingual documents BIBREF378. The dataset was prepared using data from Wikipedia, and bilingual documents were produced using a segment from an article in one language and a segment from the equivalent article in another language. Equivalence between articles was determined using the cross-language links embedded within each Wikipedia article. The winning entry BIBREF421 first built monolingual models from multilingual training data, and then applied them to a chunked version of the test data, making the final prediction a function of the predictions over chunks. Another approach to handling multilingual documents is to attempt to segment them into contiguous monolingual segments.
In addition to identifying the languages present, this requires identifying the locations of boundaries in the text which mark the transition from one language to another. Several methods for supervised language segmentation have been proposed. BIBREF33 generalized an LI algorithm for monolingual documents by adding a dynamic programming algorithm based on a simple Markov model of multilingual documents. More recently, multilingual LI algorithms have also been presented by BIBREF140, BIBREF73, BIBREF74, BIBREF106, and BIBREF82.

Short Texts

LI of short strings is known to be challenging for existing LI techniques. BIBREF37 tested four different classification methods, and found that all have substantially lower accuracy when applied to texts of 25 characters compared with texts of 125 characters. These findings were later strengthened, for example, by BIBREF145 and BIBREF148. BIBREF195 describes a method specifically targeted at short texts that augments a dictionary with an affix table, which was tested over synthetic data derived from a parallel bible corpus. BIBREF145 focus on messages of 5–21 characters, using language models over data drawn from the Universal Declaration of Human Rights (UDHR). We would expect that generic methods for LI of short texts should be effective in any domain where short texts are found, such as search engine queries or microblog messages. However, BIBREF195 and BIBREF145 both only test their systems in a single domain: bible texts in the former case, and texts from the UDHR in the latter case. Other research has shown that results do not trivially generalize across domains BIBREF32, and has found that LI in UDHR documents is relatively easy BIBREF301. For both bible and UDHR data, we expect that the linguistic content is relatively grammatical and well-formed, an expectation that does not carry across to domains such as search engine queries and microblogs. Another “short text” domain where LI has been studied is LI of proper names. BIBREF306 identify this as an issue. BIBREF422 found that LI of names is more accurate than LI of generic words of equivalent length. BIBREF299 raise an important criticism of LI work on Twitter messages to date: only a small number of European languages has been considered. BIBREF299 expand the scope of LI for Twitter, covering nine languages across the Cyrillic, Arabic, and Devanagari scripts. BIBREF152 expand the evaluation further, introducing a dataset of language-labeled Twitter messages across 65 languages, constructed using a semi-automatic method that leverages user identity to avoid inducing a bias in the evaluation set towards messages that existing systems are able to identify correctly. BIBREF152 also test a 1300-language model based on BIBREF153, but find that it performs relatively poorly in the target domain due to a tendency to over-predict low-resource languages. Work has also been done on LI of single words in a document, where the task is to label each word in the document with a specific language. Work to date in this area has assumed that word tokenization can be carried out on the basis of whitespace. BIBREF35 explore word-level LI in the context of segmenting a multilingual document into monolingual segments. Other work has assumed that the languages present in the document are known in advance. Conditional random fields (“CRFs”: BIBREF423) are a sequence labeling method most often used in LI for labeling the language of individual words in a multilingual text.
CRFs can be thought of as finite state models with probabilistic transitions, optimised over pre-defined cliques. They can use any observations made from the test document as features, including language labels given by monolingual language identifiers for words. BIBREF40 used a CRF trained with generalized expectation criteria, and found it to be the most accurate of all methods tested (NB, LR, HMM, CRF) at word-level language identification. BIBREF40 introduce a technique to estimate the parameters using only monolingual data, an important consideration as there is no readily-available collection of manually-labeled multilingual documents with word-level annotations. BIBREF263 present a two-pass approach to processing Turkish–Dutch bilingual documents, where the first pass labels each word independently and the second pass uses the local context of a word to further refine the predictions. BIBREF263 achieved 97.6% accuracy on distinguishing between the two languages using a linear-chain CRF. BIBREF180 are the only ones so far to use a CRF for language identification of monolingual texts. With a CRF, they attained a higher F-score in German dialect identification than NB or an ensemble consisting of NB, CRF, and SVM. Lately, CRFs have also been used for language identification by BIBREF52 and BIBREF44 . BIBREF296 investigate language identification of individual words in the context of code-switching. They find that smoothing of the models substantially improves the accuracy of a language identifier based on a NB classifier when applied to individual words. Similar Languages, Language Varieties, and Dialects While one line of research into language identification has focused on pushing the boundaries of how many languages are supported simultaneously by a single system BIBREF382 , BIBREF36 , BIBREF153 , another has taken a complementary path and focused on language identification in groups of similar languages. Research in this area typically does not make a distinction between languages, varieties and dialects, because such terminological differences tend to be politically rather than linguistically motivated BIBREF424 , BIBREF382 , BIBREF5 , and from an NLP perspective the challenges faced are very similar. Language identification for closely-related languages, language varieties, and dialects has been studied for Malay–Indonesian BIBREF332 , Indian languages BIBREF114 , South Slavic languages BIBREF377 , BIBREF98 , BIBREF4 , BIBREF425 , Serbo-Croatian dialects BIBREF426 , English varieties BIBREF278 , BIBREF45 , Dutch–Flemish BIBREF53 , Dutch dialects (including a temporal dimension) BIBREF427 , German dialects BIBREF428 , Mainland–Singaporean–Taiwanese Chinese BIBREF429 , Portuguese varieties BIBREF5 , BIBREF259 , Spanish varieties BIBREF70 , BIBREF147 , French varieties BIBREF430 , BIBREF431 , BIBREF432 , languages of the Iberian Peninsula BIBREF388 , Romanian dialects BIBREF120 , and Arabic dialects BIBREF41 , BIBREF78 , BIBREF433 , BIBREF75 , BIBREF434 , the last of which we discuss in more detail in this section. As to off-the-shelf tools which can identify closely-related languages, BIBREF79 released a system trained to identify 27 languages, including 10 language varieties. Closely-related languages, language varieties, and dialects have also been the focus of a number of shared tasks in recent years, as discussed in evaluation:sharedtasks. Similar languages are a known problem for existing language identifiers BIBREF332 , BIBREF435 . BIBREF34 identify language pairs from the same language family that also share a common script and the same encoding as the most difficult to discriminate.
BIBREF98 report that achieves only 45% accuracy when trained and tested on 3-way Bosnian/Serbian/Croatian dataset. BIBREF278 found that methods are not competitive with conventional word-based document categorization methods in distinguishing between national varieties of English. BIBREF332 reports that a character trigram model is able to distinguish Malay/Indonesian from English, French, German, and Dutch, but handcrafted rules are needed to distinguish between Malay and Indonesian. One kind of rule is the use of “exclusive words” that are known to occur in only one of the languages. A similar idea is used by BIBREF98 , in automatically learning a “blacklist” of words that have a strong negative correlation with a language – i.e. their presence implies that the text is not written in a particular language. In doing so, they achieve an overall accuracy of 98%, far surpassing the 45% of . BIBREF153 also adopts such “discriminative training” to make use of negative evidence in . BIBREF435 observed that general-purpose approaches to typically use a character representation of text, but successful approaches for closely-related languages, varieties, and dialects seem to favor a word-based representation or higher-order (e.g. 4-grams, 5-grams, and even 6-grams) that often cover whole words BIBREF429 , BIBREF98 , BIBREF278 , BIBREF343 . The study compared character with word-based representations for over varieties of Spanish, Portuguese and French, and found that word-level models performed better for varieties of Spanish, but character models perform better in the case of Portuguese and French. To train accurate and robust systems that discriminate between language varieties or similar languages, models should ideally be able to capture not only lexical but more abstract systemic differences between languages. One way to achieve this, is by using features that use de-lexicalized text representations (e.g. by substituting named entities or content words by placeholders), or at a higher level of abstraction, using POS tags or other morphosyntactic information BIBREF70 , BIBREF390 , BIBREF43 , or even adversarial machine learning to modify the learned representations to remove such artefacts BIBREF358 . Finally, an interesting research direction could be to combine work on closely-related languages with the analysis of regional or dialectal differences in language use BIBREF436 , BIBREF437 , BIBREF438 , BIBREF432 . In recent years, there has been a significant increase of interest in the computational processing of Arabic. This is evidenced by a number of research papers in several NLP tasks and applications including the identification/discrimination of Arabic dialects BIBREF41 , BIBREF78 . Arabic is particularly interesting for researchers interested in language variation due to the fact that the language is often in a diaglossic situation, in which the standard form (Modern Standard Arabic or “MSA”) coexists with several regional dialects which are used in everyday communication. Among the studies published on the topic of Arabic , BIBREF41 proposed a supervised approach to distinguish between MSA and Egyptian Arabic at the sentence level, and achieved up to 85.5% accuracy over an Arabic online commentary dataset BIBREF379 . BIBREF433 achieved higher results over the same dataset using a linear-kernel SVM classifier. 
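As an illustration of the word-based and higher-order character representations discussed above, here is a minimal sketch of a sentence-level classifier for closely-related languages or dialects in the spirit of the linear-SVM systems cited in this section. It uses scikit-learn; the feature configuration (TF-IDF-weighted character 1–5-grams combined with word unigrams), the toy data, and the hyperparameters are illustrative assumptions, not a replication of any cited system.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline, make_union
from sklearn.svm import LinearSVC

# Toy training data: (sentence, variety label); a real system would use
# thousands of labelled sentences per class.
train_texts = ["...sentence in variety A...", "...sentence in variety B..."]
train_labels = ["A", "B"]

# Character n-grams (within word boundaries) capture orthographic and
# morphological cues; word unigrams capture variety-specific lexical items.
features = make_union(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 5), sublinear_tf=True),
    TfidfVectorizer(analyzer="word", ngram_range=(1, 1)),
)

classifier = make_pipeline(features, LinearSVC(C=1.0))
classifier.fit(train_texts, train_labels)
print(classifier.predict(["...an unseen sentence..."]))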
BIBREF78 compiled a dataset containing MSA, Egyptian Arabic, Gulf Arabic and Levantine Arabic, and used it to investigate three classification tasks: (1) MSA and dialectal Arabic; (2) four-way classification – MSA, Egyptian Arabic, Gulf Arabic, and Levantine Arabic; and (3) three-way classification – Egyptian Arabic, Gulf Arabic, and Levantine Arabic. BIBREF439 explores the use of sentence-level Arabic dialect identification as a pre-processor for MT, in customizing the selection of the MT model used to translate a given sentence to the dialect it uses. In performing dialect-specific MT, the authors achieve an improvement of 1.0% BLEU score compared with a baseline system which does not differentiate between Arabic dialects. Finally, in addition to the above-mentioned dataset of BIBREF379 , there are a number of notable multi-dialect corpora of Arabic: a multi-dialect corpus of broadcast speeches used in the ADI shared task BIBREF440 ; a multi-dialect corpus of (informal) written Arabic containing newspaper comments and Twitter data BIBREF441 ; a parallel corpus of 2,000 sentences in MSA, Egyptian Arabic, Tunisian Arabic, Jordanian Arabic, Palestinian Arabic, and Syrian Arabic, in addition to English BIBREF442 ; a corpus of sentences in 18 Arabic dialects (corresponding to 18 different Arabic-speaking countries) based on data manually sourced from web forums BIBREF75 ; and finally two recently compiled multi-dialect corpora containing microblog posts from Twitter BIBREF241 , BIBREF443 . While not specifically targeted at identifying language varieties, BIBREF355 made the critical observation that when naively trained, systems tend to perform most poorly over language varieties from the lowest socio-economic demographics (focusing particularly on the case of English), as they tend to be most under-represented in training corpora. If, as a research community, we are interested in the social equitability of our systems, it is critical that we develop datasets that are truly representative of the global population, to better quantify and remove this effect. To this end, BIBREF355 detail a method for constructing a more representative dataset, and demonstrate the impact of training on such a dataset in terms of alleviating socio-economic bias. Domain-specific One approach to is to build a generic language identifier that aims to correctly identify the language of a text without any information about the source of the text. Some work has specifically targeted across multiple domains, learning characteristics of languages that are consistent between different sources of text BIBREF150 . However, there are often domain-specific features that are useful for identifying the language of a text. In this survey, our primary focus has been on of digitally-encoded text, using only the text itself as evidence on which to base the prediction of the language. Within a text, there can sometimes be domain-specific peculiarities that can be used for . For example, BIBREF399 investigates of user-to-user messages in the eBay e-commerce portal. He finds that using only the first two and last two words of a message is sufficient for identifying the language of a message. Conclusions This article has presented a comprehensive survey on language identification of digitally-encoded text. We have shown that is a rich, complex, and multi-faceted problem that has engaged a wide variety of research communities. 
Language identification accuracy is critical, as language identification is often the first step in longer text processing pipelines, and errors made at this stage will propagate and degrade the performance of later stages. Under controlled conditions, such as limiting the number of languages to a small set of Western European languages and using long, grammatical, and structured text such as government documents as training data, it is possible to achieve near-perfect accuracy. This led many researchers to consider language identification a solved problem, as argued by BIBREF2 . However, language identification becomes much harder when taking into account the peculiarities of real-world data, such as very short documents (e.g. search engine queries), non-linguistic “noise” (e.g. HTML markup), non-standard use of language (e.g. as seen in social media data), and mixed-language documents (e.g. forum posts in multilingual web forums). Modern approaches to language identification are generally data-driven and are based on comparing new documents with models of each target language learned from data. The types of models as well as the sources of training data used in the literature are diverse, and work to date has not compared and evaluated these in a systematic manner, making it difficult to draw broader conclusions about what the “best” method for language identification actually is. We have attempted to synthesize results to date to identify a set of “best practices”, but these should be treated as guidelines and should always be considered in the broader context of a target application. Existing work on language identification serves to illustrate that the scope and depth of the problem are much greater than they may first appear. In openissues, we discussed open issues in language identification, identifying the key challenges and outlining opportunities for future research. Far from being a solved problem, aspects of language identification make it an archetypal learning task, with subtleties that could be tackled by future work on supervised learning, representation learning, multi-task learning, domain adaptation, multi-label classification, and other subfields of machine learning. We hope that this paper can serve as a reference point for future work in the area, both by providing insight into work to date, and by pointing towards the key aspects that merit further investigation. This research was supported in part by the Australian Research Council, the Kone Foundation and the Academy of Finland. We would like to thank Kimmo Koskenniemi for many valuable discussions and comments concerning the early phases of the features and methods sections.
document-level accuracy, precision, recall, F-score
Q: what are the off-the-shelf systems discussed in the paper? Text: Introduction Language identification (“”) is the task of determining the natural language that a document or part thereof is written in. Recognizing text in a specific language comes naturally to a human reader familiar with the language. intro:langid presents excerpts from Wikipedia articles in different languages on the topic of Natural Language Processing (“NLP”), labeled according to the language they are written in. Without referring to the labels, readers of this article will certainly have recognized at least one language in intro:langid, and many are likely to be able to identify all the languages therein. Research into aims to mimic this human ability to recognize specific languages. Over the years, a number of computational approaches have been developed that, through the use of specially-designed algorithms and indexing structures, are able to infer the language being used without the need for human intervention. The capability of such systems could be described as super-human: an average person may be able to identify a handful of languages, and a trained linguist or translator may be familiar with many dozens, but most of us will have, at some point, encountered written texts in languages they cannot place. However, research aims to develop systems that are able to identify any human language, a set which numbers in the thousands BIBREF0 . In a broad sense, applies to any modality of language, including speech, sign language, and handwritten text, and is relevant for all means of information storage that involve language, digital or otherwise. However, in this survey we limit the scope of our discussion to of written text stored in a digitally-encoded form. Research to date on has traditionally focused on monolingual documents BIBREF1 (we discuss for multilingual documents in openissues:multilingual). In monolingual , the task is to assign each document a unique language label. Some work has reported near-perfect accuracy for of large documents in a small number of languages, prompting some researchers to label it a “solved task” BIBREF2 . However, in order to attain such accuracy, simplifying assumptions have to be made, such as the aforementioned monolinguality of each document, as well as assumptions about the type and quantity of data, and the number of languages considered. The ability to accurately detect the language that a document is written in is an enabling technology that increases accessibility of data and has a wide variety of applications. For example, presenting information in a user's native language has been found to be a critical factor in attracting website visitors BIBREF3 . Text processing techniques developed in natural language processing and Information Retrieval (“IR”) generally presuppose that the language of the input text is known, and many techniques assume that all documents are in the same language. In order to apply text processing techniques to real-world data, automatic is used to ensure that only documents in relevant languages are subjected to further processing. In information storage and retrieval, it is common to index documents in a multilingual collection by the language that they are written in, and is necessary for document collections where the languages of documents are not known a-priori, such as for data crawled from the World Wide Web. 
Another application of that predates computational methods is the detection of the language of a document for routing to a suitable translator. This application has become even more prominent due to the advent of Machine Translation (“MT”) methods: in order for MT to be applied to translate a document to a target language, it is generally necessary to determine the source language of the document, and this is the task of . also plays a part in providing support for the documentation and use of low-resource languages. One area where is frequently used in this regard is in linguistic corpus creation, where is used to process targeted web crawls to collect text resources for low-resource languages. A large part of the motivation for this article is the observation that lacks a “home discipline”, and as such, the literature is fragmented across a number of fields, including NLP, IR, machine learning, data mining, social medial analysis, computer science education, and systems science. This has hampered the field, in that there have been many instances of research being carried out with only partial knowledge of other work on the topic, and the myriad of published systems and datasets. Finally, it should be noted that this survey does not make a distinction between languages, language varieties, and dialects. Whatever demarcation is made between languages, varieties and dialects, a system is trained to identify the associated document classes. Of course, the more similar two classes are, the more challenging it is for a system to discriminate between them. Training a system to discriminate between similar languages such as Croatian and Serbian BIBREF4 , language varieties like Brazilian and European Portuguese BIBREF5 , or a set of Arabic dialects BIBREF6 is more challenging than training systems to discriminate between, for example, Japanese and Finnish. Even so, as evidenced in this article, from a computational perspective, the algorithms and features used to discriminate between languages, language varieties, and dialects are identical. as Text Categorization is in some ways a special case of text categorization, and previous research has examined applying standard text categorization methods to BIBREF7 , BIBREF8 . BIBREF9 provides a definition of text categorization, which can be summarized as the task of mapping a document onto a pre-determined set of classes. This is a very broad definition, and indeed one that is applicable to a wide variety of tasks, amongst which falls modern-day . The archetypal text categorization task is perhaps the classification of newswire articles according to the topics that they discuss, exemplified by the Reuters-21578 dataset BIBREF10 . However, has particular characteristics that make it different from typical text categorization tasks: These distinguishing characteristics present unique challenges and offer particular opportunities, so much so that research in has generally proceeded independently of text categorization research. In this survey, we will examine the common themes and ideas that underpin research in . We begin with a brief history of research that has led to modern (history), and then proceed to review the literature, first introducing the mathematical notation used in the article (notation), and then providing synthesis and analysis of existing research, focusing specifically on the representation of text (features) and the learning algorithms used (methods). 
We examine the methods for evaluating the quality of the systems (evaluation) as well as the areas where has been applied (applications), and then provide an overview of “off-the-shelf” systems (ots). We conclude the survey with a discussion of the open issues in (openissues), enumerating issues and existing efforts to address them, as well as charting the main directions where further research in is required. Previous Surveys Although there are some dedicated survey articles, these tend to be relatively short; there have not been any comprehensive surveys of research in automated LI of text to date. The largest survey so far can be found in the literature review of Marco Lui's PhD thesis BIBREF11 , which served as an early draft and starting point for the current article. BIBREF12 provides a historical overview of language identification focusing on the use of language models. BIBREF13 gives a brief overview of some of the methods used for , and BIBREF14 provide a review of some of the techniques and applications used previously. BIBREF15 gives a short overview of some of the challenges, algorithms and available tools for . BIBREF16 provides a brief summary of , how it relates to other research areas, and some outstanding challenges, but only does so in general terms and does not go into any detail about existing work in the area. Another brief article about is BIBREF17 , which covers both of spoken language as well as of written documents, and also discusses of documents stored as images rather than digitally-encoded text. A Brief History of as a task predates computational methods – the earliest interest in the area was motivated by the needs of translators, and simple manual methods were developed to quickly identify documents in specific languages. The earliest known work to describe a functional program for text is by BIBREF18 , a statistician, who used multiple discriminant analysis to teach a computer how to distinguish, at the word level, between English, Swedish and Finnish. Mustonen compiled a list of linguistically-motivated character-based features, and trained his language identifier on 300 words for each of the three target languages. The training procedure created two discriminant functions, which were tested with 100 words for each language. The experiment resulted in 76% of the words being correctly classified; even by current standards this percentage would be seen as acceptable given the small amount of training material, although the composition of training and test data is not clear, making the experiment unreproducible. In the early 1970s, BIBREF19 considered the problem of automatic . According to BIBREF20 and the available abstract of Nakamura's article, his language identifier was able to distinguish between 25 languages written with the Latin alphabet. As features, the method used the occurrence rates of characters and words in each language. From the abstract it seems that, in addition to the frequencies, he used some binary presence/absence features of particular characters or words, based on manual . BIBREF20 wrote his master's thesis “Language Identification by Statistical Analysis” for the Naval Postgraduate School at Monterey, California. The continued interest and the need to use of text in military intelligence settings is evidenced by the recent articles of, for example, BIBREF21 , BIBREF22 , BIBREF23 , and BIBREF24 . As features for , BIBREF20 used, e.g., the relative frequencies of characters and character bigrams. 
With a majority vote classifier ensemble of seven classifiers using Kolmogor-Smirnov's Test of Goodness of Fit and Yule's characteristic ( INLINEFORM0 ), he managed to achieve 89% accuracy over 53 characters when distinguishing between English and Spanish. His thesis actually includes the identifier program code (for the IBM System/360 Model 67 mainframe), and even the language models in printed form. Much of the earliest work on automatic was focused on identification of spoken language, or did not make a distinction between written and spoken language. For example, the work of BIBREF25 is primarily focused on of spoken utterances, but makes a broader contribution in demonstrating the feasibility of on the basis of a statistical model of broad phonetic information. However, their experiments do not use actual speech data, but rather “synthetic” data in the form of phonetic transcriptions derived from written text. Another subfield of speech technology, speech synthesis, has also generated a considerable amount of research in the of text, starting from the 1980s. In speech synthesis, the need to know the source language of individual words is crucial in determining how they should be pronounced. BIBREF26 uses the relative frequencies of character trigrams as probabilities and determines the language of words using a Bayesian model. Church explains the method – that has since been widely used in LI – as a small part of an article concentrating on many aspects of letter stress assignment in speech synthesis, which is probably why BIBREF27 is usually attributed to being the one to have introduced the aforementioned method to of text. As Beesley's article concentrated solely on the problem of LI, this single focus probably enabled his research to have greater visibility. The role of the program implementing his method was to route documents to MT systems, and Beesley's paper more clearly describes what has later come to be known as a character model. The fact that the distribution of characters is relatively consistent for a given language was already well known. The highest-cited early work on automatic is BIBREF7 . Cavnar and Trenkle's method (which we describe in detail in outofplace) builds up per-document and per-language profiles, and classifies a document according to which language profile it is most similar to, using a rank-order similarity metric. They evaluate their system on 3478 documents in eight languages obtained from USENET newsgroups, reporting a best overall accuracy of 99.8%. Gertjan van Noord produced an implementation of the method of Cavnar and Trenkle named , which has become eponymous with the method itself. is packaged with pre-trained models for a number of languages, and so it is likely that the strong results reported by Cavnar and Trenkle, combined with the ready availability of an “off-the-shelf” implementation, has resulted in the exceptional popularity of this particular method. BIBREF7 can be considered a milestone in automatic , as it popularized the use of automatic methods on character models for , and to date the method is still considered a benchmark for automatic . On Notation This section introduces the notation used throughout this article to describe methods. We have translated the notation in the original papers to our notation, to make it easier to see the similarities and differences between the methods presented in the literature. 
The formulas presented could be used to implement language identifiers and re-evaluate the studies they were originally presented in. A corpus INLINEFORM0 consists of individual tokens INLINEFORM1 which may be bytes, characters or words. INLINEFORM2 is comprised of a finite sequence of individual tokens, INLINEFORM3 . The total count of individual tokens INLINEFORM4 in INLINEFORM5 is denoted by INLINEFORM6 . In a corpus INLINEFORM7 with non-overlapping segments INLINEFORM8 , each segment is referred to as INLINEFORM9 , which may be a short document or a word or some other way of segmenting the corpus. The number of segments is denoted as INLINEFORM10 . A feature INLINEFORM0 is some countable characteristic of the corpus INLINEFORM1 . When referring to the set of all features INLINEFORM2 in a corpus INLINEFORM3 , we use INLINEFORM4 , and the number of features is denoted by INLINEFORM5 . A set of unique features in a corpus INLINEFORM6 is denoted by INLINEFORM7 . The number of unique features is referred to as INLINEFORM8 . The count of a feature INLINEFORM9 in the corpus INLINEFORM10 is referred to as INLINEFORM11 . If a corpus is divided into segments INLINEFORM12 , the count of a feature INLINEFORM13 in INLINEFORM14 is defined as the sum of counts over the segments of the corpus, i.e. INLINEFORM15 . Note that the segmentation may affect the count of a feature in INLINEFORM16 as features do not cross segment borders. A frequently-used feature is an , which consists of a contiguous sequence of INLINEFORM0 individual tokens. An starting at position INLINEFORM1 in a corpus segment is denoted INLINEFORM2 , where positions INLINEFORM3 remain within the same segment of the corpus as INLINEFORM4 . If INLINEFORM5 , INLINEFORM6 is an individual token. When referring to all of length INLINEFORM7 in a corpus INLINEFORM8 , we use INLINEFORM9 and the count of all such is denoted by INLINEFORM10 . The count of an INLINEFORM11 in a corpus segment INLINEFORM12 is referred to as INLINEFORM13 and is defined by count: DISPLAYFORM0 The set of languages is INLINEFORM0 , and INLINEFORM1 denotes the number of languages. A corpus INLINEFORM2 in language INLINEFORM3 is denoted by INLINEFORM4 . A language model INLINEFORM5 based on INLINEFORM6 is denoted by INLINEFORM7 . The features given values by the model INLINEFORM8 are the domain INLINEFORM9 of the model. In a language model, a value INLINEFORM10 for the feature INLINEFORM11 is denoted by INLINEFORM12 . For each potential language INLINEFORM13 of a corpus INLINEFORM14 in an unknown language, a resulting score INLINEFORM15 is calculated. A corpus in an unknown language is also referred to as a test document. An Archetypal Language Identifier The design of a supervised language identifier can generally be deconstructed into four key steps: A representation of text is selected A model for each language is derived from a training corpus of labelled documents A function is defined that determines the similarity between a document and each language The language of a document is predicted based on the highest-scoring model On the Equivalence of Methods The theoretical description of some of the methods leaves room for interpretation on how to implement them. BIBREF28 define an algorithm to be any well-defined computational procedure. BIBREF29 introduces a three-tiered classification where programs implement algorithms and algorithms implement functions. 
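To make the four-step decomposition above concrete, the following is a minimal sketch of one well-known instantiation: a rank-order (“out-of-place”) character n-gram profile method in the style popularized by BIBREF7 . It is a simplified illustration rather than a faithful reimplementation; the profile size of 300, the use of 1–5-grams, and the handling of missing n-grams are assumptions.

from collections import Counter

def ngram_profile(text, n_max=5, top_k=300):
    # Steps 1 and 2: represent text as overlapping character 1..n_max-grams
    # ranked by frequency; the same routine builds both the per-language
    # models (from training text) and the per-document profile.
    counts = Counter()
    for n in range(1, n_max + 1):
        counts.update(text[i:i + n] for i in range(len(text) - n + 1))
    return {gram: rank for rank, (gram, _) in enumerate(counts.most_common(top_k))}

def out_of_place(doc_profile, lang_profile):
    # Step 3: similarity as the sum of rank differences; n-grams absent from
    # the language profile receive a fixed maximum penalty.
    max_penalty = len(lang_profile)
    return sum(abs(rank - lang_profile[gram]) if gram in lang_profile else max_penalty
               for gram, rank in doc_profile.items())

def identify(text, lang_profiles):
    # Step 4: predict the language whose profile is closest to the document's.
    doc_profile = ngram_profile(text)
    return min(lang_profiles, key=lambda lang: out_of_place(doc_profile, lang_profiles[lang]))

# lang_profiles = {lang: ngram_profile(training_text)
#                  for lang, training_text in training_data.items()}
# print(identify("mystery text goes here", lang_profiles))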
The examples of functions given by BIBREF29 , sort and find max differ from our identify language as they are always solvable and produce the same results. In this survey, we have considered two methods to be the same if they always produce exactly the same results from exactly the same inputs. This would not be in line with the definition of an algorithm by BIBREF29 , as in his example there are two different algorithms mergesort and quicksort that implement the function sort, always producing identical results with the same input. What we in this survey call a method, is actually a function in the tiers presented by BIBREF29 . Features In this section, we present an extensive list of features used in , some of which are not self-evident. The equations written in the unified notation defined earlier show how the values INLINEFORM0 used in the language models are calculated from the tokens INLINEFORM1 . For each feature type, we generally introduce the first published article that used that feature type, as well as more recent articles where the feature type has been considered. Bytes and Encodings In , text is typically modeled as a stream of characters. However, there is a slight mismatch between this view and how text is actually stored: documents are digitized using a particular encoding, which is a mapping from characters (e.g. a character in an alphabet), onto the actual sequence of bytes that is stored and transmitted by computers. Encodings vary in how many bytes they use to represent each character. Some encodings use a fixed number of bytes for each character (e.g. ASCII), whereas others use a variable-length encoding (e.g. UTF-8). Some encodings are specific to a given language (e.g. GuoBiao 18030 or Big5 for Chinese), whereas others are specifically designed to represent as many languages as possible (e.g. the Unicode family of encodings). Languages can often be represented in a number of different encodings (e.g. UTF-8 and Shift-JIS for Japanese), and sometimes encodings are specifically designed to share certain codepoints (e.g. all single-byte UTF-8 codepoints are exactly the same as ASCII). Most troubling for , isomorphic encodings can be used to encode different languages, meaning that the determination of the encoding often doesn't help in honing in on the language. Infamous examples of this are the ISO-8859 and EUC encoding families. Encodings pose unique challenges for practical applications: a given language can often be encoded in different forms, and a given encoding can often map onto multiple languages. Some research has included an explicit encoding detection step to resolve bytes to the characters they represent BIBREF30 , effectively transcoding the document into a standardized encoding before attempting to identify the language. However, transcoding is computationally expensive, and other research suggests that it may be possible to ignore encoding and build a single per-language model covering multiple encodings simultaneously BIBREF31 , BIBREF32 . Another solution is to treat each language-encoding pair as a separate category BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 . The disadvantage of this is that it increases the computational cost by modeling a larger number of classes. Most of the research has avoided issues of encoding entirely by assuming that all documents use the same encoding BIBREF37 . This may be a reasonable assumption in some settings, such as when processing data from a single source (e.g. 
all data from Twitter and Wikipedia is UTF-8 encoded). In practice, a disadvantage of this approach may be that some encodings are only applicable to certain languages (e.g. S-JIS for Japanese and Big5 for Chinese), so knowing that a document is in a particular encoding can provide information that would be lost if the document is transcoded to a universal encoding such as UTF-8. BIBREF38 used a parallel state machine to detect which encoding scheme a file could potentially have been encoded with. The knowledge of the encoding, if detected, is then used to narrow down the possible languages. Most features and methods do not make a distinction between bytes or characters, and because of this we will present feature and method descriptions in terms of characters, even if byte tokenization was actually used in the original research. Characters In this section, we review how individual character tokens have been used as features in . BIBREF39 used the formatting of numbers when distinguishing between Malay and Indonesian. BIBREF40 used the presence of non-alphabetic characters between the current word and the words before and after as features. BIBREF41 used emoticons (or emojis) in Arabic dialect identification with Naive Bayes (“NB”; see product). Non-alphabetic characters have also been used by BIBREF42 , BIBREF43 , BIBREF44 , and BIBREF45 . BIBREF46 used knowledge of alphabets to exclude languages where a language-unique character in a test document did not appear. BIBREF47 used alphabets collected from dictionaries to check if a word might belong to a language. BIBREF48 used the Unicode database to get the possible languages of individual Unicode characters. Lately, the knowledge of relevant alphabets has been used for also by BIBREF49 and BIBREF44 . Capitalization is mostly preserved when calculating character frequencies, but in contexts where it is possible to identify the orthography of a given document and where capitalization exists in the orthography, lowercasing can be used to reduce sparseness. In recent work, capitalization was used as a special feature by BIBREF42 , BIBREF43 , and BIBREF45 . BIBREF50 was the first to use the length of words in . BIBREF51 used the length of full person names comprising several words. Lately, the number of characters in words has been used for by BIBREF52 , BIBREF53 , BIBREF44 , and BIBREF45 . BIBREF52 also used the length of the two preceding words. BIBREF54 used character frequencies as feature vectors. In a feature vector, each feature INLINEFORM0 has its own integer value. The raw frequency – also called term frequency (TF) – is calculated for each language INLINEFORM1 as: DISPLAYFORM0 BIBREF20 was the first to use the probability of characters. He calculated the probabilities as relative frequencies, by dividing the frequency of a feature found in the corpus by the total count of features of the same type in the corpus. When the relative frequency of a feature INLINEFORM0 is used as a value, it is calculated for each language INLINEFORM1 as: DISPLAYFORM0 BIBREF55 calculated the relative frequencies of one character prefixes, and BIBREF56 did the same for one character suffixes. BIBREF57 calculated character frequency document frequency (“LFDF”) values. BIBREF58 compared their own Inverse Class Frequency (“ICF”) method with the Arithmetic Average Centroid (“AAC”) and the Class Feature Centroid (“CFC”) feature vector updating methods. In ICF a character appearing frequently only in some language gets more positive weight for that language. 
The values differ from Inverse Document Frequency (“IDF”, artemenko1), as they are calculated using also the frequencies of characters in other languages. Their ICF-based vectors generally performed better than those based on AAC or CFC. BIBREF59 explored using the relative frequencies of characters with similar discriminating weights. BIBREF58 also used Mutual Information (“MI”) and chi-square weighting schemes with characters. BIBREF32 compared the identification results of single characters with the use of character bigrams and trigrams when classifying over 67 languages. Both bigrams and trigrams generally performed better than unigrams. BIBREF60 also found that the identification results from identifiers using just characters are generally worse than those using character sequences. Character Combinations In this section we consider the different combinations of characters used in the literature. Character mostly consist of all possible characters in a given encoding, but can also consist of only alphabetic or ideographic characters. BIBREF56 calculated the co-occurrence ratios of any two characters, as well as the ratio of consonant clusters of different sizes to the total number of consonants. BIBREF61 used the combination of every bigram and their counts in words. BIBREF53 used the proportions of question and exclamation marks to the total number of the end of sentence punctuation as features with several machine learning algorithms. BIBREF62 used FastText to generate character n-gram embeddings BIBREF63 . Neural network generated embeddings are explained in cooccurrencesofwords. BIBREF20 used the relative frequencies of vowels following vowels, consonants following vowels, vowels following consonants and consonants following consonants. BIBREF52 used vowel-consonant ratios as one of the features with Support Vector Machines (“SVMs”, supportvectormachines), Decision Trees (“DTs”, decisiontrees), and Conditional Random Fields (“CRFs”, openissues:short). BIBREF41 used the existence of word lengthening effects and repeated punctuation as features. BIBREF64 used the presence of characters repeating more than twice in a row as a feature with simple scoring (simple1). BIBREF65 used more complicated repetitions identified by regular expressions. BIBREF66 used letter and character bigram repetition with a CRF. BIBREF67 used the count of character sequences with three or more identical characters, using several machine learning algorithms. Character are continuous sequences of characters of length INLINEFORM0 . They can be either consecutive or overlapping. Consecutive character bigrams created from the four character sequence door are do and or, whereas the overlapping bigrams are do, oo, and or. Overlapping are most often used in the literature. Overlapping produces a greater number and variety of from the same amount of text. BIBREF20 was the first to use combinations of any two characters. He calculated the relative frequency of each bigram. RFTable2 lists more recent articles where relative frequencies of of characters have been used. BIBREF20 also used the relative frequencies of two character combinations which had one unknown character between them, also known as gapped bigrams. BIBREF68 used a modified relative frequency of character unigrams and bigrams. Character trigram frequencies relative to the word count were used by BIBREF92 , who calculated the values INLINEFORM0 as in vega1. 
Let INLINEFORM1 be the word-tokenized segmentation of the corpus INLINEFORM2 of character tokens, then: DISPLAYFORM0 where INLINEFORM0 is the count of character trigrams INLINEFORM1 in INLINEFORM2 , and INLINEFORM3 is the total word count in the corpus. Later frequencies relative to the word count were used by BIBREF93 for character bigrams and trigrams. BIBREF25 divided characters into five phonetic groups and used a Markovian method to calculate the probability of each bigram consisting of these phonetic groups. In Markovian methods, the probability of a given character INLINEFORM0 is calculated relative to a fixed-size character context INLINEFORM1 in corpus INLINEFORM2 , as follows: DISPLAYFORM0 where INLINEFORM0 is an prefix of INLINEFORM1 of length INLINEFORM2 . In this case, the probability INLINEFORM3 is the value INLINEFORM4 , where INLINEFORM5 , in the model INLINEFORM6 . BIBREF94 used 4-grams with recognition weights which were derived from Markovian probabilities. MarkovianTable lists some of the more recent articles where Markovian character have been used. BIBREF110 was the first author to propose a full-fledged probabilistic language identifier. He defines the probability of a trigram INLINEFORM0 being written in the language INLINEFORM1 to be: DISPLAYFORM0 He considers the prior probabilities of each language INLINEFORM0 to be equal, which leads to: DISPLAYFORM0 BIBREF110 used the probabilities INLINEFORM0 as the values INLINEFORM1 in the language models. BIBREF111 used a list of the most frequent bigrams and trigrams with logarithmic weighting. BIBREF112 was the first to use direct frequencies of character as feature vectors. BIBREF113 used Principal Component Analysis (“PCA”) to select only the most discriminating bigrams in the feature vectors representing languages. BIBREF114 used the most frequent and discriminating byte unigrams, bigrams, and trigrams among their feature functions. They define the most discriminating features as those which have the most differing relative frequencies between the models of the different languages. BIBREF115 tested from two to five using frequencies as feature vectors, frequency ordered lists, relative frequencies, and Markovian probabilities. FrequencyVectorTable lists the more recent articles where the frequency of character have been used as features. In the method column, “RF” refers to Random Forest (cf. decisiontrees), “LR” to Logistic Regression (discriminantfunctions), “KRR” to Kernel Ridge Regression (vectors), “KDA” to Kernel Discriminant Analysis (vectors), and “NN” to Neural Networks (neuralnetworks). BIBREF47 used the last two and three characters of open class words. BIBREF34 used an unordered list of distinct trigrams with the simple scoring method (Simplescoring). BIBREF132 used Fisher's discriminant function to choose the 1000 most discriminating trigrams. BIBREF133 used unique 4-grams of characters with positive Decision Rules (Decisionrule). BIBREF134 used the frequencies of bi- and trigrams in words unique to a language. BIBREF135 used lists of the most frequent trigrams. BIBREF38 divided possible character bigrams into those that are commonly used in a language and to those that are not. They used the ratio of the commonly used bigrams to all observed bigrams to give a confidence score for each language. BIBREF136 used the difference between the ISO Latin-1 code values of two consecutive characters as well as two characters separated by another character, also known as gapped character bigrams. 
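A Markovian character n-gram identifier of the kind formalized above can be sketched briefly. The following is an illustrative implementation assuming trigrams (a two-character history), additive smoothing over the training character set, and equal prior probabilities for all languages; the cited works differ in their smoothing and backoff choices.

import math
from collections import Counter

class MarkovCharModel:
    def __init__(self, text, n=3, alpha=0.1):
        self.n, self.alpha = n, alpha
        self.ngrams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
        self.contexts = Counter(text[i:i + n - 1] for i in range(len(text) - n + 2))
        self.charset = set(text)

    def logprob(self, history, char):
        # P(char | history) with additive smoothing over the training charset.
        num = self.ngrams[history + char] + self.alpha
        den = self.contexts[history] + self.alpha * max(len(self.charset), 1)
        return math.log(num / den)

    def score(self, text):
        # log P(text) under the model: sum of conditional log probabilities.
        n = self.n
        return sum(self.logprob(text[i:i + n - 1], text[i + n - 1])
                   for i in range(len(text) - n + 1))

def identify(text, models):
    # With equal priors, the most probable language is the highest-scoring model.
    return max(models, key=lambda lang: models[lang].score(text))

# models = {lang: MarkovCharModel(train_text) for lang, train_text in corpora.items()}
# print(identify("mystery text", models))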
BIBREF137 used the IDF and the transition probability of trigrams. They calculated the IDF values INLINEFORM0 of trigrams INLINEFORM1 for each language INLINEFORM2 , as in artemenko1, where INLINEFORM3 is the number of trigrams INLINEFORM4 in the corpus of the language INLINEFORM5 and INLINEFORM6 is the number of languages in which the trigram INLINEFORM7 is found, where INLINEFORM8 is the language-segmented training corpus with each language as a single segment. DISPLAYFORM0 INLINEFORM0 is defined as: DISPLAYFORM0 BIBREF138 used from one to four, which were weighted with “TF-IDF” (Term Frequency–Inverse Document Frequency). TF-IDF was calculated as: DISPLAYFORM0 TF-IDF weighting or close variants have been widely used for . BIBREF139 used “CF-IOF” (Class Frequency-Inverse Overall Frequency) weighted 3- and 4-grams. BIBREF140 used the logarithm of the ratio of the counts of character bigrams and trigrams in the English and Hindi dictionaries. BIBREF141 used a feature weighting scheme based on mutual information (“MI”). They also tried weighting schemes based on the “GSS” (Galavotti, Sebastiani, and Simi) and “NGL” (Ng, Goh, and Low) coefficients, but using the MI-based weighting scheme proved the best in their evaluations when they used the sum of values method (sumvalues1). BIBREF67 used punctuation trigrams, where the first character has to be a punctuation mark (but not the other two characters). BIBREF142 used consonant bi- and trigrams which were generated from words after the vowels had been removed. The language models mentioned earlier consisted only of of the same size INLINEFORM0 . If from one to four were used, then there were four separate language models. BIBREF7 created ordered lists of the most frequent for each language. BIBREF143 used similar lists with symmetric cross-entropy. BIBREF144 used a Markovian method to calculate the probability of byte trigrams interpolated with byte unigrams. BIBREF145 created a language identifier based on character of different sizes over 281 languages, and obtained an identification accuracy of 62.8% for extremely short samples (5–9 characters). Their language identifier was used or evaluated by BIBREF146 , BIBREF147 , and BIBREF148 . BIBREF146 managed to improve the identification results by feeding the raw language distance calculations into an SVM. DifferingNgramTable3 lists recent articles where character of differing sizes have been used. “LR” in the methods column refer to Logistic Regression (maxent), “LSTM RNN” to Long Short-Term Memory Recurrent Neural Networks (neuralnetworks), and “DAN” to Deep Averaging Networks (neuralnetworks). BIBREF30 used up to the four last characters of words and calculated their relative frequencies. BIBREF149 used frequencies of 2–7-grams, normalized relative to the total number of in all the language models as well as the current language model. BIBREF60 compared the use of different sizes of in differing combinations, and found that combining of differing sizes resulted in better identification scores. BIBREF150 , BIBREF151 , BIBREF152 used mixed length domain-independent language models of byte from one to three or four. Mixed length language models were also generated by BIBREF36 and later by BIBREF153 , BIBREF101 , who used the most frequent and discriminating longer than two bytes, up to a maximum of 12 bytes, based on their weighted relative frequencies. INLINEFORM0 of the most frequent were extracted from training corpora for each language, and their relative frequencies were calculated. 
In the tests reported in BIBREF153 , INLINEFORM1 varied from 200 to 3,500 . Later BIBREF154 also evaluated different combinations of character as well as their combinations with words. BIBREF155 used mixed-order frequencies relative to the total number of in the language model. BIBREF61 used frequencies of from one to five and gapped 3- and 4-grams as features with an SVM. As an example, some gapped 4-grams from the word Sterneberg would be Senb, tree, enbr, and reeg. BIBREF156 used character as a backoff from Markovian word . BIBREF157 used the frequencies of word initial ranging from 3 to the length of the word minus 1. BIBREF158 used the most relevant selected using the absolute value of the Pearson correlation. BIBREF159 used only the first 10 characters from a longer word to generate the , while the rest were ignored. BIBREF160 used only those which had the highest TF-IDF scores. BIBREF43 used character weighted by means of the “BM25” (Best Match 25) weighting scheme. BIBREF161 used byte up to length 25. BIBREF61 used consonant sequences generated from words. BIBREF189 used the presence of vowel sequences as a feature with a NB classifier (see naivebayes) when distinguishing between English and transliterated Indian languages. BIBREF190 used a basic dictionary (basicdictionary) composed of the 400 most common character 4-grams. BIBREF46 and BIBREF110 used character combinations (of different sizes) that either existed in only one language or did not exist in one or more languages. Morphemes, Syllables and Chunks BIBREF191 used the suffixes of lexical words derived from untagged corpora. BIBREF192 used prefixes and suffixes determined using linguistic knowledge of the Arabic language. BIBREF193 used suffixes and prefixes in rule-based . BIBREF134 used morphemes and morpheme trigrams (morphotactics) constructed by Creutz's algorithm BIBREF194 . BIBREF195 used prefixes and suffixes constructed by his own algorithm, which was later also used by BIBREF196 . BIBREF197 used morpheme lexicons in . BIBREF196 compared the use of morphological features with the use of variable sized character . When choosing between ten European languages, the morphological features obtained only 26.0% accuracy while the reached 82.7%. BIBREF198 lemmatized Malay words in order to get the base forms. BIBREF199 used a morphological analyzer of Arabic. BIBREF70 used morphological information from a part-of-speech (POS) tagger. BIBREF189 and BIBREF64 used manually selected suffixes as features. BIBREF200 created morphological grammars to distinguish between Croatian and Serbian. BIBREF201 used morphemes created by Morfessor, but they also used manually created morphological rules. BIBREF102 used a suffix module containing the most frequent suffixes. BIBREF202 and BIBREF159 used word suffixes as features with CRFs. BIBREF119 used an unsupervised method to learn morphological features from training data. The method collects candidate affixes from a dictionary built using the training data. If the remaining part of a word is found from the dictionary after removing a candidate affix, the candidate affix is considered to be a morpheme. BIBREF119 used 5% of the most frequent affixes in language identification. BIBREF183 used character classified into different types, which included prefixes and suffixes. PrefixSuffixTable lists some of the more recent articles where prefixes and suffixes collected from a training corpus has been used for . BIBREF206 used trigrams composed of syllables. 
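The dictionary-based affix-learning heuristic described above (a candidate affix is kept when stripping it from a word leaves another dictionary word) can be sketched compactly. The version below is a loose illustration restricted to suffixes, not a reimplementation of the cited method; the maximum suffix length and the fraction of affixes retained are arbitrary choices.

from collections import Counter

def learn_suffixes(words, max_len=4, keep_fraction=0.05):
    # words: iterable of word tokens from the training corpus of one language
    dictionary = set(words)
    candidates = Counter()
    for word in dictionary:
        for k in range(1, min(max_len, len(word) - 1) + 1):
            stem, suffix = word[:-k], word[-k:]
            # Keep the candidate only if the remaining stem is itself a word.
            if stem in dictionary:
                candidates[suffix] += 1
    keep = max(1, int(len(candidates) * keep_fraction))
    return [suffix for suffix, _ in candidates.most_common(keep)]

# The learned suffixes can then be used as binary or count features
# in any of the classifiers discussed in this survey, e.g.:
# suffixes = learn_suffixes(training_tokens)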
BIBREF198 used Markovian syllable bigrams for between Malay and English. Later BIBREF207 also experimented with syllable uni- and trigrams. BIBREF114 used the most frequent as well as the most discriminating Indian script syllables, called aksharas. They used single aksharas, akshara bigrams, and akshara trigrams. Syllables would seem to be especially apt in situations where distinction needs to be made between two closely-related languages. BIBREF96 used the trigrams of non-syllable chunks that were based on MI. BIBREF198 experimented also with Markovian bigrams using both character and grapheme bigrams, but the syllable bigrams proved to work better. Graphemes in this case are the minimal units of the writing system, where a single character may be composed of several graphemes (e.g. in the case of the Hangul or Thai writing systems). Later, BIBREF207 also used grapheme uni- and trigrams. BIBREF207 achieved their best results combining word unigrams and syllable bigrams with a grapheme back-off. BIBREF208 used the MADAMIRA toolkit for D3 decliticization and then used D3-token 5-grams. D3 decliticization is a way to preprocess Arabic words presented by BIBREF209 . Graphones are sequences of characters linked to sequences of corresponding phonemes. They are automatically deduced from a bilingual corpus which consists of words and their correct pronunciations using Joint Sequence Models (“JSM”). BIBREF210 used language tags instead of phonemes when generating the graphones and then used Markovian graphone from 1 to 8 in . Words BIBREF211 used the position of the current word in word-level . The position of words in sentences has also been used as a feature in code-switching detection by BIBREF52 . It had predictive power greater than the language label or length of the previous word. BIBREF18 used the characteristics of words as parts of discriminating functions. BIBREF212 used the string edit distance and overlap between the word to be identified and words in dictionaries. Similarly BIBREF140 used a modified edit distance, which considers the common spelling substitutions when Hindi is written using latin characters. BIBREF213 used the Minimum Edit Distance (“MED”). Basic dictionaries are unordered lists of words belonging to a language. Basic dictionaries do not include information about word frequency, and are independent of the dictionaries of other languages. BIBREF110 used a dictionary for as a part of his speech synthesizer. Each word in a dictionary had only one possible “language”, or pronunciation category. More recently, a basic dictionary has been used for by BIBREF214 , BIBREF52 , and BIBREF90 . Unique word dictionaries include only those words of the language, that do not belong to the other languages targeted by the language identifier. BIBREF215 used unique short words (from one to three characters) to differentiate between languages. Recently, a dictionary of unique words was used for by BIBREF116 , BIBREF216 , and BIBREF67 . BIBREF47 used exhaustive lists of function words collected from dictionaries. BIBREF217 used stop words – that is non-content or closed-class words – as a training corpus. Similarly, BIBREF218 used words from closed word classes, and BIBREF97 used lists of function words. BIBREF219 used a lexicon of Arabic words and phrases that convey modality. Common to these features is that they are determined based on linguistic knowledge. BIBREF220 used the most relevant words for each language. BIBREF221 used unique or nearly unique words. 
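Dictionary-based word features such as these are straightforward to operationalize. The sketch below combines two of the ideas above — a per-language list of highly frequent words and a set of words unique to one language — into a simple voting score; the list size, the weighting, and the toy example are illustrative assumptions.

from collections import Counter

def build_dictionaries(corpora, top_k=500):
    # corpora: dict mapping language -> list of word tokens from training data
    common = {lang: {w for w, _ in Counter(toks).most_common(top_k)}
              for lang, toks in corpora.items()}
    # A word is "unique" to a language if it appears in no other language's list.
    unique = {}
    for lang, words in common.items():
        others = set().union(*(common[l] for l in common if l != lang))
        unique[lang] = words - others
    return common, unique

def score_document(tokens, common, unique, unique_weight=3):
    # Unique-word hits are strong evidence, common-word hits weak evidence.
    scores = {lang: 0 for lang in common}
    for tok in tokens:
        for lang in common:
            if tok in unique[lang]:
                scores[lang] += unique_weight
            elif tok in common[lang]:
                scores[lang] += 1
    return max(scores, key=scores.get)

# common, unique = build_dictionaries({"en": en_tokens, "nl": nl_tokens})
# print(score_document("the quick brown fox".split(), common, unique))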
BIBREF80 used Information Gain Word-Patterns (“IG-WP”) to select the words with the highest information gain. BIBREF222 made an (unordered) list of the most common words for each language, as, more recently, did BIBREF223 , BIBREF83 , and BIBREF85 . BIBREF224 encoded the most common words to root forms with the Soundex algorithm. BIBREF225 collected the frequencies of words into feature vectors. BIBREF112 compared the use of character from 2 to 5 with the use of words. Using words resulted in better identification results than using character bigrams (test document sizes of 20, 50, 100 or 200 characters), but always worse than character 3-, 4- or 5-grams. However, the combined use of words and character 4-grams gave the best results of all tested combinations, obtaining 95.6% accuracy for 50 character sequences when choosing between 13 languages. BIBREF158 used TF-IDF scores of words to distinguish between language groups. Recently, the frequency of words has also been used for by BIBREF180 , BIBREF183 , BIBREF129 , and BIBREF142 . BIBREF226 and BIBREF227 were the first to use relative frequencies of words in . As did BIBREF112 for word frequencies, also BIBREF60 found that combining the use of character with the use of words provided the best results. His language identifier obtained 99.8% average recall for 50 character sequences for the 10 evaluated languages (choosing between the 13 languages known by the language identifier) when using character from 1 to 6 combined with words. BIBREF98 calculated the relative frequency of words over all the languages. BIBREF137 calculated the IDF of words, following the approach outlined in artemenko1. BIBREF177 calculated the Pointwise Mutual Information (“PMI”) for words and used it to group words to Chinese dialects or dialect groups. Recently, the relative frequency of words has also been used for by BIBREF184 , BIBREF148 and BIBREF91 BIBREF228 used the relative frequency of words with less than six characters. Recently, BIBREF83 also used short words, as did BIBREF45 . BIBREF229 used the relative frequency calculated from Google searches. Google was later also used by BIBREF96 and BIBREF230 . BIBREF231 created probability maps for words for German dialect identification between six dialects. In a word probability map, each predetermined geographic point has a probability for each word form. Probabilities were derived using a linguistic atlas and automatically-induced dialect lexicons. BIBREF232 used commercial spelling checkers, which utilized lexicons and morphological analyzers. The language identifier of BIBREF232 obtained 97.9% accuracy when classifying one-line texts between 11 official South African languages. BIBREF233 used the ALMORGEANA analyzer to check if the word had an analysis in modern standard Arabic. They also used sound change rules to use possible phonological variants with the analyzer. BIBREF234 used spellchecking and morphological analyzers to detect English words from Hindi–English mixed search queries. BIBREF235 used spelling checkers to distinguish between 15 languages, extending the work of BIBREF232 with dynamic model selection in order to gain better performance. BIBREF157 used a similarity count to find if mystery words were misspelled versions of words in a dictionary. BIBREF236 used an “LBG-VQ” (Linde, Buzo & Gray algorithm for Vector Quantization) approach to design a codebook for each language BIBREF237 . The codebook contained a predetermined number of codevectors. 
Each codeword represented the word it was generated from as well as zero or more words close to it in the vector space. Word Combinations BIBREF41 used the number of words in a sentence with NB. BIBREF53 and BIBREF45 used the sentence length calculated in both words and characters with several machine learning algorithms. BIBREF53 used the ratio to the total number of words of: once-occurring words, twice-occurring words, short words, long words, function words, adjectives and adverbs, personal pronouns, and question words. They also used the word-length distribution for words of 1–20 characters. BIBREF193 used at least the preceding and proceeding words with manual rules in word-level for text-to-speech synthesis. BIBREF238 used Markovian word with a Hidden Markov Model (“HMM”) tagger (othermethods). WordNgramTable lists more recent articles where word or similar constructs have been used. “PPM” in the methods column refers to Prediction by Partial Matching (smoothing), and “kNN” to INLINEFORM0 Nearest Neighbor classification (ensemble). BIBREF239 used word trigrams simultaneously with character 4-grams. He concluded that word-based models can be used to augment the results from character when they are not providing reliable identification results. WordCharacterNgramTable lists articles where both character and word have been used together. “CBOW” in the methods column refer to Continuous Bag of Words neural network (neuralnetworks), and “MIRA” to Margin Infused Relaxed Algorithm (supportvectormachines). BIBREF154 evaluated different combinations of word and character with SVMs. The best combination for language variety identification was using all the features simultaneously. BIBREF187 used normal and gapped word and character simultaneously. BIBREF240 uses word embeddings consisting of Positive Pointwise Mutual Information (“PPMI”) counts to represent each word type. Then they use Truncated Singular Value Decomposition (“TSVD”) to reduce the dimension of the word vectors to 100. BIBREF241 used INLINEFORM0 -means clustering when building dialectal Arabic corpora. BIBREF242 used features provided by Latent Semantic Analysis (“LSA”) with SVMs and NB. BIBREF243 present two models, the CBOW model and the continuous skip-gram model. The CBOW model can be used to generate a word given it's context and the skip-gram model can generate the context given a word. The projection matrix, which is the weight matrix between the input layer and the hidden layer, can be divided into vectors, one vector for each word in the vocabulary. These word-vectors are also referred to as word embeddings. The embeddings can be used as features in other tasks after the neural network has been trained. BIBREF244 , BIBREF245 , BIBREF80 , BIBREF246 , BIBREF247 , BIBREF248 , BIBREF62 , and BIBREF130 used word embeddings generated by the word2vec skip-gram model BIBREF243 as features in . BIBREF249 used word2vec word embeddings and INLINEFORM0 -means clustering. BIBREF250 , BIBREF251 , and BIBREF44 also used word embeddings created with word2vec. BIBREF167 trained both character and word embeddings using FastText text classification method BIBREF63 on the Discriminating between Similar Languages (“DSL”) 2016 shared task, where it reached low accuracy when compared with the other methods. BIBREF205 used FastText to train word vectors including subword information. Then he used these word vectors together with some additional word features to train a CRF-model which was used for codeswitching detection. 
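As an illustration of the count-based embedding route mentioned above, in which PPMI-weighted co-occurrence counts are reduced with truncated SVD, the following sketch builds toy PPMI vectors and reduces them with scikit-learn. The corpus, the co-occurrence window, and the output dimensionality are placeholders rather than the settings of the cited work (which reduced to 100 dimensions).

```python
# Sketch: PPMI word vectors reduced with truncated SVD (toy data).
import numpy as np
from sklearn.decomposition import TruncatedSVD

docs = ["el perro come", "the dog eats", "el gato duerme", "the cat sleeps"]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric word-word co-occurrence counts within each short document.
C = np.zeros((len(vocab), len(vocab)))
for d in docs:
    words = d.split()
    for i, w in enumerate(words):
        for v in words[:i] + words[i + 1:]:
            C[idx[w], idx[v]] += 1

# Positive pointwise mutual information.
total = C.sum()
row = C.sum(axis=1, keepdims=True)
col = C.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log((C * total) / (row * col))
ppmi = np.nan_to_num(np.maximum(pmi, 0.0))

# Reduce the PPMI vectors to a small number of dimensions.
svd = TruncatedSVD(n_components=3, random_state=0)
vectors = svd.fit_transform(ppmi)
print(dict(zip(vocab, np.round(vectors, 2))))
```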
BIBREF212 extracted features from the hidden layer of a Recurrent Neural Network (“RNN”) that had been trained to predict the next character in a string. They used the features with a SVM classifier. BIBREF229 evaluated methods for detecting foreign language inclusions and experimented with a Conditional Markov Model (“CMM”) tagger, which had performed well on Named Entity Recognition (“NER”). BIBREF229 was able to produce the best results by incorporating her own English inclusion classifier's decision as a feature for the tagger, and not using the taggers POS tags. BIBREF197 used syntactic parsers together with dictionaries and morpheme lexicons. BIBREF278 used composed of POS tags and function words. BIBREF173 used labels from a NER system, cluster prefixes, and Brown clusters BIBREF279 . BIBREF214 used POS tag from one to three and BIBREF43 from one to five, and BIBREF67 used POS tag trigrams with TF-IDF weighting. BIBREF203 , BIBREF42 , BIBREF53 , and BIBREF45 have also recently used POS tags. BIBREF80 used POS tags with emotion-labeled graphs in Spanish variety identification. In emotion-labeled graphs, each POS-tag was connected to one or more emotion nodes if a relationship between the original word and the emotion was found from the Spanish Emotion Lexicon. They also used POS-tags with IG-WP. BIBREF208 used the MADAMIRA tool for morphological analysis disambiguation. The polySVOX text analysis module described by BIBREF197 uses two-level rules and morpheme lexicons on sub-word level and separate definite clause grammars (DCGs) on word, sentence, and paragraph levels. The language of sub-word units, words, sentences, and paragraphs in multilingual documents is identified at the same time as performing syntactic analysis for the document. BIBREF280 converted sentences into POS-tag patterns using a word-POS dictionary for Malay. The POS-tag patterns were then used by a neural network to indicate whether the sentences were written in Malay or not. BIBREF281 used Jspell to detect differences in the grammar of Portuguese variants. BIBREF200 used a syntactic grammar to recognize verb-da-verb constructions, which are characteristic of the Serbian language. The syntactic grammar was used together with several morphological grammars to distinguish between Croatian and Serbian. BIBREF193 used the weighted scores of the words to the left and right of the word to be classified. BIBREF238 used language labels within an HMM. BIBREF282 used the language labels of other words in the same sentence to determine the language of the ambiguous word. The languages of the other words had been determined by the positive Decision Rules (Decisionrule), using dictionaries of unique words when possible. BIBREF213 , BIBREF71 used the language tags of the previous three words with an SVM. BIBREF283 used language labels of surrounding words with NB. BIBREF82 used the language probabilities of the previous word to determining weights for languages. BIBREF156 used unigram, bigram and trigram language label transition probabilities. BIBREF284 used the language labels for the two previous words as well as knowledge of whether code-switching had already been detected or not. BIBREF285 used the language label of the previous word to determine the language of an ambiguous word. BIBREF286 also used the language label of the previous word. BIBREF287 used the language identifications of 2–4 surrounding words for post-identification correction in word-level . BIBREF109 used language labels with a CRF. 
BIBREF52 used language labels of the current and two previous words in code-switching point prediction. Their predictive strength was lower than the count of code-switches, but better than the length or position of the word. All of the features were used together with NB, DT and SVM. BIBREF288 used language label bigrams with an HMM. BIBREF41 used the word-level language labels obtained with the approach of BIBREF289 on sentence-level dialect identification. Feature Smoothing Feature smoothing is required in order to handle the cases where not all features $f$ in a test document have been attested in the training corpora. Thus, it is used especially when the count of features is high, or when the amount of training data is low. Smoothing is usually handled as part of the method, and not pre-calculated into the language models. Most of the smoothing methods evaluated by BIBREF290 have been used in language identification, and we follow the order of methods in that article. In Laplace smoothing, an extra number of occurrences is added to every possible feature in the language model. BIBREF291 used Laplace's sample size correction (add-one smoothing) with the product of Markovian probabilities. BIBREF292 experimented with additive smoothing of 0.5, and noted that it was almost as good as Good-Turing smoothing. BIBREF290 calculate the smoothed value for each n-gram $f$ as $P(f) = \frac{c(f) + \lambda }{N_n + \lambda V_n}$, where $P(f)$ is the probability estimate of $f$ in the model and $c(f)$ its frequency in the training corpus. $N_n$ is the total number of n-grams of length $n$ and $V_n$ the number of distinct n-grams in the training corpus. $\lambda $ is the Lidstone smoothing parameter. When using Laplace smoothing, $\lambda $ is equal to 1 and with Lidstone smoothing, $\lambda $ is usually set to a value between 0 and 1. The penalty values used by BIBREF170 with the HeLI method function as a form of additive smoothing. BIBREF145 evaluated additive, Katz, absolute discounting, and Kneser-Ney smoothing methods. Additive smoothing produced the least accurate results of the four methods. BIBREF293 and BIBREF258 evaluated NB with several different Lidstone smoothing values. BIBREF107 used additive smoothing with character n-grams as a baseline classifier, which they were unable to beat with Convolutional Neural Networks (“CNNs”). BIBREF292 used Good-Turing smoothing with the product of Markovian probabilities. BIBREF290 define the Good-Turing smoothed count $c^*$ as $c^* = (c+1)\frac{n_{c+1}}{n_c}$, where $n_c$ is the number of features occurring exactly $c$ times in the corpus $C$. Lately Good-Turing smoothing has been used by BIBREF294 and BIBREF88. BIBREF220 used Jelinek-Mercer smoothing correction over the relative frequencies of words, calculated as $P_{JM}(w) = (1-\lambda )\,P(w \mid C_g) + \lambda \,P_{bg}(w)$, which interpolates the relative frequency of the word $w$ in the language's corpus $C_g$ with a background distribution $P_{bg}(w)$; here $\lambda $ is a smoothing parameter, which is usually some small value like 0.1. BIBREF105 used character 1–8 grams with Jelinek-Mercer smoothing. Their language identifier using character 5-grams achieved 3rd place (out of 12) in the TweetLID shared task constrained track. BIBREF95 and BIBREF145 used the Katz back-off smoothing BIBREF295 from the SRILM toolkit, with perplexity. Katz smoothing is an extension of Good-Turing discounting. The probability mass left over from the discounted n-grams is then distributed over unseen n-grams via a smoothing factor. In the smoothing evaluations by BIBREF145, Katz smoothing performed almost as well as absolute discounting, which produced the best results.
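A minimal sketch of the additive (Lidstone) smoothing formula above, applied to character trigram counts, is given below; the training string and the value of $\lambda $ are placeholders.

```python
# Sketch: Lidstone-smoothed character trigram probabilities for one language.
from collections import Counter

def char_ngrams(text, n=3):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def lidstone_model(corpus, n=3, lam=0.5):
    counts = Counter(char_ngrams(corpus, n))
    total = sum(counts.values())          # N: total n-grams of length n
    distinct = len(counts)                # V: distinct n-grams in the corpus
    def prob(gram):
        # P(f) = (c(f) + lambda) / (N + lambda * V)
        return (counts[gram] + lam) / (total + lam * distinct)
    return prob

p_en = lidstone_model("the cat sat on the mat", lam=0.5)
print(p_en("the"), p_en("xyz"))   # unseen trigrams still get non-zero mass
```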
BIBREF296 evaluated Witten-Bell, Katz, and absolute discounting smoothing methods. Witten-Bell got 87.7%, Katz 87.5%, and absolute discounting 87.4% accuracy with character 4-grams. BIBREF297 used the PPM-C algorithm for . PPM-C is basically a product of Markovian probabilities with an escape scheme. If an unseen context is encountered for the character being processed, the escape probability is used together with a lower-order model probability. In PPM-C, the escape probability is the sum of the seen contexts in the language model. PPM-C was lately used by BIBREF165 . The PPM-D+ algorithm was used by BIBREF298 . BIBREF299 and BIBREF300 used a PPM-A variant. BIBREF301 also used PPM. The language identifier of BIBREF301 obtained 91.4% accuracy when classifying 100 character texts between 277 languages. BIBREF302 used Witten-Bell smoothing with perplexity. BIBREF303 used a Chunk-Based Language Model (“CBLM”), which is similar to PPM models. BIBREF145 used several smoothing techniques with Markovian probabilities. Absolute discounting from the VariKN toolkit performed the best. BIBREF145 define the smoothing as follows: a constant INLINEFORM0 is subtracted from the counts INLINEFORM1 of all observed INLINEFORM2 and the held-out probability mass is distributed between the unseen in relation to the probabilities of lower order INLINEFORM3 , as follows: DISPLAYFORM0 where INLINEFORM0 is a scaling factor that makes the conditional distribution sum to one. Absolute discounting with Markovian probabilities from the VariKN toolkit was later also used by BIBREF146 , BIBREF147 , and BIBREF148 . The original Kneser-Ney smoothing is based on absolute discounting with an added back-off function to lower-order models BIBREF145 . BIBREF290 introduced a modified version of the Kneser-Ney smoothing using interpolation instead of back-off. BIBREF304 used the Markovian probabilities with Witten-Bell and modified Kneser-Ney smoothing. BIBREF88 , BIBREF166 , and BIBREF261 also recently used modified Kneser-Ney discounting. BIBREF119 used both original and modified Kneser-Ney smoothings. In the evaluations of BIBREF145 , Kneser-Ney smoothing fared better than additive, but somewhat worse than the Katz and absolute discounting smoothing. Lately BIBREF109 also used Kneser-Ney smoothing. BIBREF86 , BIBREF87 evaluated several smoothing techniques with character and word : Laplace/Lidstone, Witten-Bell, Good-Turing, and Kneser-Ney. In their evaluations, additive smoothing with 0.1 provided the best results. Good-Turing was not as good as additive smoothing, but better than Witten-Bell and Kneser-Ney smoothing. Witten-Bell proved to be clearly better than Kneser-Ney. Methods In recent years there has been a tendency towards attempting to combine several different types of features into one classifier or classifier ensemble. Many recent studies use readily available classifier implementations and simply report how well they worked with the feature set used in the context of their study. There are many methods presented in this article that are still not available as out of the box implementations, however. There are many studies which have not been re-evaluated at all, going as far back as BIBREF18 . Our hope is that this article will inspire new studies and many previously unseen ways of combining features and methods. In the following sections, the reviewed articles are grouped by the methods used for . 
Decision Rules BIBREF46 used positive Decision Rules with unique characters and character n-grams, that is, if a unique character or character n-gram was found, the language was identified. The positive Decision Rule (unique features) for the test document $M$ and the training corpus $C_g$ can be formulated as follows: identify $M$ as language $g$ if $\exists f \in M$ such that $f \in U(C_g) = \lbrace f \in C_g \mid f \notin C_h, \forall h \ne g\rbrace $, where $U(C_g)$ is the set of unique features in $C_g$, $C_g$ is the corpus for language $g$, and $C_h$ is a corpus of any other language $h$. Positive decision rules can also be used with non-unique features when the decisions are made in a certain order. For example, BIBREF52 presents the pseudo code for her dictionary lookup tool, where these kind of decisions are part of an if-then-else statement block. Her (manual) rule-based dictionary lookup tool works better for Dutch–English code-switching detection than the SVM, DT, or CRF methods she experiments with. The positive Decision Rule has also been used recently by BIBREF85, BIBREF190, BIBREF287, BIBREF216, BIBREF305, BIBREF169, and BIBREF214. In the negative Decision Rule, if a character or character combination that was found in $M$ does not exist in a particular language, that language is omitted from further identification. The negative Decision Rule can be expressed as follows: omit language $g$ if $\exists f \in M$ such that $f \notin C_g$, where $C_g$ is the corpus for language $g$. The negative Decision Rule was first used in language identification by BIBREF47. BIBREF118 evaluated the JRIP classifier from the Waikato Environment for Knowledge Analysis (“WEKA”). JRIP is an implementation of the propositional rule learner. It was found to be inferior to the SVM, NB and DT algorithms. In isolation the decision rules tend not to scale well to larger numbers of languages (or very short test documents), and are thus mostly used in combination with other methods or as a Decision Tree. Decision Trees BIBREF306 were the earliest users of Decision Trees (“DT”) in language identification. They used DT based on characters and their context without any frequency information. In training the DT, each node is split into child nodes according to an information theoretic optimization criterion. For each node a feature is chosen, which maximizes the information gain at that node. The information gain is calculated for each feature and the feature with the highest gain is selected for the node. In the identification phase, the nodes are traversed until only one language is left (leaf node). Later, BIBREF196, BIBREF307, and BIBREF308 have been especially successful in using DTs. Random Forest (RF) is an ensemble classifier generating many DTs. It has been successfully used in language identification by BIBREF140, BIBREF201, BIBREF309, and BIBREF185, BIBREF172. Simple Scoring In simple scoring, each feature in the test document is checked against the language model for each language, and languages which contain that feature are given a point, as follows: $R_{simple}(g,M) = \sum _{i=1}^{|M|} \mathbb {1}[f_i \in C_g]$, where $f_i$ is the $i$th feature found in the test document $M$. The language scoring the most points is the winner. Simple scoring is still a good alternative when facing an easy problem such as preliminary language group identification. It was recently used for this purpose by BIBREF246 with a basic dictionary. They achieved 99.8% accuracy when identifying between 6 language groups. BIBREF310 use a version of simple scoring as a distance measure, assigning a penalty value to features not found in a model. In this version, the language scoring the least amount of points is the winner.
Their language identifier obtained 100% success rate with character 4-grams when classifying relatively large documents (from 1 to 3 kilobytes), between 10 languages. Simple scoring was also used lately by BIBREF166 , BIBREF311 , and BIBREF90 . Sum or Average of Values The sum of values can be expressed as: DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 th feature found in the test document INLINEFORM2 , and INLINEFORM3 is the value for the feature in the language model of the language INLINEFORM4 . The language with the highest score is the winner. The simplest case of sumvalues1 is when the text to be identified contains only one feature. An example of this is BIBREF157 who used the frequencies of short words as values in word-level identification. For longer words, he summed up the frequencies of different-sized found in the word to be identified. BIBREF210 first calculated the language corresponding to each graphone. They then summed up the predicted languages, and the language scoring the highest was the winner. When a tie occurred, they used the product of the Markovian graphone . Their method managed to outperform SVMs in their tests. BIBREF46 used the average of all the relative frequencies of the in the text to be identified. BIBREF312 evaluated several variations of the LIGA algorithm introduced by BIBREF313 . BIBREF308 and BIBREF148 also used LIGA and logLIGA methods. The average or sum of relative frequencies was also used recently by BIBREF85 and BIBREF108 . BIBREF57 summed up LFDF values (see characters), obtaining 99.75% accuracy when classifying document sized texts between four languages using Arabic script. BIBREF110 calculates the score of the language for the test document INLINEFORM0 as the average of the probability estimates of the features, as follows: DISPLAYFORM0 where INLINEFORM0 is the number of features in the test document INLINEFORM1 . BIBREF153 summed weighted relative frequencies of character , and normalized the score by dividing by the length (in characters) of the test document. Taking the average of the terms in the sums does not change the order of the scored languages, but it gives comparable results between different lengths of test documents. BIBREF92 , BIBREF314 summed up the feature weights and divided them by the number of words in the test document in order to set a threshold to detect unknown languages. Their language identifier obtained 89% precision and 94% recall when classifying documents between five languages. BIBREF192 used a weighting method combining alphabets, prefixes, suffixes and words. BIBREF233 summed up values from a word trigram ranking, basic dictionary and morphological analyzer lookup. BIBREF282 summed up language labels of the surrounding words to identify the language of the current word. BIBREF200 summed up points awarded by the presence of morphological and syntactic features. BIBREF102 used inverse rank positions as values. BIBREF158 computed the sum of keywords weighted with TF-IDF. BIBREF315 summed up the TF-IDF derived probabilities of words. Product of Values The product of values can be expressed as follows: DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 th feature found in test document INLINEFORM2 , and INLINEFORM3 is the value for the feature in the language model of language INLINEFORM4 . The language with the highest score is the winner. Some form of feature smoothing is usually required with the product of values method to avoid multiplying by zero. 
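The following sketch contrasts the sum-of-values and product-of-values scoring rules just described, using relative frequencies of character trigrams as the values. The training strings are toy placeholders, and a small floor probability stands in for the feature smoothing discussed above.

```python
# Sketch: scoring a test document by sum vs. (log-)product of relative
# frequencies of character trigrams (toy data).
import math
from collections import Counter

def trigrams(text):
    return [text[i:i + 3] for i in range(len(text) - 2)]

def rel_freqs(corpus):
    counts = Counter(trigrams(corpus))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

MODELS = {
    "en": rel_freqs("the cat sat on the mat in the house"),
    "nl": rel_freqs("de kat zat op de mat in het huis"),
}
FLOOR = 1e-6  # crude stand-in for smoothing of unseen trigrams

def score_sum(model, doc):
    return sum(model.get(g, 0.0) for g in trigrams(doc))

def score_log_product(model, doc):
    return sum(math.log(model.get(g, FLOOR)) for g in trigrams(doc))

doc = "the cat in the hat"
print(max(MODELS, key=lambda g: score_sum(MODELS[g], doc)))
print(max(MODELS, key=lambda g: score_log_product(MODELS[g], doc)))
```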
BIBREF26 was the first to use the product of relative frequencies and it has been widely used ever since; recent examples include BIBREF86 , BIBREF87 , BIBREF161 , and BIBREF148 . Some of the authors use a sum of log frequencies rather than a product of frequencies to avoid underflow issues over large numbers of features, but the two methods yield the same relative ordering, with the proviso that the maximum of multiplying numbers between 0 and 1 becomes the minimum of summing their negative logarithms, as can be inferred from: DISPLAYFORM0 When (multinomial) NB is used in , each feature used has a probability to indicate each language. The probabilities of all features found in the test document are multiplied for each language, and the language with the highest probability is selected, as in productvalues1. Theoretically the features are assumed to be independent of each other, but in practice using features that are functionally dependent can improve classification accuracy BIBREF316 . NB implementations have been widely used for , usually with a more varied set of features than simple character or word of the same type and length. The features are typically represented as feature vectors given to a NB classifier. BIBREF283 trained a NB classifier with language labels of surrounding words to help predict the language of ambiguous words first identified using an SVM. The language identifier used by BIBREF77 obtained 99.97% accuracy with 5-grams of characters when classifying sentence-sized texts between six language groups. BIBREF265 used a probabilistic model similar to NB. BIBREF252 used NB and naive Bayes EM, which uses the Expectation–Maximization (“EM”) algorithm in a semi-supervised setting to improve accuracy. BIBREF4 used Gaussian naive Bayes (“GNB”, i.e. NB with Gaussian estimation over continuous variables) from scikit-learn. In contrast to NB, in Bayesian networks the features are not assumed to be independent of each other. The network learns the dependencies between features in a training phase. BIBREF315 used a Bayesian Net classifier in two-staged (group first) over the open track of the DSL 2015 shared task. BIBREF130 similarly evaluated Bayesian Nets, but found them to perform worse than the other 11 algorithms they tested. BIBREF25 used the product of the Markovian probabilities of character bigrams. The language identifier created by BIBREF153 , BIBREF101 , “whatlang”, obtains 99.2% classification accuracy with smoothing for 65 character test strings, when distinguishing between 1,100 languages. The product of Markovian probabilities has recently also been used by BIBREF109 and BIBREF260 . BIBREF170 use a word-based backoff method called HeLI. Here, each language is represented by several different language models, only one of which is used for each word found in the test document. The language models for each language are: a word-level language model, and one or more models based on character of order 1– INLINEFORM0 . When a word that is not included in the word-level model is encountered in a test document, the method backs off to using character of the size INLINEFORM1 . If there is not even a partial coverage here, the method backs off to lower order and continues backing off until at least a partial coverage is obtained (potentially all the way to character unigrams). 
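The back-off idea can be sketched as follows. This is a loose illustration of word scoring with a character n-gram back-off, not the published HeLI scoring function, which uses several n-gram orders, relative-frequency-based scores, and tuned penalty values; the corpora, penalty value, and n-gram order here are arbitrary.

```python
# Loose sketch of word-level scoring with character n-gram back-off,
# in the spirit of (but not identical to) the HeLI method. Toy data.
import math
from collections import Counter

def word_model(corpus):
    counts = Counter(corpus.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def char_model(corpus, n=2):
    grams = [corpus[i:i + n] for i in range(len(corpus) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

LANGS = {"en": "the cat sat on the mat", "nl": "de kat zat op de mat"}
WORD = {g: word_model(c) for g, c in LANGS.items()}
CHAR = {g: char_model(c) for g, c in LANGS.items()}
PENALTY = 10.0  # arbitrary score for features missing from a model

def word_score(lang, word):
    # Use the word model if the word is known, otherwise back off to
    # character bigrams of the space-padded word.
    if word in WORD[lang]:
        return -math.log(WORD[lang][word])
    padded = f" {word} "
    grams = [padded[i:i + 2] for i in range(len(padded) - 1)]
    return sum(-math.log(CHAR[lang][g]) if g in CHAR[lang] else PENALTY
               for g in grams) / len(grams)

def identify(text):
    scores = {lang: sum(word_score(lang, w) for w in text.split())
              for lang in LANGS}
    return min(scores, key=scores.get)   # lowest negative-log score wins

print(identify("de kat"), identify("the cats"))
```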
The system of BIBREF170 implementing the HeLI method attained shared first place in the closed track of the DSL 2016 shared task BIBREF317 , and was the best method tested by BIBREF148 for test documents longer than 30 characters. Similarity Measures The well-known method of BIBREF7 uses overlapping character of varying sizes based on words. The language models are created by tokenizing the training texts for each language INLINEFORM0 into words, and then padding each word with spaces, one before and four after. Each padded word is then divided into overlapping character of sizes 1–5, and the counts of every unique are calculated over the training corpus. The are ordered by frequency and INLINEFORM1 of the most frequent , INLINEFORM2 , are used as the domain of the language model INLINEFORM3 for the language INLINEFORM4 . The rank of an INLINEFORM5 in language INLINEFORM6 is determined by the frequency in the training corpus INLINEFORM7 and denoted INLINEFORM8 . During , the test document INLINEFORM0 is treated in a similar way and a corresponding model INLINEFORM1 of the K most frequent is created. Then a distance score is calculated between the model of the test document and each of the language models. The value INLINEFORM2 is calculated as the difference in ranks between INLINEFORM3 and INLINEFORM4 of the INLINEFORM5 in the domain INLINEFORM6 of the model of the test document. If an is not found in a language model, a special penalty value INLINEFORM7 is added to the total score of the language for each missing . The penalty value should be higher than the maximum possible distance between ranks. DISPLAYFORM0 The score INLINEFORM0 for each language INLINEFORM1 is the sum of values, as in sumvalues1. The language with the lowest score INLINEFORM2 is selected as the identified language. The method is equivalent to Spearman's measure of disarray BIBREF318 . The out-of-place method has been widely used in literature as a baseline. In the evaluations of BIBREF148 for 285 languages, the out-of-place method achieved an F-score of 95% for 35-character test documents. It was the fourth best of the seven evaluated methods for test document lengths over 20 characters. Local Rank Distance BIBREF319 is a measure of difference between two strings. LRD is calculated by adding together the distances identical units (for example character ) are from each other between the two strings. The distance is only calculated within a local window of predetermined length. BIBREF122 and BIBREF320 used LRD with a Radial Basis Function (“RBF”) kernel (see RBF). For learning they experimented with both Kernel Discriminant Analysis (“KDA”) and Kernel Ridge Regression (“KRR”). BIBREF248 also used KDA. BIBREF224 calculated the Levenshtein distance between the language models and each word in the mystery text. The similary score for each language was the inverse of the sum of the Levenshtein distances. Their language identifier obtained 97.7% precision when classifying texts from two to four words between five languages. Later BIBREF216 used Levenshtein distance for Algerian dialect identification and BIBREF305 for query word identification. BIBREF321 , BIBREF322 , BIBREF323 , and BIBREF324 calculated the difference between probabilities as in Equation EQREF109 . DISPLAYFORM0 where INLINEFORM0 is the probability for the feature INLINEFORM1 in the mystery text and INLINEFORM2 the corresponding probability in the language model of the language INLINEFORM3 . 
The language with the lowest score INLINEFORM4 is selected as the most likely language for the mystery text. BIBREF239 , BIBREF262 used the log probability difference and the absolute log probability difference. The log probability difference proved slightly better, obtaining a precision of 94.31% using both character and word when classifying 100 character texts between 53 language-encoding pairs. Depending on the algorithm, it can be easier to view language models as vectors of weights over the target features. In the following methods, each language is represented by one or more feature vectors. Methods where each feature type is represented by only one feature vector are also sometimes referred to as centroid-based BIBREF58 or nearest prototype methods. Distance measures are generally applied to all features included in the feature vectors. BIBREF31 calculated the squared Euclidean distance between feature vectors. The Squared Euclidean distance can be calculated as: DISPLAYFORM0 BIBREF93 used the simQ similarity measure, which is closely related to the Squared Euclidean distance. BIBREF155 investigated the of multilingual documents using a Stochastic Learning Weak Estimator (“SLWE”) method. In SLWE, the document is processed one word at a time and the language of each word is identified using a feature vector representing the current word as well as the words processed so far. This feature vector includes all possible units from the language models – in their case mixed-order character from one to four. The vector is updated using the SLWE updating scheme to increase the probabilities of units found in the current word. The probabilities of units that have been found in previous words, but not in the current one, are on the other hand decreased. After processing each word, the distance of the feature vector to the probability distribution of each language is calculated, and the best-matching language is chosen as the language of the current word. Their language identifier obtained 96.0% accuracy when classifying sentences with ten words between three languages. They used the Euclidean distance as the distance measure as follows: DISPLAYFORM0 BIBREF325 compared the use of Euclidean distance with their own similarity functions. BIBREF112 calculated the cosine angle between the feature vector of the test document and the feature vectors acting as language models. This is also called the cosine similarity and is calculated as follows: DISPLAYFORM0 The method of BIBREF112 was evaluated by BIBREF326 in the context of over multilingual documents. The cosine similarity was used recently by BIBREF131 . One common trick with cosine similarity is to pre-normalise the feature vectors to unit length (e.g. BIBREF36 ), in which case the calculation takes the form of the simple dot product: DISPLAYFORM0 BIBREF60 used chi-squared distance, calculated as follows: DISPLAYFORM0 BIBREF85 compared Manhattan, Bhattacharyya, chi-squared, Canberra, Bray Curtis, histogram intersection, correlation distances, and out-of-place distances, and found the out-of-place method to be the most accurate. BIBREF239 , BIBREF262 used cross-entropy and symmetric cross-entropy. 
Cross-entropy is calculated as follows, where INLINEFORM0 and INLINEFORM1 are the probabilities of the feature INLINEFORM2 in the the test document INLINEFORM3 and the corpus INLINEFORM4 : DISPLAYFORM0 Symmetric cross-entropy is calculated as: DISPLAYFORM0 For cross-entropy, distribution INLINEFORM0 must be smoothed, and for symmetric cross-entropy, both probability distributions must be smoothed. Cross-entropy was used recently by BIBREF161 . BIBREF301 used a cross-entropy estimating method they call the Mean of Matching Statistics (“MMS”). In MMS every possible suffix of the mystery text INLINEFORM1 is compared to the language model of each language and the average of the lengths of the longest possible units in the language model matching the beginning of each suffix is calculated. BIBREF327 and BIBREF32 calculated the relative entropy between the language models and the test document, as follows: DISPLAYFORM0 This method is also commonly referred to as Kullback-Leibler (“KL”) distance or skew divergence. BIBREF60 compared relative entropy with the product of the relative frequencies for different-sized character , and found that relative entropy was only competitive when used with character bigrams. The product of relative frequencies gained clearly higher recall with higher-order when compared with relative entropy. BIBREF239 , BIBREF262 also used the RE and MRE measures, which are based on relative entropy. The RE measure is calculated as follows: DISPLAYFORM0 MRE is the symmetric version of the same measure. In the tests performed by BIBREF239 , BIBREF262 , the RE measure with character outperformed other tested methods obtaining 98.51% precision when classifying 100 character texts between 53 language-encoding pairs. BIBREF304 used a logistic regression (“LR”) model (also commonly referred to as “maximum entropy” within NLP), smoothed with a Gaussian prior. BIBREF328 defined LR for character-based features as follows: DISPLAYFORM0 where INLINEFORM0 is a normalization factor and INLINEFORM1 is the word count in the word-tokenized test document. BIBREF158 used an LR classifier and found it to be considerably faster than an SVM, with comparable results. Their LR classifier ranked 6 out of 9 on the closed submission track of the DSL 2015 shared task. BIBREF199 used Adaptive Logistic Regression, which automatically optimizes parameters. In recent years LR has been widely used for . BIBREF95 was the first to use perplexity for , in the manner of a language model. He calculated the perplexity for the test document INLINEFORM0 as follows: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 were the Katz smoothed relative frequencies of word n-grams INLINEFORM1 of the length INLINEFORM2 . BIBREF146 and BIBREF148 evaluated the best performing method used by BIBREF145 . Character n-gram based perplexity was the best method for extremely short texts in the evaluations of BIBREF148 , but for longer sequences the methods of BIBREF36 and BIBREF60 proved to be better. Lately, BIBREF182 also used perplexity. BIBREF20 used Yule's characteristic K and the Kolmogorov-Smirnov goodness of fit test to categorize languages. Kolmogorov-Smirnov proved to be the better of the two, obtaining 89% recall for 53 characters (one punch card) of text when choosing between two languages. In the goodness of fit test, the ranks of features in the models of the languages and the test document are compared. 
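Several of the measures in this subsection can be illustrated with one short sketch: the following scores a test string against smoothed character bigram models by cross-entropy and selects the language with the lowest perplexity. The corpora and smoothing value are placeholders.

```python
# Sketch: character-bigram cross-entropy / perplexity for language identification.
import math
from collections import Counter

def bigram_probs(corpus, lam=0.1):
    grams = [corpus[i:i + 2] for i in range(len(corpus) - 1)]
    counts, total, distinct = Counter(grams), len(grams), len(set(grams))
    # Lidstone smoothing keeps unseen bigrams at a non-zero probability.
    return lambda g: (counts[g] + lam) / (total + lam * distinct)

MODELS = {
    "en": bigram_probs("the cat sat on the mat in the house"),
    "fi": bigram_probs("kissa istui matolla talossa"),
}

def cross_entropy(prob, text):
    grams = [text[i:i + 2] for i in range(len(text) - 1)]
    return -sum(math.log2(prob(g)) for g in grams) / len(grams)

def identify(text):
    # Perplexity = 2 ** cross-entropy; the lowest-perplexity model wins.
    scores = {lang: 2 ** cross_entropy(p, text) for lang, p in MODELS.items()}
    return min(scores, key=scores.get)

print(identify("the house"), identify("talossa"))
```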
BIBREF329 experimented with Jiang and Conrath's (JC) distance BIBREF330 and Lin's similarity measure BIBREF331 , as well as the out-of-place method. They conclude that Lin's similarity measure was consistently the most accurate of the three. JC-distance measure was later evaluated by BIBREF239 , BIBREF262 , and was outperformed by the RE measure. BIBREF39 and BIBREF332 calculated special ratios from the number of trigrams in the language models when compared with the text to be identified. BIBREF333 , BIBREF334 , BIBREF335 used the quadratic discrimination score to create the feature vectors representing the languages and the test document. They then calculated the Mahalanobis distance between the languages and the test document. Their language identifier obtained 98.9% precision when classifying texts of four “screen lines” between 19 languages. BIBREF336 used odds ratio to identify the language of parts of words when identifying between two languages. Odds ratio for language INLINEFORM0 when compared with language INLINEFORM1 for morph INLINEFORM2 is calculated as in Equation EQREF127 . DISPLAYFORM0 Discriminant Functions The differences between languages can be stored in discriminant functions. The functions are then used to map the test document into an INLINEFORM0 -dimensional space. The distance of the test document to the languages known by the language identifier is calculated, and the nearest language is selected (in the manner of a nearest prototype classifier). BIBREF114 used multiple linear regression to calculate discriminant functions for two-way for Indian languages. BIBREF337 compared linear regression, NB, and LR. The precision for the three methods was very similar, with linear regression coming second in terms of precision after LR. Multiple discriminant analysis was used for by BIBREF18 . He used two functions, the first separated Finnish from English and Swedish, and the second separated English and Swedish from each other. He used Mahalanobis' INLINEFORM0 as a distance measure. BIBREF113 used Multivariate Analysis (“MVA”) with Principal Component Analysis (“PCA”) for dimensionality reduction and . BIBREF59 compared discriminant analysis with SVM and NN using characters as features, and concluded that the SVM was the best method. BIBREF40 experimented with the Winnow 2 algorithm BIBREF338 , but the method was outperformed by other methods they tested. Support Vector Machines (“SVMs”) With support vector machines (“SVMs”), a binary classifier is learned by learning a separating hyperplane between the two classes of instances which maximizes the margin between them. The simplest way to extend the basic SVM model into a multiclass classifier is via a suite of one-vs-rest classifiers, where the classifier with the highest score determines the language of the test document. One feature of SVMs that has made them particularly popular is their compatibility with kernels, whereby the separating hyperplane can be calculated via a non-linear projection of the original instance space. In the following paragraphs, we list the different kernels that have been used with SVMs for . For with SVMs, the predominant approach has been a simple linear kernel SVM model. The linear kernel model has a weight vector INLINEFORM0 and the classification of a feature vector INLINEFORM1 , representing the test document INLINEFORM2 , is calculated as follows: DISPLAYFORM0 where INLINEFORM0 is a scalar bias term. 
If INLINEFORM1 is equal to or greater than zero, INLINEFORM2 is categorized as INLINEFORM3 . The first to use a linear kernel SVM were BIBREF339 , and generally speaking, linear-kernel SVMs have been widely used for , with great success across a range of shared tasks. BIBREF100 were the first to apply polynomial kernel SVMs to . With a polynomial kernel INLINEFORM0 can be calculated as: DISPLAYFORM0 where INLINEFORM0 is the polynomial degree, and a hyperparameter of the model. Another popular kernel is the RBF function, also known as a Gaussian or squared exponential kernel. With an RBF kernel INLINEFORM0 is calculated as: DISPLAYFORM0 where INLINEFORM0 is a hyperparameter. BIBREF321 were the first to use an RBF kernel SVM for . With sigmoid kernel SVMs, also known as hyperbolic tangent SVMs, INLINEFORM0 can be calculated as: DISPLAYFORM0 BIBREF340 were the first to use a sigmoid kernel SVM for , followed by BIBREF341 , who found the SVM to perform better than NB, Classification And Regression Tree (“CART”), or the sum of relative frequencies. Other kernels that have been used with SVMs for include exponential kernels BIBREF178 and rational kernels BIBREF342 . BIBREF31 were the first to use SVMs for , in the form of string kernels using Ukkonen's algorithm. They used same string kernels with Euclidean distance, which did not perform as well as SVM. BIBREF87 compared SVMs with linear and on-line passive–aggressive kernels for , and found passive–aggressive kernels to perform better, but both SVMs to be inferior to NB and Log-Likelihood Ratio (sum of log-probabilities). BIBREF339 experimented with the Sequential Minimal Optimization (“SMO”) algorithm, but found a simple linear kernel SVM to perform better. BIBREF118 achieved the best results using the SMO algorithm, whereas BIBREF123 found CRFs to work better than SMO. BIBREF178 found that SMO was better than linear, exponential and polynomial kernel SVMs for Arabic tweet gender and dialect prediction. MultipleKernelSVMarticlesTable lists articles where SVMs with different kernels have been compared. BIBREF343 evaluated three different SVM approaches using datasets from different DSL shared tasks. SVM-based approaches were the top performing systems in the 2014 and 2015 shared tasks. BIBREF277 used SVMs with the Margin Infused Relaxed Algorithm, which is an incremental version of SVM training. In their evaluation, this method achieved better results than off-the-shelf . Neural Networks (“NN”) BIBREF344 was the first to use Neural Networks (“NN”) for , in the form of a simple BackPropagation Neural Network (“BPNN”) BIBREF345 with a single layer of hidden units, which is also called a multi-layer perceptron (“MLP”) model. She used words as the input features for the neural network. BIBREF346 and BIBREF347 succesfully applied MLP to . BIBREF348 , BIBREF349 and BIBREF350 used radial basis function (RBF) networks for . BIBREF351 were the first to use adaptive resonance learning (“ART”) neural networks for . BIBREF85 used Neural Text Categorizer (“NTC”: BIBREF352 ) as a baseline. NTC is an MLP-like NN using string vectors instead of number vectors. BIBREF111 were the first to use a RNN for . They concluded that RNNs are less accurate than the simple sum of logarithms of counts of character bi- or trigrams, possibly due to the relatively modestly-sized dataset they experimented with. BIBREF221 compared NNs with the out-of-place method (see sec. UID104 ). 
Their results show that the latter, used with bigrams and trigrams of characters, obtains clearly higher identification accuracy when dealing with test documents shorter than 400 characters. RNNs were more successfully used later by BIBREF245 who also incorporated character n-gram features in to the network architecture. BIBREF223 were the first to use a Long Short-Term Memory (“LSTM”) for BIBREF353 , and BIBREF354 was the first to use Gated Recurrent Unit networks (“GRUs”), both of which are RNN variants. BIBREF354 used byte-level representations of sentences as input for the networks. Recently, BIBREF89 and BIBREF176 also used LSTMs. Later, GRUs were successfully used for by BIBREF355 and BIBREF356 . In addition to GRUs, BIBREF354 also experimented with deep residual networks (“ResNets”) at DSL 2016. During 2016 and 2017, there was a spike in the use of convolutional neural networks (CNNs) for , most successfully by BIBREF302 and BIBREF357 . Recently, BIBREF358 combined a CNN with adversarial learning to better generalize to unseen domains, surpassing the results of BIBREF151 based on the same training regime as . BIBREF275 used CBOW NN, achieving better results over the development set of DSL 2017 than RNN-based neural networks. BIBREF62 used deep averaging networks (DANs) based on word embeddings in language variety identification. Other Methods BIBREF45 used the decision table majority classifier algorithm from the WEKA toolkit in English variety detection. The bagging algorithm using DTs was the best method they tested (73.86% accuracy), followed closely by the decision table with 73.07% accuracy. BIBREF359 were the first to apply hidden Markov models (HMM) to . More recently HMMs have been used by BIBREF214 , BIBREF288 , and BIBREF261 . BIBREF360 generated aggregate Markov models, which resulted in the best results when distinguishing between six languages, obtaining 74% accuracy with text length of ten characters. BIBREF156 used an extended Markov Model (“eMM”), which is essentially a standard HMM with modified emission probabilities. Their eMM used manually optimized weights to combine four scores (products of relative frequencies) into one score. BIBREF361 used Markov logic networks BIBREF362 to predict the language used in interlinear glossed text examples contained in linguistic papers. BIBREF363 evaluated the use of unsupervised Fuzzy C Means algorithm (“FCM”) in language identification. The unsupervised algorithm was used on the training data to create document clusters. Each cluster was tagged with the language having the most documents in the cluster. Then in the identification phase, the mystery text was mapped to the closest cluster and identified with its language. A supervised centroid classifier based on cosine similarity obtained clearly better results in their experiments (93% vs. 77% accuracy). BIBREF119 and BIBREF67 evaluated the extreme gradient boosting (“XGBoost”) method BIBREF364 . BIBREF119 found that gradient boosting gave better results than RFs, while conversely, BIBREF67 found that LR gave better results than gradient boosting. BIBREF365 used compression methods for , whereby a single test document is added to the training text of each language in turn, and the language with the smallest difference (after compression) between the sizes of the original training text file and the combined training and test document files is selected as the prediction. 
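In their simplest form, many of the SVM- and NN-based systems surveyed above amount to a linear or shallow classifier over character n-gram features. The following scikit-learn sketch shows such a baseline setup with a linear-kernel SVM; the two-language training data is a toy placeholder.

```python
# Sketch: linear SVM over character n-gram TF-IDF features (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["the cat sat on the mat", "a dog in the house",
               "el gato se sienta", "un perro en la casa"]
train_langs = ["en", "en", "es", "es"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4)),
    LinearSVC(),
)
clf.fit(train_texts, train_langs)
print(clf.predict(["the house", "la casa"]))
```

Replacing LinearSVC with scikit-learn's MLPClassifier in the same pipeline gives a minimal feed-forward (MLP) variant of the same setup.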
This has obvious disadvantages in terms of real-time computational cost for prediction, but is closely related to language modeling approaches to (with the obvious difference that the language model doesn't need to be retrained multiply for each test document). In terms of compression methods, BIBREF366 experimented with Maximal Tree Machines (“MTMs”), and BIBREF367 used LZW-based compression. Very popular in text categorization and topic modeling, BIBREF368 , BIBREF23 , and BIBREF24 used Latent Dirichlet Allocation (“LDA”: BIBREF369 ) based features in classifying tweets between Arabic dialects, English, and French. Each tweet was assigned with an LDA topic, which was used as one of the features of an LR classifier. BIBREF249 used a Gaussian Process classifier with an RBF kernel in an ensemble with an LR classifier. Their ensemble achieved only ninth place in the “PAN” (Plagiarism Analysis, Authorship Identification, and Near-Duplicate Detection workshop) Author Profiling language variety shared task BIBREF370 and did not reach the results of the baseline for the task. BIBREF181 , BIBREF188 used a Passive Aggressive classifier, which proved to be almost as good as the SVMs in their evaluations between five different machine learning algorithms from the same package. Ensemble Methods Ensemble methods are meta-classification methods capable of combining several base classifiers into a combined model via a “meta-classifier” over the outputs of the base classifiers, either explicitly trained or some heuristic. It is a simple and effective approach that is used widely in machine learning to boost results beyond those of the individual base classifiers, and particularly effective when applied to large numbers of individually uncorrelated base classifiers. BIBREF20 used simple majority voting to combine classifiers using different features and methods. In majority voting, the language of the test document is identified if a majority ( INLINEFORM0 ) of the classifiers in the ensemble vote for the same language. In plurality voting, the language with most votes is chosen as in the simple scoring method (simple1). Some authors also refer to plurality voting as majority voting. BIBREF371 used majority voting in tweet . BIBREF210 used majority voting with JSM classifiers. BIBREF265 and BIBREF269 used majority voting between SVM classifiers trained with different features. BIBREF266 used majority voting to combine four classifiers: RF, random tree, SVM, and DT. BIBREF372 and BIBREF152 used majority voting between three off-the-shelf language identifiers. BIBREF104 used majority voting between perplexity-based and other classifiers. BIBREF141 used majority voting between three sum of relative frequencies-based classifiers where values were weighted with different weighting schemes. BIBREF270 , BIBREF125 , BIBREF171 , BIBREF185 , BIBREF172 , and BIBREF260 used plurality voting with SVMs. BIBREF182 used voting between several perplexity-based classifiers with different features at the 2017 DSL shared task. A voting ensemble gave better results on the closed track than a singular word-based perplexity classifier (0.9025 weighted F1-score over 0.9013), but worse results on the open track (0.9016 with ensemble and 0.9065 without). In a highest probability ensemble, the winner is simply the language which is given the highest probability by any of the individual classifiers in the ensemble. 
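A minimal sketch of the plurality-voting and highest-probability combination rules defined above, over two probabilistic scikit-learn base classifiers, is given below; the toy data and the choice of base classifiers are arbitrary.

```python
# Sketch: plurality voting and highest-probability ensembles over two
# probabilistic base classifiers (toy data; base classifiers chosen arbitrarily).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["the cat sat", "a dog barks", "el gato duerme", "el perro ladra"]
langs = ["en", "en", "es", "es"]

bases = [
    make_pipeline(CountVectorizer(analyzer="char", ngram_range=(1, 3)),
                  MultinomialNB()),
    make_pipeline(CountVectorizer(analyzer="word"),
                  LogisticRegression(max_iter=1000)),
]
for clf in bases:
    clf.fit(texts, langs)

def plurality_vote(test):
    # With only two base classifiers a tie is broken arbitrarily.
    preds = [clf.predict([test])[0] for clf in bases]
    return max(set(preds), key=preds.count)

def highest_probability(test):
    # The winner is the language given the highest probability by any base classifier.
    probs = np.stack([clf.predict_proba([test])[0] for clf in bases])
    classes = bases[0].classes_   # both pipelines sort labels identically
    return classes[np.unravel_index(probs.argmax(), probs.shape)[1]]

print(plurality_vote("el gato"), highest_probability("the dog"))
```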
BIBREF96 used Gaussian Mixture Models (“GMM”) to give probabilities to the outputs of classifiers using different features. BIBREF372 used higher confidence between two off-the-shelf language identifiers. BIBREF265 used GMM to transform SVM prediction scores into probabilities. BIBREF270 , BIBREF125 used highest confidence over a range of base SVMs. BIBREF125 used an ensemble composed of low-dimension hash-based classifiers. According to their experiments, hashing provided up to 86% dimensionality reduction without negatively affecting performance. Their probability-based ensemble obtained 89.2% accuracy, while the voting ensemble got 88.7%. BIBREF166 combined an SVM and a LR classifier. A mean probability ensemble can be used to combine classifiers that produce probabilities (or other mutually comparable values) for languages. The average of values for each language over the classifier results is used to determine the winner and the results are equal to the sum of values method (sumvalues1). BIBREF270 evaluated several ensemble methods and found that the mean probability ensemble attained better results than plurality voting, median probability, product, highest confidence, or Borda count ensembles. In a median probability ensemble, the medians over the probabilities given by the individual classifiers are calculated for each language. BIBREF270 and BIBREF171 used a median probability rule ensemble over SVM classifiers. Consistent with the results of BIBREF270 , BIBREF171 found that a mean ensemble was better than a median ensemble, attaining 68% accuracy vs. 67% for the median ensemble. A product rule ensemble takes the probabilities for the base classifiers and calculates their product (or sum of the log probabilities), with the effect of penalising any language where there is a particularly low probability from any of the base classifiers. BIBREF210 used log probability voting with JSM classifiers. BIBREF210 observed a small increase in average accuracy using the product ensemble over a majority voting ensemble. In a INLINEFORM0 -best ensemble, several models are created for each language INLINEFORM1 by partitioning the corpus INLINEFORM2 into separate samples. The score INLINEFORM3 is calculated for each model. For each language, plurality voting is then applied to the INLINEFORM4 models with the best scores to predict the language of the test document INLINEFORM5 . BIBREF349 evaluated INLINEFORM6 -best with INLINEFORM7 based on several similarity measures. BIBREF54 compared INLINEFORM8 and INLINEFORM9 and concluded that there was no major difference in accuracy when distinguishing between six languages (100 character test set). BIBREF373 experimented with INLINEFORM10 -best classifiers, but they gave clearly worse results than the other classifiers they evaluated. BIBREF212 used INLINEFORM11 -best in two phases, first selecting INLINEFORM12 closest neighbors with simple similarity, and then using INLINEFORM13 with a more advanced similarity ranking. In bagging, independent samples of the training data are generated by random sampling with replacement, individual classifiers are trained over each such training data sample, and the final classification is determined by plurality voting. BIBREF67 evaluated the use of bagging with an LR classifier in PAN 2017 language variety identification shared task, however, bagging did not improve the accuracy in the 10-fold cross-validation experiments on the training set. BIBREF374 used bagging with word convolutional neural networks (“W-CNN”). 
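A minimal scikit-learn sketch of the bagging setup just described, with bootstrap-sampled training sets, decision-tree base classifiers, and plurality voting, follows; the data and parameters are placeholders.

```python
# Sketch: bagging of decision trees over character n-gram counts (toy data).
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

texts = ["the cat sat on the mat", "a dog in the house",
         "el gato se sienta", "un perro en la casa"]
langs = ["en", "en", "es", "es"]

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),
    BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0),
)
clf.fit(texts, langs)
print(clf.predict(["the house", "el perro"]))
```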
BIBREF45 used bagging with DTs in English national variety detection and found DT-based bagging to be the best evaluated method when all 60 different features (a wide selection of formal, POS, lexicon-based, and data-based features) were used, attaining 73.86% accuracy. BIBREF45 continued the experiments using the ReliefF feature selection algorithm from the WEKA toolkit to select the most efficient features, and achieved 77.32% accuracy over the reduced feature set using a NB classifier. BIBREF130 evaluated the Rotation Forest meta classifier for DTs. The method randomly splits the used features into a pre-determined number of subsets and then uses PCA for each subset. It obtained 66.6% accuracy, attaining fifth place among the twelve methods evaluated. The AdaBoost algorithm BIBREF375 examines the performance of the base classifiers on the evaluation set and iteratively boosts the significance of misclassified training instances, with a restart mechanism to avoid local minima. AdaBoost was the best of the five machine learning techniques evaluated by BIBREF53 , faring better than C4.5, NB, RF, and linear SVM. BIBREF130 used the LogitBoost variation of AdaBoost. It obtained 67.0% accuracy, attaining third place among the twelve methods evaluated. In stacking, a higher level classifier is explicitly trained on the output of several base classifiers. BIBREF96 used AdaBoost.ECC and CART to combine classifiers using different features. More recently, BIBREF127 used LR to combine the results of five RNNs. As an ensemble they produced better results than NB and LR, which were better than the individual RNNs. Also in 2017, BIBREF185 , BIBREF172 used RF to combine several linear SVMs with different features. The system used by BIBREF172 ranked first in the German dialect identification shared task, and the system by BIBREF185 came second (71.65% accuracy) in the Arabic dialect identification shared task. Empirical Evaluation In the previous two sections, we have alluded to issues of evaluation in research to date. In this section, we examine the literature more closely, providing a broad overview of the evaluation metrics that have been used, as well as the experimental settings in which research has been evaluated. Standardized Evaluation for The most common approach is to treat the task as a document-level classification problem. Given a set of evaluation documents, each having a known correct label from a closed set of labels (often referred to as the “gold-standard”), and a predicted label for each document from the same set, the document-level accuracy is the proportion of documents that are correctly labeled over the entire evaluation collection. This is the most frequently reported metric and conveys the same information as the error rate, which is simply the proportion of documents that are incorrectly labeled (i.e. INLINEFORM0 ). Authors sometimes provide a per-language breakdown of results. There are two distinct ways in which results are generally summarized per-language: (1) precision, in which documents are grouped according to their predicted language; and (2) recall, in which documents are grouped according to what language they are actually written in. Earlier work has tended to only provide a breakdown based on the correct label (i.e. only reporting per-language recall). This gives us a sense of how likely a document in any given language is to be classified correctly, but does not give an indication of how likely a prediction for a given language is of being correct. 
Under the monolingual assumption (i.e. each document is written in exactly one language), this is not too much of a problem, as a false negative for one language must also be a false positive for another language, so precision and recall are closely linked. Nonetheless, authors have recently tended to explicitly provide both precision and recall for clarity. It is also common practice to report an F-score INLINEFORM0 , which is the harmonic mean of precision and recall. The F-score (also sometimes called F1-score or F-measure) was developed in IR to measure the effectiveness of retrieval with respect to a user who attaches different relative importance to precision and recall BIBREF376 . When used as an evaluation metric for classification tasks, it is common to place equal weight on precision and recall (hence “F1”-score, in reference to the INLINEFORM1 hyper-parameter, which equally weights precision and recall when INLINEFORM2 ). In addition to evaluating performance for each individual language, authors have also sought to convey the relationship between classification errors and specific sets of languages. Errors in systems are generally not random; rather, certain sets of languages are much more likely to be confused. The typical method of conveying this information is through the use of a confusion matrix, a tabulation of the distribution of (predicted language, actual language) pairs. Presenting full confusion matrices becomes problematic as the number of languages considered increases, and as a result has become relatively uncommon in work that covers a broader range of languages. Per-language results are also harder to interpret as the number of languages increases, and so it is common to present only collection-level summary statistics. There are two conventional methods for summarizing across a whole collection: (1) giving each document equal weight; and (2) giving each class (i.e. language) equal weight. (1) is referred to as a micro-average, and (2) as a macro-average. For under the monolingual assumption, micro-averaged precision and recall are the same, since each instance of a false positive for one language must also be a false negative for another language. In other words, micro-averaged precision and recall are both simply the collection-level accuracy. On the other hand, macro-averaged precision and recall give equal weight to each language. In datasets where the number of documents per language is the same, this again works out to being the collection-level average. However, research has frequently dealt with datasets where there is a substantial skew between classes. In such cases, the collection-level accuracy is strongly biased towards more heavily-represented languages. To address this issue, in work on skewed document collections, authors tend to report both the collection-level accuracy and the macro-averaged precision/recall/F-score, in order to give a more complete picture of the characteristics of the method being studied. Whereas the notions of macro-averaged precision and recall are clearly defined, there are two possible methods to calculate the macro-averaged F-score. The first is to calculate it as the harmonic mean of the macro-averaged precision and recall, and the second is to calculate it as the arithmetic mean of the per-class F-score. The comparability of published results is also limited by the variation in size and source of the data used for evaluation. 
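To make the micro/macro distinction and the two macro F-score variants concrete, the following sketch computes document-level accuracy, macro-averaged precision and recall, and both macro F-score definitions for an invented set of gold and predicted labels.

```python
# Sketch: micro vs. macro evaluation, and the two macro F-score variants.
from sklearn.metrics import accuracy_score, f1_score, precision_recall_fscore_support

gold = ["en", "en", "en", "fi", "fi", "sv"]          # invented gold labels
pred = ["en", "en", "fi", "fi", "sv", "sv"]          # invented predictions

# Micro-average equals document-level accuracy under the monolingual assumption.
print("accuracy:", accuracy_score(gold, pred))

macro_p, macro_r, _, _ = precision_recall_fscore_support(gold, pred, average="macro")

# Variant 1: harmonic mean of the macro-averaged precision and recall.
f1_of_means = 2 * macro_p * macro_r / (macro_p + macro_r)
# Variant 2: arithmetic mean of the per-class F-scores.
mean_of_f1s = f1_score(gold, pred, average="macro")

print("macro P/R:", round(macro_p, 3), round(macro_r, 3))
print("macro F1 (harmonic mean of macro P and R):", round(f1_of_means, 3))
print("macro F1 (mean of per-class F1):", round(mean_of_f1s, 3))
```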
In work to date, authors have used data from a variety of different sources to evaluate the performance of proposed solutions. Typically, data for a number of languages is collected from a single source, and the number of languages considered varies widely. Earlier work tended to focus on a smaller number of Western European languages. Later work has shifted focus to supporting larger numbers of languages simultaneously, with the work of BIBREF101 pushing the upper bound, reporting a language identifier that supports over 1300 languages. The increased size of the language set considered is partly due to the increased availability of language-labeled documents from novel sources such as Wikipedia and Twitter. This supplements existing data from translations of the Universal Declaration of Human Rights, bible translations, as well as parallel texts from MT datasets such as OPUS and SETimes, and European Government data such as JRC-Acquis. These factors have led to a shift away from proprietary datasets such as the ECI multilingual corpus that were commonly used in earlier research. As more languages are considered simultaneously, the accuracy of systems decreases. A particularly striking illustration of this is the evaluation results by BIBREF148 for the logLIGA method BIBREF312 . BIBREF312 report an accuracy of 99.8% over tweets (averaging 80 characters) in six European languages as opposed to the 97.9% from the original LIGA method. The LIGA and logLIGA implementations by BIBREF148 have comparable accuracy for six languages, but the accuracy for 285 languages (with 70 character test length) is only slightly over 60% for logLIGA and the original LIGA method is at almost 85%. Many evaluations are not directly comparable as the test sizes, language sets, and hyper-parameters differ. A particularly good example is the method of BIBREF7 . The original paper reports an accuracy of 99.8% over eight European languages (>300 bytes test size). BIBREF150 report an accuracy of 68.6% for the method over a dataset of 67 languages (500 byte test size), and BIBREF148 report an accuracy of over 90% for 285 languages (25 character test size). Separate to the question of the number and variety of languages included are issues regarding the quantity of training data used. A number of studies have examined the relationship between accuracy and quantity of training data through the use of learning curves. The general finding is that accuracy increases with more training data, though there are some authors that report an optimal amount of training data, where adding more training data decreases accuracy thereafter BIBREF377 . Overall, it is not clear whether there is a universal quantity of data that is “enough” for any language, rather this amount appears to be affected by the particular set of languages as well as the domain of the data. The breakdown presented by BIBREF32 shows that with less than 100KB per language, there are some languages where classification accuracy is near perfect, whereas there are others where it is very poor. Another aspect that is frequently reported on is how long a sample of text needs to be before its language can be correctly detected. Unsurprisingly, the general consensus is that longer samples are easier to classify correctly. There is a strong interest in classifying short segments of text, as certain applications naturally involve short text documents, such as of microblog messages or search engine queries. 
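One simple way to quantify the effect of input length for any identifier is to evaluate it on test documents truncated to increasing lengths. The sketch below is intended only to illustrate this evaluation protocol; the identify function is a placeholder for any language identifier, and test_set is a hypothetical list of (text, gold_language) pairs.

```python
# Measure accuracy as a function of test-string length by truncating test
# documents to a range of character lengths.
def length_curve(identify, test_set, lengths=(10, 25, 50, 100, 200, 400)):
    curve = {}
    for n in lengths:
        correct = total = 0
        for text, gold in test_set:
            if len(text) < n:
                continue  # skip documents shorter than the truncation point
            total += 1
            correct += identify(text[:n]) == gold
        curve[n] = correct / total if total else float("nan")
    return curve
```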
Another area where of texts as short as one word has been investigated is in the context of dealing with documents that contain text in more than one language, where word-level has been proposed as a possible solution (see openissues:multilingual). These outstanding challenges have led to research focused specifically on of shorter segments of text, which we discuss in more detail in openissues:short. From a practical perspective, knowing the rate at which a system can process and classify documents is useful as it allows a practitioner to predict the time required to process a document collection given certain computational resources. However, so many factors influence the rate at which documents are processed that comparison of absolute values across publications is largely meaningless. Instead, it is more valuable to consider publications that compare multiple systems under controlled conditions (same computer hardware, same evaluation data, etc.). The most common observations are that classification times between different algorithms can differ by orders of magnitude, and that the fastest methods are not always the most accurate. Beyond that, the diversity of systems tested and the variety in the test data make it difficult to draw further conclusions about the relative speed of algorithms. Where explicit feature selection is used, the number of features retained is a parameter of interest, as it affects both the memory requirements of the system and its classification rate. In general, a smaller feature set results in a faster and more lightweight identifier. Relatively few authors give specific details of the relationship between the number of features selected and accuracy. A potential reason for this is that the improvement in accuracy plateaus with increasing feature count, though the exact number of features required varies substantially with the method and the data used. At the lower end of the scale, BIBREF7 report that 300–400 features per language is sufficient. Conversely BIBREF148 found that, for the same method, the best results for the evaluation set were attained with 20,000 features per language. Corpora Used for Evaluation As discussed in standardevaluation, the objective comparison of different methods for is difficult due to the variation in the data that different authors have used to evaluate methods. BIBREF32 emphasize this by demonstrating how the performance of a system can vary according to the data used for evaluation. This implies that comparisons of results reported by different authors may not be meaningful, as a strong result in one paper may not translate into a strong result on the dataset used in a different paper. In other areas of research, authors have proposed standardized corpora to allow for the objective comparison of different methods. Some authors have released datasets to accompany their work, to allow for direct replication of their experiments and encourage comparison and standardization. datasets lists a number of datasets that have been released to accompany specific publications. In this list, we only include corpora that were prepared specifically for research, and that include the full text of documents. Corpora of language-labelled Twitter messages that only provide document identifiers are also available, but reproducing the full original corpus is always an issue as the original Twitter messages are deleted or otherwise made unavailable. 
One challenge in standardizing datasets for is that the codes used to label languages are not fully standardized, and a large proportion of labeling systems only cover a minor portion of the languages used in the world today BIBREF381 . BIBREF382 discuss this problem in detail, listing different language code sets, as well as the internal structure exhibited by some of the code sets. Some standards consider certain groups of “languages” as varieties of a single macro-language, whereas others consider them to be discrete languages. An example of this is found in South Slavic languages, where some language code sets refer to Serbo-Croatian, whereas others make distinctions between Bosnian, Serbian and Croatian BIBREF98 . The unclear boundaries between such languages make it difficult to build a reference corpus of documents for each language, or to compare language-specific results across datasets. Another challenge in standardizing datasets for is the great deal of variation that can exist between data in the same language. We examine this in greater detail in openissues:encoding, where we discuss how the same language can use a number of different orthographies, can be digitized using a number of different encodings, and may also exist in transliterated forms. The issue of variation within a language complicates the development of standardized datasets, due to challenges in determining which variants of a language should be included. Since we have seen that the performance of systems can vary per-domain BIBREF32 , that research is often motivated by target applications (see applications), and that domain-specific information can be used to improve accuracy (see openissues:domainspecific), it is often unsound to use a generic dataset to develop a language identifier for a particular domain. A third challenge in standardizing datasets for is the cost of obtaining correctly-labeled data. Manual labeling of data is usually prohibitively expensive, as it requires access to native speakers of all languages that the dataset aims to include. Large quantities of raw text data are available from sources such as web crawls or Wikipedia, but this data is frequently mislabeled (e.g. most non-English Wikipedias still include some English-language documents). In constructing corpora from such resources, it is common to use some form of automatic , but this makes such corpora unsuitable for evaluation purposes as they are biased towards documents that can be correctly identified by automatic systems BIBREF152 . Future work in this area could investigate other means of ensuring correct gold-standard labels while minimizing the annotation cost. Despite these challenges, standardized datasets are critical for replicable and comparable research in . Where a subset of data is used from a larger collection, researchers should include details of the specific subset, including any breakdown into training and test data, or partitions for cross-validation. Where data from a new source is used, justification should be given for its inclusion, as well as some means for other researchers to replicate experiments on the same dataset. Shared Tasks To address specific sub-problems in , a number of shared tasks have been organized on problems such as in multilingual documents BIBREF378 , code-switched data BIBREF383 , discriminating between closely related languages BIBREF384 , and dialect and language variety identification in various languages BIBREF385 , BIBREF386 , BIBREF370 , BIBREF387 . 
Shared tasks are important for because they provide datasets and standardized evaluation methods that serve as benchmarks for the community. We summarize all shared tasks organized to date in sharedtasks. Generally, datasets for shared tasks have been made publicly available after the conclusion of the task, and are a good source of standardized evaluation data. However, the shared tasks to date have tended to target specific sub-problems in , and no general, broad-coverage datasets have been compiled. Widespread interest in over closely-related languages has resulted in a number of shared tasks that specifically tackle the issue. Some tasks have focused on varieties of a specific language. For example, the DEFT2010 shared task BIBREF385 examined varieties of French, requiring participants to classify French documents with respect to their geographical source, in addition to the decade in which they were published. Another example is the Arabic Dialect Identification (“ADI”) shared task at the VarDial workshop BIBREF126 , BIBREF386 , and the Arabic Multi-Genre Broadcast (“MGB”) Challenge BIBREF387 . Two shared tasks focused on a narrow group of languages using Twitter data. The first was TweetLID, a shared task on of Twitter messages according to six languages in common use in Spain, namely: Spanish, Portuguese, Catalan, English, Galician, and Basque (in order of the number of documents in the dataset) BIBREF388 , BIBREF389 . The organizers provided almost 35,000 Twitter messages, and in addition to the six monolingual tags, supported four additional categories: undetermined, multilingual (i.e. the message contains more than one language, without requiring the system to specify the component languages), ambiguous (i.e. the message is ambiguous between two or more of the six target languages), and other (i.e. the message is in a language other than the six target languages). The second shared task was the PAN lab on authorship profiling 2017 BIBREF370 . The PAN lab on authorship profiling is held annually and historically has focused on age, gender, and personality traits prediction in social media. In 2017 the competition introduced the inclusion of language varieties and dialects of Arabic, English, Spanish, and Portuguese, More ambitiously, the four editions of the Discriminating between Similar Languages (DSL) BIBREF384 , BIBREF6 , BIBREF317 , BIBREF386 shared tasks required participants to discriminate between a set of languages in several language groups, each consisting of highly-similar languages or national varieties of that language. The dataset, entitled DSL Corpus Collection (“DSLCC”) BIBREF77 , and the languages included are summarized in dslcc. Historically the best-performing systems BIBREF265 , BIBREF390 , BIBREF43 have approached the task via hierarchical classification, first predicting the language group, then the language within that group. Application Areas There are various reasons to investigate . Studies in approach the task from different perspectives, and with different motivations and application goals in mind. In this section, we briefly summarize what these motivations are, and how their specific needs differ. The oldest motivation for automatic is perhaps in conjunction with translation BIBREF27 . Automatic is used as a pre-processing step to determine what translation model to apply to an input text, whether it be by routing to a specific human translator or by applying MT. 
Such a use case is still very common, and can be seen in the Google Chrome web browser, where an built-in module is used to offer MT services to the user when the detected language of the web page being visited differs from the user's language settings. NLP components such as POS taggers and parsers tend to make a strong assumption that the input text is monolingual in a given language. Similarly to the translation case, can play an obvious role in routing documents written in different languages to NLP components tailored to those languages. More subtle is the case of documents with mixed multilingual content, the most commonly-occurring instance of which is foreign inclusion, where a document is predominantly in a single language (e.g. German or Japanese) but is interspersed with words and phrases (often technical terms) from a language such as English. For example, BIBREF391 found that around 6% of word tokens in German text sourced from the Internet are English inclusions. In the context of POS tagging, one strategy for dealing with inclusions is to have a dedicated POS for all foreign words, and force the POS tagger to perform both foreign inclusion detection and POS tag these words in the target language; this is the approach taken in the Penn POS tagset, for example BIBREF392 . An alternative strategy is to have an explicit foreign inclusion detection pre-processor, and some special handling of foreign inclusions. For example, in the context of German parsing, BIBREF391 used foreign inclusion predictions to restrict the set of (German) POS tags used to form a parse tree, and found that this approach substantially improved parser accuracy. Another commonly-mentioned use case is for multilingual document storage and retrieval. A document retrieval system (such as, but not limited to, a web search engine) may be required to index documents in multiple languages. In such a setting, it is common to apply at two points: (1) to the documents being indexed; and (2) to the queries being executed on the collection. Simple keyword matching techniques can be problematic in text-based document retrieval, because the same word can be valid in multiple languages. A classic example of such words (known as “false friends”) includes gift, which in German means “poison”. Performing on both the document and the query helps to avoid confusion between such terms, by taking advantage of the context in which it appears in order to infer the language. This has resulted in specific work in of web pages, as well as search engine queries. BIBREF393 and BIBREF394 give overviews of shared tasks specifically concentrating on language labeling of individual search query words. Having said this, in many cases, the search query itself does a sufficiently good job of selecting documents in a particular language, and overt is often not performed in mixed multilingual search contexts. Automatic has also been used to facilitate linguistic and other text-based research. BIBREF34 report that their motivation for developing a language identifier was “to find out how many web pages are written in a particular language”. Automatic has been used in constructing web-based corpora. The Crúbadán project BIBREF395 and the Finno-Ugric Languages and the Internet project BIBREF396 make use of automated techniques to gather linguistic resources for under-resourced languages. 
Similarly, the Online Database of INterlinear text (“ODIN”: BIBREF397 ) uses automated as one of the steps in collecting interlinear glossed text from the web for purposes of linguistic search and bootstrapping NLP tools. One challenge in collecting linguistic resources from the web is that documents can be multilingual (i.e. contain text in more than one language). This is problematic for standard methods, which assume that a document is written in a single language, and has prompted research into segmenting text by language, as well as word-level , to enable extraction of linguistic resources from multilingual documents. A number of shared tasks discussed in detail in evaluation:sharedtasks included data from social media. Examples are the TweetLID shared task on tweet held at SEPLN 2014 BIBREF388 , BIBREF389 , the data sets used in the first and second shared tasks on in code-switched data which were partially taken from Twitter BIBREF383 , BIBREF398 , and the third edition of the DSL shared task which contained two out-of-domain test sets consisting of tweets BIBREF317 . The 5th edition of the PAN at CLEF author profiling task included language variety identification for tweets BIBREF370 . There has also been research on identifying the language of private messages between eBay users BIBREF399 , presumably as a filtering step prior to more in-depth data analysis. Off-the-Shelf Language Identifiers An “off-the-shelf” language identifier is software that is distributed with pre-trained models for a number of languages, so that a user is not required to provide training data before using the system. Such a setup is highly attractive to many end-users of automatic whose main interest is in utilizing the output of a language identifier rather than implementing and developing the technique. To this end, a number of off-the-shelf language identifiers have been released over time. Many authors have evaluated these off-the-shelf identifiers, including a recent evaluation involving 13 language identifiers which was carried out by BIBREF400 . In this section, we provide a brief summary of open-source or otherwise free systems that are available, as well as the key characteristics of each system. We have also included dates of when the software has been last updated as of October 2018. TextCat is the most well-known Perl implementation of the out-of-place method, it lists models for 76 languages in its off-the-shelf configuration; the program is not actively maintained. TextCat is not the only example of an off-the-shelf implementation of the out-of-place method: other implementations include libtextcat with 76 language models, JTCL with 15 languages, and mguesser with 104 models for different language-encoding pairs. The main issue addressed by later implementations is classification speed: TextCat is implemented in Perl and is not optimized for speed, whereas implementations such as libtextcat and mguesser have been specifically written to be fast and efficient. whatlang-rs uses an algorithm based on character trigrams and refers the user to the BIBREF7 article. It comes pre-trained with 83 languages. is the language identifier embedded in the Google Chrome web browser. It uses a NB classifier, and script-specific classification strategies. assumes that all the input is in UTF-8, and assigns the responsibility of encoding detection and transcoding to the user. uses Unicode information to determine the script of the input. 
also implements a number of pre-processing heuristics to help boost performance on its target domain (web pages), such as stripping character sequences like .jpg. The standard implementation supports 83 languages, and an extended model is also available, that supports 160 languages. is a Java library that implements a language identifier based on a NB classifier trained over character . The software comes with pre-trained models for 53 languages, using data from Wikipedia. makes use of a range of normalization heuristics to improve the performance on particular languages, including: (1) clustering of Chinese/Japanese/Korean characters to reduce sparseness; (2) removal of “language-independent” characters, and other text normalization; and (3) normalization of Arabic characters. is a Python implementation of the method described by BIBREF150 , which exploits training data for the same language across multiple different sources of text to identify sequences of characters that are strongly predictive of a given language, regardless of the source of the text. This feature set is combined with a NB classifier, and is distributed with a pre-trained model for 97 languages prepared using data from 5 different text sources. BIBREF151 provide an empirical comparison of to , and and find that it compares favorably both in terms of accuracy and classification speed. There are also implementations of the classifier component (but not the training portion) of in Java, C, and JavaScript. BIBREF153 uses a vector-space model with per-feature weighting on character sequences. One particular feature of is that it uses discriminative training in selecting features, i.e. it specifically makes use of features that are strong evidence against a particular language, which is generally not captured by NB models. Another feature of is that it uses inter-string smoothing to exploit sentence-level locality in making language predictions, under the assumption that adjacent sentences are likely to be in the same language. BIBREF153 reports that this substantially improves the accuracy of the identifier. Another distinguishing feature of is that it comes pre-trained with data for 1400 languages, which is the highest number by a large margin of any off-the-shelf system. whatthelang is a recent language identifier written in Python, which utilizes the FastText NN-based text classification algorithm. It supports 176 languages. implements an off-the-shelf classifier trained using Wikipedia data, covering 122 languages. Although not described as such, the actual classification algorithm used is a linear model, and is thus closely related to both NB and a cosine-based vector space model. In addition to the above-mentioned general-purpose language identifiers, there have also been efforts to produce pre-trained language identifiers targeted specifically at Twitter messages. is a Twitter-specific tool with built-in models for 19 languages. It uses a document representation based on tries BIBREF401 . The algorithm is a LR classifier using all possible substrings of the data, which is important to maximize the available information from the relatively short Twitter messages. BIBREF152 provides a comparison of 8 off-the-shelf language identifiers applied without re-training to Twitter messages. One issue they report is that comparing the accuracy of off-the-shelf systems is difficult because of the different subset of languages supported by each system, which may also not fully cover the languages present in the target data. 
The authors choose to compare accuracy over the full set of languages, arguing that this best reflects the likely use-case of applying an off-the-shelf system to new data. They find that the best individual systems are , and , but that slightly higher accuracy can be attained by a simple voting-based ensemble classifier involving these three systems. In addition to this, commercial or other closed-source language identifiers and language identifier services exist, of which we name a few. The Polyglot 3000 and Lextek Language Identifier are standalone language identifiers for Windows. Open Xerox Language Identifier is a web service with available REST and SOAP APIs. Research Directions and Open Issues in Several papers have catalogued open issues in BIBREF327 , BIBREF382 , BIBREF1 , BIBREF334 , BIBREF32 , BIBREF324 , BIBREF317 . Some of the issues, such as text representation (features) and choice of algorithm (methods), have already been covered in detail in this survey. In this section, we synthesize the remaining issues into a single section, and also add new issues that have not been discussed in previous work. For each issue, we review related work and suggest promising directions for future work. Text Preprocessing Text preprocessing (also known as normalization) is an umbrella term for techniques where an automatic transformation is applied to text before it is presented to a classifier. The aim of such a process is to eliminate sources of variation that are expected to be confounding factors with respect to the target task. Text preprocessing is slightly different from data cleaning, as data cleaning is a transformation applied only to training data, whereas normalization is applied to both training and test data. BIBREF1 raise text preprocessing as an outstanding issue in , arguing that its effects on the task have not been sufficiently investigated. In this section, we summarize the normalization strategies that have been proposed in the literature. Case folding is the elimination of capitalization, replacing characters in a text with either their lower-case or upper-case forms. Basic approaches generally map between [a-z] and [A-Z] in the ASCII encoding, but this approach is insufficient for extended Latin encodings, where diacritics must also be appropriately handled. A resource that makes this possible is the Unicode Character Database (UCD) which defines uppercase, lowercase and titlecase properties for each character, enabling automatic case folding for documents in a Unicode encoding such as UTF-8. Range compression is the grouping of a range of characters into a single logical set for counting purposes, and is a technique that is commonly used to deal with the sparsity that results from character sets for ideographic languages, such as Chinese, that may have thousands of unique “characters”, each of which is observed with relatively low frequency. BIBREF402 use such a technique where all characters in a given range are mapped into a single “bucket”, and the frequency of items in each bucket is used as a feature to represent the document. Byte-level representations of encodings that use multi-byte sequences to represent codepoints achieve a similar effect by “splitting” codepoints. In encodings such as UTF-8, the codepoints used by a single language are usually grouped together in “code planes”, where each codepoint in a given code plane shares the same upper byte. 
Thus, even though the distribution over codepoints may be quite sparse, when the byte-level representation uses byte sequences that are shorter than the multi-byte sequence of a codepoint, the shared upper byte will be predictive of specific languages. Cleaning may also be applied, where heuristic rules are used to remove some data that is perceived to hinder the accuracy of the language identifier. For example, BIBREF34 identify HTML entities as a candidate for removal in document cleaning, on the basis that classifiers trained on data which does not include such entities may drop in accuracy when applied to raw HTML documents. includes heuristics such as expanding HTML entities, deleting digits and punctuation, and removing SGML-like tags. Similarly, also removes “language-independent characters” such as numbers, symbols, URLs, and email addresses. It also removes words that are all-capitals and tries to remove other acronyms and proper names using heuristics. In the domain of Twitter messages, BIBREF313 remove links, usernames, smilies, and hashtags (a Twitter-specific “tagging” feature), arguing that these entities are language independent and thus should not feature in the model. BIBREF136 address of web pages, and report removing HTML formatting, and applying stopping using a small stopword list. BIBREF59 carry out experiments on the ECI multilingual corpus and report removing punctuation, space characters, and digits. The idea of preprocessing text to eliminate domain-specific “noise” is closely related to the idea of learning domain-independent characteristics of a language BIBREF150 . One difference is that normalization is normally heuristic-driven, where a manually-specified set of rules is used to eliminate unwanted elements of the text, whereas domain-independent text representations are data-driven, where text from different sources is used to identify the characteristics that a language shares between different sources. Both approaches share conceptual similarities with problems such as content extraction for web pages. In essence, the aim is to isolate the components of the text that actually represent language, and suppress the components that carry other information. One application is the language-aware extraction of text strings embedded in binary files, which has been shown to perform better than conventional heuristic approaches BIBREF36 . Future work in this area could focus specifically on the application of language-aware techniques to content extraction, using models of language to segment documents into textual and non-textual components. Such methods could also be used to iteratively improve itself by improving the quality of training data. Orthography and Transliteration is further complicated when we consider that some languages can be written in different orthographies (e.g. Bosnian and Serbian can be written in both Latin and Cyrillic script). Transliteration is another phenomenon that has a similar effect, whereby phonetic transcriptions in another script are produced for particular languages. These transcriptions can either be standardized and officially sanctioned, such as the use of Hanyu Pinyin for Chinese, or may also emerge irregularly and organically as in the case of arabizi for Arabic BIBREF403 . 
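The kinds of cleaning heuristics described in the preceding paragraphs (expanding HTML entities, stripping markup, URLs, user mentions, hashtags, and digits, plus Unicode-aware case folding) are typically implemented with a handful of regular expressions. The sketch below is illustrative only; the exact rules used by individual systems differ.

```python
import html
import re
import unicodedata

def normalize(text):
    text = html.unescape(text)                          # expand HTML entities (&amp; -> &)
    text = re.sub(r"<[^>]+>", " ", text)                # strip SGML/HTML-like tags
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # remove URLs
    text = re.sub(r"[@#]\w+", " ", text)                # remove user mentions and hashtags
    text = re.sub(r"\d+", " ", text)                    # remove digits
    text = unicodedata.normalize("NFKC", text)
    text = text.casefold()                              # Unicode-aware case folding
    return re.sub(r"\s+", " ", text).strip()

print(normalize("RT @user: Tervetuloa! 10 kuvaa &amp; video https://example.com #Suomi"))
```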
BIBREF1 identify variation in the encodings and scripts used by a given language as an open issue in , pointing out that early work tended to focus on languages written using a romanized script, and suggesting that dealing with issues of encoding and orthography adds substantial complexity to the task. BIBREF34 discuss the relative difficulties of discriminating between languages that vary in any combination of encoding, script and language family, and give examples of pairs of languages that fall into each category. across orthographies and transliteration is an area that has not received much attention in work to date, but presents unique and interesting challenges that are suitable targets for future research. An interesting and unexplored question is whether it is possible to detect that documents in different encodings or scripts are written in the same language, or what language a text is transliterated from, without any a-priori knowledge of the encoding or scripts used. One possible approach to this could be to take advantage of standard orderings of alphabets in a language – the pattern of differences between adjacent characters should be consistent across encodings, though whether this is characteristic of any given language requires exploration. Supporting Low-Resource Languages BIBREF1 paint a fairly bleak picture of the support for low-resource languages in automatic . This is supported by the arguments of BIBREF382 who detail specific issues in building hugely multilingual datasets. BIBREF404 also specifically called for research into automatic for low-density languages. Ethnologue BIBREF0 lists a total of 7099 languages. BIBREF382 describe the Ethnologue in more detail, and discuss the role that plays in other aspects of supporting minority languages, including detecting and cataloging resources. The problem is circular: methods are typically supervised, and need training data for each language to be covered, but the most efficient way to recover such data is through methods. A number of projects are ongoing with the specific aim of gathering linguistic data from the web, targeting as broad a set of languages as possible. One such project is the aforementioned ODIN BIBREF361 , BIBREF397 , which aims to collect parallel snippets of text from Linguistics articles published on the web. ODIN specifically targets articles containing Interlinear Glossed Text (IGT), a semi-structured format for presenting text and a corresponding gloss that is commonly used in Linguistics. Other projects that exist with the aim of creating text corpora for under-resourced languages by crawling the web are the Crúbadán project BIBREF395 and SeedLing BIBREF405 . The Crúbadán crawler uses seed data in a target language to generate word lists that in turn are used as queries for a search engine. The returned documents are then compared with the seed resource via an automatic language identifier, which is used to eliminate false positives. BIBREF395 reports that corpora for over 400 languages have been built using this method. The SeedLing project crawls texts from several web sources which has resulted in a total of 1451 languages from 105 language families. According to the authors, this represents 19% of the world's languages. Much recent work on multilingual documents (openissues:multilingual) has been done with support for minority languages as a key goal. 
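A minimal sketch of the filtering step in this kind of bootstrapping pipeline is given below: crawled documents are kept only if an automatic identifier labels them as the target language with sufficient confidence, and everything else is set aside as a candidate false positive. The identify function (returning a language label and a confidence score) and the threshold are placeholders, not the actual components used by Crúbadán or SeedLing.

```python
# Keep only documents confidently identified as the target language when
# building a web corpus for a low-resource language.
def filter_corpus(pages, identify, target_lang, threshold=0.95):
    kept, rejected = [], []
    for url, text in pages:
        lang, conf = identify(text)
        if lang == target_lang and conf >= threshold:
            kept.append((url, text))
        else:
            rejected.append((url, text, lang, conf))  # candidate false positives
    return kept, rejected
```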
One of the common problems with gathering linguistic data from the web is that the data in the target language is often embedded in a document containing data in another language. This has spurred recent developments in text segmentation by language and word-level . BIBREF326 present a method to detect documents that contain text in more than one language and identify the languages present with their relative proportions in the document. The method is evaluated on real-world data from a web crawl targeted to collect documents for specific low-density languages. for low-resource languages is a promising area for future work. One of the key questions that has not been clearly answered is how much data is needed to accurately model a language for purposes of . Work to date suggests that there may not be a simple answer to this question as accuracy varies according to the number and variety of languages modeled BIBREF32 , as well as the diversity of data available to model a specific language BIBREF150 . Number of Languages Early research in tended to focus on a very limited number of languages (sometimes as few as 2). This situation has improved somewhat with many current off-the-shelf language identifiers supporting on the order of 50–100 languages (ots). The standout in this regard is BIBREF101 , supporting 1311 languages in its default configuration. However, evaluation of the identifier of BIBREF153 on a different domain found that the system suffered in terms of accuracy because it detected many languages that were not present in the test data BIBREF152 . BIBREF397 describe the construction of web crawlers specifically targeting IGT, as well as the identification of the languages represented in the IGT snippets. for thousands of languages from very small quantities of text is one of the issues that they have had to tackle. They list four specific challenges for in ODIN: (1) the large number of languages; (2) “unseen” languages that appear in the test data but not in training data; (3) short target sentences; and (4) (sometimes inconsistent) transliteration into Latin text. Their solution to this task is to take advantage of a domain-specific feature: they assume that the name of the language that they are extracting must appear in the document containing the IGT, and hence treat this as a co-reference resolution problem. They report that this approach significantly outperforms the text-based approach in this particular problem setting. An interesting area to explore is the trade-off between the number of languages supported and the accuracy per-language. From existing results it is not clear if it is possible to continue increasing the number of languages supported without adversely affecting the average accuracy, but it would be useful to quantify if this is actually the case across a broad range of text sources. mostlanguages lists the articles where the with more than 30 languages has been investigated. “Unseen” Languages and Unsupervised “Unseen” languages are languages that we do not have training data for but may nonetheless be encountered by a system when applied to real-world data. Dealing with languages for which we do not have training data has been identified as an issue by BIBREF1 and has also been mentioned by BIBREF361 as a specific challenge in harvesting linguistic data from the web. BIBREF233 use an unlabeled training set with a labeled evaluation set for token-level code switching identification between Modern Standard Arabic (MSA) and dialectal Arabic. 
They utilize existing dictionaries and also a morphological analyzer for MSA, so the system is supported by extensive external knowledge sources. The ability to use unannotated training material is nonetheless a very useful feature. Some authors have attempted to tackle the unseen language problem through unsupervised labeling of text by language. BIBREF225 uses an unsupervised clustering algorithm to separate a multilingual corpus into groups corresponding to languages. She uses singular value decomposition (SVD) to first identify the words that discriminate between documents, and then to separate the terms into highly correlated groups. The documents grouped together by these discriminating terms are merged and the process is repeated until the desired number of groups (corresponding to languages) is reached. BIBREF412 also presents an approach to the unseen language problem, building graphs of co-occurrences of words in sentences, and then partitioning the graph using a custom graph-clustering algorithm which labels each word in a cluster with a single label. The number of labels is initialized to be the same as the number of words, and decreases as the algorithm is recursively applied. After a small number of iterations (the authors report 20), the labels become relatively stable and can be interpreted as cluster labels. Smaller clusters are then discarded, and the remaining clusters are interpreted as groups of words for each language. BIBREF413 compared the Chinese Whispers algorithm of BIBREF412 and Graclus clustering on unsupervised language identification of tweets, and conclude that Chinese Whispers is better suited to the task. BIBREF414 used Fuzzy ART NNs for unsupervised language clustering of documents in Arabic, Persian, and Urdu. In Fuzzy ART, the clusters are also dynamically updated during the identification process. BIBREF415 also tackle the unseen language problem through clustering. They use a character-based representation of the text, and a clustering algorithm that consists of an initial k-means phase followed by particle-swarm optimization. This produces a large number of small clusters, which are then labeled by language in a separate step. BIBREF240 used co-occurrences of words with k-means clustering for word-level unsupervised language identification. They used a Dirichlet process Gaussian mixture model (“DPGMM”), a non-parametric variant of a GMM, to automatically determine the number of clusters, and manually labeled the language of each cluster. BIBREF249 also used k-means clustering, and BIBREF416 used the k-means clustering algorithm in a custom framework. BIBREF244 utilized unlabeled data to improve their system by using a CRF autoencoder, unsupervised word embeddings, and word lists. A different partial solution to the issue of unseen languages is to design the classifier so that it can output “unknown” as a prediction. This helps to alleviate one of the problems commonly associated with the presence of unseen languages: classifiers without an “unknown” facility are forced to pick a language for each document, and in the case of unseen languages, the choice may be arbitrary and unpredictable BIBREF412 . When language identification is used for filtering purposes, i.e. to select documents in a single language, this mislabeling can introduce substantial noise into the extracted data; furthermore, it does not matter what or how many unseen languages there are, as long as they are consistently rejected.
Therefore the “unknown” output provides an adequate solution to the unseen language problem for purposes of filtering. The easiest way to implement unknown language detection is through thresholding. Most language identification systems internally compute a score for each language for an unknown text, so thresholding can be applied either with a global threshold BIBREF33 , with a per-language threshold BIBREF34 , or by comparing the scores of the top-scoring languages (e.g. requiring a sufficient margin between the best and second-best candidates). The problem of unseen languages and open-set recognition was also considered by BIBREF270 , BIBREF84 , and BIBREF126 . BIBREF126 experiments with one-class classification (“OCC”) and reaches an F-score of 98.9 using OC-SVMs (SVMs trained only with data from one language) to discriminate between 10 languages. Another possible method for unknown language detection that has not been explored extensively in the literature is the use of non-parametric mixture models based on Hierarchical Dirichlet Processes (“HDP”). Such models have been successful in topic modeling, where an outstanding issue with the popular LDA model is the need to specify the number of topics in advance. BIBREF326 introduced an approach to detecting multilingual documents that uses a model very similar to LDA, where languages are analogous to topics in the LDA model. By a similar analogy, an HDP-based model may be able to detect documents written in a language that is not currently modeled by the system. BIBREF24 used LDA to cluster unannotated tweets. Recently, BIBREF417 used LDA in unsupervised sentence-level language identification. They manually identified the languages of the topics created with LDA; if there were more topics than languages, the topics in the same language were merged. Filtering, a task that we mentioned earlier in this section, is a very common application of language identification, and it is therefore surprising that there is little research on filtering for specific languages. Filtering is a limit case of language identification with unseen languages, where all languages but one can be considered unknown. Future work could examine how useful different types of negative evidence are for filtering: if we want to detect English documents, for example, are there empirical advantages in having distinct models of Italian and German (even if we do not care about the distinction between the two languages), or can we group them all together in a single “negative” class? Are we better off including as many languages as possible in the negative class, or can we safely exclude some? Multilingual Documents Multilingual documents are documents that contain text in more than one language. In constructing the hrWac corpus, BIBREF97 found that 4% of the documents they collected contained text in more than one language. BIBREF329 report that web pages in many languages contain formulaic strings in English that do not actually contribute to the content of the page, but may nonetheless confound attempts to identify multilingual documents. Recent research has investigated how to make use of multilingual documents from sources such as web crawls BIBREF40 , forum posts BIBREF263 , and microblog messages BIBREF418 . However, most methods assume that a document contains text from a single language, and so are not directly applicable to multilingual documents. Handling of multilingual documents has been named as an open research question BIBREF1 . Most NLP techniques presuppose monolingual input data, so the inclusion of data in foreign languages introduces noise, and can degrade the performance of NLP systems.
Automatic detection of multilingual documents can be used as a pre-filtering step to improve the quality of input data. Detecting multilingual documents is also important for acquiring linguistic data from the web, and has applications in mining bilingual texts for statistical MT from online resources BIBREF418 , or to study code-switching phenomena in online communications. There has also been interest in extracting text resources for low-density languages from multilingual web pages containing both the low-density language and another language such as English. The need to handle multilingual documents has prompted researchers to revisit the granularity of . Many researchers consider document-level to be relatively easy, and that sentence-level and word-level are more suitable targets for further research. However, word-level and sentence-level tokenization are not language-independent tasks, and for some languages are substantially harder than others BIBREF419 . BIBREF112 is a language identifier that supports identification of multilingual documents. The system is based on a vector space model using cosine similarity. for multilingual documents is performed through the use of virtual mixed languages. BIBREF112 shows how to construct vectors representative of particular combinations of languages independent of the relative proportions, and proposes a method for choosing combinations of languages to consider for any given document. One weakness of this approach is that for exhaustive coverage, this method is factorial in the number of languages, and as such intractable for a large set of languages. Furthermore, calculating the parameters for the virtual mixed languages becomes infeasibly complex for mixtures of more than 3 languages. As mentioned previously, BIBREF326 propose an LDA-inspired method for multilingual documents that is able to identify that a document is multilingual, identify the languages present and estimate the relative proportions of the document written in each language. To remove the need to specify the number of topics (or in this case, languages) in advance, BIBREF326 use a greedy heuristic that attempts to find the subset of languages that maximizes the posterior probability of a target document. One advantage of this approach is that it is not constrained to 3-language combinations like the method of BIBREF112 . Language set identification has also been considered by BIBREF34 , BIBREF407 , and BIBREF420 , BIBREF276 . To encourage further research on for multilingual documents, in the aforementioned shared task hosted by the Australiasian Language Technology Workshop 2010, discussed in evaluation:sharedtasks, participants were required to predict the language(s) present in a held-out test set containing monolingual and bilingual documents BIBREF378 . The dataset was prepared using data from Wikipedia, and bilingual documents were produced using a segment from an article in one language and a segment from the equivalent article in another language. Equivalence between articles was determined using the cross-language links embedded within each Wikipedia article. The winning entry BIBREF421 first built monolingual models from multilingual training data, and then applied them to a chunked version of the test data, making the final prediction a function of the prediction over chunks. Another approach to handling multilingual documents is to attempt to segment them into contiguous monolingual segments. 
In addition to identifying the languages present, this requires identifying the locations of boundaries in the text which mark the transition from one language to another. Several methods for supervised language segmentation have been proposed. BIBREF33 generalized a algorithm for monolingual documents by adding a dynamic programming algorithm based on a simple Markov model of multilingual documents. More recently, multilingual algorithms have also been presented by BIBREF140 , BIBREF73 , BIBREF74 , BIBREF106 , and BIBREF82 . Short Texts of short strings is known to be challenging for existing techniques. BIBREF37 tested four different classification methods, and found that all have substantially lower accuracy when applied to texts of 25 characters compared with texts of 125 characters. These findings were later strengthened, for example, by BIBREF145 and BIBREF148 . BIBREF195 describes a method specifically targeted at short texts that augments a dictionary with an affix table, which was tested over synthetic data derived from a parallel bible corpus. BIBREF145 focus on messages of 5–21 characters, using language models over data drawn the from Universal Declaration of Human Rights (UDHR). We would expect that generic methods for of short texts should be effective in any domain where short texts are found, such as search engine queries or microblog messages. However, BIBREF195 and BIBREF145 both only test their systems in a single domain: bible texts in the former case, and texts from the UDHR in the latter case. Other research has shown that results do not trivially generalize across domains BIBREF32 , and found that in UDHR documents is relatively easy BIBREF301 . For both bible and UDHR data, we expect that the linguistic content is relatively grammatical and well-formed, an expectation that does not carry across to domains such as search engine queries and microblogs. Another “short text” domain where has been studied is of proper names. BIBREF306 identify this as an issue. BIBREF422 found that of names is more accurate than of generic words of equivalent length. BIBREF299 raise an important criticism of work on Twitter messages to date: only a small number of European languages has been considered. BIBREF299 expand the scope of for Twitter, covering nine languages across Cyrillic, Arabic and Devanagari scripts. BIBREF152 expand the evaluation further, introducing a dataset of language-labeled Twitter messages across 65 languages constructed using a semi-automatic method that leverages user identity to avoid inducing a bias in the evaluation set towards messages that existing systems are able to identify correctly. BIBREF152 also test a 1300-language model based on BIBREF153 , but find that it performs relatively poorly in the target domain due to a tendency to over-predict low-resource languages. Work has also been done on of single words in a document, where the task is to label each word in the document with a specific language. Work to date in this area has assumed that word tokenization can be carried out on the basis of whitespace. BIBREF35 explore word-level in the context of segmenting a multilingual document into monolingual segments. Other work has assumed that the languages present in the document are known in advance. Conditional random fields (“CRFs”: BIBREF423 ) are a sequence labeling method most often used in for labeling the language of individual words in a multilingual text. 
CRFs can be thought of as a finite state model with probabilistic transition probabilities optimised over pre-defined cliques. They can use any observations made from the test document as features, including language labels given by monolingual language identifiers for words. BIBREF40 used a CRF trained with generalized expectation criteria, and found it to be the most accurate of all methods tested (NB, LR, HMM, CRF) at word-level . BIBREF40 introduce a technique to estimate the parameters using only monolingual data, an important consideration as there is no readily-available collection of manually-labeled multilingual documents with word-level annotations. BIBREF263 present a two-pass approach to processing Turkish-Dutch bilingual documents, where the first pass labels each word independently and the second pass uses the local context of a word to further refine the predictions. BIBREF263 achieved 97,6% accuracy on distinguishing between the two languages using a linear-chain CRF. BIBREF180 are the only ones so far to use a CRF for of monolingual texts. With a CRF, they attained a higher F-score in German dialect identification than NB or an ensemble consisting of NB, CRF, and SVM. Lately CRFs were also used for by BIBREF52 and BIBREF44 . BIBREF296 investigate of individual words in the context of code switching. They find that smoothing of models substantially improves accuracy of a language identifier based on a NB classifier when applied to individual words. Similar Languages, Language Varieties, and Dialects While one line of research into has focused on pushing the boundaries of how many languages are supported simultaneously by a single system BIBREF382 , BIBREF36 , BIBREF153 , another has taken a complementary path and focused on in groups of similar languages. Research in this area typically does not make a distinction between languages, varieties and dialects, because such terminological differences tend to be politically rather than linguistically motivated BIBREF424 , BIBREF382 , BIBREF5 , and from an NLP perspective the challenges faced are very similar. for closely-related languages, language varieties, and dialects has been studied for Malay–Indonesian BIBREF332 , Indian languages BIBREF114 , South Slavic languages BIBREF377 , BIBREF98 , BIBREF4 , BIBREF425 , Serbo-Croatian dialects BIBREF426 , English varieties BIBREF278 , BIBREF45 , Dutch–Flemish BIBREF53 , Dutch dialects (including a temporal dimension) BIBREF427 , German Dialects BIBREF428 Mainland–Singaporean–Taiwanese Chinese BIBREF429 , Portuguese varieties BIBREF5 , BIBREF259 , Spanish varieties BIBREF70 , BIBREF147 , French varieties BIBREF430 , BIBREF431 , BIBREF432 , languages of the Iberian Peninsula BIBREF388 , Romanian dialects BIBREF120 , and Arabic dialects BIBREF41 , BIBREF78 , BIBREF433 , BIBREF75 , BIBREF434 , the last of which we discuss in more detail in this section. As to off-the-shelf tools which can identify closely-related languages, BIBREF79 released a system trained to identify 27 languages, including 10 language varieties. Closely-related languages, language varieties, and dialects have also been the focus of a number of shared tasks in recent years as discussed in evaluation:sharedtasks. Similar languages are a known problem for existing language identifiers BIBREF332 , BIBREF435 . BIBREF34 identify language pairs from the same language family that also share a common script and the same encoding, as the most difficult to discriminate. 
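The group-then-language strategy used by the strongest systems in the DSL shared tasks mentioned earlier can be sketched with off-the-shelf components, for example scikit-learn pipelines over word-covering higher-order character n-grams. The sketch below is a generic two-stage classifier, not a reimplementation of any particular shared-task submission; the group mapping and training data are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical language-to-group mapping for a two-stage classifier.
GROUPS = {"hr": "south-slavic", "sr": "south-slavic", "bs": "south-slavic",
          "pt-PT": "portuguese", "pt-BR": "portuguese"}

def make_clf():
    # Higher-order character n-grams that cover whole words tend to work
    # well for closely-related languages.
    return make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 6)),
                         LinearSVC())

def train(train_texts, train_langs):
    # Stage 1: predict the language group.
    group_clf = make_clf().fit(train_texts, [GROUPS[l] for l in train_langs])
    # Stage 2: one classifier per group, predicting the language within it.
    lang_clfs = {}
    for group in set(GROUPS.values()):
        idx = [i for i, l in enumerate(train_langs) if GROUPS[l] == group]
        lang_clfs[group] = make_clf().fit([train_texts[i] for i in idx],
                                          [train_langs[i] for i in idx])
    return group_clf, lang_clfs

def predict(text, group_clf, lang_clfs):
    group = group_clf.predict([text])[0]
    return lang_clfs[group].predict([text])[0]
```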
BIBREF98 report that achieves only 45% accuracy when trained and tested on 3-way Bosnian/Serbian/Croatian dataset. BIBREF278 found that methods are not competitive with conventional word-based document categorization methods in distinguishing between national varieties of English. BIBREF332 reports that a character trigram model is able to distinguish Malay/Indonesian from English, French, German, and Dutch, but handcrafted rules are needed to distinguish between Malay and Indonesian. One kind of rule is the use of “exclusive words” that are known to occur in only one of the languages. A similar idea is used by BIBREF98 , in automatically learning a “blacklist” of words that have a strong negative correlation with a language – i.e. their presence implies that the text is not written in a particular language. In doing so, they achieve an overall accuracy of 98%, far surpassing the 45% of . BIBREF153 also adopts such “discriminative training” to make use of negative evidence in . BIBREF435 observed that general-purpose approaches to typically use a character representation of text, but successful approaches for closely-related languages, varieties, and dialects seem to favor a word-based representation or higher-order (e.g. 4-grams, 5-grams, and even 6-grams) that often cover whole words BIBREF429 , BIBREF98 , BIBREF278 , BIBREF343 . The study compared character with word-based representations for over varieties of Spanish, Portuguese and French, and found that word-level models performed better for varieties of Spanish, but character models perform better in the case of Portuguese and French. To train accurate and robust systems that discriminate between language varieties or similar languages, models should ideally be able to capture not only lexical but more abstract systemic differences between languages. One way to achieve this, is by using features that use de-lexicalized text representations (e.g. by substituting named entities or content words by placeholders), or at a higher level of abstraction, using POS tags or other morphosyntactic information BIBREF70 , BIBREF390 , BIBREF43 , or even adversarial machine learning to modify the learned representations to remove such artefacts BIBREF358 . Finally, an interesting research direction could be to combine work on closely-related languages with the analysis of regional or dialectal differences in language use BIBREF436 , BIBREF437 , BIBREF438 , BIBREF432 . In recent years, there has been a significant increase of interest in the computational processing of Arabic. This is evidenced by a number of research papers in several NLP tasks and applications including the identification/discrimination of Arabic dialects BIBREF41 , BIBREF78 . Arabic is particularly interesting for researchers interested in language variation due to the fact that the language is often in a diaglossic situation, in which the standard form (Modern Standard Arabic or “MSA”) coexists with several regional dialects which are used in everyday communication. Among the studies published on the topic of Arabic , BIBREF41 proposed a supervised approach to distinguish between MSA and Egyptian Arabic at the sentence level, and achieved up to 85.5% accuracy over an Arabic online commentary dataset BIBREF379 . BIBREF433 achieved higher results over the same dataset using a linear-kernel SVM classifier. 
BIBREF78 compiled a dataset containing MSA, Egyptian Arabic, Gulf Arabic and Levantine Arabic, and used it to investigate three classification tasks: (1) MSA and dialectal Arabic; (2) four-way classification – MSA, Egyptian Arabic, Gulf Arabic, and Levantine Arabic; and (3) three-way classification – Egyptian Arabic, Gulf Arabic, and Levantine Arabic. BIBREF439 explores the use of sentence-level Arabic dialect identification as a pre-processor for MT, customizing the selection of the MT model used to translate a given sentence according to the dialect it uses. In performing dialect-specific MT, the authors achieve an improvement of 1.0% BLEU score compared with a baseline system which does not differentiate between Arabic dialects. Finally, in addition to the above-mentioned dataset of BIBREF379, there are a number of notable multi-dialect corpora of Arabic: a multi-dialect corpus of broadcast speech used in the ADI shared task BIBREF440; a multi-dialect corpus of (informal) written Arabic containing newspaper comments and Twitter data BIBREF441; a parallel corpus of 2,000 sentences in MSA, Egyptian Arabic, Tunisian Arabic, Jordanian Arabic, Palestinian Arabic, and Syrian Arabic, in addition to English BIBREF442; a corpus of sentences in 18 Arabic dialects (corresponding to 18 different Arabic-speaking countries) based on data manually sourced from web forums BIBREF75; and finally two recently compiled multi-dialect corpora containing microblog posts from Twitter BIBREF241, BIBREF443. While not specifically targeted at identifying language varieties, BIBREF355 made the critical observation that, when naively trained, language identification systems tend to perform most poorly on language varieties from the lowest socio-economic demographics (focusing particularly on the case of English), as these varieties tend to be most under-represented in training corpora. If, as a research community, we are interested in the social equitability of our systems, it is critical that we develop datasets that are truly representative of the global population, to better quantify and remove this effect. To this end, BIBREF355 detail a method for constructing a more representative dataset, and demonstrate the impact of training on such a dataset in terms of alleviating socio-economic bias. Domain-specific One approach to language identification is to build a generic language identifier that aims to correctly identify the language of a text without any information about the source of the text. Some work has specifically targeted language identification across multiple domains, learning characteristics of languages that are consistent between different sources of text BIBREF150. However, there are often domain-specific features that are useful for identifying the language of a text. In this survey, our primary focus has been on language identification of digitally-encoded text, using only the text itself as evidence on which to base the prediction of the language. Within a text, there can sometimes be domain-specific peculiarities that can be used for language identification. For example, BIBREF399 investigates language identification of user-to-user messages in the eBay e-commerce portal. He finds that using only the first two and last two words of a message is sufficient for identifying the language of a message. Conclusions This article has presented a comprehensive survey on language identification of digitally-encoded text. We have shown that language identification is a rich, complex, and multi-faceted problem that has engaged a wide variety of research communities.
Language identification accuracy is critical as it is often the first step in longer text processing pipelines, so errors made in language identification will propagate and degrade the performance of later stages. Under controlled conditions, such as limiting the number of languages to a small set of Western European languages and using long, grammatical, and structured text such as government documents as training data, it is possible to achieve near-perfect accuracy. This led many researchers to consider language identification a solved problem, as argued by BIBREF2. However, language identification becomes much harder when taking into account the peculiarities of real-world data, such as very short documents (e.g. search engine queries), non-linguistic “noise” (e.g. HTML markup), non-standard use of language (e.g. as seen in social media data), and mixed-language documents (e.g. forum posts in multilingual web forums). Modern approaches to language identification are generally data-driven and are based on comparing new documents with models of each target language learned from data. The types of models as well as the sources of training data used in the literature are diverse, and work to date has not compared and evaluated these in a systematic manner, making it difficult to draw broader conclusions about what the “best” method for language identification actually is. We have attempted to synthesize results to date to identify a set of “best practices”, but these should be treated as guidelines and should always be considered in the broader context of a target application. Existing work on language identification serves to illustrate that the scope and depth of the problem are much greater than they may first appear. In the section on open issues, we discussed open issues in language identification, identifying the key challenges and outlining opportunities for future research. Far from being a solved problem, aspects of language identification make it an archetypal learning task with subtleties that could be tackled by future work on supervised learning, representation learning, multi-task learning, domain adaptation, multi-label classification and other subfields of machine learning. We hope that this paper can serve as a reference point for future work in the area, both by providing insight into work to date and by pointing towards the key aspects that merit further investigation. This research was supported in part by the Australian Research Council, the Kone Foundation and the Academy of Finland. We would like to thank Kimmo Koskenniemi for many valuable discussions and comments concerning the early phases of the features and the methods sections.
Answer with content missing: (Names of many identifiers missing) TextCat, ChromeCLD, LangDetect, langid.py, whatlang, whatthelang, YALI, LDIG, Polyglot 3000, Lextek Language Identifier and Open Xerox Language Identifier.
4c026715ee365c709381c5da770bdc8297eed19f
4c026715ee365c709381c5da770bdc8297eed19f_0
Q: How is "hirability" defined? Text: Introduction Among assessment methods, the job interview remains the most common way to evaluate candidates. The interview can be done via phone, live video, face to face, or more recently asynchronous video interview. For the latter, candidates connect to a platform, and record themselves while answering a set of questions chosen by the recruiter. The platform then allows several recruiters to evaluate the candidate, to discuss among themselves and possibly to invite the candidate to a face-to-face interview. Recruiters choose to use these platforms because it gives them access to a larger pool of candidates, and it speeds up the application processing time. In addition, it allows candidates to do the interview whenever and wherever it suits them the most. However, given a large number of these asynchronous interviews it may quickly become unmanageable for recruiters. The highly structured characteristic of asynchronous video interviews (same questions, same amount of time per candidate) enhances their predictive validity, and reduces inter-recruiter variability BIBREF0 . Moreover, recent advances in Social Signal Processing (SSP) BIBREF1 have enabled automated candidate assessment BIBREF2 , and companies have already started deploying solutions serving that purpose. However, previous studies used corpora of simulated interviews with limited sizes. The work proposed in this paper relies on a corpus that has been built in collaboration with a company and that consists of more than 7000 real job interviews for 475 open positions. The size of this corpus enables the exploration of emerging models such as deep learning models, that are known to be difficult to deploy for Social Computing because of the difficulty to obtain large annotations of social behaviors. Based on those facts, we propose HireNet, a new hierarchical attention neural network for the purpose of automatically classifying candidates into two classes: hirable and not hirable. Our model aims to assist recruiters in the selection process. It does not aim to make any automatic decision about candidate selection. First, this model was built to mirror the sequential and hierarchical structure of an interview assessment: recruiters watch a sequence of questions and answers, which are themselves sequences of words or behavioral signals. Second, the HireNet model integrates the context of the open position (questions during the interview and job title) in order both to determine the relative importance between question-answer pairs and to highlight important behavioral cues with regard to a question. Third, HireNet attention mechanisms enhance the interpretability of our model for each modality. In fact, they provide a way for recruiters to validate and trust the model through visualization, and possibly for candidates to locate their strengths or areas of improvement in an interview. In this paper, we first present an overview of the related works for automatic video interview assessment. Then we go through the construction and the underlying hypotheses of HireNet, our neural model for asynchronous video interview assessment. After, we discuss the binary classification results of our model compared to various baselines, and show salient interview slices highlighted by the integrated attention mechanisms. Finally we conclude and discuss the future directions of our study. 
Databases To the best of our knowledge, only one corpus of interviews with real open positions has been collected and is subject to automatic analysis BIBREF3 . This corpus consists of face-to-face job interviews for a marketing short assignment whose candidates are mainly students. There are video corpora of face-to-face mock interviews that include two corpora built at the Massachusetts Institute of Technology BIBREF4 , BIBREF5 , and a corpus of students in services related to hospitality BIBREF6 . Many corpora of simulated asynchronous video interviews have also been built: a corpus of employees BIBREF7 , a corpus of students from Bangalore University BIBREF8 and a corpus collected through the use of crowdsourcing tools BIBREF2 . Some researchers are also interested in online video resumes and have constituted a corpus of video CVs from YouTube BIBREF9 . A first impressions challenge dataset was also supplemented by hirability annotation BIBREF10 . Some corpora are annotated by experts or students in psychology BIBREF7 , BIBREF2 , BIBREF3 , BIBREF11 . Other corpora have used crowdsourcing platforms or naive observers BIBREF8 for annotation. Table TABREF2 contains a summary of the corpora of job interviews used in previous works. Machine learning approaches for automatic analysis of video job interview Features Recent advances in SSP have offered toolboxes to extract features from audio BIBREF13 and video streams BIBREF14 . As asynchronous job interviews are videos, features from each modality (verbal content, audio and video) have to be extracted frame by frame in order to build a classification model. Audio cues consist mainly of prosody features (fundamental frequency, intensity, mel-frequency cepstral coefficients, etc) and speaking activity (pauses, silences, short utterances, etc) BIBREF15 , BIBREF12 . Features derived from facial expressions (facial actions units, head rotation and position, gaze direction, etc) constitute the most extracted visual cues BIBREF2 . Finally, advances in automatic speech recognition have enabled researchers to use the verbal content of candidates. In order to describe the verbal content, researchers have used lexical statistics (number of words, number of unique words, etc), dictionaries (Linguistic Inquiry Word Count) BIBREF12 , topic modeling BIBREF5 , bag of words or more recently document embedding BIBREF7 . Representation Once features are extracted frame by frame, the problem of temporality has to be addressed. The most common approach is to simplify the temporal aspect by collapsing the time dimension using statistical functions (e.g. mean, standard deviation, etc). However, the lack of sequence modeling can lead to the loss of some important social signals such as emphasis by raising one's eyebrows followed by a smile BIBREF16 . Moreover co-occurrences of events are not captured by this representation. Thus, a distinction between a fake smile (activation of action unit 12) and a true smile (activation of action units 2, 4 and 12) is impossible BIBREF17 without modeling co-occurrences. To solve the problem of co-occurrences, the representation of visual words, audio words or visual audio words has been proposed BIBREF2 , BIBREF7 , BIBREF12 . The idea is to consider the snapshot of each frame as a word belonging to a specific dictionary. In order to obtain this codebook, an algorithm of unsupervised clustering is used to cluster common frames. 
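A minimal sketch of this codebook construction, including the TF-IDF step described just below, is given here. The frame features, the codebook size of 32 clusters, and the use of scikit-learn are illustrative assumptions rather than the exact pipeline of the cited works.

```python
# Minimal sketch: "bag of audio/video words" via a k-means codebook + TF-IDF.
# Frame features, codebook size and data are toy assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

rng = np.random.default_rng(0)
# One array of per-frame feature vectors per candidate answer (toy data,
# e.g. 17 OpenFace-style descriptors per smoothed frame).
answers = [rng.normal(size=(120, 17)) for _ in range(10)]

# 1) Learn the codebook on all frames of the training set.
all_frames = np.vstack(answers)
codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(all_frames)

# 2) Map each answer to a "document" of cluster ids ("visual words").
docs = [" ".join(f"w{c}" for c in codebook.predict(a)) for a in answers]

# 3) TF-IDF over the visual-word documents gives a fixed-length representation
#    that can be fed to a standard document classifier.
vectorizer = TfidfVectorizer(token_pattern=r"w\d+")
X = vectorizer.fit_transform(docs)
print(X.shape)   # (n_answers, n_visual_words)
```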
Once we obtain the clusters, each class represents a "word" and we can easily map an ensemble of extracted frames to a document composed of these words. Then, the task is treated like a document classification task. Additionally, the representation is not learned jointly with the classification models, which can cause a loss of information. Modeling attempts and classification algorithms As video job interviews have multiple levels, an architectural choice has to be made accordingly. Some studies tried to find the most salient moments during an answer to a question BIBREF15, the most important questions BIBREF5 or to use all available videos independently BIBREF2 in order to predict the outcome of a job interview. Finally, when a sufficient representation is built, a classification or a regression model is trained. Regularized logistic regression (LASSO or Ridge), Random Forest and Support Vector Machines are the most widely used algorithms. From a practical point of view, manually annotating thin slices of videos is time consuming. On the other hand, considering each answer with the same label as the outcome of the interview is considerably less expensive, though some examples could be noisy. Indeed, a candidate with a negative outcome could have performed well on some questions. Furthermore, all these models do not take into account the sequentiality of social signals or questions. Neural networks and attention mechanisms in Social Computing Neural networks have proven to be successful in numerous Social Computing tasks. Multiple architectures in the field of neural networks have outperformed hand-crafted features for emotion detection in videos BIBREF18, facial landmark detection BIBREF14, and document classification BIBREF19. These results are explained by the capability of neural networks to automatically perform useful transformations on low level features. Moreover, some architectures such as Recurrent Neural Networks were especially tailored to represent sequences. In addition, attention mechanisms have proven to be successful in highlighting salient information, enhancing the performance and interpretability of neural networks. For example, in rapport detection, attention mechanisms allow the model to focus only on important moments during dyadic conversations BIBREF20. Finally, numerous models have been proposed to model the interactions between modalities in emotion detection tasks through attention mechanisms BIBREF21, BIBREF18. HireNet and underlying hypotheses We propose here a new model named HireNet, as in a neural network for hirability prediction. It is inspired by work carried out in neural networks for natural language processing and from the HierNet BIBREF19, in particular, which aims to model a hierarchy in a document. Following the idea that a document is composed of sentences and words, a job interview can be decomposed as a sequence of answers to questions, and the answers as a sequence of low level descriptors describing each answer. The model architecture (see Figure FIGREF6) is built on four hypotheses. The first hypothesis (H1) is the importance of the information provided by the sequentiality of the multimodal cues occurring in the interview. We thus choose to use a sequential model such as a recurrent neural network. The second hypothesis (H2) concerns the importance of the hierarchical structure of an interview: the decision to hire should be made at the candidate level, since candidates answer several questions during the interview.
We thus choose to introduce different levels of hierarchy in HireNet, namely the candidate level, the answer level and the word (or frame) level. The third hypothesis (H3) concerns the existence of salient information or social signals in a candidate's video interview: questions are not equally important and not all the parts of the answers have an equal influence on the recruiter's decision. We thus choose to introduce attention mechanisms in HireNet. The last hypothesis (H4) concerns the importance of contextual information such as questions and job titles. Therefore, HireNet includes vectors that encode this contextual information. Formalization We represent a video interview as an object composed of a job title INLINEFORM0 and INLINEFORM1 question-answer pairs INLINEFORM2 . In our model, the job title INLINEFORM3 is composed of a sequence of INLINEFORM4 words INLINEFORM5 where INLINEFORM6 denotes the length of the job title. In the same way, the INLINEFORM7 -th question INLINEFORM8 is a sequence of INLINEFORM9 words INLINEFORM10 where INLINEFORM11 denotes the number of words in the question INLINEFORM12 . INLINEFORM13 denotes the sequence of low level descriptors INLINEFORM14 describing the INLINEFORM15 -th answer. In our study, these low level descriptors could be embedded words, features extracted from an audio frame, or features extracted from a video frame. INLINEFORM16 denotes the length of the sequence of low level descriptors of the INLINEFORM17 -th answer. We decided to use a Gated Recurrent Unit (GRU) BIBREF22 to encode information from the job title, the questions and the answers. A GRU is able to encode sequences. It uses two mechanisms to solve the vanishing gradient problem, namely the reset gate, controlling how much past information is needed, and the update gate, determining how much past information has to be kept and the amount of new information to add. For formalization, we will denote by INLINEFORM0 the hidden state of the GRU at timestep INLINEFORM1 of the encoded sequence. This part of the model aims to encode the sequences of low level descriptors. As mentioned before, the sequences can represent a text, an audio stream or a video stream. A bidirectional GRU is used to obtain representations from both directions for each element of the sequence INLINEFORM0 . It contains the forward GRU, which reads the sequence from left to right, and the backward GRU, which reads the sequence from right to left: DISPLAYFORM0 DISPLAYFORM1 In the same way, an encoding for a given low level descriptor INLINEFORM0 is obtained by concatenating forward hidden states and backward hidden states: DISPLAYFORM0 Encoding sequences in a bidirectional fashion ensures the same amount of previous information for each element of INLINEFORM0 . Using a simple forward encoder could lead to biased attention vectors focusing only on the latest elements of the answers. In this study, the local context information corresponds to the questions INLINEFORM0 . In order to encode these sentences, we use a simple forward GRU. DISPLAYFORM0 And the final representation of a question is the hidden state of the last word in the question INLINEFORM0 (i.e. INLINEFORM1 ). In order to obtain a better representation of the candidate's answer, we aim to detect elements in the sequence which were salient for the classification task. Moreover, we hypothesize that the local context is highly important.
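Before turning to the attention mechanism, the bidirectional GRU encoding described above (forward and backward hidden states concatenated per time step) can be sketched in a few lines of PyTorch. The dimensions and batching details below are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: bidirectional GRU encoding of a sequence of low level
# descriptors, with forward and backward states concatenated per time step.
# Dimensions are illustrative assumptions (PyTorch).
import torch
import torch.nn as nn

feature_dim, hidden_dim = 35, 64           # e.g. eGeMAPS-style frame features
encoder = nn.GRU(feature_dim, hidden_dim, batch_first=True, bidirectional=True)

frames = torch.randn(8, 100, feature_dim)  # (batch, time, features), toy input
states, _ = encoder(frames)                # (8, 100, 2 * hidden_dim)
print(states.shape)                        # forward/backward states already concatenated
```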
Different behavioral signals can occur depending on the question type, which can also influence the way recruiters assess their candidates BIBREF23. An additive attention mechanism is proposed in order to extract the importance of each moment in the sequence representing the answer. DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 and INLINEFORM1 are weight matrices, INLINEFORM2 and INLINEFORM3 are weight vectors and INLINEFORM4 denotes the transpose of INLINEFORM5 . In order to have the maximum amount of information, we concatenate, at the second level, the representation of the local context and the answer representation. Moreover, we think that, given the way video interviews work, the more questions a candidate answers during the interview, the more he adapts and gets comfortable. In light of this, we decided to encode question-answer pairs as a sequence. Given INLINEFORM0 , we can use the same representation scheme as that of the low level encoder: DISPLAYFORM0 DISPLAYFORM1 We will also concatenate forward hidden states and backward hidden states: DISPLAYFORM0 We encode the job title the same way we encode the questions: DISPLAYFORM0 As done for the representation of the question, the final representation of the job title is the hidden state of the last word of INLINEFORM0 (i.e. INLINEFORM1 ). The importance of a question depends on the context of the interview, and specifically, on the type of job the candidate is applying for. For instance, a junior sales position interview could accord more importance to social skills, while an interview for a senior position could be more challenging on the technical side. Like low level attention, high level attention is composed of an additive attention mechanism: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 , INLINEFORM1 are weight matrices, INLINEFORM2 and INLINEFORM3 are weight vectors and INLINEFORM4 denotes the transpose of INLINEFORM5 . Finally, INLINEFORM6 summarizes all the information of the job interview. Once INLINEFORM0 is obtained, we use it as a representation in order to classify candidates: DISPLAYFORM0 where INLINEFORM0 is a weight matrix and INLINEFORM1 a weight vector. As the problem we are facing is that of binary classification, we chose to minimize the binary cross-entropy computed between INLINEFORM2 and the true labels of candidates INLINEFORM3 . Dataset We have decided to focus on only one specific type of job: sales positions. After filtering based on specific job titles from the ROME Database, a list of positions was selected and verified by the authors and an expert from Human Resources (HR). Finally, in collaboration with an HR industry actor, we have obtained a dataset of French video interviews comprising more than 475 positions and 7938 candidates. As they watch candidates' videos, recruiters can like, dislike, shortlist candidates, evaluate them on predefined criteria, or write comments. To simplify the task, we set up a binary classification: candidates who have been liked or shortlisted are considered part of the hirable class and others part of the not hirable class. If multiple annotators have annotated the same candidates, we proceed with a majority vote. In case of a draw, the candidate is considered hirable. It is important to note that the videos are quite different from what could be produced in a laboratory setup. Videos can be recorded from a webcam, a smartphone or a tablet, meaning noisy environments and low quality equipment are par for the course.
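Because the attention equations above survive only as INLINEFORM/DISPLAYFORM placeholders, the sketch below gives one standard way to realize a context-conditioned additive attention layer and the final sigmoid classifier in PyTorch. The exact parameterization (layer sizes, how the question or job-title vector enters the score) is an assumption and may differ from the authors' formulation; it is a hedged approximation, not the HireNet implementation.

```python
# Hedged sketch of context-conditioned additive attention plus the final
# binary classifier. One standard parameterization, not necessarily the
# exact one used in HireNet (the original equations are placeholders here).
import torch
import torch.nn as nn

class ContextAttention(nn.Module):
    """Additive attention over encoder states, conditioned on a context vector
    (the encoded question at the low level, the encoded job title at the high level)."""
    def __init__(self, state_dim, context_dim, att_dim=64):
        super().__init__()
        self.w_state = nn.Linear(state_dim, att_dim, bias=False)
        self.w_context = nn.Linear(context_dim, att_dim, bias=True)
        self.score = nn.Linear(att_dim, 1, bias=False)

    def forward(self, states, context):
        # states: (batch, steps, state_dim); context: (batch, context_dim)
        energy = torch.tanh(self.w_state(states) + self.w_context(context).unsqueeze(1))
        alpha = torch.softmax(self.score(energy).squeeze(-1), dim=1)  # (batch, steps)
        pooled = torch.bmm(alpha.unsqueeze(1), states).squeeze(1)     # attention-weighted sum
        return pooled, alpha

# Toy usage: pool bidirectional answer states with the question encoding as context.
answer_states = torch.randn(8, 100, 128)    # e.g. output of the BiGRU sketch above
question_vec = torch.randn(8, 64)           # last hidden state of the question GRU
low_att = ContextAttention(state_dim=128, context_dim=64)
answer_vec, alpha = low_att(answer_states, question_vec)

# Candidate-level representation -> hirable / not hirable probability.
classifier = nn.Sequential(nn.Linear(128, 1), nn.Sigmoid())
prob_hirable = classifier(answer_vec)        # trained with binary cross-entropy
print(prob_hirable.shape, alpha.shape)
```

The same module can be reused at the question-answer level with the job-title encoding as the context vector, mirroring the two levels of hierarchy described above.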
Due to these real conditions, feature extraction may fail for a single modality during a candidate's entire answer. One example is the detection of action units when the image has lighting problems. We decided to use all samples available in each modality separately. Some statistics about the dataset are available in Table TABREF33 . Although the candidates agreed to the use of their interviews, the dataset will not be released to public outside of the scope of this study due to the videos being personal data subject to high privacy constraints. Experimental settings The chosen evaluation metrics are precision, recall and F1-score of hirable class. They are well suited for binary classification and used in previous studies BIBREF2 . We split the dataset into a training set, a validation set for hyper-parameter selection based on the F1-score, and a test set for the final evaluation of each model. Each set constitutes respectively 80%, 10% and 10% of the full dataset. Extraction of social multimodal features For each modality, we selected low-level descriptors to be used as per-frame features, and sequence-level features to be used as the non-sequential representation of a candidate's whole answer for our non-sequential baselines. Word2vec: Pretrained word embeddings are used for the BoTW (Bag of Text Words, presented later in this section), and the neural networks. We used word embeddings of dimension 200 from BIBREF24 pretrained on a French corpus of Wikipedia. eGeMAPS: Our frame-level audio features are extracted using OpenSmile BIBREF25 . The configuration we use is the same one used to obtain the eGeMAPS BIBREF13 features. GeMAPS is a famous minimalistic set of features selected for their saliency in Social Computing, and eGeMAPS is its extended version. We extract the per-frame features prior to the aggregations performed to obtain the eGeMAPS representation. OpenFace: We extract frame-level visual features with OpenFace BIBREF14 , a state-of-the-art visual behavioral analysis software that yields various per-frame meaningful metrics. We chose to extract the position and rotation of the head, the intensity and presence of actions units, and the gaze direction. As different videos have different frame-rates, we decided to smooth values with a time-window of 0.5 s and an overlap of 0.25 s. The duration of 0.5 s is frequently used in the literature of Social Computing BIBREF26 and has been validated in our corpus as a suitable time-window size by annotating segments of social signals in a set of videos. Baselines First, we compare our model with several vote-based methods: i) Random vote baseline (One thousand random draws respecting the train dataset label balance were made. The F1-score is then averaged over those one thousand samples); ii) Majority Vote (This baseline is simply the position-wise majority label. Since our model could just be learning the origin open position for each candidate and its corresponding majority vote, we decided to include this baseline to show that our model reaches beyond those cues). Second, we compare our model with non-sequential baselines: i)-a Non-sequential text (we train a Doc2vec BIBREF27 representation on our corpus, and we use it as a representation of our textual inputs); i)-b Non-sequential audio (we take the eGeMAPS audio representation as described in BIBREF13 . That representation is obtained by passing the above descriptors into classical statistical functions and hand-crafted ad hoc measures applied over the whole answer. 
The reason we chose GeMAPS features is also that they were designed to ease comparability between different works in the field of Social Computing); i)-c Non-sequential video (our low-level video descriptors include binary descriptors and continuous descriptors. The mean, standard deviation, minimum, maximum, sum of positive gradients and sum of negative gradients have been successfully used for behavioral classification on media content in BIBREF28. We followed that representation scheme for our continuous descriptors. As for our discrete features, we chose to extract the mean, the number of active segments, and the active segment duration mean and standard deviation); ii) Bag of * Words (We also chose to compare our model to BIBREF2 's Bag of Audio and Video Words: we run a K-means algorithm on all the low-level frames in our dataset. Then we take our samples as documents, and our frames' predicted classes as words, and use a "Term Frequency-inverse Document Frequency" (TF-iDF) representation to model each sample). For each modality, we use the non-sequential representations mentioned above in a monomodal fashion as inputs to three classic learning algorithms (namely SVM, Ridge regression and Random Forest) with respective hyperparameter searches. The best of the three algorithms is selected. As these models do not have a hierarchical structure, we will train them to yield answer-wise labels (as opposed to the candidate-wise labeling performed by our hierarchical model). At test time, we average the output value of the algorithm for each candidate on the questions he answered. Third, the proposed sequential baselines aim at checking the four hypotheses described above: i) comparing the Bidirectional-GRU model with previously described non-sequential approaches aims to validate H1 on the contribution of sequentiality in an answer-wise representation; ii) the Hierarchical Averaged Network (HN_AVG) baseline adds the hierarchy in the model in order to verify H2 and H3 (we replace the attention mechanism by an averaging operator over all of the non-zero bidirectional GRU outputs); iii) the Hierarchical Self Attention Network (HN_SATT) is a self-attention version of HireNet which aims to see the actual effect of the added context information (H4). Multimodal models Given the text, audio, and video trained versions of our HireNet, we report two basic models performing multimodal inference, namely an early fusion approach and a late fusion approach. In the early fusion, we concatenate the last layer INLINEFORM0 of each modality as a representation, and proceed with the same test procedure as our non-sequential baselines. For our late fusion approach, the decision for a candidate is carried out using the average decision score INLINEFORM1 between the three modalities. Results and analyses First of all, Tables TABREF35 and TABREF39 show that most of our neural models surpass the vote-based baselines by a fair margin. In Table TABREF35, the F1-score increases going from the non-sequential baselines to the Bidirectional-GRU baselines for all the modalities, which supports H1. We can also see that HN_AVG is superior to the Bidirectional-GRU baselines for audio and text, validating H2 for those two modalities. This suggests that sequentiality and hierarchy are adequate inductive biases for a job interview assessment machine learning algorithm. As for H3, HN_SATT showed better results than HN_AVG for text and audio. In the end, our HireNet model surpasses HN_AVG and HN_SATT for each modality.
Consequently, a fair amount of useful information is present in the contextual frame of an interview, and this information can be leveraged through our model, as stated in H4. Audio and text monomodal models display better performance than video models. The same results were obtained in BIBREF2. Our attempts at fusing the multimodal information synthesized in the last layer of each HireNet model only slightly improved on the single modality models. Attention visualization Text In order to visualize the different words on which attention values were high, we computed new values of interest as was done in BIBREF20. As the sentence length changes between answers, we multiply every word's attention value ( INLINEFORM0 ) by the number of words in the answer, resulting in the relative attention of the word with respect to the sentence. In the same way, we multiply each question attention by the number of questions, resulting in the relative attention of the question with respect to the job interview. Then, in a similar way to BIBREF19, we compute INLINEFORM1 where INLINEFORM2 and INLINEFORM3 are respectively the values of interest for word INLINEFORM4 and question INLINEFORM5 . The list of the 20 most important words contains numerous names of banks and insurance companies (Natixis, Aviva, CNP, etc) and job knowledge vocabulary (mortgage, brokerage, tax exemption, etc), which means that their occurrence in candidates' answers plays an important role in hirability prediction. Video In order to visualize which moments were highlighted by attention mechanisms in a video, we display an example of the attention values for an answer in Figure FIGREF41. In this figure, the higher the attention value, the more the corresponding frames are considered task-relevant by the attention mechanism. As we can see, some peaks are present. Three thin slices with high attention values are presented. Some social signals that are important in a job interview are identified. We hypothesize that the smile detected in Frame 1 could be part of a tactic to please the interviewer known as deceptive ingratiation BIBREF29. In addition, Frames 2 and 3 are representative of stress signals from the candidate. In fact, lip suck was suggested to be linked to anxiety in BIBREF30. Audio The same visualization procedure used for video has been investigated for audio. As the audio signal is harder to visualize, we decided to describe the general pattern of audio attention weights. In most cases, when the prosody is homogeneous throughout the answer, attention weights are distributed uniformly and show no peaks, as opposed to what was observed for video. However, salient moments may appear, especially when candidates produce successive disfluencies. Thus, we have identified peaks where false starts, filler words, and repeated or restarted sentences occur. Questions We aim to explore the attention given to the different questions during the same interview. For this purpose, we randomly picked one open position from the test dataset comprising 40 candidates. Questions describing the interview and the corresponding averaged attention weights are displayed in Figure FIGREF42. First, it seems attention weight variability between questions is higher for the audio modality than for the text and video modalities. Second, the decrease in attention for Questions 5 and 6 could be explained by the fact that those questions are designed to assess "soft skills".
Third, peaks of attention weight for the audio modality on Questions 2 and 4 could be induced by the fact that these questions are job-centric. Indeed, it could be possible that disfluencies tend to appear more in job-centric questions or that prosody is more important in first impressions of competence. Conclusion and future directions HR industry actors nowadays offer tools to automatically assess candidates undergoing asynchronous video interviews. However, no studies have been published regarding these tools and their predictive validity. The contribution of this work is twofold. First, we evaluate the validity of previous approaches in real conditions (e.g. in-the-wild settings, true applications, real evaluations, etc). Second, we used deep learning methods in order to faithfully model the structure of asynchronous video interviews. In that sense, we proposed a new version of Hierarchical Attention Networks that is aware of the interview's contextual elements (questions and job title), called HireNet, which has shown better performance than previous approaches. First basic experiments on multimodal fusion have also been performed (early and late fusion). In future work, the obtained multimodal performance could be improved by leveraging more sophisticated multimodal fusion schemes. HireNet was evaluated on a corpus containing interviews for various jobs – 475 different positions – in the domain of sales. Theoretical findings from industrial organizational psychology suggest that some dimensions are common across different positions BIBREF31. However, we would like to extend the corpus to domains other than sales in order to i) validate the relevance of our model for other types of positions, ii) determine which competencies are common or not across jobs. In that sense, the use of multi-domain models BIBREF32 could be of great help. Our model currently considers two labels (“hirable” and “not hirable”). Extending our annotations to more fine-grained information (communication skills, social effectiveness, etc) could provide useful insights about the profile of a candidate and their potential fit with the position in question. Through the use of attention mechanisms, we aimed to highlight salient moments and questions for each modality, which contributes to the transparency and the interpretability of HireNet. Such transparency is very important for Human Resources practitioners to trust an automatic evaluation. Further investigations could be conducted on the proposed attention mechanisms: i) to confirm the saliency of the selected moments using the discipline of Industrial and Organizational psychology; ii) to assess the influence of the slices deemed important. This way, a tool to help candidates train for interviews could be developed. Last but not least, ethics and fairness are important considerations that deserve to be studied. In that sense, detection of individual and global bias should be prioritized in order to give useful feedback to practitioners. Furthermore, we are considering using adversarial learning as in BIBREF33 in order to ensure fairness during the training process. Acknowledgments This work was supported by the company EASYRECRUE, from whom the job interview videos were collected. We would like to thank Jeremy Langlais for his support and his help. We would also like to thank Valentin Barriere for his valuable input and the name given to the model, and Marc Jeanmougin and Nicolas Bouche for their help with the computing environment.
Finally, we thank Erin Douglas for proofreading the article.
candidates who have been liked or shortlisted are considered part of the hirable class
4ef3bfebabda83a6d5ca55d30de0e05893f241e3
4ef3bfebabda83a6d5ca55d30de0e05893f241e3_0
Q: Have the candidates given their consent to have their videos used for the research? Text: Introduction Among assessment methods, the job interview remains the most common way to evaluate candidates. The interview can be done via phone, live video, face to face, or more recently asynchronous video interview. For the latter, candidates connect to a platform, and record themselves while answering a set of questions chosen by the recruiter. The platform then allows several recruiters to evaluate the candidate, to discuss among themselves and possibly to invite the candidate to a face-to-face interview. Recruiters choose to use these platforms because it gives them access to a larger pool of candidates, and it speeds up the application processing time. In addition, it allows candidates to do the interview whenever and wherever it suits them the most. However, given a large number of these asynchronous interviews it may quickly become unmanageable for recruiters. The highly structured characteristic of asynchronous video interviews (same questions, same amount of time per candidate) enhances their predictive validity, and reduces inter-recruiter variability BIBREF0 . Moreover, recent advances in Social Signal Processing (SSP) BIBREF1 have enabled automated candidate assessment BIBREF2 , and companies have already started deploying solutions serving that purpose. However, previous studies used corpora of simulated interviews with limited sizes. The work proposed in this paper relies on a corpus that has been built in collaboration with a company and that consists of more than 7000 real job interviews for 475 open positions. The size of this corpus enables the exploration of emerging models such as deep learning models, that are known to be difficult to deploy for Social Computing because of the difficulty to obtain large annotations of social behaviors. Based on those facts, we propose HireNet, a new hierarchical attention neural network for the purpose of automatically classifying candidates into two classes: hirable and not hirable. Our model aims to assist recruiters in the selection process. It does not aim to make any automatic decision about candidate selection. First, this model was built to mirror the sequential and hierarchical structure of an interview assessment: recruiters watch a sequence of questions and answers, which are themselves sequences of words or behavioral signals. Second, the HireNet model integrates the context of the open position (questions during the interview and job title) in order both to determine the relative importance between question-answer pairs and to highlight important behavioral cues with regard to a question. Third, HireNet attention mechanisms enhance the interpretability of our model for each modality. In fact, they provide a way for recruiters to validate and trust the model through visualization, and possibly for candidates to locate their strengths or areas of improvement in an interview. In this paper, we first present an overview of the related works for automatic video interview assessment. Then we go through the construction and the underlying hypotheses of HireNet, our neural model for asynchronous video interview assessment. After, we discuss the binary classification results of our model compared to various baselines, and show salient interview slices highlighted by the integrated attention mechanisms. Finally we conclude and discuss the future directions of our study. 
Databases To the best of our knowledge, only one corpus of interviews with real open positions has been collected and is subject to automatic analysis BIBREF3 . This corpus consists of face-to-face job interviews for a marketing short assignment whose candidates are mainly students. There are video corpora of face-to-face mock interviews that include two corpora built at the Massachusetts Institute of Technology BIBREF4 , BIBREF5 , and a corpus of students in services related to hospitality BIBREF6 . Many corpora of simulated asynchronous video interviews have also been built: a corpus of employees BIBREF7 , a corpus of students from Bangalore University BIBREF8 and a corpus collected through the use of crowdsourcing tools BIBREF2 . Some researchers are also interested in online video resumes and have constituted a corpus of video CVs from YouTube BIBREF9 . A first impressions challenge dataset was also supplemented by hirability annotation BIBREF10 . Some corpora are annotated by experts or students in psychology BIBREF7 , BIBREF2 , BIBREF3 , BIBREF11 . Other corpora have used crowdsourcing platforms or naive observers BIBREF8 for annotation. Table TABREF2 contains a summary of the corpora of job interviews used in previous works. Machine learning approaches for automatic analysis of video job interview Features Recent advances in SSP have offered toolboxes to extract features from audio BIBREF13 and video streams BIBREF14 . As asynchronous job interviews are videos, features from each modality (verbal content, audio and video) have to be extracted frame by frame in order to build a classification model. Audio cues consist mainly of prosody features (fundamental frequency, intensity, mel-frequency cepstral coefficients, etc) and speaking activity (pauses, silences, short utterances, etc) BIBREF15 , BIBREF12 . Features derived from facial expressions (facial actions units, head rotation and position, gaze direction, etc) constitute the most extracted visual cues BIBREF2 . Finally, advances in automatic speech recognition have enabled researchers to use the verbal content of candidates. In order to describe the verbal content, researchers have used lexical statistics (number of words, number of unique words, etc), dictionaries (Linguistic Inquiry Word Count) BIBREF12 , topic modeling BIBREF5 , bag of words or more recently document embedding BIBREF7 . Representation Once features are extracted frame by frame, the problem of temporality has to be addressed. The most common approach is to simplify the temporal aspect by collapsing the time dimension using statistical functions (e.g. mean, standard deviation, etc). However, the lack of sequence modeling can lead to the loss of some important social signals such as emphasis by raising one's eyebrows followed by a smile BIBREF16 . Moreover co-occurrences of events are not captured by this representation. Thus, a distinction between a fake smile (activation of action unit 12) and a true smile (activation of action units 2, 4 and 12) is impossible BIBREF17 without modeling co-occurrences. To solve the problem of co-occurrences, the representation of visual words, audio words or visual audio words has been proposed BIBREF2 , BIBREF7 , BIBREF12 . The idea is to consider the snapshot of each frame as a word belonging to a specific dictionary. In order to obtain this codebook, an algorithm of unsupervised clustering is used to cluster common frames. 
Once we obtain the clusters, each class represents a "word" and we can easily map an ensemble of extracted frames to a document composed of these words. Then, the task is treated like a document classification. Additionally, the representation is not learned jointly with the classification models which can cause a loss of information. Modeling attempts and classification algorithms As video job interviews have multiple levels, an architectural choice has to be made accordingly. Some studies tried to find the most salient moments during an answer to a question BIBREF15 , the most important questions BIBREF5 or to use all available videos independently BIBREF2 in order to predict the outcome of a job interview. Finally, when a sufficient representation is built, a classification or a regression model is trained. Regularized logistic regression (LASSO or Ridge), Random Forest and Support Vector Machines are the most widely used algorithms. From a practical point of view, manually annotating thin slices of videos is time consuming. On the other side, considering each answer with the same label as the outcome of the interview is considerably less expensive, though some examples could be noisy. Indeed, a candidate with a negative outcome could have performed well on some questions. Furthermore, all these models do not take into account the sequentiality of social signals or questions. Neural networks and attention mechanisms in Social Computing Neural networks have proven to be successful in numerous Social Computing tasks. Multiple architectures in the field of neural networks have outperformed hand crafted features for emotion detection in videos BIBREF18 , facial landmarks detection BIBREF14 , document classification BIBREF19 These results are explained by the capability of neural networks to automatically perform useful transformations on low level features. Moreover, some architectures such as Recurrent Neural Networks were especially tailored to represent sequences. In addition, attention mechanisms have proven to be successful in highlighting salient information enhancing the performance and interpretability of neural networks. For example, in rapport detection, attention mechanisms allow to focus only on important moments during dyadic conversations BIBREF20 . Finally, numerous models have been proposed to model the interactions between modalities in emotion detection tasks through attention mechanisms BIBREF21 , BIBREF18 . HireNet and underlying hypotheses We propose here a new model named HireNet, as in a neural network for hirability prediction. It is inspired by work carried out in neural networks for natural language processing and from the HierNet BIBREF19 , in particular, which aims to model a hierarchy in a document. Following the idea that a document is composed of sentences and words, a job interview could be decomposed, as a sequence of answers to questions, and the answers, as a sequence of low level descriptors describing each answer. The model architecture (see Figure FIGREF6 ) is built relying on four hypotheses. The first hypothesis (H1) is the importance of the information provided by the sequentiality of the multimodal cues occurring in the interview. We thus choose to use a sequential model such as a recurrent neural network. The second hypothesis (H2) concerns the importance of the hierarchical structure of an interview: the decision of to hire should be performed at the candidate level, the candidates answering several questions during the interview. 
We thus choose to introduce different levels of hierarchy in HireNet namely the candidate level, the answer level and the word (or frame) level. The third hypothesis (H3) concerns the existence of salient information or social signals in a candidate's video interview: questions are not equally important and not all the parts of the answers have an equal influence on the recruiter's decision. We thus choose to introduce attention mechanisms in HireNet. The last hypothesis (H4) concerns the importance of contextual information such as questions and job titles. Therefore, HireNet includes vectors that encode this contextual information. Formalization We represent a video interview as an object composed of a job title INLINEFORM0 and INLINEFORM1 question-answer pairs INLINEFORM2 . In our model, the job title INLINEFORM3 is composed of a sequence of INLINEFORM4 words INLINEFORM5 where INLINEFORM6 denotes the length of the job title. In a same way, the INLINEFORM7 -th question INLINEFORM8 is a sequence of INLINEFORM9 words INLINEFORM10 where INLINEFORM11 denotes the number of words in the question INLINEFORM12 . INLINEFORM13 denotes the sequence of low level descriptors INLINEFORM14 describing the INLINEFORM15 -th answer. In our study these low level descriptors could be embedded words, features extracted from an audio frame, or features extracted from a video frame. INLINEFORM16 denotes the length of the sequence of low level descriptors of the INLINEFORM17 -th answer. We decided to use a Gated Recurrent Unit (GRU) BIBREF22 to encode information from the job title, the questions and the answers. A GRU is able to encode sequences. It uses two mechanisms to solve the vanishing gradient problem, namely the reset gate, controlling how much past information is needed; and the update gate, determining how much past information has to be kept and the amount of new information to add. For formalization, we will denote by INLINEFORM0 the hidden state of GRU at timestep INLINEFORM1 of the encoded sequence. This part of the model aims to encode the sequences of low level descriptors. As mentioned before, the sequences can represent a text, an audio stream or a video stream. A bidirectional GRU is used to obtain representations from both directions for each element of the sequence INLINEFORM0 . It contains the forward GRU which reads the sequence from left to right and backward GRU which reads the sequence from right to left: DISPLAYFORM0 DISPLAYFORM1 In the same way, an encoding for a given low level descriptor INLINEFORM0 is obtained by concatenating forward hidden states and backward hidden states: DISPLAYFORM0 Encoding sequences in a bidirectional fashion ensures the same amount of previous information for each element of INLINEFORM0 . Using a simple forward encoder could lead to biased attention vectors focusing only on the latest elements of the answers. In this study, the local context information corresponds to the questions INLINEFORM0 . In order to encode these sentences, we use a simple forward GRU. DISPLAYFORM0 And the final representation of a question is the hidden state of the last word in the question INLINEFORM0 (i.e. INLINEFORM1 ). In order to obtain a better representation of of the candidate's answer, we aim to detect elements in the sequence which were salient for the classification task. Moreover, we hypothesize that the local context is highly important. 
Different behavioral signals can occur depending on the question type and it can also influence the way recruiters assess their candidates BIBREF23 . An additive attention mechanism is proposed in order to extract the importance of each moment in the sequence representing the answer. DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 and INLINEFORM1 are weight matrices, INLINEFORM2 and INLINEFORM3 are weight vectors and INLINEFORM4 denotes the transpose of INLINEFORM5 . In order to have the maximum amount of information, we concatenate at the second level, the representation of the local context and the answer representation. Moreover, we think that given the way video interviews work, the more questions a candidate answers during the interview, the more he adapts and gets comfortable. In the light of this, we decided to encode question-answer pairs as a sequence. Given INLINEFORM0 , we can use the same representation scheme as that of the low level encoder: DISPLAYFORM0 DISPLAYFORM1 We will also concatenate forward hidden states and backward hidden states: DISPLAYFORM0 We encode the job title the same way we encode the questions : DISPLAYFORM0 As done for the representation of the question, the final representation of the job title is the hidden state of the last word of INLINEFORM0 (i.e. INLINEFORM1 ). The importance of a question depends on the context of the interview, and specifically, on the type of job the candidate is applying for. For instance, a junior sales position interview could accord more importance to the social skills, while an interview for a senior position could be more challenging on the technical side. Like low level attention, high level attention is composed of an additive attention mechanism: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 , INLINEFORM1 are weight matrices, INLINEFORM2 and INLINEFORM3 are weight vectors and INLINEFORM4 denotes the transpose of INLINEFORM5 . Finally INLINEFORM6 summarizes all the information of the job interview. Once INLINEFORM0 is obtained, we use it as representation in order to classify candidates: DISPLAYFORM0 where INLINEFORM0 is a weight matrix and INLINEFORM1 a weight vector. As the problem we are facing is that of a binary classification, we chose to minimize the binary cross-entropy computed between INLINEFORM2 and true labels of candidates INLINEFORM3 . Dataset We have decided to focus on only one specific type of job: sales positions. After filtering based on specific job titles from the ROME Database, a list of positions was selected and verified by the authors and an expert from the Human Resources (HR). Finally, in a collaboration with an HR industry actor, we have obtained a dataset of French video interviews comprising more than 475 positions and 7938 candidates. As they watch candidates' videos, recruiters can like, dislike, shortlist candidates, evaluate them on predefined criteria, or write comments. To simplify the task, we set up a binary classification: candidates who have been liked or shortlisted are considered part of the hirable class and others part of the not hirable class. If multiple annotators have annotated the same candidates, we proceed with a majority vote. In case of a draw, the candidate is considered hirable. It is important to note that the videos are quite different from what could be produced in a laboratory setup. Videos can be recorded from a webcam, a smartphone or a tablet., meaning noisy environments and low quality equipment are par for the course. 
Due to these real conditions, feature extraction may fail for a single modality during a candidate's entire answer. One example is the detection of action units when the image has lighting problems. We decided to use all samples available in each modality separately. Some statistics about the dataset are available in Table TABREF33 . Although the candidates agreed to the use of their interviews, the dataset will not be released to public outside of the scope of this study due to the videos being personal data subject to high privacy constraints. Experimental settings The chosen evaluation metrics are precision, recall and F1-score of hirable class. They are well suited for binary classification and used in previous studies BIBREF2 . We split the dataset into a training set, a validation set for hyper-parameter selection based on the F1-score, and a test set for the final evaluation of each model. Each set constitutes respectively 80%, 10% and 10% of the full dataset. Extraction of social multimodal features For each modality, we selected low-level descriptors to be used as per-frame features, and sequence-level features to be used as the non-sequential representation of a candidate's whole answer for our non-sequential baselines. Word2vec: Pretrained word embeddings are used for the BoTW (Bag of Text Words, presented later in this section), and the neural networks. We used word embeddings of dimension 200 from BIBREF24 pretrained on a French corpus of Wikipedia. eGeMAPS: Our frame-level audio features are extracted using OpenSmile BIBREF25 . The configuration we use is the same one used to obtain the eGeMAPS BIBREF13 features. GeMAPS is a famous minimalistic set of features selected for their saliency in Social Computing, and eGeMAPS is its extended version. We extract the per-frame features prior to the aggregations performed to obtain the eGeMAPS representation. OpenFace: We extract frame-level visual features with OpenFace BIBREF14 , a state-of-the-art visual behavioral analysis software that yields various per-frame meaningful metrics. We chose to extract the position and rotation of the head, the intensity and presence of actions units, and the gaze direction. As different videos have different frame-rates, we decided to smooth values with a time-window of 0.5 s and an overlap of 0.25 s. The duration of 0.5 s is frequently used in the literature of Social Computing BIBREF26 and has been validated in our corpus as a suitable time-window size by annotating segments of social signals in a set of videos. Baselines First, we compare our model with several vote-based methods: i) Random vote baseline (One thousand random draws respecting the train dataset label balance were made. The F1-score is then averaged over those one thousand samples); ii) Majority Vote (This baseline is simply the position-wise majority label. Since our model could just be learning the origin open position for each candidate and its corresponding majority vote, we decided to include this baseline to show that our model reaches beyond those cues). Second, we compare our model with non-sequential baselines: i)-a Non-sequential text (we train a Doc2vec BIBREF27 representation on our corpus, and we use it as a representation of our textual inputs); i)-b Non-sequential audio (we take the eGeMAPS audio representation as described in BIBREF13 . That representation is obtained by passing the above descriptors into classical statistical functions and hand-crafted ad hoc measures applied over the whole answer. 
The reason we chose GeMAPS features is also that they were designed to ease comparability between different works in the field of Social Computing); i)-c Non-sequential video (our low-level video descriptors include binary descriptors and continuous descriptors. The mean, standard deviation, minimum, maximum, sum of positive gradients and sum of negative gradients have been successfully used for behavioral classification on media content in BIBREF28 . We followed that representation scheme for our continuous descriptors. As for our discrete features, we chose to extract the mean, the number of active segments, and the mean and standard deviation of the active segment duration); ii) Bag of * Words (We also chose to compare our model to BIBREF2 's Bag of Audio and Video Words: we run a K-means algorithm on all the low-level frames in our dataset. Then we take our samples as documents and our frames' predicted classes as words, and use a Term Frequency-Inverse Document Frequency (TF-IDF) representation to model each sample). For each modality, we use the non-sequential representations mentioned above in a monomodal fashion as inputs to three classic learning algorithms (namely SVM, Ridge regression and Random Forest) with respective hyperparameter searches. The best of the three algorithms is selected. As these models do not have a hierarchical structure, we train them to yield answer-wise labels (as opposed to the candidate-wise labeling performed by our hierarchical model). At test time, we average the output value of the algorithm for each candidate over the questions they answered. Third, the proposed sequential baselines aim to check the four hypotheses described above: i) comparing the Bidirectional-GRU model with the previously described non-sequential approaches aims to validate H1 on the contribution of sequentiality in an answer-wise representation; ii) the Hierarchical Averaged Network (HN_AVG) baseline adds hierarchy to the model in order to verify H2 and H3 (we replace the attention mechanism by an averaging operator over all of the non-zero bidirectional GRU outputs); iii) the Hierarchical Self Attention Network (HN_SATT) is a self-attention version of HireNet which aims to assess the actual effect of the added context information (H4). Multimodal models Given the text, audio, and video trained versions of our HireNet, we report two basic models performing multimodal inference, namely an early fusion approach and a late fusion approach. In the early fusion, we concatenate the last layer INLINEFORM0 of each modality as a representation, and proceed with the same test procedure as for our non-sequential baselines. For our late fusion approach, the decision for a candidate is carried out using the average decision score INLINEFORM1 between the three modalities. Results and analyses First of all, Tables TABREF35 and TABREF39 show that most of our neural models fairly surpass the vote-based baselines. In Table TABREF35 , the F1-score increases going from the non-sequential baselines to the Bidirectional-GRU baselines for all modalities, which supports H1. We can also see that HN_AVG is superior to the Bidirectional-GRU baselines for audio and text, validating H2 for those two modalities. This suggests that sequentiality and hierarchy are adequate inductive biases for a job interview assessment machine learning algorithm. As for H3, HN_SATT showed better results than HN_AVG for text and audio. In the end, our HireNet model surpasses HN_AVG and HN_SATT for each modality. 
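To make the Bag of Audio/Video Words baseline described above more concrete, here is a rough sketch of how such a pipeline could be assembled with scikit-learn; the number of clusters and the final classifier are illustrative assumptions, not the exact settings used in the experiments.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

def bag_of_frame_words(answer_frames, labels, k=100):
    """answer_frames: list of (n_frames_i, n_feats) arrays, one per answer."""
    # 1. Learn a codebook over all low-level frames in the training set.
    codebook = KMeans(n_clusters=k, random_state=0).fit(np.vstack(answer_frames))
    # 2. Map each answer to a pseudo-document of cluster ids ("audio/visual words").
    docs = [" ".join(f"w{c}" for c in codebook.predict(frames)) for frames in answer_frames]
    # 3. TF-IDF over the pseudo-documents, then a classical classifier.
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(docs)
    clf = SVC(kernel="linear").fit(X, labels)
    return codebook, vectorizer, clf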
Consequently, a fair amount of useful information is present in the contextual frame of an interview, and this information can be leveraged through our model, as stated in H4. Audio and text monomodal models display better performance than video models. The same results were obtained in BIBREF2 . Our attempts at fusing the multimodal information synthesized in the last layer of each HireNet model only slightly improved on the single-modality models. Attention visualization Text In order to visualize the different words on which attention values were high, we computed new values of interest as was done in BIBREF20 . As the sentence length changes between answers, we multiply every word's attention value ( INLINEFORM0 ) by the number of words in the answer, resulting in the relative attention of the word with respect to the sentence. In the same way, we multiply each question attention by the number of questions, resulting in the relative attention of the question with respect to the job interview. Then, similarly to BIBREF19 , we compute INLINEFORM1 where INLINEFORM2 and INLINEFORM3 are respectively the values of interest for word INLINEFORM4 and question INLINEFORM5 . The list of the 20 most important words contains numerous names of banks and insurance companies (Natixis, Aviva, CNP, etc.) and job knowledge vocabulary (mortgage, brokerage, tax exemption, etc.), which means that their occurrence in candidates' answers plays an important role in hirability prediction. Video In order to visualize which moments were highlighted by attention mechanisms in a video, we display an example of the attention values for an answer in Figure FIGREF41 . In this figure, the higher the attention value, the more the corresponding frames are considered task-relevant by the attention mechanism. As we can see, some peaks are present. Three thin slices with high attention values are presented. Some social signals that are important in a job interview are identified. We hypothesize that the smile detected in Frame 1 could be part of a tactic to please the interviewer known as deceptive ingratiation BIBREF29 . In addition, Frames 2 and 3 are representative of stress signals from the candidate. In fact, lip suck was suggested to be linked to anxiety in BIBREF30 . Audio The same visualization procedure used for video has been investigated for audio. As the audio signal is harder to visualize, we decided to describe the general pattern of the audio attention weights. In most cases, when the prosody is homogeneous throughout the answer, attention weights are distributed uniformly and show no peaks, as opposed to what was observed for video. However, salient moments may appear, especially when candidates produce successive disfluencies. Thus, we have identified peaks where false starts, filler words, and repeated or restarted sentences occur. Questions We aim to explore the attention given to the different questions during the same interview. For this purpose, we randomly picked one open position from the test dataset comprising 40 candidates. The questions describing the interview and the corresponding averaged attention weights are displayed in Figure FIGREF42 . First, it seems that attention weight variability between questions is higher for the audio modality than for the text and video modalities. Second, the decrease in attention for Questions 5 and 6 could be explained by the fact that those questions are designed to assess "soft skills". 
Third, the peaks of attention weight for the audio modality on Questions 2 and 4 could be induced by the fact that these questions are job-centric. Indeed, it is possible that disfluencies tend to appear more in job-centric questions or that prosody is more important in first impressions of competence. Conclusion and future directions HR industry actors nowadays offer tools to automatically assess candidates undergoing asynchronous video interviews. However, no studies have been published regarding these tools and their predictive validity. The contribution of this work is twofold. First, we evaluate the validity of previous approaches in real conditions (e.g. in-the-wild settings, true applications, real evaluations, etc.). Second, we use deep learning methods in order to faithfully model the structure of asynchronous video interviews. In that sense, we proposed HireNet, a new version of Hierarchical Attention Networks that is aware of the interview's contextual elements (questions and job title), which has shown better performance than previous approaches. First basic experiments on multimodal fusion have also been performed (early and late fusion). In future work, the obtained multimodal performance could be improved by leveraging more sophisticated multimodal fusion schemes. HireNet was evaluated on a corpus containing interviews for various jobs – 475 different positions – in the domain of sales. Theoretical findings from industrial and organizational psychology suggest that some dimensions are common across different positions BIBREF31 . However, we would like to extend the corpus to domains other than sales in order to i) validate the relevance of our model for other types of positions and ii) determine which competencies are or are not common across jobs. In that sense, the use of multi-domain models BIBREF32 could be of great help. Our model currently considers two labels (“hirable” and “not hirable”). Extending our annotations to more fine-grained information (communication skills, social effectiveness, etc.) could provide useful insights about the profile of a candidate and their potential fit with the position in question. Through the use of attention mechanisms, we aimed to highlight salient moments and questions for each modality, which contributes to the transparency and interpretability of HireNet. Such transparency is very important for Human Resources practitioners to trust an automatic evaluation. Further investigations could be conducted on the proposed attention mechanisms: i) to confirm the saliency of the selected moments using insights from Industrial and Organizational psychology; ii) to assess the influence of the slices deemed important. This way, a tool to help candidates train for interviews could be developed. Last but not least, ethics and fairness are important considerations that deserve to be studied. In that sense, the detection of individual and global bias should be prioritized in order to give useful feedback to practitioners. Furthermore, we are considering using adversarial learning as in BIBREF33 in order to ensure fairness during the training process. Acknowledgments This work was supported by the company EASYRECRUE, from whom the job interview videos were collected. We would like to thank Jeremy Langlais for his support and his help. We would also like to thank Valentin Barriere for his valuable input and the name given to the model, and Marc Jeanmougin and Nicolas Bouche for their help with the computing environment. 
Finally, we thank Erin Douglas for proofreading the article.
Yes
db264e363f3b3aa83526952bef02f826dff70042
db264e363f3b3aa83526952bef02f826dff70042_0
Q: Do they analyze if their system has any bias? Text: Introduction Among assessment methods, the job interview remains the most common way to evaluate candidates. The interview can be done via phone, live video, face to face, or more recently asynchronous video interview. For the latter, candidates connect to a platform, and record themselves while answering a set of questions chosen by the recruiter. The platform then allows several recruiters to evaluate the candidate, to discuss among themselves and possibly to invite the candidate to a face-to-face interview. Recruiters choose to use these platforms because it gives them access to a larger pool of candidates, and it speeds up the application processing time. In addition, it allows candidates to do the interview whenever and wherever it suits them the most. However, given a large number of these asynchronous interviews it may quickly become unmanageable for recruiters. The highly structured characteristic of asynchronous video interviews (same questions, same amount of time per candidate) enhances their predictive validity, and reduces inter-recruiter variability BIBREF0 . Moreover, recent advances in Social Signal Processing (SSP) BIBREF1 have enabled automated candidate assessment BIBREF2 , and companies have already started deploying solutions serving that purpose. However, previous studies used corpora of simulated interviews with limited sizes. The work proposed in this paper relies on a corpus that has been built in collaboration with a company and that consists of more than 7000 real job interviews for 475 open positions. The size of this corpus enables the exploration of emerging models such as deep learning models, that are known to be difficult to deploy for Social Computing because of the difficulty to obtain large annotations of social behaviors. Based on those facts, we propose HireNet, a new hierarchical attention neural network for the purpose of automatically classifying candidates into two classes: hirable and not hirable. Our model aims to assist recruiters in the selection process. It does not aim to make any automatic decision about candidate selection. First, this model was built to mirror the sequential and hierarchical structure of an interview assessment: recruiters watch a sequence of questions and answers, which are themselves sequences of words or behavioral signals. Second, the HireNet model integrates the context of the open position (questions during the interview and job title) in order both to determine the relative importance between question-answer pairs and to highlight important behavioral cues with regard to a question. Third, HireNet attention mechanisms enhance the interpretability of our model for each modality. In fact, they provide a way for recruiters to validate and trust the model through visualization, and possibly for candidates to locate their strengths or areas of improvement in an interview. In this paper, we first present an overview of the related works for automatic video interview assessment. Then we go through the construction and the underlying hypotheses of HireNet, our neural model for asynchronous video interview assessment. After, we discuss the binary classification results of our model compared to various baselines, and show salient interview slices highlighted by the integrated attention mechanisms. Finally we conclude and discuss the future directions of our study. 
Databases To the best of our knowledge, only one corpus of interviews with real open positions has been collected and is subject to automatic analysis BIBREF3 . This corpus consists of face-to-face job interviews for a marketing short assignment whose candidates are mainly students. There are video corpora of face-to-face mock interviews that include two corpora built at the Massachusetts Institute of Technology BIBREF4 , BIBREF5 , and a corpus of students in services related to hospitality BIBREF6 . Many corpora of simulated asynchronous video interviews have also been built: a corpus of employees BIBREF7 , a corpus of students from Bangalore University BIBREF8 and a corpus collected through the use of crowdsourcing tools BIBREF2 . Some researchers are also interested in online video resumes and have constituted a corpus of video CVs from YouTube BIBREF9 . A first impressions challenge dataset was also supplemented by hirability annotation BIBREF10 . Some corpora are annotated by experts or students in psychology BIBREF7 , BIBREF2 , BIBREF3 , BIBREF11 . Other corpora have used crowdsourcing platforms or naive observers BIBREF8 for annotation. Table TABREF2 contains a summary of the corpora of job interviews used in previous works. Machine learning approaches for automatic analysis of video job interview Features Recent advances in SSP have offered toolboxes to extract features from audio BIBREF13 and video streams BIBREF14 . As asynchronous job interviews are videos, features from each modality (verbal content, audio and video) have to be extracted frame by frame in order to build a classification model. Audio cues consist mainly of prosody features (fundamental frequency, intensity, mel-frequency cepstral coefficients, etc) and speaking activity (pauses, silences, short utterances, etc) BIBREF15 , BIBREF12 . Features derived from facial expressions (facial actions units, head rotation and position, gaze direction, etc) constitute the most extracted visual cues BIBREF2 . Finally, advances in automatic speech recognition have enabled researchers to use the verbal content of candidates. In order to describe the verbal content, researchers have used lexical statistics (number of words, number of unique words, etc), dictionaries (Linguistic Inquiry Word Count) BIBREF12 , topic modeling BIBREF5 , bag of words or more recently document embedding BIBREF7 . Representation Once features are extracted frame by frame, the problem of temporality has to be addressed. The most common approach is to simplify the temporal aspect by collapsing the time dimension using statistical functions (e.g. mean, standard deviation, etc). However, the lack of sequence modeling can lead to the loss of some important social signals such as emphasis by raising one's eyebrows followed by a smile BIBREF16 . Moreover co-occurrences of events are not captured by this representation. Thus, a distinction between a fake smile (activation of action unit 12) and a true smile (activation of action units 2, 4 and 12) is impossible BIBREF17 without modeling co-occurrences. To solve the problem of co-occurrences, the representation of visual words, audio words or visual audio words has been proposed BIBREF2 , BIBREF7 , BIBREF12 . The idea is to consider the snapshot of each frame as a word belonging to a specific dictionary. In order to obtain this codebook, an algorithm of unsupervised clustering is used to cluster common frames. 
Once we obtain the clusters, each class represents a "word", and we can easily map an ensemble of extracted frames to a document composed of these words. The task is then treated as a document classification task. Additionally, the representation is not learned jointly with the classification models, which can cause a loss of information. Modeling attempts and classification algorithms As video job interviews have multiple levels, an architectural choice has to be made accordingly. Some studies tried to find the most salient moments during an answer to a question BIBREF15 , the most important questions BIBREF5 , or to use all available videos independently BIBREF2 in order to predict the outcome of a job interview. Finally, when a sufficient representation is built, a classification or a regression model is trained. Regularized logistic regression (LASSO or Ridge), Random Forest and Support Vector Machines are the most widely used algorithms. From a practical point of view, manually annotating thin slices of videos is time consuming. On the other hand, considering each answer with the same label as the outcome of the interview is considerably less expensive, though some examples could be noisy. Indeed, a candidate with a negative outcome could have performed well on some questions. Furthermore, none of these models take into account the sequentiality of social signals or questions. Neural networks and attention mechanisms in Social Computing Neural networks have proven to be successful in numerous Social Computing tasks. Multiple neural network architectures have outperformed hand-crafted features for emotion detection in videos BIBREF18 , facial landmark detection BIBREF14 and document classification BIBREF19 . These results are explained by the capability of neural networks to automatically perform useful transformations on low-level features. Moreover, some architectures such as Recurrent Neural Networks were especially tailored to represent sequences. In addition, attention mechanisms have proven to be successful in highlighting salient information, enhancing the performance and interpretability of neural networks. For example, in rapport detection, attention mechanisms make it possible to focus only on important moments during dyadic conversations BIBREF20 . Finally, numerous models have been proposed to model the interactions between modalities in emotion detection tasks through attention mechanisms BIBREF21 , BIBREF18 . HireNet and underlying hypotheses We propose here a new model named HireNet, as in a neural network for hirability prediction. It is inspired by work on neural networks for natural language processing, and in particular by HierNet BIBREF19 , which aims to model the hierarchy in a document. Following the idea that a document is composed of sentences and words, a job interview can be decomposed into a sequence of answers to questions, and each answer into a sequence of low-level descriptors. The model architecture (see Figure FIGREF6 ) is built on four hypotheses. The first hypothesis (H1) is the importance of the information provided by the sequentiality of the multimodal cues occurring in the interview. We thus choose to use a sequential model such as a recurrent neural network. The second hypothesis (H2) concerns the importance of the hierarchical structure of an interview: the decision to hire should be made at the candidate level, since candidates answer several questions during the interview. 
We thus choose to introduce different levels of hierarchy in HireNet, namely the candidate level, the answer level and the word (or frame) level. The third hypothesis (H3) concerns the existence of salient information or social signals in a candidate's video interview: questions are not equally important, and not all parts of the answers have an equal influence on the recruiter's decision. We thus choose to introduce attention mechanisms in HireNet. The last hypothesis (H4) concerns the importance of contextual information such as questions and job titles. Therefore, HireNet includes vectors that encode this contextual information. Formalization We represent a video interview as an object composed of a job title INLINEFORM0 and INLINEFORM1 question-answer pairs INLINEFORM2 . In our model, the job title INLINEFORM3 is composed of a sequence of INLINEFORM4 words INLINEFORM5 where INLINEFORM6 denotes the length of the job title. In the same way, the INLINEFORM7 -th question INLINEFORM8 is a sequence of INLINEFORM9 words INLINEFORM10 where INLINEFORM11 denotes the number of words in the question INLINEFORM12 . INLINEFORM13 denotes the sequence of low-level descriptors INLINEFORM14 describing the INLINEFORM15 -th answer. In our study, these low-level descriptors could be embedded words, features extracted from an audio frame, or features extracted from a video frame. INLINEFORM16 denotes the length of the sequence of low-level descriptors of the INLINEFORM17 -th answer. We decided to use a Gated Recurrent Unit (GRU) BIBREF22 to encode information from the job title, the questions and the answers. A GRU is able to encode sequences. It uses two mechanisms to solve the vanishing gradient problem, namely the reset gate, controlling how much past information is needed, and the update gate, determining how much past information has to be kept and how much new information to add. For formalization, we will denote by INLINEFORM0 the hidden state of the GRU at timestep INLINEFORM1 of the encoded sequence. This part of the model aims to encode the sequences of low-level descriptors. As mentioned before, the sequences can represent a text, an audio stream or a video stream. A bidirectional GRU is used to obtain representations from both directions for each element of the sequence INLINEFORM0 . It contains the forward GRU, which reads the sequence from left to right, and the backward GRU, which reads the sequence from right to left: DISPLAYFORM0 DISPLAYFORM1 In the same way, an encoding for a given low-level descriptor INLINEFORM0 is obtained by concatenating forward hidden states and backward hidden states: DISPLAYFORM0 Encoding sequences in a bidirectional fashion ensures the same amount of previous information for each element of INLINEFORM0 . Using a simple forward encoder could lead to biased attention vectors focusing only on the latest elements of the answers. In this study, the local context information corresponds to the questions INLINEFORM0 . In order to encode these sentences, we use a simple forward GRU: DISPLAYFORM0 The final representation of a question is the hidden state of the last word in the question INLINEFORM0 (i.e. INLINEFORM1 ). In order to obtain a better representation of the candidate's answer, we aim to detect elements in the sequence that were salient for the classification task. Moreover, we hypothesize that the local context is highly important. 
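For illustration, the encoders described in this formalization could be sketched as follows; PyTorch, the class names and the dimensions are our own assumptions, not the authors' implementation. The answer encoder returns one forward-backward concatenated state per timestep, while the question encoder keeps only the last hidden state.

import torch
import torch.nn as nn

class AnswerEncoder(nn.Module):
    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        # Bidirectional GRU: each timestep gets both forward and backward context.
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, x):                # x: (batch, seq_len, feat_dim)
        h, _ = self.gru(x)               # h: (batch, seq_len, 2 * hidden_dim)
        return h                         # concatenated forward/backward hidden states

class QuestionEncoder(nn.Module):
    def __init__(self, emb_dim, hidden_dim):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)  # forward only

    def forward(self, q):                # q: (batch, n_words, emb_dim)
        _, h_last = self.gru(q)          # h_last: (1, batch, hidden_dim)
        return h_last.squeeze(0)         # last hidden state used as the question vector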
Different behavioral signals can occur depending on the question type, and the question type can also influence the way recruiters assess their candidates BIBREF23 . An additive attention mechanism is proposed in order to extract the importance of each moment in the sequence representing the answer. DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 and INLINEFORM1 are weight matrices, INLINEFORM2 and INLINEFORM3 are weight vectors and INLINEFORM4 denotes the transpose of INLINEFORM5 . In order to retain the maximum amount of information, we concatenate, at the second level, the representation of the local context and the answer representation. Moreover, we think that, given the way video interviews work, the more questions a candidate answers during the interview, the more they adapt and get comfortable. In light of this, we decided to encode question-answer pairs as a sequence. Given INLINEFORM0 , we can use the same representation scheme as that of the low-level encoder: DISPLAYFORM0 DISPLAYFORM1 We also concatenate forward hidden states and backward hidden states: DISPLAYFORM0 We encode the job title the same way we encode the questions: DISPLAYFORM0 As done for the representation of the question, the final representation of the job title is the hidden state of the last word of INLINEFORM0 (i.e. INLINEFORM1 ). The importance of a question depends on the context of the interview and, specifically, on the type of job the candidate is applying for. For instance, a junior sales position interview could place more importance on social skills, while an interview for a senior position could be more challenging on the technical side. Like the low-level attention, the high-level attention is composed of an additive attention mechanism: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 and INLINEFORM1 are weight matrices, INLINEFORM2 and INLINEFORM3 are weight vectors and INLINEFORM4 denotes the transpose of INLINEFORM5 . Finally, INLINEFORM6 summarizes all the information of the job interview. Once INLINEFORM0 is obtained, we use it as a representation in order to classify candidates: DISPLAYFORM0 where INLINEFORM0 is a weight matrix and INLINEFORM1 a weight vector. As we are facing a binary classification problem, we chose to minimize the binary cross-entropy computed between INLINEFORM2 and the true labels of candidates INLINEFORM3 . Dataset We have decided to focus on only one specific type of job: sales positions. After filtering based on specific job titles from the ROME Database, a list of positions was selected and verified by the authors and a Human Resources (HR) expert. Finally, in collaboration with an HR industry actor, we obtained a dataset of French video interviews comprising more than 475 positions and 7938 candidates. As they watch candidates' videos, recruiters can like, dislike, or shortlist candidates, evaluate them on predefined criteria, or write comments. To simplify the task, we set up a binary classification: candidates who have been liked or shortlisted are considered part of the hirable class and the others part of the not hirable class. If multiple annotators have annotated the same candidate, we proceed with a majority vote. In case of a draw, the candidate is considered hirable. It is important to note that the videos are quite different from what could be produced in a laboratory setup. Videos can be recorded from a webcam, a smartphone or a tablet, meaning noisy environments and low-quality equipment are par for the course. 
Due to these real conditions, feature extraction may fail for a single modality during a candidate's entire answer. One example is the detection of action units when the image has lighting problems. We decided to use all samples available in each modality separately. Some statistics about the dataset are available in Table TABREF33 . Although the candidates agreed to the use of their interviews, the dataset will not be released to the public outside the scope of this study, as the videos are personal data subject to high privacy constraints. Experimental settings The chosen evaluation metrics are precision, recall and F1-score of the hirable class. They are well suited to binary classification and were used in previous studies BIBREF2 . We split the dataset into a training set, a validation set for hyper-parameter selection based on the F1-score, and a test set for the final evaluation of each model. These sets respectively constitute 80%, 10% and 10% of the full dataset. Extraction of social multimodal features For each modality, we selected low-level descriptors to be used as per-frame features, and sequence-level features to be used as the non-sequential representation of a candidate's whole answer for our non-sequential baselines. Word2vec: Pretrained word embeddings are used for the BoTW (Bag of Text Words, presented later in this section) and the neural networks. We used word embeddings of dimension 200 from BIBREF24 pretrained on a French corpus of Wikipedia. eGeMAPS: Our frame-level audio features are extracted using OpenSmile BIBREF25 . The configuration we use is the same one used to obtain the eGeMAPS BIBREF13 features. GeMAPS is a well-known minimalistic set of features selected for their saliency in Social Computing, and eGeMAPS is its extended version. We extract the per-frame features prior to the aggregations performed to obtain the eGeMAPS representation. OpenFace: We extract frame-level visual features with OpenFace BIBREF14 , a state-of-the-art visual behavioral analysis software package that yields various meaningful per-frame metrics. We chose to extract the position and rotation of the head, the intensity and presence of action units, and the gaze direction. As different videos have different frame rates, we decided to smooth values with a time window of 0.5 s and an overlap of 0.25 s. The duration of 0.5 s is frequently used in the Social Computing literature BIBREF26 and has been validated on our corpus as a suitable time-window size by annotating segments of social signals in a set of videos. Baselines First, we compare our model with several vote-based methods: i) Random vote baseline (One thousand random draws respecting the training set label balance were made. The F1-score is then averaged over those one thousand samples); ii) Majority Vote (This baseline is simply the position-wise majority label. Since our model could just be learning, for each candidate, the open position applied for and its corresponding majority vote, we include this baseline to show that our model reaches beyond those cues). Second, we compare our model with non-sequential baselines: i)-a Non-sequential text (we train a Doc2vec BIBREF27 representation on our corpus and use it as a representation of our textual inputs); i)-b Non-sequential audio (we take the eGeMAPS audio representation as described in BIBREF13 . That representation is obtained by passing the above descriptors through classical statistical functions and hand-crafted ad hoc measures applied over the whole answer. 
The reason we chose GeMAPS features is also that they were designed to ease comparability between different works in the field of Social Computing); i)-c Non-sequential video (our low-level video descriptors include binary descriptors and continuous descriptors. The mean, standard deviation, minimum, maximum, sum of positive gradients and sum of negative gradients have been successfully used for behavioral classification on media content in BIBREF28 . We followed that representation scheme for our continuous descriptors. As for our discrete features, we chose to extract the mean, the number of active segments, and the mean and standard deviation of the active segment duration); ii) Bag of * Words (We also chose to compare our model to BIBREF2 's Bag of Audio and Video Words: we run a K-means algorithm on all the low-level frames in our dataset. Then we take our samples as documents and our frames' predicted classes as words, and use a Term Frequency-Inverse Document Frequency (TF-IDF) representation to model each sample). For each modality, we use the non-sequential representations mentioned above in a monomodal fashion as inputs to three classic learning algorithms (namely SVM, Ridge regression and Random Forest) with respective hyperparameter searches. The best of the three algorithms is selected. As these models do not have a hierarchical structure, we train them to yield answer-wise labels (as opposed to the candidate-wise labeling performed by our hierarchical model). At test time, we average the output value of the algorithm for each candidate over the questions they answered. Third, the proposed sequential baselines aim to check the four hypotheses described above: i) comparing the Bidirectional-GRU model with the previously described non-sequential approaches aims to validate H1 on the contribution of sequentiality in an answer-wise representation; ii) the Hierarchical Averaged Network (HN_AVG) baseline adds hierarchy to the model in order to verify H2 and H3 (we replace the attention mechanism by an averaging operator over all of the non-zero bidirectional GRU outputs); iii) the Hierarchical Self Attention Network (HN_SATT) is a self-attention version of HireNet which aims to assess the actual effect of the added context information (H4). Multimodal models Given the text, audio, and video trained versions of our HireNet, we report two basic models performing multimodal inference, namely an early fusion approach and a late fusion approach. In the early fusion, we concatenate the last layer INLINEFORM0 of each modality as a representation, and proceed with the same test procedure as for our non-sequential baselines. For our late fusion approach, the decision for a candidate is carried out using the average decision score INLINEFORM1 between the three modalities. Results and analyses First of all, Tables TABREF35 and TABREF39 show that most of our neural models fairly surpass the vote-based baselines. In Table TABREF35 , the F1-score increases going from the non-sequential baselines to the Bidirectional-GRU baselines for all modalities, which supports H1. We can also see that HN_AVG is superior to the Bidirectional-GRU baselines for audio and text, validating H2 for those two modalities. This suggests that sequentiality and hierarchy are adequate inductive biases for a job interview assessment machine learning algorithm. As for H3, HN_SATT showed better results than HN_AVG for text and audio. In the end, our HireNet model surpasses HN_AVG and HN_SATT for each modality. 
Consequently, a fair amount of useful information is present in the contextual frame of an interview, and this information can be leveraged through our model, as stated in H4. Audio and text monomodal models display better performance than video models. The same results were obtained in BIBREF2 . Our attempts at fusing the multimodal information synthesized in the last layer of each HireNet model only slightly improved on the single-modality models. Attention visualization Text In order to visualize the different words on which attention values were high, we computed new values of interest as was done in BIBREF20 . As the sentence length changes between answers, we multiply every word's attention value ( INLINEFORM0 ) by the number of words in the answer, resulting in the relative attention of the word with respect to the sentence. In the same way, we multiply each question attention by the number of questions, resulting in the relative attention of the question with respect to the job interview. Then, similarly to BIBREF19 , we compute INLINEFORM1 where INLINEFORM2 and INLINEFORM3 are respectively the values of interest for word INLINEFORM4 and question INLINEFORM5 . The list of the 20 most important words contains numerous names of banks and insurance companies (Natixis, Aviva, CNP, etc.) and job knowledge vocabulary (mortgage, brokerage, tax exemption, etc.), which means that their occurrence in candidates' answers plays an important role in hirability prediction. Video In order to visualize which moments were highlighted by attention mechanisms in a video, we display an example of the attention values for an answer in Figure FIGREF41 . In this figure, the higher the attention value, the more the corresponding frames are considered task-relevant by the attention mechanism. As we can see, some peaks are present. Three thin slices with high attention values are presented. Some social signals that are important in a job interview are identified. We hypothesize that the smile detected in Frame 1 could be part of a tactic to please the interviewer known as deceptive ingratiation BIBREF29 . In addition, Frames 2 and 3 are representative of stress signals from the candidate. In fact, lip suck was suggested to be linked to anxiety in BIBREF30 . Audio The same visualization procedure used for video has been investigated for audio. As the audio signal is harder to visualize, we decided to describe the general pattern of the audio attention weights. In most cases, when the prosody is homogeneous throughout the answer, attention weights are distributed uniformly and show no peaks, as opposed to what was observed for video. However, salient moments may appear, especially when candidates produce successive disfluencies. Thus, we have identified peaks where false starts, filler words, and repeated or restarted sentences occur. Questions We aim to explore the attention given to the different questions during the same interview. For this purpose, we randomly picked one open position from the test dataset comprising 40 candidates. The questions describing the interview and the corresponding averaged attention weights are displayed in Figure FIGREF42 . First, it seems that attention weight variability between questions is higher for the audio modality than for the text and video modalities. Second, the decrease in attention for Questions 5 and 6 could be explained by the fact that those questions are designed to assess "soft skills". 
Third, the peaks of attention weight for the audio modality on Questions 2 and 4 could be induced by the fact that these questions are job-centric. Indeed, it is possible that disfluencies tend to appear more in job-centric questions or that prosody is more important in first impressions of competence. Conclusion and future directions HR industry actors nowadays offer tools to automatically assess candidates undergoing asynchronous video interviews. However, no studies have been published regarding these tools and their predictive validity. The contribution of this work is twofold. First, we evaluate the validity of previous approaches in real conditions (e.g. in-the-wild settings, true applications, real evaluations, etc.). Second, we use deep learning methods in order to faithfully model the structure of asynchronous video interviews. In that sense, we proposed HireNet, a new version of Hierarchical Attention Networks that is aware of the interview's contextual elements (questions and job title), which has shown better performance than previous approaches. First basic experiments on multimodal fusion have also been performed (early and late fusion). In future work, the obtained multimodal performance could be improved by leveraging more sophisticated multimodal fusion schemes. HireNet was evaluated on a corpus containing interviews for various jobs – 475 different positions – in the domain of sales. Theoretical findings from industrial and organizational psychology suggest that some dimensions are common across different positions BIBREF31 . However, we would like to extend the corpus to domains other than sales in order to i) validate the relevance of our model for other types of positions and ii) determine which competencies are or are not common across jobs. In that sense, the use of multi-domain models BIBREF32 could be of great help. Our model currently considers two labels (“hirable” and “not hirable”). Extending our annotations to more fine-grained information (communication skills, social effectiveness, etc.) could provide useful insights about the profile of a candidate and their potential fit with the position in question. Through the use of attention mechanisms, we aimed to highlight salient moments and questions for each modality, which contributes to the transparency and interpretability of HireNet. Such transparency is very important for Human Resources practitioners to trust an automatic evaluation. Further investigations could be conducted on the proposed attention mechanisms: i) to confirm the saliency of the selected moments using insights from Industrial and Organizational psychology; ii) to assess the influence of the slices deemed important. This way, a tool to help candidates train for interviews could be developed. Last but not least, ethics and fairness are important considerations that deserve to be studied. In that sense, the detection of individual and global bias should be prioritized in order to give useful feedback to practitioners. Furthermore, we are considering using adversarial learning as in BIBREF33 in order to ensure fairness during the training process. Acknowledgments This work was supported by the company EASYRECRUE, from whom the job interview videos were collected. We would like to thank Jeremy Langlais for his support and his help. We would also like to thank Valentin Barriere for his valuable input and the name given to the model, and Marc Jeanmougin and Nicolas Bouche for their help with the computing environment. 
Finally, we thank Erin Douglas for proofreading the article.
No
e8e6986365f899dead0768ecf7b1eca8a2699f2f
e8e6986365f899dead0768ecf7b1eca8a2699f2f_0
Q: Is there any ethical consideration in the research? Text: Introduction Among assessment methods, the job interview remains the most common way to evaluate candidates. The interview can be done via phone, live video, face to face, or more recently asynchronous video interview. For the latter, candidates connect to a platform, and record themselves while answering a set of questions chosen by the recruiter. The platform then allows several recruiters to evaluate the candidate, to discuss among themselves and possibly to invite the candidate to a face-to-face interview. Recruiters choose to use these platforms because it gives them access to a larger pool of candidates, and it speeds up the application processing time. In addition, it allows candidates to do the interview whenever and wherever it suits them the most. However, given a large number of these asynchronous interviews it may quickly become unmanageable for recruiters. The highly structured characteristic of asynchronous video interviews (same questions, same amount of time per candidate) enhances their predictive validity, and reduces inter-recruiter variability BIBREF0 . Moreover, recent advances in Social Signal Processing (SSP) BIBREF1 have enabled automated candidate assessment BIBREF2 , and companies have already started deploying solutions serving that purpose. However, previous studies used corpora of simulated interviews with limited sizes. The work proposed in this paper relies on a corpus that has been built in collaboration with a company and that consists of more than 7000 real job interviews for 475 open positions. The size of this corpus enables the exploration of emerging models such as deep learning models, that are known to be difficult to deploy for Social Computing because of the difficulty to obtain large annotations of social behaviors. Based on those facts, we propose HireNet, a new hierarchical attention neural network for the purpose of automatically classifying candidates into two classes: hirable and not hirable. Our model aims to assist recruiters in the selection process. It does not aim to make any automatic decision about candidate selection. First, this model was built to mirror the sequential and hierarchical structure of an interview assessment: recruiters watch a sequence of questions and answers, which are themselves sequences of words or behavioral signals. Second, the HireNet model integrates the context of the open position (questions during the interview and job title) in order both to determine the relative importance between question-answer pairs and to highlight important behavioral cues with regard to a question. Third, HireNet attention mechanisms enhance the interpretability of our model for each modality. In fact, they provide a way for recruiters to validate and trust the model through visualization, and possibly for candidates to locate their strengths or areas of improvement in an interview. In this paper, we first present an overview of the related works for automatic video interview assessment. Then we go through the construction and the underlying hypotheses of HireNet, our neural model for asynchronous video interview assessment. After, we discuss the binary classification results of our model compared to various baselines, and show salient interview slices highlighted by the integrated attention mechanisms. Finally we conclude and discuss the future directions of our study. 
Databases To the best of our knowledge, only one corpus of interviews with real open positions has been collected and is subject to automatic analysis BIBREF3 . This corpus consists of face-to-face job interviews for a marketing short assignment whose candidates are mainly students. There are video corpora of face-to-face mock interviews that include two corpora built at the Massachusetts Institute of Technology BIBREF4 , BIBREF5 , and a corpus of students in services related to hospitality BIBREF6 . Many corpora of simulated asynchronous video interviews have also been built: a corpus of employees BIBREF7 , a corpus of students from Bangalore University BIBREF8 and a corpus collected through the use of crowdsourcing tools BIBREF2 . Some researchers are also interested in online video resumes and have constituted a corpus of video CVs from YouTube BIBREF9 . A first impressions challenge dataset was also supplemented by hirability annotation BIBREF10 . Some corpora are annotated by experts or students in psychology BIBREF7 , BIBREF2 , BIBREF3 , BIBREF11 . Other corpora have used crowdsourcing platforms or naive observers BIBREF8 for annotation. Table TABREF2 contains a summary of the corpora of job interviews used in previous works. Machine learning approaches for automatic analysis of video job interview Features Recent advances in SSP have offered toolboxes to extract features from audio BIBREF13 and video streams BIBREF14 . As asynchronous job interviews are videos, features from each modality (verbal content, audio and video) have to be extracted frame by frame in order to build a classification model. Audio cues consist mainly of prosody features (fundamental frequency, intensity, mel-frequency cepstral coefficients, etc) and speaking activity (pauses, silences, short utterances, etc) BIBREF15 , BIBREF12 . Features derived from facial expressions (facial actions units, head rotation and position, gaze direction, etc) constitute the most extracted visual cues BIBREF2 . Finally, advances in automatic speech recognition have enabled researchers to use the verbal content of candidates. In order to describe the verbal content, researchers have used lexical statistics (number of words, number of unique words, etc), dictionaries (Linguistic Inquiry Word Count) BIBREF12 , topic modeling BIBREF5 , bag of words or more recently document embedding BIBREF7 . Representation Once features are extracted frame by frame, the problem of temporality has to be addressed. The most common approach is to simplify the temporal aspect by collapsing the time dimension using statistical functions (e.g. mean, standard deviation, etc). However, the lack of sequence modeling can lead to the loss of some important social signals such as emphasis by raising one's eyebrows followed by a smile BIBREF16 . Moreover co-occurrences of events are not captured by this representation. Thus, a distinction between a fake smile (activation of action unit 12) and a true smile (activation of action units 2, 4 and 12) is impossible BIBREF17 without modeling co-occurrences. To solve the problem of co-occurrences, the representation of visual words, audio words or visual audio words has been proposed BIBREF2 , BIBREF7 , BIBREF12 . The idea is to consider the snapshot of each frame as a word belonging to a specific dictionary. In order to obtain this codebook, an algorithm of unsupervised clustering is used to cluster common frames. 
Once we obtain the clusters, each class represents a "word" and we can easily map an ensemble of extracted frames to a document composed of these words. Then, the task is treated like a document classification. Additionally, the representation is not learned jointly with the classification models which can cause a loss of information. Modeling attempts and classification algorithms As video job interviews have multiple levels, an architectural choice has to be made accordingly. Some studies tried to find the most salient moments during an answer to a question BIBREF15 , the most important questions BIBREF5 or to use all available videos independently BIBREF2 in order to predict the outcome of a job interview. Finally, when a sufficient representation is built, a classification or a regression model is trained. Regularized logistic regression (LASSO or Ridge), Random Forest and Support Vector Machines are the most widely used algorithms. From a practical point of view, manually annotating thin slices of videos is time consuming. On the other side, considering each answer with the same label as the outcome of the interview is considerably less expensive, though some examples could be noisy. Indeed, a candidate with a negative outcome could have performed well on some questions. Furthermore, all these models do not take into account the sequentiality of social signals or questions. Neural networks and attention mechanisms in Social Computing Neural networks have proven to be successful in numerous Social Computing tasks. Multiple architectures in the field of neural networks have outperformed hand crafted features for emotion detection in videos BIBREF18 , facial landmarks detection BIBREF14 , document classification BIBREF19 These results are explained by the capability of neural networks to automatically perform useful transformations on low level features. Moreover, some architectures such as Recurrent Neural Networks were especially tailored to represent sequences. In addition, attention mechanisms have proven to be successful in highlighting salient information enhancing the performance and interpretability of neural networks. For example, in rapport detection, attention mechanisms allow to focus only on important moments during dyadic conversations BIBREF20 . Finally, numerous models have been proposed to model the interactions between modalities in emotion detection tasks through attention mechanisms BIBREF21 , BIBREF18 . HireNet and underlying hypotheses We propose here a new model named HireNet, as in a neural network for hirability prediction. It is inspired by work carried out in neural networks for natural language processing and from the HierNet BIBREF19 , in particular, which aims to model a hierarchy in a document. Following the idea that a document is composed of sentences and words, a job interview could be decomposed, as a sequence of answers to questions, and the answers, as a sequence of low level descriptors describing each answer. The model architecture (see Figure FIGREF6 ) is built relying on four hypotheses. The first hypothesis (H1) is the importance of the information provided by the sequentiality of the multimodal cues occurring in the interview. We thus choose to use a sequential model such as a recurrent neural network. The second hypothesis (H2) concerns the importance of the hierarchical structure of an interview: the decision of to hire should be performed at the candidate level, the candidates answering several questions during the interview. 
We thus choose to introduce different levels of hierarchy in HireNet namely the candidate level, the answer level and the word (or frame) level. The third hypothesis (H3) concerns the existence of salient information or social signals in a candidate's video interview: questions are not equally important and not all the parts of the answers have an equal influence on the recruiter's decision. We thus choose to introduce attention mechanisms in HireNet. The last hypothesis (H4) concerns the importance of contextual information such as questions and job titles. Therefore, HireNet includes vectors that encode this contextual information. Formalization We represent a video interview as an object composed of a job title INLINEFORM0 and INLINEFORM1 question-answer pairs INLINEFORM2 . In our model, the job title INLINEFORM3 is composed of a sequence of INLINEFORM4 words INLINEFORM5 where INLINEFORM6 denotes the length of the job title. In a same way, the INLINEFORM7 -th question INLINEFORM8 is a sequence of INLINEFORM9 words INLINEFORM10 where INLINEFORM11 denotes the number of words in the question INLINEFORM12 . INLINEFORM13 denotes the sequence of low level descriptors INLINEFORM14 describing the INLINEFORM15 -th answer. In our study these low level descriptors could be embedded words, features extracted from an audio frame, or features extracted from a video frame. INLINEFORM16 denotes the length of the sequence of low level descriptors of the INLINEFORM17 -th answer. We decided to use a Gated Recurrent Unit (GRU) BIBREF22 to encode information from the job title, the questions and the answers. A GRU is able to encode sequences. It uses two mechanisms to solve the vanishing gradient problem, namely the reset gate, controlling how much past information is needed; and the update gate, determining how much past information has to be kept and the amount of new information to add. For formalization, we will denote by INLINEFORM0 the hidden state of GRU at timestep INLINEFORM1 of the encoded sequence. This part of the model aims to encode the sequences of low level descriptors. As mentioned before, the sequences can represent a text, an audio stream or a video stream. A bidirectional GRU is used to obtain representations from both directions for each element of the sequence INLINEFORM0 . It contains the forward GRU which reads the sequence from left to right and backward GRU which reads the sequence from right to left: DISPLAYFORM0 DISPLAYFORM1 In the same way, an encoding for a given low level descriptor INLINEFORM0 is obtained by concatenating forward hidden states and backward hidden states: DISPLAYFORM0 Encoding sequences in a bidirectional fashion ensures the same amount of previous information for each element of INLINEFORM0 . Using a simple forward encoder could lead to biased attention vectors focusing only on the latest elements of the answers. In this study, the local context information corresponds to the questions INLINEFORM0 . In order to encode these sentences, we use a simple forward GRU. DISPLAYFORM0 And the final representation of a question is the hidden state of the last word in the question INLINEFORM0 (i.e. INLINEFORM1 ). In order to obtain a better representation of of the candidate's answer, we aim to detect elements in the sequence which were salient for the classification task. Moreover, we hypothesize that the local context is highly important. 
Different behavioral signals can occur depending on the question type and it can also influence the way recruiters assess their candidates BIBREF23 . An additive attention mechanism is proposed in order to extract the importance of each moment in the sequence representing the answer. DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 and INLINEFORM1 are weight matrices, INLINEFORM2 and INLINEFORM3 are weight vectors and INLINEFORM4 denotes the transpose of INLINEFORM5 . In order to have the maximum amount of information, we concatenate at the second level, the representation of the local context and the answer representation. Moreover, we think that given the way video interviews work, the more questions a candidate answers during the interview, the more he adapts and gets comfortable. In the light of this, we decided to encode question-answer pairs as a sequence. Given INLINEFORM0 , we can use the same representation scheme as that of the low level encoder: DISPLAYFORM0 DISPLAYFORM1 We will also concatenate forward hidden states and backward hidden states: DISPLAYFORM0 We encode the job title the same way we encode the questions : DISPLAYFORM0 As done for the representation of the question, the final representation of the job title is the hidden state of the last word of INLINEFORM0 (i.e. INLINEFORM1 ). The importance of a question depends on the context of the interview, and specifically, on the type of job the candidate is applying for. For instance, a junior sales position interview could accord more importance to the social skills, while an interview for a senior position could be more challenging on the technical side. Like low level attention, high level attention is composed of an additive attention mechanism: DISPLAYFORM0 DISPLAYFORM1 where INLINEFORM0 , INLINEFORM1 are weight matrices, INLINEFORM2 and INLINEFORM3 are weight vectors and INLINEFORM4 denotes the transpose of INLINEFORM5 . Finally INLINEFORM6 summarizes all the information of the job interview. Once INLINEFORM0 is obtained, we use it as representation in order to classify candidates: DISPLAYFORM0 where INLINEFORM0 is a weight matrix and INLINEFORM1 a weight vector. As the problem we are facing is that of a binary classification, we chose to minimize the binary cross-entropy computed between INLINEFORM2 and true labels of candidates INLINEFORM3 . Dataset We have decided to focus on only one specific type of job: sales positions. After filtering based on specific job titles from the ROME Database, a list of positions was selected and verified by the authors and an expert from the Human Resources (HR). Finally, in a collaboration with an HR industry actor, we have obtained a dataset of French video interviews comprising more than 475 positions and 7938 candidates. As they watch candidates' videos, recruiters can like, dislike, shortlist candidates, evaluate them on predefined criteria, or write comments. To simplify the task, we set up a binary classification: candidates who have been liked or shortlisted are considered part of the hirable class and others part of the not hirable class. If multiple annotators have annotated the same candidates, we proceed with a majority vote. In case of a draw, the candidate is considered hirable. It is important to note that the videos are quite different from what could be produced in a laboratory setup. Videos can be recorded from a webcam, a smartphone or a tablet., meaning noisy environments and low quality equipment are par for the course. 
Due to these real conditions, feature extraction may fail for a single modality during a candidate's entire answer. One example is the detection of action units when the image has lighting problems. We decided to use all samples available in each modality separately. Some statistics about the dataset are available in Table TABREF33. Although the candidates agreed to the use of their interviews, the dataset will not be released to the public outside the scope of this study, as the videos are personal data subject to high privacy constraints.
Experimental settings The chosen evaluation metrics are precision, recall and F1-score of the hirable class. They are well suited for binary classification and were used in previous studies BIBREF2. We split the dataset into a training set, a validation set for hyper-parameter selection based on the F1-score, and a test set for the final evaluation of each model. These sets constitute respectively 80%, 10% and 10% of the full dataset.
Extraction of social multimodal features For each modality, we selected low-level descriptors to be used as per-frame features, and sequence-level features to be used as the non-sequential representation of a candidate's whole answer for our non-sequential baselines. Word2vec: Pretrained word embeddings are used for the BoTW (Bag of Text Words, presented later in this section) and the neural networks. We used word embeddings of dimension 200 from BIBREF24 pretrained on a French Wikipedia corpus. eGeMAPS: Our frame-level audio features are extracted using OpenSmile BIBREF25. The configuration we use is the same one used to obtain the eGeMAPS BIBREF13 features. GeMAPS is a widely used minimalistic set of features selected for their saliency in Social Computing, and eGeMAPS is its extended version. We extract the per-frame features prior to the aggregations performed to obtain the eGeMAPS representation. OpenFace: We extract frame-level visual features with OpenFace BIBREF14, a state-of-the-art visual behavioral analysis software that yields various meaningful per-frame metrics. We chose to extract the position and rotation of the head, the intensity and presence of action units, and the gaze direction. As different videos have different frame rates, we decided to smooth values with a time window of 0.5 s and an overlap of 0.25 s. The duration of 0.5 s is frequently used in the Social Computing literature BIBREF26 and has been validated in our corpus as a suitable time-window size by annotating segments of social signals in a set of videos.
Baselines First, we compare our model with several vote-based methods: i) Random vote baseline (One thousand random draws respecting the training dataset label balance were made. The F1-score is then averaged over those one thousand samples); ii) Majority Vote (This baseline is simply the position-wise majority label. Since our model could just be learning the originating open position for each candidate and its corresponding majority vote, we decided to include this baseline to show that our model reaches beyond those cues). Second, we compare our model with non-sequential baselines: i)-a Non-sequential text (we train a Doc2vec BIBREF27 representation on our corpus, and we use it as a representation of our textual inputs); i)-b Non-sequential audio (we take the eGeMAPS audio representation as described in BIBREF13. That representation is obtained by passing the above descriptors into classical statistical functions and hand-crafted ad hoc measures applied over the whole answer.
The reason we chose GeMAPS features is also that they were designed to ease comparability between different works in the field of Social Computing); i)-c Non-sequential video (our low-level video descriptors include binary descriptors and continuous descriptors. The mean, standard deviation, minimum, maximum, sum of positive gradients and sum of negative gradients have been successfully used for behavioral classification on media content in BIBREF28. We followed that representation scheme for our continuous descriptors. As for our discrete features, we chose to extract the mean, the number of active segments, and the active segment duration mean and standard deviation); ii) Bag of * Words (we also chose to compare our model to BIBREF2's Bag of Audio and Video Words: we run a K-means algorithm on all the low-level frames in our dataset, then we take our samples as documents and our frames' predicted classes as words, and use a Term Frequency-Inverse Document Frequency (TF-IDF) representation to model each sample). For each modality, we use the non-sequential representations mentioned above in a monomodal fashion as inputs to three classic learning algorithms (namely SVM, Ridge regression and Random Forest) with respective hyperparameter searches. The best of the three algorithms is selected. As these models do not have a hierarchical structure, we train them to yield answer-wise labels (as opposed to the candidate-wise labeling performed by our hierarchical model). At test time, we average the output value of the algorithm for each candidate over the questions he answered. Third, the proposed sequential baselines aim at checking the four hypotheses described above: i) comparing the Bidirectional-GRU model with the previously described non-sequential approaches aims to validate H1 on the contribution of sequentiality in an answer-wise representation; ii) the Hierarchical Averaged Network (HN_AVG) baseline adds hierarchy to the model in order to verify H2 and H3 (we replace the attention mechanism by an averaging operator over all of the non-zero bidirectional GRU outputs); iii) the Hierarchical Self Attention Network (HN_SATT) is a self-attention version of HireNet which aims to assess the actual effect of the added context information (H4).
Multimodal models Given the text, audio, and video trained versions of our HireNet, we report two basic models performing multimodal inference, namely an early fusion approach and a late fusion approach. In the early fusion, we concatenate the last layer INLINEFORM0 of each modality as a representation, and proceed with the same test procedure as our non-sequential baselines. For our late fusion approach, the decision for a candidate is carried out using the average decision score INLINEFORM1 between the three modalities.
Results and analyses First of all, Tables TABREF35 and TABREF39 show that most of our neural models fairly surpass the vote-based baselines. In Table TABREF35, the F1-score increases going from the non-sequential baselines to the Bidirectional-GRU baselines for all the modalities, which supports H1. We can also see that HN_AVG is superior to the Bidirectional-GRU baselines for audio and text, validating H2 for those two modalities. This suggests that sequentiality and hierarchy are adequate inductive biases for a job interview assessment machine learning algorithm. As for H3, HN_SATT did show better results than HN_AVG for text and audio. In the end, our HireNet model surpasses HN_AVG and HN_SATT for each modality.
Consequently, a fair amount of useful information is present in the contextual frame of an interview, and this information can be leveraged through our model, as stated in H4. Audio and text monomodal models display better performance than video models. The same results were obtained in BIBREF2. Our attempts at fusing the multimodal information synthesized in the last layer of each HireNet model only slightly improved on the single modality models.
Attention visualization Text In order to visualize the different words on which attention values were high, we computed new values of interest as was done in BIBREF20. As the sentence length changes between answers, we multiply every word's attention value (INLINEFORM0) by the number of words in the answer, resulting in the relative attention of the word with respect to the sentence. In the same way, we multiply each question's attention value by the number of questions, resulting in the relative attention of the question with respect to the job interview. Then, in a similar way to BIBREF19, we compute INLINEFORM1 where INLINEFORM2 and INLINEFORM3 are respectively the values of interest for word INLINEFORM4 and question INLINEFORM5. The list of the 20 most important words contains numerous names of banks and insurance companies (Natixis, Aviva, CNP, etc.) and job knowledge vocabulary (mortgage, brokerage, tax exemption, etc.), which means that their occurrence in candidates' answers plays an important role in hirability prediction.
Video In order to visualize which moments were highlighted by attention mechanisms in a video, we display an example of the attention values for an answer in Figure FIGREF41. In this figure, the higher the attention value, the more the corresponding frames are considered task-relevant by the attention mechanism. As we can see, some peaks are present: three thin slices with high attention values stand out. Some social signals that are important in a job interview are identified. We hypothesize that the smile detected in Frame 1 could be part of a tactic to please the interviewer known as deceptive ingratiation BIBREF29. In addition, Frames 2 and 3 are representative of stress signals from the candidate. In fact, lip suck was suggested to be linked to anxiety in BIBREF30.
Audio The same visualization procedure used for video has been investigated for audio. As the audio signal is harder to visualize, we decided to describe the general pattern of the audio attention weights. In most cases, when the prosody is homogeneous throughout the answer, attention weights are distributed uniformly and show no peaks, as opposed to what was observed for video. However, salient moments may appear, especially when candidates produce successive disfluencies. Thus, we have identified peaks where false starts, filler words, and repeated or restarted sentences occur.
Questions We aim to explore the attention given to the different questions during the same interview. For this purpose, we randomly picked one open position from the test dataset comprising 40 candidates. The questions describing the interview and the corresponding averaged attention weights are displayed in Figure FIGREF42. First, it seems that attention weight variability between questions is higher for the audio modality than for the text and video modalities. Second, the decrease in attention for Questions 5 and 6 could be explained by the fact that those questions are designed to assess "soft skills".
Third, the peaks of attention weight for the audio modality on Questions 2 and 4 could be induced by the fact that these questions are job-centric. Indeed, it is possible that disfluencies tend to appear more in job-centric questions or that prosody is more important in first impressions of competence.
Conclusion and future directions HR industry actors nowadays offer tools to automatically assess candidates undergoing asynchronous video interviews. However, no studies have been published regarding these tools and their predictive validity. The contribution of this work is twofold. First, we evaluate the validity of previous approaches in real conditions (e.g. in-the-wild settings, true applications, real evaluations, etc.). Second, we used deep learning methods in order to faithfully model the structure of asynchronous video interviews. In that sense, we proposed a new version of Hierarchical Attention Networks, called HireNet, that is aware of the interview's contextual elements (questions and job title) and showed better performance than previous approaches. First basic experiments on multimodal fusion have also been performed (early and late fusion). In future work, the obtained multimodal performance could be improved by leveraging more sophisticated multimodal fusion schemes. HireNet was evaluated on a corpus containing interviews for various jobs – 475 different positions – in the sales domain. Theoretical findings from industrial and organizational psychology suggest that some dimensions are common across different positions BIBREF31. However, we would like to extend the corpus to domains other than sales in order to i) validate the relevance of our model for other types of positions, and ii) determine which competencies are or are not common across jobs. In that sense, the use of multi-domain models BIBREF32 could be of great help. Our model currently considers two labels (“hirable” and “not hirable”). Extending our annotations to more fine-grained information (communication skills, social effectiveness, etc.) could provide useful insights about the profile of a candidate and their potential fit with the position in question. Through the use of attention mechanisms, we aimed to highlight salient moments and questions for each modality, which contributes to the transparency and the interpretability of HireNet. Such transparency is very important for Human Resources practitioners to trust an automatic evaluation. Further investigations could be conducted on the proposed attention mechanisms: i) to confirm the saliency of the selected moments using the discipline of Industrial and Organizational psychology; and ii) to assess the influence of the slices deemed important. This way, a tool to help candidates train for interviews could be developed. Last but not least, ethics and fairness are important considerations that deserve to be studied. In that sense, the detection of individual and global bias should be prioritized in order to give useful feedback to practitioners. Furthermore, we are considering using adversarial learning as in BIBREF33 in order to ensure fairness during the training process.
Acknowledgments This work was supported by the company EASYRECRUE, from whom the job interview videos were collected. We would like to thank Jeremy Langlais for his support and his help. We would also like to thank Valentin Barriere for his valuable input and the name given to the model, and Marc Jeanmougin and Nicolas Bouche for their help with the computing environment.
Finally, we thank Erin Douglas for proofreading the article.
No
f63519bb5e116671cebd65cc78880c5cb573c570
f63519bb5e116671cebd65cc78880c5cb573c570_0
Q: What low-resource languages were used in this work? Text: Introduction Universal Dependencies (UD) BIBREF0, BIBREF1, BIBREF2 is an ongoing project aiming to develop cross-lingually consistent treebanks for different languages. UD provided a framework for consistent annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages. The annotation schema relies on Universal Stanford Dependencies BIBREF3 and Google Universal POS tags BIBREF4. The general principle is to provide universal annotation ; meanwhile, each language can add language-specific relations to the universal pool when necessary. The main goal of UD project is to facilitate multi-lingual parser production and cross-lingual learningFOOTREF1. Cross-lingual learning is the task of gaining advantages from high-resource languages in terms of annotated data to build a model for low-resource languages. This paradigm of learning is now an invaluable tool for improving the performance of natural language processing in low-resource languages. Based on the universal annotations of the UD project, there are several works on cross-lingual tasks. Most of them focus on grammar-related tasks such as POS tagging BIBREF5 and dependency parsing BIBREF6, BIBREF7, BIBREF8. In this paper, we are going to study the effectiveness of UD in making cross-lingual models for more complex tasks such as semantic relation extraction and paraphrase identification. To the best of our knowledge, no work was done on the application of UD annotations in the mentioned tasks. Universal dependencies approach for cross-lingual learning is based on the fact that UD captures similarities as well as idiosyncrasies among typologically different languages. The important characteristic of UD annotations is that although the UD parse trees of parallel sentences in different languages may not be completely equivalent, they have many similar sub-trees, in the sense that at least core parts of trees are equal BIBREF9. In this paper, we study two cross-lingual tasks : semantic relation extraction and paraphrase identification. The former is the task of identifying semantic connections between entities in a sentence ; while the training and test data are in different languages. The latter is defined to determine whether two sentences are paraphrase or not ; while the training' pairs of sentences are in a different language from the test data. To employ similarities of UD trees of different languages to train cross-lingual models, we propose to use syntactic based methods which ideally can deal with parsing information of data. We found that tree kernels allow to estimate the similarities among texts directly from their parse trees. They are known to operate on dependency parse trees and automatically generate robust prediction models based on the similarities of them. We have made parallel dataset for each task and presented the cross-lingual variant of kernel functions for them. Evaluation by the parallel test data reveals that the accuracy of models trained by a language and tested on the other languages get close to mono-lingual when the syntactic parsers are trained with UD corpora. This suggests that syntactic patterns trained on the UD trees can be invariant with respect to very different languages. To compare the proposed approach with the cross-lingual variant of neural models, we employed several state-of-the-art deep networks and equipped them with pre-trained bi-lingual word embeddings. 
English training data are fed into the networks, which create a mapping between the input and output values. Then test set is given to the trained network. Results show that the tree-based models outperform end-to-end neural models in cross-lingual experiments. Moreover, we employed Tree-LSTM network BIBREF10 with UD parse trees, which is capable to produce semantic representation from tree-ordered input data. Tree-LSTM doesn't directly deal with syntactic features of the input sentence, rather it processes the input tokens in order of placing in a tree, e.g. from bottom to up or vice versa. Experiments show superiority of Tree-LSTM trained by UD trees over sequential models like LSTM in cross-lingual evaluations. This paper is organized as follows : Section SECREF2 describes how UD approach allows to capture similarities and differences across diverse languages. Section SECREF3 presents tree-based models for cross-lingual learning of PI and RE tasks. Section SECREF4 presents an empirical study on cross-lingual learning using UD. Finally Section SECREF5 gives the analysis and conclusion remarks. Transfer Learning via Universal Dependencies The Universal Dependencies project aims to produce consistent dependency treebanks and parsers for many languages BIBREF0, BIBREF1, BIBREF2. The most important achievements of the project are the cross-lingual annotation guidelines and sets of universal POS and the grammatical relation tags. Consequentially many treebanks have been developed for different languages. The general rule of UD project is to provide a universal tag set ; however each language can add language-specific relations to the universal pool or omit some tags. To capture similarities and differences across languages, UD uses a representation consisting of three components : (i) dependency relations between lexical words ; (ii) function words modifying lexical words ; and (iii) morphological features associated with words BIBREF9. The underlying principle of the syntactic annotation schema of the UD project is that dependencies hold between content words, while function words attach to the content word that they further specify BIBREF3. There is an important difference between UD schema and Stanford Typed Dependencies (STD) BIBREF11 as the STD schema chooses function words as heads : prepositions in prepositional phrases, and copula verbs that have a prepositional phrase as their complement. Although the UD parse graphs of a sentence in different languages may not be completely equal, they have similar core parts. Figure FIGREF5 shows the UD graph of English sentence “The memo presents details about the lineup management" and its translation into French and Farsi. Both the similarities and differences of UD graphs are demonstrated in that figure. Most of the nodes and edges are similar. Farsi has the language-specific relation “compound :lvc", which relates the noun part of the compound verb to the verbal part as depicted in Figure FIGREF5. So far, UD treebanks have been developed for over 70 languages and all of them are freely available for download. UD project released a pipeline, called UDPipe, which is used to train models for UD parsing using the UD treebanks BIBREF12. UD parsing and similarity of UD structures in different languages provide facilities to train multi-lingual models. In what follows, we focus on two tasks, paraphrase identification and semantic relation extraction, and present cross-learning models for them. 
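As an illustration of how the UDPipe pipeline mentioned above is typically applied before training cross-lingual models, the sketch below loads a pre-trained UD model and parses a sentence to CoNLL-U. The `ufal.udpipe` Python binding and the model file name are assumptions about the environment; the paper only states that UDPipe with pre-trained UD models is used.

```python
# Minimal sketch: parse a sentence with a pre-trained UDPipe model.
# Assumes the `ufal.udpipe` binding is installed and a UD model file
# (downloadable from the UD project) is available locally.
from ufal.udpipe import Model, Pipeline

model = Model.load("english-ewt-ud-2.4.udpipe")  # placeholder model path
assert model is not None, "model file not found"

pipeline = Pipeline(model, "tokenize",
                    Pipeline.DEFAULT, Pipeline.DEFAULT, "conllu")
conllu = pipeline.process("The memo presents details about the lineup management.")
print(conllu)  # one token per line: ID, FORM, LEMMA, UPOS, ..., HEAD, DEPREL
```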
Cross-Lingual Tree-based Models To employ UD parsing in cross-lingual learning, there should be a training algorithm that is capable of utilizing similarities of UD parse trees in different languages. Kernel methods such as SVM use a similarity function, which is called kernel function, to assign a similarity score to pairs of data samples. A kernel function $K$ over an object space $X$ is symmetric, positive semi-definite function $K: X \times X \rightarrow [0,\infty )$ that assigns a similarity score to two instances of $X$, where $K(x,y)=\phi (x)\cdot \phi (y)=\sum {\phi _{i}(x)\phi _{i}(y)}$. Here, $\phi (x)$ is a mapping function from the data object in $X$ to the high-dimensional feature space. Using the kernel function, it is not necessary to extract all features one by one and then multiply the feature vectors. Instead, kernel functions compute the final value directly based on the similarity of data examples. Tree kernels are the most popular kernels for many natural language processing tasks BIBREF13, BIBREF14. Tree Kernels compute the number of common substructures between two trees $T_1$ and $T_2$ without explicitly considering the whole fragment space BIBREF15. Suppose the set $\mathcal {F}=\lbrace f_1,f_2, \dots , f_{|\mathcal {F}|} \rbrace $ be the tree fragment space and $\mathcal {X}_i(n)$ be an indicator function that is 1 if the $f_i$ rooted at node $n$ and equals to 0, otherwise. Now, tree kernel over $T_1$ and $T_2$ is defined as below BIBREF15 : where $N_{T_1}$ and $N_{T_2}$ are the set of nodes of $T_1$ and $T_2$, respectively and which shows the number of common fragments rooted in $n_1$ and $n_2$ nodes. Different tree kernels vary in their definition of $\Delta $ function and fragment type. There are three important characterizations of fragment type BIBREF16 : SubTree, SubSet Tree and Partial Tree. A SubTree is defined by taking a node of a tree along with all its descendants. SubSet Tree is more general and does not necessarily contain all of the descendants. Instead, it must be generated by utilizing the same grammatical rule set of the original trees. A Partial Tree is more general and relaxes SubSet Tree's constraints. Some popular tree kernels are SubSet Tree Kernel (SST), Partial Tree Kernel (PTK) BIBREF17 and Smoothing Partial Tree Kernel (SPTK) BIBREF15. In the next section, we employ the tree kernels along with UD parse trees for solving cross-lingual tasks. Cross-Lingual Tree-based Models ::: Cross-Lingual Paraphrase Identification Paraphrase Identification (PI) is the task of determining whether two sentences are paraphrase or not. It is considered a binary classification task. The best mono-lingual methods often achieve about 85% accuracy over this corpus BIBREF14, BIBREF18. Filice et al. BIBREF14 extended the tree kernels described in the previous section to operate on text pairs. The underlying idea is that this task is characterized by several syntactic/semantic patterns that a kernel machine can automatically capture from the training material. We can assess a text pair as a paraphrase if it shows a valid transformation rule that we observed in the training data. The following example can clarify this concept. A simple paraphrase rewriting rule is the active-passive transformation, such as in “Federer beat Nadal” and “Nadal was defeated by Federer”. The same transformation can be observed in other paraphrases, such as in “Mark studied biology” and “Biology was learned by Mark”. 
Although these two pairs of paraphrases have completely different topics, they have a very similar syntactic structure. Tree kernel combinations can capture this inter-pair similarity and allow a learning algorithm such as SVM to learn the syntactic-semantic patterns characterizing valid paraphrases. Given a tree kernel $TK$ and text pairs $p_i = (i_1, i_2)$, the best tree kernel combination for the paraphrase identification task described in BIBREF14 is the following : SMTK ( pa, pb ) = softmax ( TK(a1,b1)TK(a2, b2), TK(a1,b2)TK(a2,b1) ) where softmax$(x_1,x_2)= \frac{1}{m} \log \left(e^{m x_1} + e^{m x_2}\right)$ is a simple function approximating the max operator, which cannot be directly used in kernel formulations, as it can create non valid kernel functions. In this kernel combination the two different alignments between the trees of the two pairs are tried and the best alignment is chosen. This allows to exploit the inherent symmetry of the Paraphrase Identification task (i.e., if $a$ is a paraphrase of $b$, it also implies that $b$ is a paraphrase of $a$). When we adopt the universal dependencies, different languages have a common formalism to represent text syntax, and tree kernels, that mostly operate at a syntactical level, can still provide reliable similarity estimations, i.e., $SM_{TK}(p_a, p_b)$ can work even if $p_a$ and $p_b$ have different languages. This allows operating in a cross-lingual setting. For instance, we can use a model trained on a high-resource language for classifying textual data of a poor-resource language. In addition to the syntactic similarity evaluation, the PTK and SPTK which are used in the $SM_{TK}$ formulation also perform a lexical matching among the words of the trees to be compared. Cross-Lingual Tree-based Models ::: Cross-Lingual Semantic Relation Extraction Relation Extraction (RE) is defined as the task of identifying semantic relations between entities in a text. The goal is to determine whether there is a semantic relation between two given entities in a text, and also to specify the type of relationship if present. RE is an important part of Information Extraction BIBREF19. Relation extraction methods often focus on the Shortest Dependency Path (SDP) between entities BIBREF20. However, there are some crucial differences between UD annotation principles and others parse formalisms that causes us to reconsider SDP of UD trees. Considering the sentence : “The most common $[$audits$]_{e1}$ were about $[$waste$]_{e2}$ and recycling", there is a Message-Topic relation between $e1$ and $e2$. The most informative words of the sentence for the relation are “were" and “about" ; while the other words of the sentence can be ignored and the same relation is still realized. It is a crucial challenge of relation extraction methods that important information may appear at any part of the sentence. Most previous works assume that the words lying in the window surrounding entities are enough to extract the relation governing entities BIBREF21, BIBREF22. However, words of a sentence are often reordered when the sentence is translated into other languages. Therefore, using words in the window surrounding entities may result in an accurate model for mono-lingual experiments, but not necessarily for cross-lingual ones. Regarding UD parsing, there are several significant differences between universal annotation schema and other schemas for dependency parsing. Two main differences are related to prepositions and copula verbs. 
According to the UD annotation guidelines, prepositions are attached to the head of a nominal, and copula verbs are attached to the head of a clause. However in other schemas, prepositions are often the root of the nominal, and the clause is attached to the copula. Figure FIGREF12 shows the parse tree of the example : “The most common $[$audits$]_{e1}$ were about $[$waste$]_{e2}$ and recycling". The tree is produced by the ARK parser, which does not follow universal schema. As mentioned before, “were" and “about" lie on the SDP between $e1$ and $e2$. However, considering the UD parse tree depicted in Figure FIGREF12, there is no word in the SDP ; while both “were" and “about" are attached to $e2$. As a result, we propose that the words which are dependent on the entities be considered to be the informative words in addition to the SDP's words. We use these words for making a cross-lingual model. Kernel functions have several interesting characteristics. The combination of kernel functions in a linear or polynomial way results in a valid kernel function BIBREF23. Composite kernel functions are built on individual kernels ; each of them captures part of the features of a data object. Tree kernels capture the data's syntactic structure, while a word sequence kernel considers the words of a sequence in a particular order. To define a cross-lingual kernel, we have adopted the composite kernel used by the Nguyen et al. BIBREF16 : where $K_{P-e}$ is a polynomial kernel. Its base kernel is an entity kernel ($K_E$), which is applied to an entity-related feature vector consisting of (named) entity type, mention type, headword, and POS tag. $K_{SST}$ is the Sub-Set Tree (SST) kernel, which is applied to the Path-Enclosed Tree (PET) of the constituency tree structure. PET is the smallest common subtree including the two entities BIBREF24, BIBREF25. $K_{PT}$ is the Partial Tree kernel BIBREF17, which is applied to the dependency-based tree structures. Parameter $\alpha $ weighs the kernels. To incorporate the most informative words of the sentence into the model, the feature vector $V_o$ is defined similarly to the work of Hashimoto et al. BIBREF21. They proposed concatenating these vectors to make the $V_o$ : the vector representing $e1$, the vector representing $e2$, the average of vectors representing words between two entities, the average of vectors representing words in a window before $e1$, and the average of vectors representing words in a window after $e2$. Since $V_o$ is defined based on the position of words in the sentence and thus is not necessary a cross-lingual consistent feature vector, we propose to define feature vector $V_{ud}$ by concatenating these vectors : the vector representing $e1$, the vector representing $e2$, the average of vectors representing words in the shortest path between two entities (instead of words between $e1$ and $e2$), the average of vectors representing words dependent to $e1$ (instead of words before $e1$), and the average of vectors representing words dependent to $e2$ (instead of words after $e2$). $V_{ud}$ is cross-lingually consistent provided that the words are picked up from UD parse trees and represented by multi-lingual embeddings. Based on the $CK$ defined in formula DISPLAY_FORM13 and the feature vectors $V_o$ and $V_{ud}$, the following composite kernels are proposed : where $K_{P-o}$ is polynomial kernel applied on a feature vector $V_o$. where $K_{P-ud}$ is polynomial kernel applied on a feature vector $V_{ud}$. 
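To illustrate how the cross-lingually consistent vector $V_{ud}$ described above can be assembled from a UD parse, here is a minimal sketch. The token dictionary format, the embedding lookup and the helper names are illustrative assumptions, not part of the paper.

```python
import numpy as np

DIM = 300  # embedding dimension (illustrative)

def avg(vectors):
    """Average a list of embedding vectors; zero vector if the list is empty."""
    return np.mean(vectors, axis=0) if vectors else np.zeros(DIM)

def build_v_ud(tokens, sdp_ids, e1_id, e2_id, emb):
    """Assemble V_ud from a UD parse.

    tokens : list of dicts with 'id', 'form', 'head' (1-based UD ids)
    sdp_ids: ids of the tokens on the shortest dependency path between e1 and e2
    emb    : dict mapping (multi-lingual) word forms to embedding vectors
    """
    def vec(tid):
        return emb.get(tokens[tid - 1]["form"], np.zeros(DIM))

    def dependents(head_id):
        return [t["id"] for t in tokens if t["head"] == head_id]

    return np.concatenate([
        vec(e1_id),                                # entity 1
        vec(e2_id),                                # entity 2
        avg([vec(i) for i in sdp_ids]),            # words on the SDP
        avg([vec(i) for i in dependents(e1_id)]),  # words dependent on e1
        avg([vec(i) for i in dependents(e2_id)]),  # words dependent on e2
    ])
```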
Constituency parsing of a sentence in a language depends on the syntactic rules governing the position of words. In general, constituency parse trees of a sentence in different languages are different. So, the constituency tree should not be involved in the cross-lingual model. Here, $CK_2$ is our proposed kernel, which is used for CL-RE. However, $CK_1$ and $CK_3$ can also be used for cross-lingual experiments subject to the similarity of syntactic parsing of the source and target languages. SST kernel works only on the constituency trees and not on the dependency trees BIBREF17. Therefore, for evaluating the similarity of dependency trees, PT kernel is used. The PT kernel cannot process labels on the edges ; so dependency trees are converted to the Lexical Centered Tree (LCT) format BIBREF15 and then PT kernel is applied on the transformed trees. In LCT format, the lexical is kept at the center and the other information related to that lexical, such as POS tag and grammatical relation, is then added as its children. MultiWord Expression (MWE) is a lexeme made up a sequence of two or more lexemes as each lexeme has its own meaning, but the meaning of the whole expression cannot (or at least can only partially) be computed from the meaning of its parts. MWE displays lexical, syntactic, semantic, pragmatic and/or statistical idiosyncrasies BIBREF26. The nature of MWE leads us to deal with the whole lexemes as a word. Fortunately, MWE can be identified from the parse tree. There are three types of dependency relations for MWE in UD parsing : flat, fixed, and compound. According to UD guidelines, the flat relation is used for exocentric (headless) semi-fixed MWEs like names (Walter Burley Griffin) and dates (20 November). The fixed relation applies to completely fixed grammaticized (function word-like) MWE (like instead of, such as), whereas compound applies to endocentric (headed) MWE (like apple pie). To produce feature vector $V_{ud}$, it is better to treat MWE as single words, especially MWE with fixed relations between parts because considering each part of the MWE separately and averaging their embedding vectors may result in a meaningless vector. This point matters when the words of low-resource languages are first translated into other languages and then presented by an embedding of that language. Therefore, the procedure of producing feature vector $V_{ud}$ should be modified with a simple heuristic : every node of the UD tree within the shortest path between two entities or dependent to $e1$ or $e2$ which have a child node with fixed dependency type is considered with its child as one word. If the child has also a child with a fixed dependency, all of them are considered as one word. For example, Figure FIGREF17 shows the UD tree of a Farsi sentence which is the translation of the English sentence in Figure FIGREF12. Entities are distinguished from other nodes by putting a circle around them. The 5th and 6th nodes from the left make a multiword expression that means “about". Applying the above heuristic results in them both being considered as a single word and so the correct translation to another language is found. Some other examples of Farsi MWEs are “قبل از آن که/before", “در حالی که/while", “به درون/into", “به جز/except", and “بر روی/on". In French language there are also MWEs, such as “bien que/although", “en tant que/as", “tant de/so many", “afin de/in order to", “prés de/near". 
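A minimal sketch of the MWE heuristic described above follows: a token governing a (possibly nested) chain of fixed dependents is merged with those dependents into a single node before dictionary translation and embedding lookup. For simplicity the sketch collapses the whole tree rather than only the nodes on the shortest path or attached to the entities, and the token format is an illustrative assumption. Language-specific MWE relations, discussed next, can be folded in by extending the dependency-type check.

```python
def collapse_fixed(tokens, mwe_relations=("fixed",)):
    """Merge each token with its (possibly nested) MWE dependents so that
    fixed multiword expressions are treated as a single word.
    tokens: list of dicts with 'id', 'form', 'head', 'deprel' (1-based ids)."""
    children = {}
    for t in tokens:
        if t["deprel"] in mwe_relations:
            children.setdefault(t["head"], []).append(t)

    def expand(tok):
        # Collect the token and all of its MWE descendants, in surface order.
        parts = [tok]
        for child in children.get(tok["id"], []):
            parts.extend(expand(child))
        return sorted(parts, key=lambda p: p["id"])

    absorbed = {t["id"] for kids in children.values() for t in kids}
    merged = []
    for tok in tokens:
        if tok["id"] in absorbed:
            continue  # already merged into its head
        new_tok = dict(tok)
        new_tok["form"] = " ".join(p["form"] for p in expand(tok))
        merged.append(new_tok)
    return merged
```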
Apart from fixed, flat, and compound, there are grammatical relations which are language-specific and show MWE structures BIBREF27. If the target language has language-specific relations, the above heuristic should be applied to them. For example, the compound:lvc relation, which is defined for several languages including Farsi, represents the dependence of the noun part on the light verb part of compound verbs. An example of this relation was shown in Figure FIGREF5. The words “ارائه/presentation” and “می‌دهد/give” together mean “present”.
Experiments In this section, the experimental analysis of the proposed models is presented. We have implemented the cross-lingual variant of the kernel functions for the PI and RE tasks as described in Section SECREF3 and measured the accuracy of the models by testing them on the parallel datasets. The main advantage of the proposed method is that it needs no data of the test language, in the sense that the model trained using the training data of one language, e.g. English, is directly used on the other languages, e.g. Farsi, Arabic, etc. From this point of view, the proposed method can only be compared with those methods that use no data (neither labeled nor unlabeled) of the test language, no parallel corpus, and no machine translators between the training and test languages. One solution for cross-lingual tasks is to equip the highly accurate neural networks proposed for each task with pre-trained multi-lingual word embeddings, without any change in the architecture of the network. Therefore, we re-implemented some deep methods and compared the proposed approach with them for both PI and RE tasks.
Experiments ::: Paraphrase Identification For this task, we made a parallel test dataset, implemented the PT and SPT kernels, and compared the results with the two-channel CNN of Wang et al. BIBREF18.
Experiments ::: Paraphrase Identification ::: Construction of Parallel Dataset To prepare a multi-language corpus for PI, we employed an existing English corpus with its Arabic translation and created its Farsi counterpart. The Microsoft Research Paraphrase Corpus (MSRC) BIBREF28 is the corpus mostly used by researchers for the English PI task. It contains 4,076 and 1,725 pairs of sentences for training and testing, respectively. This data has been extracted from news sources on the web and has been annotated by humans as to whether each pair captures a paraphrase equivalence relationship. PI relates to the task of Semantic Textual Similarity (STS), in which the goal is to capture the degree of equivalence of meaning rather than making a binary decision. SemEval-2017 task 1 put the emphasis on multi-lingual STS BIBREF29. They selected 510 pairs from the test part of the MSRC corpus, which were translated into Arabic by native Arabic speakers. All data have been manually tagged with a number from 0 to 5 to show the degree of similarity. The Arabic part of the STS dataset of SemEval-2017 is parallel to some parts of the MSRC test corpus, so there is a parallel English-Arabic dataset. Because of the similarity between the PI and STS tasks, the STS dataset can also be used for the PI task, just by converting the scores to 0 or 1. So, the original binary labels of these pairs have been retrieved from the MSRC corpus. As a result, a corpus with 510 pairs of English sentences and their Arabic translations is ready for the PI task. In addition to the Arabic translation, we produced corresponding Farsi data by having the parallel English-Arabic dataset translated into Farsi by a native Farsi speaker.
In the experiments, MSRC corpus was divided as follows : 1) the training part of MSRC corpus for training ; 2) those data from test part of MSRC, which we don't have their Arabic or Farsi counterpart, for tuning hyper-parameters as development set ; and 3) 510 parallel English-Arabic-Farsi from the test part of MSRC for the test. Therefore, our training and test data have 4076 and 510 samples, respectively. Table TABREF21 shows the statistics of our data. Experiments ::: Paraphrase Identification ::: Tools and Setup The classifiers were trained with the C-SVM learning algorithm within KeLP BIBREF30, which is a kernel-based machine learning framework and implemented tree kernels. We employed PT and SPT kernel functions. For evaluating node similarity in SPTK function, we used the same method described in BIBREF14 : if $n_1$ and $n_2$ are two identical syntactic nodes, $\sigma (n_1,n_2)$ denoted the similarity of $n_1$ and $n_2$ and is equal to 1. If $n_1$ and $n_2$ are two lexical nodes with the same POS tag, their similarity is computed as the cosine similarity of the corresponding vectors in a wordspace. In all other cases $\sigma = 0$. English wordspace was generated by using word2vec tool. In the cross-lingual setup, we need a vocabulary to find the translation of lexical nodes and then compute their similarity in a wordspace. For English-Arabic experiments, we used Almaany dictionary to find the translation of Arabic words into English. For English-Farsi experiments, we used the Aryanpour dictionary to extract the English equivalent of Farsi words. To evaluate the performance of the classifiers we used Accuracy and F$_1$ as the previous works BIBREF31, BIBREF32, BIBREF18. For dependency parsing, UDPipe was used, which is a trainable pipeline for tokenization, tagging, lemmatization, and dependency parsing. We used version 2.4 of the UD pre-trained models of English, Arabic, and Farsi. To implement the CNN network of Wang et al. BIBREF18, we used the same word embedding they used. They set the size of the word vector dimension as d =300, and pre-trained the vectors with the word2vec toolkit on the English Gigaword (LDC2011T07). Hyper-parameters of the network are the same as their work. Experiments ::: Paraphrase Identification ::: Results We first examine the tree kernels in the mono-lingual and then in the cross-lingual learning. Experiments ::: Paraphrase Identification ::: Results ::: Evaluation of tree-based models in mono-lingual learning In the first experiment, we benchmark the UD-based models on the monolingual dataset. So, we employed the original split of MSRC corpus and trained models using PT and SPT kernels. These models essentially work based on the lexico-syntactic patterns observed in training sentences. Filice et al. BIBREF14 proposed several kernels including linear, graph and SPT kernels. They showed the best accuracy is obtained using the combination of them. However, we use only tree kernels in cross-lingual experiments, to measure how much we can rely on the similarities of UD parse trees in different languages. As Table TABREF29 shows, tree kernels including PTK and SPTK show comparable results according to the accuracy and F$_1$ measures. This means that PT and SPT kernels, which are trained by UD parse trees, make accurate models that can be used in solving the PI task. In the next experiment, we use these models to evaluate Arabic and Farsi test data. 
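For reference, the node similarity $\sigma$ used inside SPTK in these experiments can be sketched as follows; the node dictionary format and the `translate` hook (standing in for the Almaany or Aryanpour dictionary lookup of the cross-lingual setup) are illustrative assumptions.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def sigma(n1, n2, wordspace, translate=lambda w: w):
    """Node similarity for SPTK: 1 for identical syntactic nodes, cosine
    similarity of word vectors for lexical nodes sharing a POS tag, 0 otherwise.
    `translate` maps a test-language word to the training language, e.g. via a
    bilingual dictionary, in the cross-lingual setting."""
    if n1["type"] == "syntactic" and n2["type"] == "syntactic":
        return 1.0 if n1["label"] == n2["label"] else 0.0
    if (n1["type"] == "lexical" and n2["type"] == "lexical"
            and n1["pos"] == n2["pos"]):
        w1, w2 = n1["word"], translate(n2["word"])
        if w1 in wordspace and w2 in wordspace:
            return cosine(wordspace[w1], wordspace[w2])
    return 0.0
```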
Experiments ::: Paraphrase Identification ::: Results ::: Evaluation of tree-based models with UD in cross-lingual learning Now, we employ the parallel dataset for cross-lingual evaluation of the UD-based model trained on English data. A baseline for this task is majority voting, that is, what we get if we always predict the most frequent label of the training data. A better baseline for cross-lingual PI is to use some neural models and couple them with pre-induced multilingual embeddings. So, we re-ran the two-channel CNN model of Wang et al. BIBREF18 on our test data. The upper bound for the cross-lingual experiment is considered to be the accuracy of the model when it is evaluated on data in the same language as the training data, e.g. English. Table TABREF30 shows that using PTK, an accuracy of 61.6% is obtained for the English test data. It is 57.7% and 57.3% for Arabic and Farsi, respectively; while the accuracy of the majority baseline is 50.6%. The CNN model obtained similar accuracy but much lower F$_1$ scores. Comparing the results of Tables TABREF29 and TABREF30 reveals that the accuracy of both kernels drops significantly when they are tested on our small test data. The reason is that the distribution of the MSRC training data over positive and negative classes is significantly different from that of our test data. Specifically, 67.5% of MSRC's training data are positive, while 50.5% of our test data are positive.
Experiments ::: Paraphrase Identification ::: Results ::: Evaluation of tree-based models with parse formalisms other than UD In this experiment, we produced dependency parse trees of the Farsi data employing the Hazm parser, which is trained on a non-UD treebank. Table TABREF30 shows that in this case the accuracy of the models drops significantly. Taking a deeper look at the tree kernels, PTK does not use the similarity of words and works based on exact matching of them. So, in cross-lingual experiments, it considers only the similarity of trees. In this case, the accuracy on the Farsi test data is 50.6%, which is the same as the majority baseline. This experiment reveals that the trees of parallel sentences that are produced by UD parsers are significantly more similar than the trees generated by other formalisms.
Experiments ::: Relation Extraction In this section, we explain the experiments of cross-lingual RE and present the results. Specifically, we compared tree-based methods, including the combination of tree kernels and Tree-LSTM, with the deep methods CNN BIBREF33, Bi-LSTM BIBREF34 and RCNN BIBREF35.
Experiments ::: Relation Extraction ::: Construction of Parallel Dataset SemEval 2010 released a dataset for relation extraction in task 8 BIBREF36, which is used by many researchers. This dataset contains 8000 samples for training and 2717 samples for testing. It was annotated with 19 types of relations: 9 semantically different relationships (with two directions) and an undirected Other class. A brief description of these relation types is given in Table TABREF34. The SemEval-2010 dataset is in English. For cross-lingual experiments, the first 1000 samples of the test part were translated into Farsi and French. Two native Farsi and French speakers with high expertise in English were asked to translate the data.
Experiments ::: Relation Extraction ::: Tools and Setup Similar to the PI experiments, KeLP was used to implement the kernel combination. The strategy for dealing with multiple classes is “one versus others”.
For constituency parsing, Stanford CoreNLP was used that contains pre-trained models for English and French within the Stanford package. For parsing Farsi data, the University of Tehran’s constituency parser BIBREF37 was used. Parameter $\alpha $ of the formula DISPLAY_FORM14-DISPLAY_FORM16 is 0.23 as the previous works BIBREF16. To obtain bi-lingual word embeddings, the multiCluster method of Ammar et al. BIBREF38 was used and 512-dimensional vectors were trained for English, French, and Farsi. Experiments ::: Relation Extraction ::: Result We first examine the tree kernels in the mono-lingual and then in the cross-lingual learning. Experiments ::: Relation Extraction ::: Result ::: Evaluation of tree-based models in mono-lingual learning There is a huge amount of works on RE, which mainly utilizes neural networks. These methods use different features including lexical, grammatical, and semantic features such as POS, WordNet, and dependency parsing. Table TABREF39 shows the state-of-the-art neural models evaluated by SemEval 2010-task 8 test set (2717 samples). The best proposed method, $CK_1$, obtained 84.0% of F$_1$ which is comparable with the others. Experiments ::: Relation Extraction ::: Result ::: Evaluation of tree-based models with UD in cross-lingual learning Table TABREF40 shows accuracy of 84.2% F$_1$ score for $CK_1$ when tested on the first 1000 samples of English test data. The accuracy of this model for its Farsi and French counterparts is 53.4% and 61.2% respectively. This kernel employs sentence context, and so it didn't show exciting results in the cross-lingual experiment ; especially for Farsi data. This is because Farsi is one of the SOV languages, in contrast to English and French, which are SVO. This means verbs are usually at the end of the sentence in Farsi. When the sentence's verb is highly informative for the relation between two entities, it places outside the window surrounding two entities and so it doesn't contribute to the feature vector $V_o$. Table TABREF40 show the F$_1$ score of the models trained by $CK_2$ and $CK_3$. These kernels utilize the context words of the UD trees. Comparing three kernels, F$_1$ increased from 53.4% to 65.2% for Farsi, and to 67.5% for the French test data. The best result for Farsi came from kernel $CK_2$ ; whereas $CK_3$ performed better with the French data. Thus, it can be concluded that the constituency-based parse trees of English and French data have more similar sub-trees than English and Farsi. The reason partially relates to the common tool for English and French ; because Stanford CoreNLP has pre-trained models for both of these languages. Therefore, English and French models followed the same schema, while Farsi adopted different schema for constituency parsing. In addition to the composite kernels, we trained a Tree-LSTM model over the UD parse trees. Tree-LSTM doesn't process the syntactic features of the input sentence, rather it takes the tokens in order of the tree's node. However, to contribute the grammatical features, for each token its word embedding was concatenated to its dependency type embedding and its POS tag embedding. The resulting network obtained 80.0% of F$_1$ when tested by English test data. F$_1$ of this model is 52.0% for Farsi and 55.6% for French. Although the Tree-LSTM model obtained lower F$_1$ in comparison with the tree kernels, it still does better than deep baselines : we re-implemented the CNN model of Qin et al. BIBREF33, Att-BiLSTM of Zhou et al. BIBREF34, and RCNN of Lai et al. 
BIBREF35. All networks use bilingual word embeddings in the embedding layer. As Table TABREF40 shows, the best F$_1$ scores were obtained by RCNN, which utilizes a CNN over the LSTM layer. However, the results are significantly lower than those of the UD-based models, specifically in Farsi, because the word order of Farsi and English sentences is very different, as Farsi is SOV and English is SVO.
Experiments ::: Relation Extraction ::: Result ::: Effect of Multi-Word Expressions The last two rows of Table TABREF40 show the F$_1$ score of the model trained on the English training data using $CK_2$ and $CK_3$, in which MWEs were considered to be a single node within the dependency tree, as described at the end of Section SECREF10. The accuracy of $CK_2$ mainly increased for the Farsi data, because Farsi has many multi-word expressions such as compound verbs. Farsi has only about 250 simple verbs and all the other verbs are compound BIBREF43. Considering an MWE as a single node causes all the tokens which compose a verb to be treated as a single word, and so the true translation will be found when searching for that word in dictionaries. Figure FIGREF46 shows the F$_1$ scores of the best models for different semantic classes.
Discussion and Conclusion Taking a deeper look at the proposed method, most of the mis-classifications of the cross-lingual tree models are related to the following issues:
Structural Difference: The main reason for the errors of the classifiers is structural differences. Although UD tries to produce trees that are as similar as possible for parallel sentences, there are many language-specific dependency patterns that cannot be neglected.
Lexical Gap: Words mainly convey the meaning of the sentence. A lexical gap between source and target languages usually ruins the accuracy of cross-lingual models.
Confusion of different senses of a surface form: Words of different languages usually have multiple senses. Confusion of the different senses of words causes incorrect translation of words, because dictionaries translate word to word, but not word-sense to word-sense. On the other hand, Word Sense Disambiguation (WSD) is a difficult task and needs additional resources such as high-quality multi-lingual wordnets BIBREF44.
Incorrect translation of prepositions: Prepositions are very informative for the RE task. Hashimoto et al. presented the five most informative unigrams and trigrams for three types of relations of the SemEval 2010-task 8 dataset BIBREF21, which are shown in Table TABREF47. Wang et al. BIBREF42 also presented the most representative trigrams for different relations on the same dataset. Also, Lahbib et al. BIBREF45 presented the most common Arabic prepositions and showed that each one reflects some specific kinds of semantic relations. Confusion of senses for prepositions is a very common issue in word-to-word translation.
Phrasal verbs: Phrasal verbs, which have a metaphorical meaning, often cannot be translated word for word. For example, the Farsi verb “از دست دادن / to give from hand” means “lose”. When the most informative chunk of the sentence is the phrasal verb, the proposed method does not capture the true meaning.
In general, more lexical and structural similarities between the source and target languages increase the accuracy of UD-based transfer learning. As future work, we propose that the UD-based approach be studied for other cross-lingual learning tasks and other languages, along with different learning algorithms that are capable of dealing with parse trees.
Unanswerable
ffa4d4bfb226382ca4ecde65ecdc44a3d9e0ce81
ffa4d4bfb226382ca4ecde65ecdc44a3d9e0ce81_0
Q: What classification task was used to evaluate the cross-lingual adaptation method described in this work? Text: Introduction Universal Dependencies (UD) BIBREF0, BIBREF1, BIBREF2 is an ongoing project aiming to develop cross-lingually consistent treebanks for different languages. UD provided a framework for consistent annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages. The annotation schema relies on Universal Stanford Dependencies BIBREF3 and Google Universal POS tags BIBREF4. The general principle is to provide universal annotation ; meanwhile, each language can add language-specific relations to the universal pool when necessary. The main goal of UD project is to facilitate multi-lingual parser production and cross-lingual learningFOOTREF1. Cross-lingual learning is the task of gaining advantages from high-resource languages in terms of annotated data to build a model for low-resource languages. This paradigm of learning is now an invaluable tool for improving the performance of natural language processing in low-resource languages. Based on the universal annotations of the UD project, there are several works on cross-lingual tasks. Most of them focus on grammar-related tasks such as POS tagging BIBREF5 and dependency parsing BIBREF6, BIBREF7, BIBREF8. In this paper, we are going to study the effectiveness of UD in making cross-lingual models for more complex tasks such as semantic relation extraction and paraphrase identification. To the best of our knowledge, no work was done on the application of UD annotations in the mentioned tasks. Universal dependencies approach for cross-lingual learning is based on the fact that UD captures similarities as well as idiosyncrasies among typologically different languages. The important characteristic of UD annotations is that although the UD parse trees of parallel sentences in different languages may not be completely equivalent, they have many similar sub-trees, in the sense that at least core parts of trees are equal BIBREF9. In this paper, we study two cross-lingual tasks : semantic relation extraction and paraphrase identification. The former is the task of identifying semantic connections between entities in a sentence ; while the training and test data are in different languages. The latter is defined to determine whether two sentences are paraphrase or not ; while the training' pairs of sentences are in a different language from the test data. To employ similarities of UD trees of different languages to train cross-lingual models, we propose to use syntactic based methods which ideally can deal with parsing information of data. We found that tree kernels allow to estimate the similarities among texts directly from their parse trees. They are known to operate on dependency parse trees and automatically generate robust prediction models based on the similarities of them. We have made parallel dataset for each task and presented the cross-lingual variant of kernel functions for them. Evaluation by the parallel test data reveals that the accuracy of models trained by a language and tested on the other languages get close to mono-lingual when the syntactic parsers are trained with UD corpora. This suggests that syntactic patterns trained on the UD trees can be invariant with respect to very different languages. 
To compare the proposed approach with the cross-lingual variant of neural models, we employed several state-of-the-art deep networks and equipped them with pre-trained bi-lingual word embeddings. English training data are fed into the networks, which create a mapping between the input and output values. Then test set is given to the trained network. Results show that the tree-based models outperform end-to-end neural models in cross-lingual experiments. Moreover, we employed Tree-LSTM network BIBREF10 with UD parse trees, which is capable to produce semantic representation from tree-ordered input data. Tree-LSTM doesn't directly deal with syntactic features of the input sentence, rather it processes the input tokens in order of placing in a tree, e.g. from bottom to up or vice versa. Experiments show superiority of Tree-LSTM trained by UD trees over sequential models like LSTM in cross-lingual evaluations. This paper is organized as follows : Section SECREF2 describes how UD approach allows to capture similarities and differences across diverse languages. Section SECREF3 presents tree-based models for cross-lingual learning of PI and RE tasks. Section SECREF4 presents an empirical study on cross-lingual learning using UD. Finally Section SECREF5 gives the analysis and conclusion remarks. Transfer Learning via Universal Dependencies The Universal Dependencies project aims to produce consistent dependency treebanks and parsers for many languages BIBREF0, BIBREF1, BIBREF2. The most important achievements of the project are the cross-lingual annotation guidelines and sets of universal POS and the grammatical relation tags. Consequentially many treebanks have been developed for different languages. The general rule of UD project is to provide a universal tag set ; however each language can add language-specific relations to the universal pool or omit some tags. To capture similarities and differences across languages, UD uses a representation consisting of three components : (i) dependency relations between lexical words ; (ii) function words modifying lexical words ; and (iii) morphological features associated with words BIBREF9. The underlying principle of the syntactic annotation schema of the UD project is that dependencies hold between content words, while function words attach to the content word that they further specify BIBREF3. There is an important difference between UD schema and Stanford Typed Dependencies (STD) BIBREF11 as the STD schema chooses function words as heads : prepositions in prepositional phrases, and copula verbs that have a prepositional phrase as their complement. Although the UD parse graphs of a sentence in different languages may not be completely equal, they have similar core parts. Figure FIGREF5 shows the UD graph of English sentence “The memo presents details about the lineup management" and its translation into French and Farsi. Both the similarities and differences of UD graphs are demonstrated in that figure. Most of the nodes and edges are similar. Farsi has the language-specific relation “compound :lvc", which relates the noun part of the compound verb to the verbal part as depicted in Figure FIGREF5. So far, UD treebanks have been developed for over 70 languages and all of them are freely available for download. UD project released a pipeline, called UDPipe, which is used to train models for UD parsing using the UD treebanks BIBREF12. UD parsing and similarity of UD structures in different languages provide facilities to train multi-lingual models. 
In what follows, we focus on two tasks, paraphrase identification and semantic relation extraction, and present cross-learning models for them. Cross-Lingual Tree-based Models To employ UD parsing in cross-lingual learning, there should be a training algorithm that is capable of utilizing similarities of UD parse trees in different languages. Kernel methods such as SVM use a similarity function, which is called kernel function, to assign a similarity score to pairs of data samples. A kernel function $K$ over an object space $X$ is symmetric, positive semi-definite function $K: X \times X \rightarrow [0,\infty )$ that assigns a similarity score to two instances of $X$, where $K(x,y)=\phi (x)\cdot \phi (y)=\sum {\phi _{i}(x)\phi _{i}(y)}$. Here, $\phi (x)$ is a mapping function from the data object in $X$ to the high-dimensional feature space. Using the kernel function, it is not necessary to extract all features one by one and then multiply the feature vectors. Instead, kernel functions compute the final value directly based on the similarity of data examples. Tree kernels are the most popular kernels for many natural language processing tasks BIBREF13, BIBREF14. Tree Kernels compute the number of common substructures between two trees $T_1$ and $T_2$ without explicitly considering the whole fragment space BIBREF15. Suppose the set $\mathcal {F}=\lbrace f_1,f_2, \dots , f_{|\mathcal {F}|} \rbrace $ be the tree fragment space and $\mathcal {X}_i(n)$ be an indicator function that is 1 if the $f_i$ rooted at node $n$ and equals to 0, otherwise. Now, tree kernel over $T_1$ and $T_2$ is defined as below BIBREF15 : where $N_{T_1}$ and $N_{T_2}$ are the set of nodes of $T_1$ and $T_2$, respectively and which shows the number of common fragments rooted in $n_1$ and $n_2$ nodes. Different tree kernels vary in their definition of $\Delta $ function and fragment type. There are three important characterizations of fragment type BIBREF16 : SubTree, SubSet Tree and Partial Tree. A SubTree is defined by taking a node of a tree along with all its descendants. SubSet Tree is more general and does not necessarily contain all of the descendants. Instead, it must be generated by utilizing the same grammatical rule set of the original trees. A Partial Tree is more general and relaxes SubSet Tree's constraints. Some popular tree kernels are SubSet Tree Kernel (SST), Partial Tree Kernel (PTK) BIBREF17 and Smoothing Partial Tree Kernel (SPTK) BIBREF15. In the next section, we employ the tree kernels along with UD parse trees for solving cross-lingual tasks. Cross-Lingual Tree-based Models ::: Cross-Lingual Paraphrase Identification Paraphrase Identification (PI) is the task of determining whether two sentences are paraphrase or not. It is considered a binary classification task. The best mono-lingual methods often achieve about 85% accuracy over this corpus BIBREF14, BIBREF18. Filice et al. BIBREF14 extended the tree kernels described in the previous section to operate on text pairs. The underlying idea is that this task is characterized by several syntactic/semantic patterns that a kernel machine can automatically capture from the training material. We can assess a text pair as a paraphrase if it shows a valid transformation rule that we observed in the training data. The following example can clarify this concept. A simple paraphrase rewriting rule is the active-passive transformation, such as in “Federer beat Nadal” and “Nadal was defeated by Federer”. 
The same transformation can be observed in other paraphrases, such as in “Mark studied biology” and “Biology was learned by Mark”. Although these two pairs of paraphrases have completely different topics, they have a very similar syntactic structure. Tree kernel combinations can capture this inter-pair similarity and allow a learning algorithm such as SVM to learn the syntactic-semantic patterns characterizing valid paraphrases. Given a tree kernel $TK$ and text pairs $p_i = (i_1, i_2)$, the best tree kernel combination for the paraphrase identification task described in BIBREF14 is the following : SMTK ( pa, pb ) = softmax ( TK(a1,b1)TK(a2, b2), TK(a1,b2)TK(a2,b1) ) where softmax$(x_1,x_2)= \frac{1}{m} \log \left(e^{m x_1} + e^{m x_2}\right)$ is a simple function approximating the max operator, which cannot be directly used in kernel formulations, as it can create non valid kernel functions. In this kernel combination the two different alignments between the trees of the two pairs are tried and the best alignment is chosen. This allows to exploit the inherent symmetry of the Paraphrase Identification task (i.e., if $a$ is a paraphrase of $b$, it also implies that $b$ is a paraphrase of $a$). When we adopt the universal dependencies, different languages have a common formalism to represent text syntax, and tree kernels, that mostly operate at a syntactical level, can still provide reliable similarity estimations, i.e., $SM_{TK}(p_a, p_b)$ can work even if $p_a$ and $p_b$ have different languages. This allows operating in a cross-lingual setting. For instance, we can use a model trained on a high-resource language for classifying textual data of a poor-resource language. In addition to the syntactic similarity evaluation, the PTK and SPTK which are used in the $SM_{TK}$ formulation also perform a lexical matching among the words of the trees to be compared. Cross-Lingual Tree-based Models ::: Cross-Lingual Semantic Relation Extraction Relation Extraction (RE) is defined as the task of identifying semantic relations between entities in a text. The goal is to determine whether there is a semantic relation between two given entities in a text, and also to specify the type of relationship if present. RE is an important part of Information Extraction BIBREF19. Relation extraction methods often focus on the Shortest Dependency Path (SDP) between entities BIBREF20. However, there are some crucial differences between UD annotation principles and others parse formalisms that causes us to reconsider SDP of UD trees. Considering the sentence : “The most common $[$audits$]_{e1}$ were about $[$waste$]_{e2}$ and recycling", there is a Message-Topic relation between $e1$ and $e2$. The most informative words of the sentence for the relation are “were" and “about" ; while the other words of the sentence can be ignored and the same relation is still realized. It is a crucial challenge of relation extraction methods that important information may appear at any part of the sentence. Most previous works assume that the words lying in the window surrounding entities are enough to extract the relation governing entities BIBREF21, BIBREF22. However, words of a sentence are often reordered when the sentence is translated into other languages. Therefore, using words in the window surrounding entities may result in an accurate model for mono-lingual experiments, but not necessarily for cross-lingual ones. 
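As a concrete illustration of the $SM_{TK}$ combination above, the sketch below plugs a toy tree kernel (counting node pairs whose label and child-label multiset coincide) into the two possible alignments of a pair of text pairs. The toy kernel and the softmax sharpness $m$ are only illustrative stand-ins for the PTK/SPTK kernels actually used; the point is that nothing in the combination depends on the language of the trees.

```python
import numpy as np
from itertools import product

def toy_tree_kernel(t1, t2):
    """Toy structural similarity: count node pairs whose label and multiset of
    child labels coincide. Trees are nested pairs (label, [children ...]).
    A stand-in for the PTK/SPTK kernels described above."""
    def fragments(t):
        label, children = t
        out = [(label, tuple(sorted(c[0] for c in children)))]
        for c in children:
            out.extend(fragments(c))
        return out
    return float(sum(1 for a, b in product(fragments(t1), fragments(t2)) if a == b))

def soft_max(x1, x2, m=100.0):
    """Smooth approximation of max, needed for a valid kernel combination."""
    return float(np.logaddexp(m * x1, m * x2) / m)

def sm_tk(pair_a, pair_b, tk=toy_tree_kernel):
    """SM_TK over two sentence pairs p_a = (a1, a2) and p_b = (b1, b2):
    score both alignments between the pairs and softly keep the better one."""
    (a1, a2), (b1, b2) = pair_a, pair_b
    return soft_max(tk(a1, b1) * tk(a2, b2), tk(a1, b2) * tk(a2, b1))
```

In a kernel machine such as the C-SVM used later, sm_tk then serves directly as the kernel between two labelled sentence pairs, regardless of the languages their trees come from.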
Regarding UD parsing, there are several significant differences between universal annotation schema and other schemas for dependency parsing. Two main differences are related to prepositions and copula verbs. According to the UD annotation guidelines, prepositions are attached to the head of a nominal, and copula verbs are attached to the head of a clause. However in other schemas, prepositions are often the root of the nominal, and the clause is attached to the copula. Figure FIGREF12 shows the parse tree of the example : “The most common $[$audits$]_{e1}$ were about $[$waste$]_{e2}$ and recycling". The tree is produced by the ARK parser, which does not follow universal schema. As mentioned before, “were" and “about" lie on the SDP between $e1$ and $e2$. However, considering the UD parse tree depicted in Figure FIGREF12, there is no word in the SDP ; while both “were" and “about" are attached to $e2$. As a result, we propose that the words which are dependent on the entities be considered to be the informative words in addition to the SDP's words. We use these words for making a cross-lingual model. Kernel functions have several interesting characteristics. The combination of kernel functions in a linear or polynomial way results in a valid kernel function BIBREF23. Composite kernel functions are built on individual kernels ; each of them captures part of the features of a data object. Tree kernels capture the data's syntactic structure, while a word sequence kernel considers the words of a sequence in a particular order. To define a cross-lingual kernel, we have adopted the composite kernel used by the Nguyen et al. BIBREF16 : where $K_{P-e}$ is a polynomial kernel. Its base kernel is an entity kernel ($K_E$), which is applied to an entity-related feature vector consisting of (named) entity type, mention type, headword, and POS tag. $K_{SST}$ is the Sub-Set Tree (SST) kernel, which is applied to the Path-Enclosed Tree (PET) of the constituency tree structure. PET is the smallest common subtree including the two entities BIBREF24, BIBREF25. $K_{PT}$ is the Partial Tree kernel BIBREF17, which is applied to the dependency-based tree structures. Parameter $\alpha $ weighs the kernels. To incorporate the most informative words of the sentence into the model, the feature vector $V_o$ is defined similarly to the work of Hashimoto et al. BIBREF21. They proposed concatenating these vectors to make the $V_o$ : the vector representing $e1$, the vector representing $e2$, the average of vectors representing words between two entities, the average of vectors representing words in a window before $e1$, and the average of vectors representing words in a window after $e2$. Since $V_o$ is defined based on the position of words in the sentence and thus is not necessary a cross-lingual consistent feature vector, we propose to define feature vector $V_{ud}$ by concatenating these vectors : the vector representing $e1$, the vector representing $e2$, the average of vectors representing words in the shortest path between two entities (instead of words between $e1$ and $e2$), the average of vectors representing words dependent to $e1$ (instead of words before $e1$), and the average of vectors representing words dependent to $e2$ (instead of words after $e2$). $V_{ud}$ is cross-lingually consistent provided that the words are picked up from UD parse trees and represented by multi-lingual embeddings. 
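A minimal sketch of how the proposed feature vector $V_{ud}$ could be assembled from a UD parse is given below. It assumes the sentence is available as token-id $\rightarrow$ head-id and token-id $\rightarrow$ form mappings and that embed() looks up a multilingual word embedding; the helper names and the BFS-based path extraction are illustrative rather than the exact implementation.

```python
import numpy as np
from collections import deque

def shortest_dependency_path(heads, src, dst):
    """heads: dict token_id -> head_id (0 for the root). Undirected BFS."""
    adj = {t: set() for t in heads}
    for t, h in heads.items():
        if h in adj:
            adj[t].add(h)
            adj[h].add(t)
    prev, queue = {src: None}, deque([src])
    while queue:
        cur = queue.popleft()
        if cur == dst:
            break
        for nxt in adj[cur]:
            if nxt not in prev:
                prev[nxt] = cur
                queue.append(nxt)
    path, cur = [], dst
    while cur is not None:
        path.append(cur)
        cur = prev.get(cur)
    return list(reversed(path))

def avg_embedding(token_ids, forms, embed, dim):
    vecs = [embed(forms[t]) for t in token_ids]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def build_v_ud(heads, forms, e1, e2, embed, dim):
    """V_ud = [emb(e1); emb(e2); avg(SDP words); avg(deps of e1); avg(deps of e2)]."""
    sdp = [t for t in shortest_dependency_path(heads, e1, e2) if t not in (e1, e2)]
    deps_e1 = [t for t, h in heads.items() if h == e1]
    deps_e2 = [t for t, h in heads.items() if h == e2]
    return np.concatenate([embed(forms[e1]), embed(forms[e2]),
                           avg_embedding(sdp, forms, embed, dim),
                           avg_embedding(deps_e1, forms, embed, dim),
                           avg_embedding(deps_e2, forms, embed, dim)])
```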
Based on the $CK$ defined in formula DISPLAY_FORM13 and the feature vectors $V_o$ and $V_{ud}$, the following composite kernels are proposed : where $K_{P-o}$ is polynomial kernel applied on a feature vector $V_o$. where $K_{P-ud}$ is polynomial kernel applied on a feature vector $V_{ud}$. Constituency parsing of a sentence in a language depends on the syntactic rules governing the position of words. In general, constituency parse trees of a sentence in different languages are different. So, the constituency tree should not be involved in the cross-lingual model. Here, $CK_2$ is our proposed kernel, which is used for CL-RE. However, $CK_1$ and $CK_3$ can also be used for cross-lingual experiments subject to the similarity of syntactic parsing of the source and target languages. SST kernel works only on the constituency trees and not on the dependency trees BIBREF17. Therefore, for evaluating the similarity of dependency trees, PT kernel is used. The PT kernel cannot process labels on the edges ; so dependency trees are converted to the Lexical Centered Tree (LCT) format BIBREF15 and then PT kernel is applied on the transformed trees. In LCT format, the lexical is kept at the center and the other information related to that lexical, such as POS tag and grammatical relation, is then added as its children. MultiWord Expression (MWE) is a lexeme made up a sequence of two or more lexemes as each lexeme has its own meaning, but the meaning of the whole expression cannot (or at least can only partially) be computed from the meaning of its parts. MWE displays lexical, syntactic, semantic, pragmatic and/or statistical idiosyncrasies BIBREF26. The nature of MWE leads us to deal with the whole lexemes as a word. Fortunately, MWE can be identified from the parse tree. There are three types of dependency relations for MWE in UD parsing : flat, fixed, and compound. According to UD guidelines, the flat relation is used for exocentric (headless) semi-fixed MWEs like names (Walter Burley Griffin) and dates (20 November). The fixed relation applies to completely fixed grammaticized (function word-like) MWE (like instead of, such as), whereas compound applies to endocentric (headed) MWE (like apple pie). To produce feature vector $V_{ud}$, it is better to treat MWE as single words, especially MWE with fixed relations between parts because considering each part of the MWE separately and averaging their embedding vectors may result in a meaningless vector. This point matters when the words of low-resource languages are first translated into other languages and then presented by an embedding of that language. Therefore, the procedure of producing feature vector $V_{ud}$ should be modified with a simple heuristic : every node of the UD tree within the shortest path between two entities or dependent to $e1$ or $e2$ which have a child node with fixed dependency type is considered with its child as one word. If the child has also a child with a fixed dependency, all of them are considered as one word. For example, Figure FIGREF17 shows the UD tree of a Farsi sentence which is the translation of the English sentence in Figure FIGREF12. Entities are distinguished from other nodes by putting a circle around them. The 5th and 6th nodes from the left make a multiword expression that means “about". Applying the above heuristic results in them both being considered as a single word and so the correct translation to another language is found. 
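The heuristic above can be sketched as a small pre-processing step that folds every chain of fixed dependents into its head token before dictionary lookup or embedding averaging. The token representation below is assumed, and surface order inside the expression is approximated by token ids; language-specific relations such as compound:lvc could be added to the deprel test in the same way.

```python
def collapse_fixed_mwes(tokens):
    """tokens: list of dicts with 'id', 'head', 'deprel' and 'form' from a UD
    parse. Returns a map head-token-id -> surface form in which every chain of
    `fixed` dependents is folded into its head, so the whole multiword
    expression is translated or embedded as a single unit."""
    by_id = {t["id"]: t for t in tokens}
    fixed_children = {}
    for t in tokens:
        if t["deprel"] == "fixed":
            fixed_children.setdefault(t["head"], []).append(t["id"])

    def expand(tid):
        parts = [by_id[tid]["form"]]
        for child in sorted(fixed_children.get(tid, [])):  # ids approximate surface order
            parts += expand(child)                          # follow fixed chains recursively
        return parts

    fixed_ids = {t["id"] for t in tokens if t["deprel"] == "fixed"}
    return {t["id"]: " ".join(expand(t["id"]))
            for t in tokens if t["id"] not in fixed_ids}
```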
Some other examples of Farsi MWEs are “قبل از آن که/before", “در حالی که/while", “به درون/into", “به جز/except", and “بر روی/on". In French language there are also MWEs, such as “bien que/although", “en tant que/as", “tant de/so many", “afin de/in order to", “prés de/near". Apart from fixed, flat, and compound, there are grammatical relations which are language-specific and show MWE structures BIBREF27. If the target language has language-specific relations, the above heuristic should be applied to them. For example, compound :lvc relation, which is defined for several languages including Farsi, represents the dependence from the noun part to the light verb part of compound verbs. An example of this relation was shown in Figure FIGREF5. The words “ارائه/presentation" and “می‌دهد/give" together mean “present". Experiments In this section, the experimental analysis of the proposed models is presented. We have implemented the cross-lingual variant of kernel functions for PI and RE tasks as described in section SECREF3 and measured the accuracy of models by testing them on the parallel data set. The main advantage of the proposed method is that it needs no data of the test language, in the sense that the model trained using the training data of a language, e.g. English, is directly used in the other languages, e.g. Farsi, Arabic, etc. From this point of view, the proposed method can only be compared with those methods that use no data (neither labeled nor un-labeled) of the test language or parallel corpus or machine translators between the training and test languages. One solution for cross-lingual tasks is to equip the high accurate neural networks proposed for each task with pre-trained multi-lingual word embeddings, without any change in the architecture of the network. Therefore, we re-implemented some deep methods and compared the proposed approach with them for both PI and RE tasks. Experiments ::: Paraphrase Identification For this task, we made a parallel test dataset and implemented PT and SPT kernels and compared the results with two-channel CNN of Wang et al. BIBREF18. Experiments ::: Paraphrase Identification ::: Construction of Parallel Dataset To prepare a multi-language corpus for PI, we employed an existing English corpus with its Arabic translation and made Farsi correspondence. Microsoft Research Paraphrase Corpus (MSRC) BIBREF28 mostly used by the researches for English PI task. It contains 4,076 and 1,725 pairs of sentences for the training and test, respectively. This data has been extracted from news sources on the web, and has been annotated by humans whether each pair captures a paraphrase equivalence relationship. PI relates to the task of Semantic Textual Similarity (STS), in which the goal is to capture the degree of equivalence of meaning rather than making a binary decision. SemEval-2017 task 1 put the emphasis on multi-lingual STS BIBREF29. They selected 510 pairs from the test part of the MSRC corpus, and translated them into Arabic by Arabic native speakers. All data have been manually tagged with a number from 0 to 5 to show the degree of similarity. The Arabic part of the STS dataset of SemEval-2017 is parallel to some parts of the MSRC test corpus. So there is a parallel English-Arabic dataset. Because of the similarity between PI and STS tasks, the dataset of STS can also be used in the PI task, just by converting the scores to 0 or 1. So, the original binary scores of the STS dataset have been retrieved from the MSRC corpus. 
As a result, a corpus with 510 pairs of English sentences and Arabic translation for PI task is ready. In addition to Arabic translation, we produced correspondence Farsi data by translation of parallel English-Arabic dataset into Farsi by a Farsi native speaker. In the experiments, MSRC corpus was divided as follows : 1) the training part of MSRC corpus for training ; 2) those data from test part of MSRC, which we don't have their Arabic or Farsi counterpart, for tuning hyper-parameters as development set ; and 3) 510 parallel English-Arabic-Farsi from the test part of MSRC for the test. Therefore, our training and test data have 4076 and 510 samples, respectively. Table TABREF21 shows the statistics of our data. Experiments ::: Paraphrase Identification ::: Tools and Setup The classifiers were trained with the C-SVM learning algorithm within KeLP BIBREF30, which is a kernel-based machine learning framework and implemented tree kernels. We employed PT and SPT kernel functions. For evaluating node similarity in SPTK function, we used the same method described in BIBREF14 : if $n_1$ and $n_2$ are two identical syntactic nodes, $\sigma (n_1,n_2)$ denoted the similarity of $n_1$ and $n_2$ and is equal to 1. If $n_1$ and $n_2$ are two lexical nodes with the same POS tag, their similarity is computed as the cosine similarity of the corresponding vectors in a wordspace. In all other cases $\sigma = 0$. English wordspace was generated by using word2vec tool. In the cross-lingual setup, we need a vocabulary to find the translation of lexical nodes and then compute their similarity in a wordspace. For English-Arabic experiments, we used Almaany dictionary to find the translation of Arabic words into English. For English-Farsi experiments, we used the Aryanpour dictionary to extract the English equivalent of Farsi words. To evaluate the performance of the classifiers we used Accuracy and F$_1$ as the previous works BIBREF31, BIBREF32, BIBREF18. For dependency parsing, UDPipe was used, which is a trainable pipeline for tokenization, tagging, lemmatization, and dependency parsing. We used version 2.4 of the UD pre-trained models of English, Arabic, and Farsi. To implement the CNN network of Wang et al. BIBREF18, we used the same word embedding they used. They set the size of the word vector dimension as d =300, and pre-trained the vectors with the word2vec toolkit on the English Gigaword (LDC2011T07). Hyper-parameters of the network are the same as their work. Experiments ::: Paraphrase Identification ::: Results We first examine the tree kernels in the mono-lingual and then in the cross-lingual learning. Experiments ::: Paraphrase Identification ::: Results ::: Evaluation of tree-based models in mono-lingual learning In the first experiment, we benchmark the UD-based models on the monolingual dataset. So, we employed the original split of MSRC corpus and trained models using PT and SPT kernels. These models essentially work based on the lexico-syntactic patterns observed in training sentences. Filice et al. BIBREF14 proposed several kernels including linear, graph and SPT kernels. They showed the best accuracy is obtained using the combination of them. However, we use only tree kernels in cross-lingual experiments, to measure how much we can rely on the similarities of UD parse trees in different languages. As Table TABREF29 shows, tree kernels including PTK and SPTK show comparable results according to the accuracy and F$_1$ measures. 
This means that PT and SPT kernels, which are trained by UD parse trees, make accurate models that can be used in solving the PI task. In the next experiment, we use these models to evaluate Arabic and Farsi test data. Experiments ::: Paraphrase Identification ::: Results ::: Evaluation of tree-based models with UD in cross-lingual learning Now, we employ the parallel dataset for cross-lingual evaluation of the UD-based model trained by English data. A baseline for this task is the majority voting in that what we get if we always predict the most frequent label of the training data. A better baseline for cross-lingual PI is to use some neural models and couple them with pre-induced multilingual embeddings. So, we re-run the two-channel CNN model of Wang et al. BIBREF18 by our test data. Upper bound for the cross-lingual experiment is considered the accuracy of the model when it is evaluated by the data of the same language of the training data, e.g. English. Table TABREF30 shows that using PTK 61.6% of accuracy is obtained for English test data. It is 57.7% and 57.3% for Arabic and Farsi, respectively ; while the accuracy of the majority baseline is 50.6%. CNN model obtained similar accuracy but much lower F$_1$ scores. Comparing the results of Tables TABREF29 and TABREF30 reveals that the accuracy of both kernels drops significantly when they are tested by our small test data. The reason is that the distribution of MSRC training data over positive and negative classes is significantly different from our test data. Specifically, 67.5% of MSRC's training data are positive ; while 50.5% of our test data are positive. Experiments ::: Paraphrase Identification ::: Results ::: Evaluation of tree-based models with parse formalisms rather than UD In this experiment, we produced dependency parse trees of Farsi data employing Hazm parser which is trained on non-UD tree-bank. Table TABREF30 shows that in this case accuracy of the models significantly drops. Taking a deeper look at the tree kernels, PTK doesn't use the similarity of words and works based on exact matching of them. So, in cross-lingual experiments, it considers only the similarity of trees. In this case, accuracy on Farsi test data is 50.6% which is the same as the majority baseline. This experiment reveals that the trees of parallel sentences that are produced by UD parsers are significantly more similar than the trees generated by other formalisms. Experiments ::: Relation Extraction In this section, we explain the experiments of cross-lingual RE and present the results. Specifically, we compared tree-based methods including combination of tree kernels and TreeLSTM with deep methods of CNN BIBREF33, Bi-LSTM BIBREF34 and RCNN BIBREF35. Experiments ::: Relation Extraction ::: Construction of Parallel Dataset SemEval 2010 released a dataset for relation extraction in task 8 BIBREF36, which is used by many researchers. This dataset contains 8000 samples for the training and 2717 samples for the test. It was annotated with 19 types of relations : 9 semantically different relationships (with two directions) and an undirected Other class. A brief description of these relation types is given in Table TABREF34. The SemEval-2010 dataset is in English. For cross-lingual experiments, the first 1000 samples of the test part were translated into Farsi and French. Two native Farsi and French speakers with high expertise in English were asked to translate the data. 
Experiments ::: Relation Extraction ::: Tools and Setup Similar to PI's experiments, KeLP was used to implement the kernel combination. The strategy for dealing with multiple classes is “one versus others”. For constituency parsing, Stanford CoreNLP was used that contains pre-trained models for English and French within the Stanford package. For parsing Farsi data, the University of Tehran’s constituency parser BIBREF37 was used. Parameter $\alpha $ of the formula DISPLAY_FORM14-DISPLAY_FORM16 is 0.23 as the previous works BIBREF16. To obtain bi-lingual word embeddings, the multiCluster method of Ammar et al. BIBREF38 was used and 512-dimensional vectors were trained for English, French, and Farsi. Experiments ::: Relation Extraction ::: Result We first examine the tree kernels in the mono-lingual and then in the cross-lingual learning. Experiments ::: Relation Extraction ::: Result ::: Evaluation of tree-based models in mono-lingual learning There is a huge amount of works on RE, which mainly utilizes neural networks. These methods use different features including lexical, grammatical, and semantic features such as POS, WordNet, and dependency parsing. Table TABREF39 shows the state-of-the-art neural models evaluated by SemEval 2010-task 8 test set (2717 samples). The best proposed method, $CK_1$, obtained 84.0% of F$_1$ which is comparable with the others. Experiments ::: Relation Extraction ::: Result ::: Evaluation of tree-based models with UD in cross-lingual learning Table TABREF40 shows accuracy of 84.2% F$_1$ score for $CK_1$ when tested on the first 1000 samples of English test data. The accuracy of this model for its Farsi and French counterparts is 53.4% and 61.2% respectively. This kernel employs sentence context, and so it didn't show exciting results in the cross-lingual experiment ; especially for Farsi data. This is because Farsi is one of the SOV languages, in contrast to English and French, which are SVO. This means verbs are usually at the end of the sentence in Farsi. When the sentence's verb is highly informative for the relation between two entities, it places outside the window surrounding two entities and so it doesn't contribute to the feature vector $V_o$. Table TABREF40 show the F$_1$ score of the models trained by $CK_2$ and $CK_3$. These kernels utilize the context words of the UD trees. Comparing three kernels, F$_1$ increased from 53.4% to 65.2% for Farsi, and to 67.5% for the French test data. The best result for Farsi came from kernel $CK_2$ ; whereas $CK_3$ performed better with the French data. Thus, it can be concluded that the constituency-based parse trees of English and French data have more similar sub-trees than English and Farsi. The reason partially relates to the common tool for English and French ; because Stanford CoreNLP has pre-trained models for both of these languages. Therefore, English and French models followed the same schema, while Farsi adopted different schema for constituency parsing. In addition to the composite kernels, we trained a Tree-LSTM model over the UD parse trees. Tree-LSTM doesn't process the syntactic features of the input sentence, rather it takes the tokens in order of the tree's node. However, to contribute the grammatical features, for each token its word embedding was concatenated to its dependency type embedding and its POS tag embedding. The resulting network obtained 80.0% of F$_1$ when tested by English test data. F$_1$ of this model is 52.0% for Farsi and 55.6% for French. 
Although the Tree-LSTM model obtained lower F$_1$ in comparison with the tree kernels, it still does better than deep baselines : we re-implemented the CNN model of Qin et al. BIBREF33, Att-BiLSTM of Zhou et al. BIBREF34, and RCNN of Lai et al. BIBREF35. All networks use bilingual word embeddings in the embedding layer. As Table TABREF40 shows the best F$_1$ scores were obtained by RCNN which utilizes CNN over the LSTM layer. However, the results are significantly lower than the UD-based models, specifically in Farsi. Because word order of Farsi and English sentences are very different ; as Farsi is SOV and English is SVO. Experiments ::: Relation Extraction ::: Result ::: Effect of Multi-Word Expressions Last two rows of Table TABREF40 show the F$_1$ score of the model trained on the English training data using the $CK2$ and $CK3$, in which MWEs were considered to be a single node within the dependency tree, as described at the end of Section SECREF10. The accuracy of $CK_2$ mainly increased for the Farsi data, because Farsi has many multi-word expressions such as compound verbs. Farsi has only about 250 simple verbs and all the other verbs are compound BIBREF43. Considering MWE as a single node causes all the tokens which compose a verb to be treated as a single word, and so the true translation will be found when searching for that word in dictionaries. Figure FIGREF46 shows the F$_1$ scores of best models for different semantic classes. Discussion and Conclusion Taking a deeper look at the proposed method, most of the mis-classifications of the cross-lingual tree models are related to the following issues : Structural Difference : The main reason for the error of classifiers is structural differences. Although UD tries to produce as most similar trees as it can for parallel sentences, there are many language-specific dependency patterns that could not be neglected. Lexical Gap : Words mainly convey the meaning of the sentence. A lexical gap between source and target languages usually ruins the accuracy of cross-lingual models. Confusion of different senses on a surface : Words of different languages usually have multiple senses. Confusion of different senses of words causes incorrect translation of words, because dictionaries translate word to word, but not word-sense to word-sense. On the other hand, Word Sense Disambiguation (WSD) is a difficult task and needs additional resources such as high-quality multi-lingual wordnets BIBREF44. Incorrect translation of prepositions : Prepositions are very informative for the RE task. Hashimoto et al. presented the five most informative unigrams and three-grams for three types of relations of the SemEval 2010-task 8 dataset BIBREF21, which are shown in Table TABREF47. Wang et al. BIBREF42 also presented the most representative trigrams for different relations on the same data set. Also, Lahbib et al. BIBREF45 presented the most common Arabic prepositions and showed that each one reflects some specific kinds of semantic relations. Confusion of senses for prepositions is a very common issue in word-to-word translation. Phrasal verbs : Phrasal verbs, which have a metaphorical meaning, often cannot be translated word for word. For example, the Farsi verb “از دست دادن / to give from hand”, means “lose". When the most informative chunk of the sentence is the phrasal verb, the proposed method does not capture the true meaning. 
In general, greater lexical and structural similarity between the source and target languages increases the accuracy of UD-based transfer learning. As future work, we propose studying the UD-based approach for other cross-lingual learning tasks and other languages, along with different learning algorithms that are capable of dealing with parse trees.
Paraphrase Identification
a779d452d11f368c66f7b51f7190d0fe9402f505
a779d452d11f368c66f7b51f7190d0fe9402f505_0
Q: How many parameters does the presented model have? Text: Introduction Keyword detection is like searching for a needle in a haystack: the detector must listen to continuously streaming audio, ignoring nearly all of it, yet still triggering correctly and instantly. In the last few years, with the advent of voice assistants, keyword spotting has become a common way to initiate a conversation with them (e.g. "Ok Google", "Alexa", or "Hey Siri"). As the assistant use cases spread through a variety of devices, from mobile phones to home appliances and further into the internet-of-things (IoT) –many of them battery powered or with restricted computational capacity, it is important for the keyword spotting system to be both high-quality as well as computationally efficient. Neural networks are core to the state of-the-art keyword spotting systems. These solutions, however, are not developed as a single deep neural network (DNN). Instead, they are traditionally comprised of different subsystems, independently trained, and/or manually designed. For example, a typical system is composed by three main components: 1) a signal processing frontend, 2) an acoustic encoder, and 3) a separate decoder. Of those components, it is the last two that make use of DNNs along with a wide variety of decoding implementations. They range from traditional approaches that make use of a Hidden Markov Model (HMM) to characterize acoustic features from a DNN into both "keyword" and "background" (i.e. non-keyword speech and noise) classes BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . Simpler derivatives of that approach perform a temporal integration computation that verifies the outputs of the acoustic model are high in the right sequence for the target keyword in order to produce a single detection likelyhood score BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 . Other recent systems make use of CTC-trained DNNs –typically recurrent neural networks (RNNs) BIBREF10 , or even sequence-to-sequence trained models that rely on beam search decoding BIBREF11 . This last family of systems is the closest to be considered end-to-end, however they are generally too computationally complex for many embedded applications. Optimizing independent components, however, creates added complexities and is suboptimal in quality compared to doing it jointly. Deployment also suffers due to the extra complexity, making it harder to optimize resources (e.g. processing power and memory consumption). The system described in this paper addresses those concerns by learning both the encoder and decoder components into a single deep neural network, jointly optimizing to directly produce the detection likelyhood score. This system could be trained to subsume the signal processing frontend as well as in BIBREF2 , BIBREF12 , but it is computationally costlier to replace highly optimized fast fourier transform implementations with a neural network of equivalent quality. However, it is something we consider exploring in the future. Overall, we find this system provides state of the art quality across a number of audio and speech conditions compared to a traditional, non end-to-end baseline system described in BIBREF13 . Moreover, the proposed system significantly reduces the resource requirements for deployment by cutting computation and size over five times compared to the baseline system. The rest of the paper is organized as follows. 
In Section SECREF2 we present the architecture of the keyword spotting system; in particular the two main contributions of this work: the neural network topology, and the end-to-end training methodology. Next, in Section SECREF3 we describe the experimental setup, and the results of our evaluations in Section SECREF4 , where we compare against the baseline approach of BIBREF13 . Finally, we conclude with a discussion of our findings in Section SECREF5 . End-to-End system This paper proposes a new end-to-end keyword spotting system that by subsuming both the encoding and decoding components into a single neural network can be trained to produce directly an estimation (i.e. score) of the presence of a keyword in streaming audio. The following two sections cover the efficient memoized neural network topology being utilized, as well as the method to train the end-to-end neural network to directly produce the keyword spotting score. Efficient memoized neural network topology We make use of a type of neural network layer topology called SVDF (single value decomposition filter), originally introduced in BIBREF14 to approximate a fully connected layer with a low rank approximation. As proposed in BIBREF14 and depicted in equation EQREF2 , the activation INLINEFORM0 for each node INLINEFORM1 in the rank-1 SVDF layer at a given inference step INLINEFORM2 can be interpreted as performing a mix of selectivity in time ( INLINEFORM3 ) with selectivity in the feature space ( INLINEFORM4 ) over a sequence of input vectors INLINEFORM5 of size INLINEFORM6 . DISPLAYFORM0 This is equivalent to performing, on an SVDF layer of INLINEFORM0 nodes, INLINEFORM1 1-D convolutions of the feature filters INLINEFORM2 (by "sliding" each of the INLINEFORM3 filters on the input feature frames, with a stride of INLINEFORM4 ), and then filtering each of INLINEFORM5 output vectors (of size INLINEFORM6 ) with the time filters INLINEFORM7 . A more general and efficient interpretation, depicted in Figure FIGREF3 , is that the layer is just processing a single input vector INLINEFORM0 at a time. Thus for each node INLINEFORM1 , the input INLINEFORM2 goes through the feature filter INLINEFORM3 , and the resulting scalar output gets concatenated to those INLINEFORM4 computed in previous inference steps. The memory is either initialized to zeros during training for the first INLINEFORM5 inferences. Finally the time filter INLINEFORM6 is applied to them. This is how stateful networks work, where the layer is able to memorize the past within its state. Different from typical recurrent approaches though, and other types of stateful layers BIBREF15 , the SVDF does not recur the outputs into the state (memory), nor rewrites the entirety of the state with each iteration. Instead, the memory keeps each inference's state isolated from subsequent runs, just pushing new entries and popping old ones based on the memory size INLINEFORM7 configured for the layer. This also means that by stacking SVDF layers we are extending the receptive field of the network. For example, a DNN with INLINEFORM8 stacked layers, each with a memory of INLINEFORM9 , means that the DNN is taking into account inputs as old as INLINEFORM10 . This approach works very well for streaming execution, like in speech, text, and other sequential processing, where we constantly process new inputs from a large, possibly infinite sequence but do not want to attend to all of it. An implementation is available at BIBREF16 . 
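To make the streaming interpretation above concrete, the following is a minimal numpy sketch of a rank-1 SVDF layer with a per-node FIFO memory. It is not the implementation released at BIBREF16; the random initialization, the bias term, and the ReLU nonlinearity are assumptions of this sketch.

```python
import numpy as np

class SVDFLayer:
    """Rank-1 SVDF: per node, a feature filter applied to the current input
    vector, followed by a time filter over the last T feature-filter outputs
    kept in a FIFO memory."""
    def __init__(self, num_nodes, input_dim, memory_size, rng=np.random.default_rng(0)):
        self.alpha = rng.normal(0.0, 0.1, (num_nodes, input_dim))   # feature filters
        self.beta = rng.normal(0.0, 0.1, (num_nodes, memory_size))  # time filters
        self.bias = np.zeros(num_nodes)                             # bias is an assumption
        self.memory = np.zeros((num_nodes, memory_size))            # FIFO state, zeros at start

    def step(self, x):
        """Process one input frame x (shape: input_dim) and return the node
        activations for this inference step; the state persists across calls."""
        feat = self.alpha @ x                                        # selectivity in feature space
        # Push the newest value and pop the oldest one, without rewriting the
        # rest of the state (unlike a typical recurrent layer).
        self.memory = np.concatenate([self.memory[:, 1:], feat[:, None]], axis=1)
        out = np.sum(self.beta * self.memory, axis=1) + self.bias    # selectivity in time
        return np.maximum(out, 0.0)                                  # ReLU (assumed nonlinearity)
```

Stacking several such layers extends the receptive field as noted above: with a stack of layers whose memories have size T, the network effectively attends to the last (number of layers) x T frames.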
This layer topology offers a number of benefits over other approaches. Compared with the convolutions used in BIBREF13 , it allows finer-grained control of the number of parameters and computation, given that the SVDF is composed of several relatively small filters. This is useful when selecting a tradeoff between quality, size and computation. Moreover, because of this characteristic, the SVDF allows creating very small networks that outperform other topologies which operate at larger granularity (e.g. our first stage, always-on network has about 13K parameters BIBREF7 ). The SVDF also pairs very well with linear “bottleneck” layers to significantly reduce the parameter count as in BIBREF17 , BIBREF18 , and more recently in BIBREF9 . And because it allows for creating evenly sized deep networks, we can insert them throughout the network as in Figure FIGREF8 . Another benefit is that due to the explicit sizing of the receptive field it allows for fine-grained control over how much to remember from the past. This has resulted in SVDF outperforming RNN-LSTMs, which do not benefit from, and are potentially hurt by, paying attention to a theoretically infinite past. It also avoids having complex logic to reset the state every few seconds as in BIBREF11 . Method to train the end-to-end neural network The goal of our end-to-end training is to optimize the network to produce the likelihood score, and to do so as precisely as possible. This means having a high score right at the place where the last bit of the keyword is present in the streaming audio, and not before and particularly not much after (i.e. a "spiky" behaviour is desirable). This is important since the system is bound to an operating point defined by a threshold (between 0 and 1) that is chosen to strike a balance between false-accepts and false-rejects, and a smooth likelihood curve would add variability to the firing point. Moreover, any time between the true end of the keyword and the point where the score meets the threshold will become latency in the system (e.g. the "assistant" will be slow to respond). This is a common drawback of CTC-trained RNNs BIBREF19 that we aim to avoid. We generate input sequences composed of pairs < INLINEFORM0 , INLINEFORM1 >, where INLINEFORM2 is a 1D tensor corresponding to log-mel filter-bank energies produced by a front-end as in BIBREF5 , BIBREF14 , BIBREF13 , and INLINEFORM3 is the class label (one of INLINEFORM4 ). Each tensor INLINEFORM5 is first force-aligned from annotated audio utterances, using a large LVCSR system, to break up the components of the keyword BIBREF20 . For example, "ok google" is broken into: "ou", "k", "eI", "<silence>", "g", "u", "g", "@", "l". Then we assign labels of 1 to all sequence entries that are part of a true keyword utterance and correspond to the last component of the keyword ("l" in our "ok google" example). All other entries are assigned a label of 0, including those that are part of the keyword but that are not its last component. See Figure FIGREF6 . Additionally, we tweak the label generation by adding a fixed number of entries with a label of 1, starting from the first vector INLINEFORM6 corresponding to the final keyword component. This is with the intention of balancing the number of negative and positive examples, in the same spirit as BIBREF0 . This proved important to make training stable, as otherwise the number of negative examples overpowered the positive ones.
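One possible reading of this labelling scheme is sketched below: frames force-aligned to the keyword's final component receive a label of 1, a fixed number of following frames are also set to 1 to balance the classes, and every other frame is labelled 0. The tag values and the number of extra positive frames are illustrative.

```python
def make_frame_labels(alignment, last_component="l", extra_positive=5):
    """alignment: one subword tag per frame from forced alignment, with None
    (or any non-keyword tag) for frames outside a true keyword utterance.
    Returns one 0/1 label per frame: 1 for frames of the keyword's final
    component plus a fixed number of following frames, 0 everywhere else."""
    labels = [0] * len(alignment)
    i = 0
    while i < len(alignment):
        if alignment[i] == last_component:
            start = i
            while i < len(alignment) and alignment[i] == last_component:
                i += 1
            end = min(len(alignment), i + extra_positive)  # extra 1s to balance classes
            for j in range(start, end):
                labels[j] = 1
        else:
            i += 1
    return labels

# Tiny example: the tail of an "ok google" utterance followed by non-keyword frames.
frames = ["g", "g", "@", "@", "l", "l", None, None, None, None, None]
print(make_frame_labels(frames, extra_positive=2))
# [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0]
```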
The end-to-end training uses a simple frame-level cross-entropy (CE) loss that for the feature vector INLINEFORM0 is defined by INLINEFORM1 , where INLINEFORM2 are the parameters of the network, and INLINEFORM3 is the INLINEFORM4 th output of the final softmax. Our training recipe uses asynchronous stochastic gradient descent (ASGD) to produce a single neural network that can be fed streaming input features and produce a detection score. We propose two options for this recipe: Encoder+decoder. A two-stage training procedure where we first train an acoustic encoder, as in BIBREF5 , BIBREF14 , BIBREF13 , and then a decoder from the outputs of the encoder (rather than filterbank energies) and the labels from SECREF5 . We do this in a single DNN by creating a final topology that is composed of the encoder and its pre-trained parameters (including the softmax), followed by the decoder. See Figure FIGREF8 . During the second stage of training the encoder parameters are frozen, such that only the decoder is trained. This recipe is useful for models that tend to overfit to parts of the training set. End-to-end. In this option, we train the DNN end-to-end directly, with the sequences from SECREF5 . The DNN may use any topology, but we use that of the encoder+decoder, except for the intermediate encoder softmax. See Figure FIGREF8 . Similar to the encoder+decoder recipe, we can also initialize the encoder part with a pre-trained model, and use an adaptation rate INLINEFORM0 to tune how much the encoder part is being adjusted (e.g. a rate of 0 is equivalent to the encoder+decoder recipe). This end-to-end pipeline, where the entirety of the topology's parameters are adjusted, tends to outperform the encoder+decoder one, particularly in smaller-sized models which do not tend to overfit. Experimental setup In order to determine the effectiveness of our approach, we compare against a known keyword spotting system proposed in BIBREF13 . This section describes the setups used in the results section. Front-end Both setups use the same front-end, which generates 40-dimensional log-mel filter-bank energies out of 30ms windows of streaming audio, with overlaps of 10ms. The front-end can be queried to produce a sequence of contiguous frames centered around the current frame INLINEFORM0 . Older frames are said to form the left context INLINEFORM1 , and newer frames form the right context INLINEFORM2 . Additionally, the sequences can be requested with a given stride INLINEFORM3 . Baseline model setup Our baseline system (Baseline_1850K) is taken from BIBREF13 . It consists of a DNN trained to predict subword targets within the keywords. The input to the DNN consists of a sequence with INLINEFORM0 frames of left and INLINEFORM1 frames of right context; each with a stride of INLINEFORM2 . The topology consists of a 1-D convolutional layer with 92 filters (of shape 8x8 and stride 8x8), followed by 3 fully-connected layers with 512 nodes and a rectified linear unit activation each. A final softmax output predicts the 7 subword targets, obtained from the same forced alignment process described in SECREF5 . This results in the baseline DNN containing 1.7M parameters, and performing 1.8M multiply-accumulate operations per inference (every 30ms of streaming audio).
A keyword spotting score between 0 and 1 is computed by first smoothing the posterior values, averaging them over a sliding window of the previous 100 frames with respect to the current INLINEFORM3 ; the score is then defined as the largest product of the smoothed posteriors in the sliding window, as originally proposed in BIBREF6 . End-to-end model setup The end-to-end system (prefix E2E) uses the DNN topology depicted in Figure FIGREF8 . We present results with 3 distinct size configurations (infixes 700K, 318K, and 40K) each representing the number of approximate parameters, and 2 types of training recipes (suffixes 1stage and 2stage) corresponding to end-to-end and encoder+decoder respectively, as described in UID7 . The input to all DNNs consists of a sequence with INLINEFORM0 frames of left and INLINEFORM1 frames of right context; each with a stride of INLINEFORM2 . More specifically, the E2E_700K model uses INLINEFORM3 nodes in the first 4 SVDF layers, each with a memory INLINEFORM4 , with intermediate bottleneck layers each of size 64; the following 3 SVDF layers have INLINEFORM5 nodes, each with a memory INLINEFORM6 . This model performs 350K multiply-accumulate operations per inference (every 20ms of streaming audio). The E2E_318K model uses INLINEFORM7 nodes in the first 4 SVDF layers, each with a memory INLINEFORM8 , with intermediate bottleneck layers each of size 64; the remaining layers are the same as E2E_700K. This model performs 159K multiply-accumulate operations per inference. Finally, the E2E_40K model uses INLINEFORM9 nodes in the first 4 SVDF layers, each with a memory INLINEFORM10 , with intermediate bottleneck layers each of size 32; the remaining layers are the same as the other two models. This model performs 20K multiply-accumulate operations per inference. Dataset The training data for all experiments consists of 1 million anonymized hand-transcribed utterances of the keywords "Ok Google" and "Hey Google", with an even distribution. To improve robustness, we create "multi-style" training data by synthetically distorting the utterances, simulating the effect of background noise and reverberation. 8 distorted utterances are created for each original utterance; noise samples used in this process are extracted from environmental recordings of everyday events, music, and YouTube videos. Results are reported on four sets representative of various environmental conditions: Clean non-accented contains 170K non-accented English utterances of the keywords in "clean" conditions, plus 64K samples without the keywords (1K hours); Clean accented has 153K English utterances of the keywords with Australian, British, and Indian accents (also in "clean" conditions), plus 64K samples without the keywords (1K hours); High pitched has 1K high-pitched utterances of the keywords, and 64K samples without them (1K hours); Query logs contains 110K keyword and 21K non-keyword utterances, collected from anonymized voice search queries. This last set contains background noises from real living conditions. Results Our goal is to compare the effectiveness of the proposed approach against the baseline system described in BIBREF13 . We evaluate the false-reject (FR) and false-accept (FA) tradeoff across several end-to-end models of distinct sizes and computational complexities.
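For reference, the baseline's posterior smoothing and scoring described in the baseline model setup above can be sketched as follows. The window sizes and the n-th-root normalization follow our reading of the recipe in BIBREF6; this is an illustration, not the production implementation.

```python
import numpy as np

def smooth_posteriors(posteriors, w_smooth=100):
    """posteriors: array of shape (num_frames, num_classes) with per-frame DNN
    outputs; each class is averaged over the previous w_smooth frames."""
    smoothed = np.zeros_like(posteriors, dtype=float)
    for j in range(posteriors.shape[0]):
        lo = max(0, j - w_smooth + 1)
        smoothed[j] = posteriors[lo:j + 1].mean(axis=0)
    return smoothed

def keyword_score(posteriors, keyword_classes, w_smooth=100, w_max=100):
    """Score at the last frame: product over the keyword's subword classes of
    the largest smoothed posterior in the sliding window, taken to the n-th
    root so the result stays in [0, 1]."""
    p = smooth_posteriors(posteriors, w_smooth)
    window = p[-w_max:]
    maxima = [window[:, c].max() for c in keyword_classes]
    return float(np.prod(maxima) ** (1.0 / len(keyword_classes)))
```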
As can be seen in the Receiver Operating Characteristic (ROC) curves in Figure FIGREF14 , the 2 largest end-to-end models, with 2-stage training, significantly outperform the recognition quality of the much larger and more complex Baseline_1850K system. More specifically, E2E_318K_2stage and E2E_700K_2stage show up to 60% relative FR rate reduction over Baseline_1850K in most test conditions. Moreover, E2E_318K_2stage uses only about 26% of the computations that Baseline_1850K uses (once normalizing their execution rates over time), but still shows significant improvements. We also explore end-to-end models at a size that, as described in BIBREF7 , is small enough, in both size and computation, to be executed continuously with very little power consumption. These 2 models, E2E_40K_1stage and E2E_40K_2stage, also explore the capacity of end-to-end training (1stage) versus encoder+decoder training (2stage). As can be appreciated in the ROC curves, 1stage training outperforms 2stage training on all conditions, but particularly on both "clean" environments, where it gets fairly close to the performance of the baseline setup. That is a significant achievement considering E2E_40K_1stage has 2.3% of the parameters and performs 3.2% of the computations of Baseline_1850K. Table TABREF13 compares the recognition quality of all setups by fixing on a very low false-accept rate of 0.1 FA per hour on a dataset containing only negative (i.e. non-keyword) utterances. Thus the table shows the false-reject rates at that operating point. Here we can appreciate similar trends to those described above: the 2 largest end-to-end models outperform the baseline across all datasets, reducing the FR rate by about 40% on the clean conditions and by 40%-20% on the other 2 sets depending on the model size. This table also shows how 1stage outperforms 2stage for small-sized models, and presents FR rates similar to Baseline_1850K on clean conditions. Conclusion We presented a system for keyword spotting that, by combining an efficient topology and two types of end-to-end training, can significantly outperform previous approaches at a much lower cost in size and computation. We specifically show how it beats the performance of a setup taken from BIBREF13 with models over 5 times smaller, and even gets close to the same performance with a model over 40 times smaller. Our approach provides the further benefit of not requiring anything other than a front-end and a neural network to perform the detection, and thus it is easier to extend to newer keywords and/or fine-tune with new training data. Future work includes exploring other loss functions, as well as generalizing multi-channel support.
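The operating point used in Table TABREF13 (false-reject rate at 0.1 false accepts per hour) can be reproduced from raw detection scores with a small sketch like the following; the threshold search is illustrative and assumes per-utterance scores together with the total duration of the keyword-free audio.

```python
import numpy as np

def fr_at_fixed_fa(positive_scores, negative_scores, negative_hours, fa_per_hour=0.1):
    """Pick the threshold that keeps false accepts on keyword-free audio at or
    below fa_per_hour, then report the false-reject rate of the positives."""
    neg = np.sort(np.asarray(negative_scores))[::-1]      # descending
    allowed = int(fa_per_hour * negative_hours)           # number of tolerated false accepts
    threshold = neg[allowed] if allowed < len(neg) else 0.0
    pos = np.asarray(positive_scores)
    return float(np.mean(pos <= threshold))               # rejected positives
```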
(infixes 700K, 318K, and 40K) each representing the number of approximate parameters
cdc5a998cb73262594cdae1dda49576044da3d3d
cdc5a998cb73262594cdae1dda49576044da3d3d_0
Q: How do they measure the quality of detection? Text: Introduction Keyword detection is like searching for a needle in a haystack: the detector must listen to continuously streaming audio, ignoring nearly all of it, yet still triggering correctly and instantly. In the last few years, with the advent of voice assistants, keyword spotting has become a common way to initiate a conversation with them (e.g. "Ok Google", "Alexa", or "Hey Siri"). As the assistant use cases spread through a variety of devices, from mobile phones to home appliances and further into the internet-of-things (IoT) –many of them battery powered or with restricted computational capacity, it is important for the keyword spotting system to be both high-quality as well as computationally efficient. Neural networks are core to the state of-the-art keyword spotting systems. These solutions, however, are not developed as a single deep neural network (DNN). Instead, they are traditionally comprised of different subsystems, independently trained, and/or manually designed. For example, a typical system is composed by three main components: 1) a signal processing frontend, 2) an acoustic encoder, and 3) a separate decoder. Of those components, it is the last two that make use of DNNs along with a wide variety of decoding implementations. They range from traditional approaches that make use of a Hidden Markov Model (HMM) to characterize acoustic features from a DNN into both "keyword" and "background" (i.e. non-keyword speech and noise) classes BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . Simpler derivatives of that approach perform a temporal integration computation that verifies the outputs of the acoustic model are high in the right sequence for the target keyword in order to produce a single detection likelyhood score BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 . Other recent systems make use of CTC-trained DNNs –typically recurrent neural networks (RNNs) BIBREF10 , or even sequence-to-sequence trained models that rely on beam search decoding BIBREF11 . This last family of systems is the closest to be considered end-to-end, however they are generally too computationally complex for many embedded applications. Optimizing independent components, however, creates added complexities and is suboptimal in quality compared to doing it jointly. Deployment also suffers due to the extra complexity, making it harder to optimize resources (e.g. processing power and memory consumption). The system described in this paper addresses those concerns by learning both the encoder and decoder components into a single deep neural network, jointly optimizing to directly produce the detection likelyhood score. This system could be trained to subsume the signal processing frontend as well as in BIBREF2 , BIBREF12 , but it is computationally costlier to replace highly optimized fast fourier transform implementations with a neural network of equivalent quality. However, it is something we consider exploring in the future. Overall, we find this system provides state of the art quality across a number of audio and speech conditions compared to a traditional, non end-to-end baseline system described in BIBREF13 . Moreover, the proposed system significantly reduces the resource requirements for deployment by cutting computation and size over five times compared to the baseline system. The rest of the paper is organized as follows. 
In Section SECREF2 we present the architecture of the keyword spotting system; in particular the two main contributions of this work: the neural network topology, and the end-to-end training methodology. Next, in Section SECREF3 we describe the experimental setup, and the results of our evaluations in Section SECREF4 , where we compare against the baseline approach of BIBREF13 . Finally, we conclude with a discussion of our findings in Section SECREF5 . End-to-End system This paper proposes a new end-to-end keyword spotting system that by subsuming both the encoding and decoding components into a single neural network can be trained to produce directly an estimation (i.e. score) of the presence of a keyword in streaming audio. The following two sections cover the efficient memoized neural network topology being utilized, as well as the method to train the end-to-end neural network to directly produce the keyword spotting score. Efficient memoized neural network topology We make use of a type of neural network layer topology called SVDF (single value decomposition filter), originally introduced in BIBREF14 to approximate a fully connected layer with a low rank approximation. As proposed in BIBREF14 and depicted in equation EQREF2 , the activation INLINEFORM0 for each node INLINEFORM1 in the rank-1 SVDF layer at a given inference step INLINEFORM2 can be interpreted as performing a mix of selectivity in time ( INLINEFORM3 ) with selectivity in the feature space ( INLINEFORM4 ) over a sequence of input vectors INLINEFORM5 of size INLINEFORM6 . DISPLAYFORM0 This is equivalent to performing, on an SVDF layer of INLINEFORM0 nodes, INLINEFORM1 1-D convolutions of the feature filters INLINEFORM2 (by "sliding" each of the INLINEFORM3 filters on the input feature frames, with a stride of INLINEFORM4 ), and then filtering each of INLINEFORM5 output vectors (of size INLINEFORM6 ) with the time filters INLINEFORM7 . A more general and efficient interpretation, depicted in Figure FIGREF3 , is that the layer is just processing a single input vector INLINEFORM0 at a time. Thus for each node INLINEFORM1 , the input INLINEFORM2 goes through the feature filter INLINEFORM3 , and the resulting scalar output gets concatenated to those INLINEFORM4 computed in previous inference steps. The memory is either initialized to zeros during training for the first INLINEFORM5 inferences. Finally the time filter INLINEFORM6 is applied to them. This is how stateful networks work, where the layer is able to memorize the past within its state. Different from typical recurrent approaches though, and other types of stateful layers BIBREF15 , the SVDF does not recur the outputs into the state (memory), nor rewrites the entirety of the state with each iteration. Instead, the memory keeps each inference's state isolated from subsequent runs, just pushing new entries and popping old ones based on the memory size INLINEFORM7 configured for the layer. This also means that by stacking SVDF layers we are extending the receptive field of the network. For example, a DNN with INLINEFORM8 stacked layers, each with a memory of INLINEFORM9 , means that the DNN is taking into account inputs as old as INLINEFORM10 . This approach works very well for streaming execution, like in speech, text, and other sequential processing, where we constantly process new inputs from a large, possibly infinite sequence but do not want to attend to all of it. An implementation is available at BIBREF16 . 
This layer topology offers a number of benefits over other approaches. Compared with the convolutions used in BIBREF13 , it allows finer-grained control of the number of parameters and computation, given that the SVDF is composed of several relatively small filters. This is useful when selecting a tradeoff between quality, size and computation. Moreover, because of this characteristic, the SVDF allows creating very small networks that outperform other topologies which operate at larger granularity (e.g. our first stage, always-on network has about 13K parameters BIBREF7 ). The SVDF also pairs very well with linear “bottleneck” layers to significantly reduce the parameter count as in BIBREF17 , BIBREF18 , and more recently in BIBREF9 . And because it allows for creating evenly sized deep networks, we can insert them throughout the network as in Figure FIGREF8 . Another benefit is that due to the explicit sizing of the receptive field it allows for fine-grained control over how much to remember from the past. This has resulted in SVDF outperforming RNN-LSTMs, which do not benefit from, and are potentially hurt by, paying attention to theoretically infinite past. It also avoids having complex logic to reset the state every few seconds as in BIBREF11 . Method to train the end-to-end neural network The goal of our end-to-end training is to optimize the network to produce the likelihood score, and to do so as precisely as possible. This means having a high score right at the place where the last bit of the keyword is present in the streaming audio, and not before and particularly not much after (i.e. a "spiky" behaviour is desirable). This is important since the system is bound to an operating point defined by a threshold (between 0 and 1) that is chosen to strike a balance between false-accepts and false-rejects, and a smooth likelihood curve would add variability to the firing point. Moreover, any time between the true end of the keyword and the point where the score meets the threshold will become latency in the system (e.g. the "assistant" will be slow to respond). This is a common drawback of CTC-trained RNNs BIBREF19 that we aim to avoid. We generate input sequences composed of pairs < INLINEFORM0 , INLINEFORM1 >, where INLINEFORM2 is a 1D tensor corresponding to log-mel filter-bank energies produced by a front-end as in BIBREF5 , BIBREF14 , BIBREF13 , and INLINEFORM3 is the class label (one of INLINEFORM4 ). Each tensor INLINEFORM5 is first force-aligned from annotated audio utterances, using a large LVCSR system, to break up the components of the keyword BIBREF20 . For example, "ok google" is broken into: "ou", "k", "eI", "<silence>", "g", "u", "g", "@", "l". Then we assign labels of 1 to all sequence entries that are part of a true keyword utterance and correspond to the last component of the keyword ("l" in our "ok google" example). All other entries are assigned a label of 0, including those that are part of the keyword but that are not its last component. See Figure FIGREF6 . Additionally, we tweak the label generation by adding a fixed number of entries with a label of 1, starting from the first vector INLINEFORM6 corresponding to the final keyword component. This is done with the intention of balancing the number of negative and positive examples, in the same spirit as BIBREF0 . This proved important to make training stable, as otherwise the number of negative examples overpowered the positive ones.
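The label-generation scheme just described is simple enough to sketch directly. The snippet below assumes per-frame component symbols are already available from forced alignment (a hypothetical representation; the paper does not specify the exact data format) and assigns labels of 1 to frames of the final keyword component plus a fixed number of extra positive frames for balancing.

```python
from typing import List

def make_frame_labels(aligned_components: List[str],
                      final_component: str = "l",
                      extra_positive_frames: int = 5) -> List[int]:
    """Binary frame labels for end-to-end KWS training (sketch).

    aligned_components: per-frame keyword-component symbols from forced alignment
    (frames outside a true keyword utterance marked e.g. "<bg>"). Frames aligned to
    the keyword's final component get label 1; everything else gets 0. To balance
    positives and negatives, a fixed number of extra frames starting at the first
    final-component frame are also labelled 1 (the count is a tuning assumption).
    """
    labels = [1 if c == final_component else 0 for c in aligned_components]

    # Find the first frame of the final keyword component, if present.
    try:
        start = aligned_components.index(final_component)
    except ValueError:
        return labels  # no keyword in this utterance

    # Extend the positive region by a fixed number of frames.
    for i in range(start, min(start + extra_positive_frames, len(labels))):
        labels[i] = 1
    return labels


# Example: "ok google" alignment ending in "l", followed by background frames.
frames = ["ou", "k", "eI", "<sil>", "g", "u", "g", "@", "l", "l", "<bg>", "<bg>", "<bg>"]
print(make_frame_labels(frames, extra_positive_frames=3))
# -> [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0]
```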
The end-to-end training uses a simple frame-level cross-entropy (CE) loss that for the feature vector INLINEFORM0 is defined by INLINEFORM1 , where INLINEFORM2 are the parameters of the network and INLINEFORM3 is the INLINEFORM4 th output of the final softmax. Our training recipe uses asynchronous stochastic gradient descent (ASGD) to produce a single neural network that can be fed streaming input features and produce a detection score. We propose two options for this recipe: Encoder+decoder. A two-stage training procedure where we first train an acoustic encoder, as in BIBREF5 , BIBREF14 , BIBREF13 , and then a decoder from the outputs of the encoder (rather than filterbank energies) and the labels from SECREF5 . We do this in a single DNN by creating a final topology that is composed of the encoder and its pre-trained parameters (including the softmax), followed by the decoder. See Figure FIGREF8 . During the second stage of training, the encoder parameters are frozen, such that only the decoder is trained. This recipe is useful on models that tend to overfit to parts of the training set. End-to-end. In this option, we train the DNN end-to-end directly, with the sequences from SECREF5 . The DNN may use any topology, but we use that of the encoder+decoder, except for the intermediate encoder softmax. See Figure FIGREF8 . Similar to the encoder+decoder recipe, we can also initialize the encoder part with a pre-trained model, and use an adaptation rate INLINEFORM0 to tune how much the encoder part is being adjusted (e.g. a rate of 0 is equivalent to the encoder+decoder recipe). This end-to-end pipeline, where the entirety of the topology's parameters are adjusted, tends to outperform the encoder+decoder one, particularly in smaller-sized models which do not tend to overfit. Experimental setup In order to determine the effectiveness of our approach, we compare against a known keyword spotting system proposed in BIBREF13 . This section describes the setups used in the results section. Front-end Both setups use the same front-end, which generates 40-dimensional log-mel filter-bank energies out of 30ms windows of streaming audio, with overlaps of 10ms. The front-end can be queried to produce a sequence of contiguous frames centered around the current frame INLINEFORM0 . Older frames are said to form the left context INLINEFORM1 , and newer frames form the right context INLINEFORM2 . Additionally, the sequences can be requested with a given stride INLINEFORM3 . Baseline model setup Our baseline system (Baseline_1850K) is taken from BIBREF13 . It consists of a DNN trained to predict subword targets within the keywords. The input to the DNN consists of a sequence with INLINEFORM0 frames of left and INLINEFORM1 frames of right context; each with a stride of INLINEFORM2 . The topology consists of a 1-D convolutional layer with 92 filters (of shape 8x8 and stride 8x8), followed by 3 fully-connected layers with 512 nodes and a rectified linear unit activation each. A final softmax output predicts the 7 subword targets, obtained from the same forced alignment process described in SECREF5 . This results in the baseline DNN containing 1.7M parameters, and performing 1.8M multiply-accumulate operations per inference (every 30ms of streaming audio).
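Before the baseline's scoring procedure is described, here is a compact PyTorch-style sketch of the two training recipes above: the encoder+decoder recipe freezes a pre-trained encoder and updates only the decoder, while the end-to-end recipe trains everything, with an adaptation rate scaling how much the encoder moves. The Linear stand-ins for the SVDF topology of Figure FIGREF8, the layer sizes, the learning rates, and the use of plain SGD (the paper uses asynchronous SGD) are all assumptions for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical encoder/decoder stacks standing in for the SVDF topologies in Figure FIGREF8.
encoder = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 8))
decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
model = nn.Sequential(encoder, decoder)
criterion = nn.CrossEntropyLoss()  # frame-level CE loss

def train_step(features, labels, optimizer):
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Recipe 1: encoder+decoder. The encoder is pre-trained elsewhere and frozen here,
# so only the decoder parameters are updated in this second stage.
for p in encoder.parameters():
    p.requires_grad_(False)
opt_decoder_only = torch.optim.SGD(decoder.parameters(), lr=0.01)

# Recipe 2: end-to-end. All parameters are trained; an adaptation rate scales how
# much the (pre-initialized) encoder moves relative to the decoder.
for p in encoder.parameters():
    p.requires_grad_(True)
adaptation_rate = 0.1  # 0.0 would reduce to the encoder+decoder recipe
opt_end_to_end = torch.optim.SGD(
    [{"params": encoder.parameters(), "lr": 0.01 * adaptation_rate},
     {"params": decoder.parameters(), "lr": 0.01}])

# Dummy usage with random frames and binary frame labels from the section above.
features = torch.randn(16, 40)
labels = torch.randint(0, 2, (16,))
print(train_step(features, labels, opt_end_to_end))
```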
A keyword spotting score between 0 and 1 is computed by first smoothing the posterior values, averaging them over a sliding window of the previous 100 frames with respect to the current INLINEFORM3 ; the score is then defined as the largest product of the smoothed posteriors in the sliding window as originally proposed in BIBREF6 . End-to-end model setup The end-to-end system (prefix E2E) uses the DNN topology depicted in Figure FIGREF8 . We present results with 3 distinct size configurations (infixes 700K, 318K, and 40K), each indicating the approximate number of parameters, and 2 types of training recipes (suffixes 1stage and 2stage) corresponding to end-to-end and encoder+decoder respectively, as described in UID7 . The input to all DNNs consists of a sequence with INLINEFORM0 frames of left and INLINEFORM1 frames of right context; each with a stride of INLINEFORM2 . More specifically, the E2E_700K model uses INLINEFORM3 nodes in the first 4 SVDF layers, each with a memory INLINEFORM4 , with intermediate bottleneck layers each of size 64; the following 3 SVDF layers have INLINEFORM5 nodes, each with a memory INLINEFORM6 . This model performs 350K multiply-accumulate operations per inference (every 20ms of streaming audio). The E2E_318K model uses INLINEFORM7 nodes in the first 4 SVDF layers, each with a memory INLINEFORM8 , with intermediate bottleneck layers each of size 64; the remaining layers are the same as in E2E_700K. This model performs 159K multiply-accumulate operations per inference. Finally, the E2E_40K model uses INLINEFORM9 nodes in the first 4 SVDF layers, each with a memory INLINEFORM10 , with intermediate bottleneck layers each of size 32; the remaining layers are the same as in the other two models. This model performs 20K multiply-accumulate operations per inference. Dataset The training data for all experiments consists of 1 million anonymized hand-transcribed utterances of the keywords "Ok Google" and "Hey Google", with an even distribution. To improve robustness, we create "multi-style" training data by synthetically distorting the utterances, simulating the effect of background noise and reverberation. 8 distorted utterances are created for each original utterance; noise samples used in this process are extracted from environmental recordings of everyday events, music, and YouTube videos. Results are reported on four sets representative of various environmental conditions: Clean non-accented contains 170K non-accented English utterances of the keywords in "clean" conditions, plus 64K samples without the keywords (1K hours); Clean accented has 153K English utterances of the keywords with Australian, British, and Indian accents (also in "clean" conditions), plus 64K samples without the keywords (1K hours); High pitched has 1K high-pitched utterances of the keywords, and 64K samples without them (1K hours); Query logs contains 110K keyword and 21K non-keyword utterances, collected from anonymized voice search queries. This last set contains background noises from real living conditions. Results Our goal is to compare the effectiveness of the proposed approach against the baseline system described in BIBREF13 . We evaluate the false-reject (FR) and false-accept (FA) tradeoff across several end-to-end models of distinct sizes and computational complexities.
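As a concrete reading of the baseline's smoothed-posterior scoring described at the start of this setup, the sketch below averages each sub-word posterior over the previous 100 frames and combines the per-class maxima within the window; the exact combination and normalization used in BIBREF6 and BIBREF13 may differ from this reading. The ROC comparison continues right after this sketch.

```python
import numpy as np

def keyword_score(posteriors: np.ndarray, keyword_classes, window: int = 100) -> float:
    """Smoothed-posterior keyword score (one plausible reading of the baseline scoring).

    posteriors: array of shape (num_frames, num_classes) from the acoustic DNN.
    keyword_classes: indices of the sub-word targets that make up the keyword.
    """
    num_frames, _ = posteriors.shape
    # Moving average over the previous `window` frames (inclusive of the current one).
    smoothed = np.empty_like(posteriors)
    for t in range(num_frames):
        lo = max(0, t - window + 1)
        smoothed[t] = posteriors[lo:t + 1].mean(axis=0)

    # Largest smoothed posterior per keyword class within the last window.
    lo = max(0, num_frames - window)
    maxima = [smoothed[lo:, k].max() for k in keyword_classes]
    # Geometric combination of the per-class maxima, mapped back to [0, 1].
    return float(np.prod(maxima) ** (1.0 / len(maxima)))
```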
As can be seen in the Receiver Operating Characteristic (ROC) curves in Figure FIGREF14 , the 2 largest end-to-end models, with 2-stage training, significantly outperform the recognition quality of the much larger and more complex Baseline_1850K system. More specifically, E2E_318K_2stage and E2E_700K_2stage show up to 60% relative FR rate reduction over Baseline_1850K in most test conditions. Moreover, E2E_318K_2stage uses only about 26% of the computations that Baseline_1850K uses (once normalizing their execution rates over time), but still shows significant improvements. We also explore end-to-end models at a size that, as described in BIBREF7 , is small enough, in both size and computation, to be executed continuously with very little power consumption. These 2 models, E2E_40K_1stage and E2E_40K_2stage, also explore the capacity of end-to-end training (1stage) versus encoder+decoder training (2stage). As can also be seen in the ROC curves, 1stage training outperforms 2stage training on all conditions, but particularly on both "clean" environments where it gets fairly close to the performance of the baseline setup. That is a significant achievement considering E2E_40K_1stage has 2.3% the parameters and performs 3.2% the computations of Baseline_1850K. Table TABREF13 compares the recognition quality of all setups by fixing on a very low false-accept rate of 0.1 FA per hour on a dataset containing only negative (i.e. non-keyword) utterances. Thus the table shows the false-reject rates at that operating point. Here we can see similar trends to those described above: the 2 largest end-to-end models outperform the baseline across all datasets, reducing the FR rate by about 40% on the clean conditions and 40%-20% on the other 2 sets depending on the model size. This table also shows how 1stage outperforms 2stage for small-size models, and presents similar FR rates as Baseline_1850K on clean conditions. Conclusion We presented a system for keyword spotting that, by combining an efficient topology and two types of end-to-end training, can significantly outperform previous approaches, at a much lower cost of size and computation. We specifically show how it beats the performance of a setup taken from BIBREF13 with models over 5 times smaller, and even gets close to the same performance with a model over 40 times smaller. Our approach provides the further benefit of not requiring anything other than a front-end and a neural network to perform the detection, and thus it is easier to extend to newer keywords and/or fine-tune with new training data. Future work includes exploring other loss functions, as well as generalizing multi-channel support.
We evaluate the false-reject (FR) and false-accept (FA) tradeoff across several end-to-end models of distinct sizes and computational complexities.
1383ddd4619cf81227c72f3d9f30c10c47a0cdad
1383ddd4619cf81227c72f3d9f30c10c47a0cdad_0
Q: What previous approaches are considered? Text: Introduction Keyword detection is like searching for a needle in a haystack: the detector must listen to continuously streaming audio, ignoring nearly all of it, yet still triggering correctly and instantly. In the last few years, with the advent of voice assistants, keyword spotting has become a common way to initiate a conversation with them (e.g. "Ok Google", "Alexa", or "Hey Siri"). As the assistant use cases spread through a variety of devices, from mobile phones to home appliances and further into the internet-of-things (IoT) –many of them battery powered or with restricted computational capacity, it is important for the keyword spotting system to be both high-quality as well as computationally efficient. Neural networks are core to the state of-the-art keyword spotting systems. These solutions, however, are not developed as a single deep neural network (DNN). Instead, they are traditionally comprised of different subsystems, independently trained, and/or manually designed. For example, a typical system is composed by three main components: 1) a signal processing frontend, 2) an acoustic encoder, and 3) a separate decoder. Of those components, it is the last two that make use of DNNs along with a wide variety of decoding implementations. They range from traditional approaches that make use of a Hidden Markov Model (HMM) to characterize acoustic features from a DNN into both "keyword" and "background" (i.e. non-keyword speech and noise) classes BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . Simpler derivatives of that approach perform a temporal integration computation that verifies the outputs of the acoustic model are high in the right sequence for the target keyword in order to produce a single detection likelyhood score BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 . Other recent systems make use of CTC-trained DNNs –typically recurrent neural networks (RNNs) BIBREF10 , or even sequence-to-sequence trained models that rely on beam search decoding BIBREF11 . This last family of systems is the closest to be considered end-to-end, however they are generally too computationally complex for many embedded applications. Optimizing independent components, however, creates added complexities and is suboptimal in quality compared to doing it jointly. Deployment also suffers due to the extra complexity, making it harder to optimize resources (e.g. processing power and memory consumption). The system described in this paper addresses those concerns by learning both the encoder and decoder components into a single deep neural network, jointly optimizing to directly produce the detection likelyhood score. This system could be trained to subsume the signal processing frontend as well as in BIBREF2 , BIBREF12 , but it is computationally costlier to replace highly optimized fast fourier transform implementations with a neural network of equivalent quality. However, it is something we consider exploring in the future. Overall, we find this system provides state of the art quality across a number of audio and speech conditions compared to a traditional, non end-to-end baseline system described in BIBREF13 . Moreover, the proposed system significantly reduces the resource requirements for deployment by cutting computation and size over five times compared to the baseline system. The rest of the paper is organized as follows. 
In Section SECREF2 we present the architecture of the keyword spotting system; in particular the two main contributions of this work: the neural network topology, and the end-to-end training methodology. Next, in Section SECREF3 we describe the experimental setup, and the results of our evaluations in Section SECREF4 , where we compare against the baseline approach of BIBREF13 . Finally, we conclude with a discussion of our findings in Section SECREF5 . End-to-End system This paper proposes a new end-to-end keyword spotting system that by subsuming both the encoding and decoding components into a single neural network can be trained to produce directly an estimation (i.e. score) of the presence of a keyword in streaming audio. The following two sections cover the efficient memoized neural network topology being utilized, as well as the method to train the end-to-end neural network to directly produce the keyword spotting score. Efficient memoized neural network topology We make use of a type of neural network layer topology called SVDF (single value decomposition filter), originally introduced in BIBREF14 to approximate a fully connected layer with a low rank approximation. As proposed in BIBREF14 and depicted in equation EQREF2 , the activation INLINEFORM0 for each node INLINEFORM1 in the rank-1 SVDF layer at a given inference step INLINEFORM2 can be interpreted as performing a mix of selectivity in time ( INLINEFORM3 ) with selectivity in the feature space ( INLINEFORM4 ) over a sequence of input vectors INLINEFORM5 of size INLINEFORM6 . DISPLAYFORM0 This is equivalent to performing, on an SVDF layer of INLINEFORM0 nodes, INLINEFORM1 1-D convolutions of the feature filters INLINEFORM2 (by "sliding" each of the INLINEFORM3 filters on the input feature frames, with a stride of INLINEFORM4 ), and then filtering each of INLINEFORM5 output vectors (of size INLINEFORM6 ) with the time filters INLINEFORM7 . A more general and efficient interpretation, depicted in Figure FIGREF3 , is that the layer is just processing a single input vector INLINEFORM0 at a time. Thus for each node INLINEFORM1 , the input INLINEFORM2 goes through the feature filter INLINEFORM3 , and the resulting scalar output gets concatenated to those INLINEFORM4 computed in previous inference steps. The memory is either initialized to zeros during training for the first INLINEFORM5 inferences. Finally the time filter INLINEFORM6 is applied to them. This is how stateful networks work, where the layer is able to memorize the past within its state. Different from typical recurrent approaches though, and other types of stateful layers BIBREF15 , the SVDF does not recur the outputs into the state (memory), nor rewrites the entirety of the state with each iteration. Instead, the memory keeps each inference's state isolated from subsequent runs, just pushing new entries and popping old ones based on the memory size INLINEFORM7 configured for the layer. This also means that by stacking SVDF layers we are extending the receptive field of the network. For example, a DNN with INLINEFORM8 stacked layers, each with a memory of INLINEFORM9 , means that the DNN is taking into account inputs as old as INLINEFORM10 . This approach works very well for streaming execution, like in speech, text, and other sequential processing, where we constantly process new inputs from a large, possibly infinite sequence but do not want to attend to all of it. An implementation is available at BIBREF16 . 
This layer topology offers a number of benefits over other approaches. Compared with the convolutions use in BIBREF13 , it allows finer-grained control of the number of parameters and computation, given that the SVDF is composed by several relatively small filters. This is useful when selecting a tradeoff between quality, size and computation. Moreover, because of this characteristic, the SVDF allows creating very small networks that outperform other topologies which operate at larger granularity (e.g. our first stage, always-on network has about 13K parameters BIBREF7 ). The SVDF also pairs very well with linear “bottleneck” layers to significantly reduce the parameter count as in BIBREF17 , BIBREF18 , and more recently in BIBREF9 . And because it allows for creating evenly sized deep networks, we can insert them throughout the network as in Figure FIGREF8 . Another benefit is that due to the explicit sizing of the receptive field it allows for a fine grained control over how much to remember from the past. This has resulted in SVDF outperforming RNN-LSTMs, which do not benefit from, and are potentially hurt by, paying attention to theoretically infinite past. It also avoids having complex logic to reset the state every few seconds as in BIBREF11 . Method to train the end-to-end neural network The goal of our end-to-end training is to optimize the network to produce the likelihood score, and to do so as precisely as possible. This means have a high score right at the place where the last bit of the keyword is present in the streaming audio, and not before and particularly not much after (i.e. a "spiky" behaviour is desirable). This is important since the system is bound to an operating point defined by a threshold (between 0 and 1) that is choosen to strike a balance between false-accepts and false-rejects, and a smooth likelyhood curve would add variability to the firing point. Moreover, any time between the true end of the keyword and the point where the score meets the threshold will become latency in the system (e.g. the "assistant" will be slow to respond). A common drawback of CTC-trained RNNs BIBREF19 we aim to avoid. We generate input sequences composed of pairs < INLINEFORM0 , INLINEFORM1 >. Where INLINEFORM2 is a 1D tensor corresponding to log-mel filter-bank energies produced by a front-end as in BIBREF5 , BIBREF14 , BIBREF13 , and INLINEFORM3 is the class label (one of INLINEFORM4 ). Each tensor INLINEFORM5 is first force-aligned from annotated audio utterances, using a large LVCSR system, to break up the components of the keyword BIBREF20 . For example, "ok google" is broken into: "ou", "k", "eI", "<silence>", "g", "u", "g", "@", "l". Then we assign labels of 1 to all sequence entries, part of a true keyword utterance, that correspond to the last component of the keyword ("l" in our "ok google" example). All other entries are assigned a label of 0, including those that are part of the keyword but that are not its last component. See Figure FIGREF6 . Additionally, we tweak the label generation by adding a fixed amount of entries with a label of 1, starting from the first vector INLINEFORM6 corresponding to the final keyword component. This is with the intetion of balancing the amount of negative and positive examples, in the same spirit as BIBREF0 . This proved important to make training stable, as otherwise the amount of negative examples overpowered the positive ones. 
The end-to-end training uses a simple frame-level cross-entropy (CE) loss that for the feature vector INLINEFORM0 is defined by INLINEFORM1 , where INLINEFORM2 are the parameters of the network, INLINEFORM3 the INLINEFORM4 th output of the final softmax. Our training recipe uses asynchronous stochastic gradient descent (ASGD) to produce a single neural network that can be fed streaming input features and produce a detection score. We propose two options to this recipe: Encoder+decoder. A two stage training procedure where we first train an acoustic encoder, as in BIBREF5 , BIBREF14 , BIBREF13 , and then a decoder from the outputs of the encoder (rather than filterbank energies) and the labels from SECREF5 . We do this in a single DNN by creating a final topology that is composed of the encoder and its pre-trained parameters (including the softmax), followed by the decoder. See Figure FIGREF8 . During the second stage of training the encoder parameters are frozen, such that only the decoder is trained. This recipe useful on models that tend to overfit to parts of the training set. End-to-end. In this option, we train the DNN end-to-end directly, with the sequences from SECREF5 . The DNN may use any topology, but we use that of the encoder+decoder, except for the intermediate encoder softmax. See Figure FIGREF8 . Similar to the encoder+decoder recipe, we can also initialize the encoder part with a pre-trained model, and use an adaptation rate INLINEFORM0 to tune how much the encoder part is being adjusted (e.g. a rate of 0 is equivalent to the encoder+decoder recipe). This end-to-end pipeline, where the entirety of the topology's parameters are adjusted, tends to outperform the encoder+decoder one, particularly in smaller sized models which do not tend to overfit. Experimental setup In order to determine the effectiveness of our approach, we compare against a known keyword spotting system proposed in BIBREF13 . This section describes the setups used in the results section. Front-end Both setups use the same front-end, which generates 40-dimensional log-mel filter-bank energies out of 30ms windows of streaming audio, with overlaps of 10ms. The front-end can be queried to produce a sequence of contiguous frames centered around the current frame INLINEFORM0 . Older frames are said to form the left context INLINEFORM1 , and newer frames form the right context INLINEFORM2 . Additionally, the sequences can be requested with a given stride INLINEFORM3 . Baseline model setup Our baseline system (Baseline_1850K) is taken from BIBREF13 . It consists of a DNN trained to predict subword targets within the keywords. The input to the DNN consists of a sequence with INLINEFORM0 frames of left and INLINEFORM1 frames of right context; each with a stride of INLINEFORM2 . The topology consists of a 1-D convolutional layer with 92 filters (of shape 8x8 and stride 8x8), followed by 3 fully-connected layers with 512 nodes and a rectified linear unit activation each. A final softmax output predicts the 7 subword targets, obtained from the same forced alignment process described in SECREF5 . This results in the baseline DNN containing 1.7M parameters, and performing 1.8M multiply-accumulate operations per inference (every 30ms of streaming audio). 
A keyword spotting score between 0 and 1 is computed by first smoothing the posterior values, averaging them over a sliding window of the previous 100 frames with respect to the current INLINEFORM3 ; the score is then defined as the largest product of the smoothed posteriors in the sliding window as originally proposed in BIBREF6 . End-to-end model setup The end-to-end system (prefix E2E) uses the DNN topology depicted in Figure FIGREF8 . We present results with 3 distinct size configurations (infixes 700K, 318K, and 40K) each representing the number of approximate parameters, and 2 types of training recipes (suffixes 1stage and 2stage) corresponding to end-to-end and encoder+decoder respectively, as described in UID7 . The input to all DNNs consist of a sequence with INLINEFORM0 frames of left and INLINEFORM1 frames of right context; each with a stride of INLINEFORM2 . More specifically, the E2E_700K model uses INLINEFORM3 nodes in the first 4 SVDF layers, each with a memory INLINEFORM4 , with intermediate bottleneck layers each of size 64; the following 3 SVDF layers have INLINEFORM5 nodes, each with a memory INLINEFORM6 . This model performs 350K multiply-accumulate operations per inference (every 20ms of streaming audio). The E2E_318K model uses INLINEFORM7 nodes in the first 4 SVDF layers, each with a memory INLINEFORM8 , with intermediate bottleneck layers each of size 64; the remainder layers are the same as E2E_700K. This model performs 159K multiply-accumulate operations per inference. Finally, the E2E_40K model uses INLINEFORM9 nodes in the first 4 SVDF layers, each with a memory INLINEFORM10 , with intermediate bottleneck layers each of size 32; the remainder layers are the same as the other two models. This model performs 20K multiply-accumulate operations per inference. Dataset The training data for all experiments consists of 1 million anonymized hand-transcribed utterances of the keywords "Ok Google" and "Hey Google", with an even distribution. To improve robustness, we create "multi-style" training data by synthetically distorting the utterances, simulating the effect of background noise and reverberation. 8 distorted utterances are created for each original utterance; noise samples used in this process are extracted from environmental recordings of everyday events, music, and Youtube videos. Results are reported on four sets representative of various environmental conditions: Clean non-accented contains 170K non-accented english utterances of the keywords in "clean" conditions, plus 64K samples without the keywords (1K hours); Clean accented has 153K english utterances of the keywords with Australian, British, and Indian accents (also in "clean" conditions), plus 64K samples without the keywords (1K hours); High pitched has 1K high pitched utterances of the keywords, and 64K samples without them (1K hours); Query logs contains 110K keyword and 21K non-keyword utterances, collected from anonymized voice search queries. This last set contains background noises from real living conditions. Results Our goal is to compare the efectiviness of the proposed approach against the baseline system described in BIBREF13 . We evaluate the false-reject (FR) and false-accept (FA) tradeoff across several end-to-end models of distinct sizes and computational complexities. 
As can be seen in the Receiver Operating Characteristic (ROC) curves in Figure FIGREF14 , the 2 largest end-to-end models, with 2-stage training, significantly outperform the recognition quality of the much larger and complex Baseline_1850K system. More specifically, E2E_318K_2stage and E2E_700K_2stage show up to 60% relative FR rate reduction over Baseline_1850K in most test conditions. Moreover, E2E_318K_2stage uses only about 26% of the computations that Baseline_1850K uses (once normalizing their execution rates over time), but still shows significant improvements. We also explore end-to-end models at a size that, as described in BIBREF7 , is small enough, in both size and computation, to be executed continuously with very little power consumption. These 2 models, E2E_40K_1stage and E2E_40K_2stage, also explore the capacity of end-to-end training (1stage) versus encoder+decoder training (2stage). As can be appreciated in the ROC curves, 1stage training outperforms 2stage training on all conditions, but particularly on both "clean" environments where it gets fairly close to the performance of the baseline setup. That is a significant achievement considering E2E_40K_1stage has 2.3% the parameters and performs 3.2% the computations of Baseline_1850K. Table TABREF13 compares the recognition quality of all setups by fixing on a very low false-accept rate of 0.1 FA per hour on a dataset containing only negative (i.e. non-keyword) utterances. Thus the table shows the false-reject rates at that operating point. Here we can appreciate similar trends as those described above: the 2 largest end-to-end models outperforms the baseline across all datasets, reducing FR rate about 40% on the clean conditions and 40%-20% on the other 2 sets depending on the model size. This table also shows how 1stage outperforms 2stage for small size models, and presents similar FR rates as Baseline_1850K on clean conditions. Conclusion We presented a system for keyword spotting that by combining an efficient topology and two types of end-to-end training can significantly ourperform previous appraoches, at a much lower cost of size and computation. We specifically show how it beats the performance of a setup taken from BIBREF13 with models over 5 times smaller, and even get close to the same performance with a model over 40 times smaller. Our approach provides further benefits of not requiring anything other than a front-end and a neural network to perform the detection, and thus it is easier to extend to newer keywords and/or fine-tune with new training data. Future work includes exploring other loss-functions, as well as generalizing multi-channel support.
Our baseline system (Baseline_1850K) is taken from BIBREF13 .
d7aed39c359fd381495b12996c4dfc1d3da38ed5
d7aed39c359fd381495b12996c4dfc1d3da38ed5_0
Q: How is the back-translation model trained? Text: Introduction Semantic parsing aims to map natural language questions to the logical forms of their underlying meanings, which can be regarded as programs and executed to yield answers, aka denotations BIBREF0 . In the past few years, neural network based semantic parsers have achieved promising performances BIBREF1 , however, their success is limited to the setting with rich supervision, which is costly to obtain. There have been recent attempts at low-resource semantic parsing, including data augmentation methods which are learned from a small number of annotated examples BIBREF2 , and methods for adapting to unseen domains while only being trained on annotated examples in other domains. This work investigates neural semantic parsing in a low-resource setting, in which case we only have our prior knowledge about a limited number of simple mapping rules, including a small amount of domain-independent word-level matching tables if necessary, but have no access to either annotated programs or execution results. Our key idea is to use these rules to collect modest question-programs pairs as the starting point, and then leverage automatically generated examples to improve the accuracy and generality of the model. This presents three challenges including how to generate examples in an efficient way, how to measure the quality of generated examples which might contain errors and noise, and how to train a semantic parser that makes robust predictions for examples covered by rules and generalizes well to uncovered examples. We address the aforementioned challenges with a framework consisting of three key components. The first component is a data generator. It includes a neural semantic parsing model, which maps a natural language question to a program, and a neural question generation model, which maps a program to a natural language question. We learn these two models in a back-translation paradigm using pseudo parallel examples, inspired by its big success on unsupervised neural machine translation BIBREF3 , BIBREF4 . The second component is a quality controller, which is used for filtering out noise and errors contained in the pseudo data. We construct a phrase table with frequent mapping patterns, therefore noise and errors with low frequency can be filtered out. A similar idea has been worked as posterior regularization in neural machine translation BIBREF5 , BIBREF6 . The third component is a meta learner. Instead of transferring a model pretrained with examples covered by rules to the generated examples, we leverage model-agnostic meta-learning BIBREF7 , an elegant meta-learning algorithm which has been successfully applied to a wide range of tasks including few-shot learning and adaptive control. We regard different data sources as different tasks, and use outputs of the quality controller for stable training. We test our approach on three tasks with different programs, including SQL (and SQL-like) queries for both single-turn and multi-turn questions over web tables BIBREF8 , BIBREF9 , and subject-predicate pairs over a large-scale knowledge graph BIBREF10 . The program for SQL queries for single-turn questions and subject-predicate pairs over knowledge graph is simple while the program for SQL queries for multi-turn questions have top-tier complexity among currently proposed tasks. 
Results show that our approach yields large improvements over rule-based systems, and incorporating different strategies incrementally improves the overall performance. On WikiSQL, our best performing system achieves execution accuracy of 72.7%, comparable to a strong system learned from denotations BIBREF11 with an accuracy of 74.8%. Problem Statement We focus on the task of executive semantic parsing. The goal is to map a natural language question/utterance INLINEFORM0 to a logical form/program INLINEFORM1 , which can be executed over a world INLINEFORM2 to obtain the correct answer INLINEFORM3 . We consider three tasks. The first task is single-turn table-based semantic parsing, in which case INLINEFORM0 is a self-contained question, INLINEFORM1 is a SQL query in the form of “SELECT agg col INLINEFORM2 WHERE col INLINEFORM3 = val INLINEFORM4 AND ...”, and INLINEFORM5 is a web table consisting of multiple rows and columns. We use WikiSQL BIBREF8 as the testbed for this task. The second task is multi-turn table-based semantic parsing. Compared to the first task, INLINEFORM6 could be a follow-up question, the meaning of which depends on the conversation history. Accordingly, INLINEFORM7 in this task supports additional operations that copy previous turn INLINEFORM8 to the current turn. We use SequentialQA BIBREF9 for evaluation. In the third task, we change INLINEFORM9 to a large-scale knowledge-graph (i.e. Freebase) and consider knowledge-based question answering for single-turn questions. We use SimpleQuestions BIBREF10 as the testbed, where the INLINEFORM10 is in the form of a simple INLINEFORM11 -calculus like INLINEFORM12 , and the generation of INLINEFORM13 is equivalent to the prediction of the predicate and the subject entity. We study the problem in a low-resource setting. In the training process, we don't have annotated logical forms INLINEFORM0 or execution results INLINEFORM1 . Instead, we have a collection of natural language questions for the task, a limited number of simple mapping rules based on our prior knowledge about the task, and may also have a small amount of domain-independent word-level matching tables if necessary. These rules are not perfect, with low coverage, and can even be incorrect for some situations. For instance, when predicting a SQL command in the first task, we have a prior knowledge that (1) WHERE values potentially have co-occurring words with table cells; (2) the words “more” and “greater” tend to be mapped to WHERE operator “ INLINEFORM2 ”; (3) within a WHERE clause, header and cell should be in the same column; and (4) the word “average” tends to be mapped to aggregator “avg”. Similarly, when predicting a INLINEFORM3 -calculus in the third task, the entity name might be present in the question, and among all the predicates connected to the entity, the predicate with maximum number of co-occurred words might be correct. We would like to study to what extent our model can achieve if we use rules as the starting point. Learning Algorithm We describe our approach for low-resource neural semantic parsing in this section. We propose to train a neural semantic parser using back-translation and meta-learning. The learning process is summarized in Algorithm FIGREF1 . We describe the three components in this section, namely back-translation, quality control, and meta-learning. 
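Before turning to the three learning components, the toy snippet below illustrates the kind of prior-knowledge rules listed in the Problem Statement for the WikiSQL case: matching table cells to WHERE values, mapping "more"/"greater" to the > operator, tying the WHERE column to the matched cell's column, and mapping "average" to the avg aggregator. The function name, thresholds, matching strategy, and table format are invented for illustration and are far cruder than the engineered rules the paper uses.

```python
def rule_based_sql(question: str, table: dict) -> dict:
    """Toy illustration of the prior-knowledge rules for WikiSQL-style parsing.

    table: {"header": [...], "rows": [[...], ...]}. Returns a crude SQL sketch
    {"select": col, "agg": ..., "where": [(col, op, value)]} or raises if the
    rules do not cover the question.
    """
    tokens = question.lower().split()
    where = []
    for row in table["rows"]:
        for col, cell in zip(table["header"], row):
            if str(cell).lower() in question.lower():                    # rule (1): value co-occurs with a cell
                op = ">" if {"more", "greater"} & set(tokens) else "="   # rule (2): "more"/"greater" -> >
                where.append((col, op, cell))                            # rule (3): same column as the cell
    if not where:
        raise ValueError("rules do not cover this question")

    # SELECT column: the header with most word overlap that is not a WHERE column.
    where_cols = {c for c, _, _ in where}
    select = max((h for h in table["header"] if h not in where_cols),
                 key=lambda h: len(set(h.lower().split()) & set(tokens)),
                 default=table["header"][0])
    agg = "avg" if "average" in tokens else None                         # rule (4): "average" -> avg
    return {"select": select, "agg": agg, "where": where}


table = {"header": ["Player", "Nation", "Points"],
         "rows": [["Ann", "France", 12], ["Bo", "Spain", 9]]}
print(rule_based_sql("how many points does ann have", table))
# -> {'select': 'Points', 'agg': None, 'where': [('Player', '=', 'Ann')]}
```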
Back-Translation Following the back-translation paradigm BIBREF3 , BIBREF4 , we have a semantic parser, which maps a natural language question INLINEFORM0 to a logical form INLINEFORM1 , and a question generator, which maps INLINEFORM2 to INLINEFORM3 . The semantic parser works for the primary task, and the question generator mainly works for generating pseudo datapoints. We start the training process by applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5 . The resulting dataset is considered as the training data to initialize both the semantic parser and the question generator. Afterwards, both models are improved following the back-translation protocol that target sequences should follow the real data distribution, yet source sequences can be generated with noises. This is based on the consideration that in an encoder-decoder model, the decoder is more sensitive to the data distribution than the encoder. We use datapoints from both models to train the semantic parser because a logical form is structural which follows a grammar, whose distribution is similar to the ground truth. Quality Controller Directly using generated datapoints as supervised training data is not desirable because those generated datapoints contain noises or errors. To address this, we follow the application of posterior regularization in neural machine translation BIBREF5 , and implement a dictionary-based discriminator which is used to measure the quality of a pseudo data. The basic idea is that although these generated datapoints are not perfect, the frequent patterns of the mapping between a phrase in INLINEFORM0 to a token in INLINEFORM1 are helpful in filtering out noise in the generated data with low frequency BIBREF6 . There are multiple ways to collect the phrase table information, such as using statistical phrase-level alignment algorithms like Giza++ or directly counting the co-occurrence of any question word and logical form token. We use the latter one in this work. Further details are described in the appendix. Meta-Learning A simple way to update the semantic parser is to merge the datapoints in hand and train a one-size-fits-all model BIBREF2 . However, this will hurt model's stability on examples covered by rules, and examples of the same task may vary widely BIBREF12 . Dealing with different types of examples requires the model to possess different abilities. For example, tackling examples uncovered by rules in WikiSQL requires the model to have the additional ability to map a column name to a totally different utterance, such as “country” to “nation”. Another simple solution is self-training BIBREF13 . One can train a model with examples covered by rules, and use the model as a teacher to make predictions on examples uncovered by rules and update the model on these predictions. However, self-training is somewhat tautological because the model is learned to make predictions which it already can produce. We learn the semantic parser with meta-learning, regarding learning from examples covered by rules or uncovered by rules as two (pseudo) tasks. Compared to the aforementioned strategies, the advantage of exploring meta-learning here is two-fold. First, we learn a specific model for each task, which provides guarantees about its stability on examples covered by rules. In the test phase, we can use the rule to detect which task an example belongs to, and use the corresponding task-specific model to make predictions. 
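The back-translation round and the dictionary-based quality controller described above can be summarized structurally as follows; the semantic parser and question generator are passed in as callables because the actual models are neural encoder-decoders initialized on rule-covered pairs. The co-occurrence counting and the frequency threshold are simplified assumptions (the paper's scoring differs in detail). The meta-learning discussion continues after this sketch.

```python
from collections import Counter
from typing import Callable, List, Tuple

def build_phrase_table(pairs: List[Tuple[str, str]]) -> Counter:
    """Count co-occurrences of question words and logical-form tokens
    (the dictionary used by the quality controller)."""
    table = Counter()
    for question, logical_form in pairs:
        for w in question.split():
            for t in logical_form.split():
                table[(w, t)] += 1
    return table

def quality_ok(question: str, logical_form: str, table: Counter, min_count: int = 2) -> bool:
    """Keep a pseudo pair only if it contains at least one frequent word/token mapping."""
    hits = [table[(w, t)] for w in question.split() for t in logical_form.split()]
    return max(hits, default=0) >= min_count

def back_translation_round(questions: List[str],
                           parse: Callable[[str], str],      # semantic parser: question -> logical form
                           generate: Callable[[str], str],   # question generator: logical form -> question
                           sampled_lfs: List[str],
                           table: Counter) -> List[Tuple[str, str]]:
    """One round of pseudo-data creation. Target sides follow the real data
    distribution; source sides may be noisy, so pairs are filtered."""
    pseudo = []
    # Parser output paired with a real question (logical forms follow a grammar,
    # so their distribution stays close to the ground truth).
    for q in questions:
        lf = parse(q)
        if quality_ok(q, lf, table):
            pseudo.append((q, lf))
    # Generated question paired with a sampled logical form (trains the parser).
    for lf in sampled_lfs:
        q = generate(lf)
        if quality_ok(q, lf, table):
            pseudo.append((q, lf))
    return pseudo
```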
When dealing with examples covered by rules, we can either directly use rules to make predictions or use the updated model, depending on the accuracy of the learned model on the examples covered by rules on development set. Second, latent patterns of examples may vary widely in terms of whether or not they are covered by rules. Meta-learning is more desirable in this situation because it learns the model's ability to learn, improving model's versatility rather than mapping the latent patterns learned from datapoints in one distribution to datapoints in another distribution by force. Figure FIGREF1 is an illustration of data combination, self-training, and meta-learning. Meta-learning includes two optimizations: the learner that learns new tasks, and the meta-learner that trains the learner. In this work, the meta-learner is optimized by finding a good initialization that is highly adaptable. Specifically, we use model-agnostic meta-learning, MAML BIBREF7 , a powerful meta-learning algorithm with desirable properties including introducing no additional parameters and making no assumptions of the form of the model. In MAML, task-specific parameter INLINEFORM0 is initialized by INLINEFORM1 , and updated using gradient decent based on the loss function INLINEFORM2 of task INLINEFORM3 . In this work, the loss functions of two tasks are the same. The updated parameter INLINEFORM4 is then used to calculate the model's performance across tasks to update the parameter INLINEFORM5 . In this work, following the practical suggestions given by BIBREF17 , we update INLINEFORM6 in the inner-loop and regard the outputs of the quality controller as the input of both tasks. If we only have examples covered by rules, such as those used in the initialization phase, meta-learning learns to learn a good initial parameter that is evaluated by its usefulness on the examples from the same distribution. In the training phase, datapoints from both tasks are generated, and meta-learning learns to learn an initialization parameter which can be quickly and efficiently adapted to examples from both tasks. Experiment We conduct experiments on three tasks to test our approach, including generating SQL (or SQL-like) queries for both single-turn and multi-turn questions over web tables BIBREF8 , BIBREF9 , and predicting subject-predicate pairs over a knowledge graph BIBREF10 . We describe task definition, base models, experiments settings and empirical results for each task, respectively. Table-Based Semantic Parsing Given a natural language INLINEFORM0 and a table INLINEFORM1 with INLINEFORM2 columns and INLINEFORM3 rows as the input, the task is to output a SQL query INLINEFORM4 , which could be executed on table INLINEFORM5 to yield the correct answer of INLINEFORM6 . We conduct experiments on WikiSQL BIBREF8 , which provides 87,726 annotated question-SQL pairs over 26,375 web tables. In this work, we do not use either SQL queries or answers in the training process. We use execution accuracy as the evaluation metric, which measures the percentage of generated SQL queries that result in the correct answer. We describe our rules for WikiSQL here. We first detect WHERE values, which exactly match to table cells. After that, if a cell appears at more than one column, we choose the column name with more overlapped words with the question, with a constraint that the number of co-occurred words is larger than 1. 
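The MAML-style update just described, with rule-covered and rule-uncovered examples as the two pseudo tasks, can be sketched as below using a toy linear model in place of the parser. This is the common first-order simplification (the full MAML objective also differentiates through the inner update); the learning rates and the squared-error loss are placeholders. The description of the WikiSQL rules resumes after the code.

```python
import numpy as np

def loss_and_grad(w, X, y):
    """Squared-error loss and gradient for a toy linear model standing in for the parser."""
    err = X @ w - y
    return 0.5 * np.mean(err ** 2), X.T @ err / len(y)

def maml_train(tasks, dim, inner_lr=0.05, meta_lr=0.01, steps=200, seed=0):
    """First-order MAML sketch over two pseudo tasks (rule-covered and rule-uncovered).

    Both tasks share the same loss function, as in the paper; each task supplies
    a (support, query) pair of batches, i.e. ((Xs, ys), (Xq, yq)).
    """
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(dim) * 0.1              # shared initialization
    for _ in range(steps):
        meta_grad = np.zeros_like(theta)
        for (Xs, ys), (Xq, yq) in tasks:
            # Inner loop: task-specific adaptation from the shared initialization.
            _, g_support = loss_and_grad(theta, Xs, ys)
            theta_task = theta - inner_lr * g_support
            # Outer loop: the adapted parameters are judged on the query batch,
            # and that gradient updates the shared initialization (first-order).
            _, g_query = loss_and_grad(theta_task, Xq, yq)
            meta_grad += g_query
        theta -= meta_lr * meta_grad / len(tasks)
    return theta
```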
By default, a WHERE operator is INLINEFORM0 , except when the surrounding words of a value contain keywords for INLINEFORM1 and INLINEFORM2 . Then, we deal with the SELECT column, which has the largest number of co-occurred words and cannot be the same as any WHERE column. By default, the SELECT AGG is NONE, except when matching any keywords in Table TABREF8 . The coverage of our rules on the training set is 78.4%, with an execution accuracy of 77.9%. We implement a neural network modular approach as the base model, which includes different modules to predict different SQL constituents. This approach is based on the understanding of the SQL grammar in WikiSQL, namely “SELECT $agg $column WHERE $column $op $value (AND $column $op $value)*”, where tokens starting with “$” are the slots to be predicted BIBREF18 . In practice, modular approaches typically achieve higher accuracy than end-to-end learning approaches. Specifically, at the first step we implement a sequential labeling module to detect WHERE values and link them to table cells. Advantages of starting from WHERE values include that WHERE values are less ambiguous compared to other slots, and that the number of WHERE clauses can be naturally detected. After that, for each WHERE value, we use the preceding and following contexts in the question to predict its WHERE column and the WHERE operator through two unidirectional LSTMs. Column attention BIBREF18 is used for predicting a particular column. Similar LSTM-based classifiers are used to predict SELECT column and SELECT aggregator. According to whether the training data can be processed by our rules, we divide it into two parts: a rule-covered part and a rule-uncovered part. For the rule-covered part, we obtain rule-covered training data using our rules. For the rule-uncovered part, we can also obtain training data using the trained Base model we have; we refer to these data as self-inference training data. Furthermore, we can obtain more training data by back-translation; we refer to these data as question-generation training data. For all the settings, the Base Model is initialized with rule-covered training data. In Base + Self Training Method, we finetune the Base model with self-inference training data. In Base + Question Generation Method, we use question-generation training data to finetune our model. In Base + BT Method, we use both self-inference and question-generation data to finetune our model. In Base + BT + QC, we add our quality controller. In Base + BT + QC + MAML, we further add meta-learning. Results are given in Table TABREF5 . We can see that back-translation, quality control and MAML incrementally improve the accuracy. Question generation is better than self-training here because the logical form in WikiSQL is relatively simple, so the distribution of the sampled logical forms is similar to the original one. In the back-translation setting, generated examples come from both self-training and the question generation model. The model performs better than rules on rule-covered examples, and improves the accuracy on uncovered examples. Figure FIGREF12 shows the learning curves of the COLUMN prediction model with or without using MAML. The model using MAML has a better starting point during training, which reflects the effectiveness of the pre-trained parameter. Knowledge-Based Question Answering We test our approach on question answering over another genre of environment: a knowledge graph consisting of subject-relation-object triples.
Given a natural language question and a knowledge graph, the task aims to correctly answer the question with evidences from the knowledge graph. We do our study on SimpleQuestions BIBREF10 , which includes 108,442 simple questions, each of which is accompanied by a subject-relation-object triple. Questions are constructed in a way that subject and relation are mentioned in the question, and that object is the answer. The task requires predicting the entityId and the relation involved in the question. Our rule for KBQA is simple without using a curated mapping dictionary. First, we detect an entity from the question using strict string matching, with the constraint that only one entity from the KB has the same surface string and that the question contains only one entity. After that, we get the connected relations of the detected entity, and assign the relation as the one with maximum number of co-occurred words. The coverage of our rule on training set is 16.0%, with an accuracy of 97.3% for relation prediction. We follow BIBREF22 , and implement a KBQA pipeline consisting of three modules in this work. At the first step, we use a sequence labeling model, i.e. LSTM-CRF, to detect entity mention words in the question. After that, we use an entity linking model with BM25 built on Elasticsearch. Top-K ranked similar entities are retrieved as candidate list. Then, we get all the relations connected to entities in the candidate list as candidate relations, and use a relation prediction model, which is based on Match-LSTM BIBREF23 , to predict the relation. Finally, from all the entities connected to the predicted relation, we choose the one with highest BM25 score as the predicted entity. We use FB2M as the KB, which includes about 2 million triples. The settings are the same as those described in table-based semantic parsing. Results are given in Table TABREF10 , which are consistent with the numbers in WikiSQL. Using back-translation, quality control and MAML incrementally improves the accuracy, and our approach generalizes well to rule-uncovered examples. Conversational Table-Based Semantic Parsing We consider the task of conversational table-based semantic parsing in this part. Compared to single-turn table-based semantic parsing as described in subsection SECREF6 , the meaning of a natural language may also depends on questions of past turns, which is the common ellipsis and co-reference phenomena in conversational agents. Given a natural language question at the current turn, a web table, and previous turn questions in a conversation as the input, the task aims to generate a program (i.e. logical form), which can be executed on the table to obtain the correct answer of the current turn question. We conduct experiments on SequentialQA BIBREF9 which is derived from the WikiTableQuestions dataset BIBREF19 . It contains 6,066 question sequences covering 17,553 question-answer pairs. Each sequence includes 2.9 natural language questions on average. Different from WikiSQL which provides the correct logical form for each question, SequentialQA only annotates the correct answer. This dataset is also harder than the previous two, since it requires complex, highly compositional logical forms to get the answer. Existing approaches are evaluated by question answering accuracy, which measures whether the predicted answer is correct or not. The pipeline of rules in SequentialQA is similar to that of WikiSQL. 
Compared to the grammar of WikiSQL, the grammar of SequentialQA has additional actions including copying the previous-turn logical form, no greater than, no more than, and negation. Table TABREF23 shows the additional word-level mapping table used in SequentialQA. The coverage of our rule on training set is 75.5%, with an accuracy of 38.5%. We implement a modular approach on top of a grammar of derivation rules (actions) as the base model. Similar to BIBREF9 , our grammar consists of predefined actions used for predicting SELECT column, WHERE column, WHERE operator, WHERE value, and determining whether it is required to copy the entire action sequence of the previous turn questions. After encoding a question and previous turn questions into vectors, we first use a controller module to predict an action sequence consisting of slots, and then use specific modules to predict the argument of each slot. Similar to BIBREF9 , we use a recurrent structure as the backbone of each module and use the softmax layer for making prediction. The settings are the same as those described in table-based semantic parsing. From Table TABREF20 , we can see that question generation does not work well on this task. This is because the difficulty in generating sequential questions and complex target logical forms. Applying MAML to examples not coming from question generation performs best. We leave contextual question generation as a future work. Conclusion and Future Directions We present an approach to learn neural semantic parser from simple domain-independent rules, instead of annotated logical forms or denotations. Our approach starts from examples covered by rules, which are used to initialize a semantic parser and a question generator in a back-translation paradigm. Generated examples are measured and filtered based on statistic analysis, and then used with model-agnostic meta-learning, which guarantees model's accuracy and stability on rule-covered examples, and acquires the versatility to generalize well on rule-uncovered examples. We conduct experiments on three datasets for table-based and knowledge-based question answering tasks. Results show that incorporating different strategies incrementally improves the performance. Our best model on WikiSQL achieves comparable accuracy to the system learned from denotation. In the future, we plan to focus on more complex logical forms.
applying the rule INLINEFORM4 to a set of natural language questions INLINEFORM5, both models are improved following the back-translation protocol that target sequences should follow the real data distribution
9c423e3b44e3acc2d4b0606688d4ac9d6285ed0f
9c423e3b44e3acc2d4b0606688d4ac9d6285ed0f_0
Q: Are the rules dataset specific? Text: Introduction Semantic parsing aims to map natural language questions to the logical forms of their underlying meanings, which can be regarded as programs and executed to yield answers, aka denotations BIBREF0 . In the past few years, neural network based semantic parsers have achieved promising performances BIBREF1 , however, their success is limited to the setting with rich supervision, which is costly to obtain. There have been recent attempts at low-resource semantic parsing, including data augmentation methods which are learned from a small number of annotated examples BIBREF2 , and methods for adapting to unseen domains while only being trained on annotated examples in other domains. This work investigates neural semantic parsing in a low-resource setting, in which case we only have our prior knowledge about a limited number of simple mapping rules, including a small amount of domain-independent word-level matching tables if necessary, but have no access to either annotated programs or execution results. Our key idea is to use these rules to collect modest question-programs pairs as the starting point, and then leverage automatically generated examples to improve the accuracy and generality of the model. This presents three challenges including how to generate examples in an efficient way, how to measure the quality of generated examples which might contain errors and noise, and how to train a semantic parser that makes robust predictions for examples covered by rules and generalizes well to uncovered examples. We address the aforementioned challenges with a framework consisting of three key components. The first component is a data generator. It includes a neural semantic parsing model, which maps a natural language question to a program, and a neural question generation model, which maps a program to a natural language question. We learn these two models in a back-translation paradigm using pseudo parallel examples, inspired by its big success on unsupervised neural machine translation BIBREF3 , BIBREF4 . The second component is a quality controller, which is used for filtering out noise and errors contained in the pseudo data. We construct a phrase table with frequent mapping patterns, therefore noise and errors with low frequency can be filtered out. A similar idea has been worked as posterior regularization in neural machine translation BIBREF5 , BIBREF6 . The third component is a meta learner. Instead of transferring a model pretrained with examples covered by rules to the generated examples, we leverage model-agnostic meta-learning BIBREF7 , an elegant meta-learning algorithm which has been successfully applied to a wide range of tasks including few-shot learning and adaptive control. We regard different data sources as different tasks, and use outputs of the quality controller for stable training. We test our approach on three tasks with different programs, including SQL (and SQL-like) queries for both single-turn and multi-turn questions over web tables BIBREF8 , BIBREF9 , and subject-predicate pairs over a large-scale knowledge graph BIBREF10 . The program for SQL queries for single-turn questions and subject-predicate pairs over knowledge graph is simple while the program for SQL queries for multi-turn questions have top-tier complexity among currently proposed tasks. Results show that our approach yields large improvements over rule-based systems, and incorporating different strategies incrementally improves the overall performance. 
On WikiSQL, our best performing system achieves an execution accuracy of 72.7%, comparable to a strong system learned from denotations BIBREF11 with an accuracy of 74.8%.

Problem Statement We focus on the task of executable semantic parsing. The goal is to map a natural language question/utterance to a logical form/program, which can be executed over a world to obtain the correct answer. We consider three tasks. The first task is single-turn table-based semantic parsing, in which the question is self-contained, the program is a SQL query of the form “SELECT agg col WHERE col = val AND ...”, and the world is a web table consisting of multiple rows and columns. We use WikiSQL BIBREF8 as the testbed for this task. The second task is multi-turn table-based semantic parsing. Compared to the first task, the question could be a follow-up question, the meaning of which depends on the conversation history. Accordingly, the programs in this task support additional operations that copy the logical form of a previous turn to the current turn. We use SequentialQA BIBREF9 for evaluation. In the third task, the world is a large-scale knowledge graph (i.e., Freebase), and we consider knowledge-based question answering for single-turn questions. We use SimpleQuestions BIBREF10 as the testbed, where the program is a simple λ-calculus-like expression, and generating it is equivalent to predicting the predicate and the subject entity.

We study the problem in a low-resource setting. In the training process, we have neither annotated logical forms nor execution results. Instead, we have a collection of natural language questions for the task, a limited number of simple mapping rules based on our prior knowledge about the task, and possibly a small number of domain-independent word-level matching tables. These rules are not perfect, with low coverage, and can even be incorrect in some situations. For instance, when predicting a SQL command in the first task, we have the prior knowledge that (1) WHERE values tend to have co-occurring words with table cells; (2) the words “more” and “greater” tend to be mapped to the WHERE operator “>”; (3) within a WHERE clause, header and cell should be in the same column; and (4) the word “average” tends to be mapped to the aggregator “avg”. Similarly, when predicting a λ-calculus expression in the third task, the entity name might be present in the question, and among all the predicates connected to the entity, the predicate with the maximum number of co-occurring words is likely to be correct. We would like to study what our model can achieve if we use such rules as the starting point.

Learning Algorithm We describe our approach for low-resource neural semantic parsing in this section. We propose to train a neural semantic parser using back-translation and meta-learning. The learning process is summarized in Algorithm FIGREF1 and sketched in pseudocode below. We describe the three components in turn, namely back-translation, quality control, and meta-learning.
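To make the overall procedure concrete, the following pseudocode is one plausible reading of that training loop. The component interfaces (fit, predict, sample_logical_forms, filter, update) and the iteration scheme are assumptions introduced for illustration, not the paper's exact algorithm.

```python
def train_low_resource_parser(questions, rules, parser, generator,
                              quality_controller, maml, num_iterations=10):
    """Hypothetical outer loop tying the three components together. The
    methods used on `parser`, `generator`, `quality_controller`, and `maml`
    are assumed interfaces, not the paper's actual API."""
    # Initialization: apply the rules to collect pseudo question-program pairs.
    covered = [(q, rules(q)) for q in questions if rules(q) is not None]
    uncovered_questions = [q for q in questions if rules(q) is None]
    parser.fit(covered)
    generator.fit([(lf, q) for q, lf in covered])

    for _ in range(num_iterations):
        # Back-translation: generate pseudo-parallel data in both directions.
        self_inference = [(q, parser.predict(q)) for q in uncovered_questions]
        sampled_lfs = generator.sample_logical_forms()
        question_generation = [(generator.predict(lf), lf) for lf in sampled_lfs]

        # Quality control: keep only pairs whose word-token mappings are frequent.
        pseudo = quality_controller.filter(self_inference + question_generation)

        # Meta-learning: treat rule-covered and rule-uncovered data as two
        # tasks and update the parser's initialization with MAML.
        maml.update(parser, tasks=[covered, pseudo])
    return parser
```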
Back-Translation Following the back-translation paradigm BIBREF3, BIBREF4, we have a semantic parser, which maps a natural language question to a logical form, and a question generator, which maps a logical form back to a natural language question. The semantic parser works for the primary task, and the question generator mainly works for generating pseudo datapoints. We start the training process by applying the rules to a set of natural language questions. The resulting dataset is used as the training data to initialize both the semantic parser and the question generator. Afterwards, both models are improved following the back-translation protocol: target sequences should follow the real data distribution, while source sequences may be generated with noise. This is based on the consideration that, in an encoder-decoder model, the decoder is more sensitive to the data distribution than the encoder. We use datapoints from both models to train the semantic parser because a logical form is structured and follows a grammar, so its distribution is similar to that of the ground truth.

Quality Controller Directly using generated datapoints as supervised training data is not desirable because those datapoints contain noise or errors. To address this, we follow the application of posterior regularization in neural machine translation BIBREF5 and implement a dictionary-based discriminator to measure the quality of pseudo datapoints. The basic idea is that although the generated datapoints are not perfect, the frequent patterns of the mapping between a question phrase and a logical form token are helpful in filtering out low-frequency noise in the generated data BIBREF6. There are multiple ways to collect the phrase table information, such as using statistical phrase-level alignment algorithms like Giza++ or directly counting the co-occurrence of question words and logical form tokens. We use the latter in this work. Further details are described in the appendix.
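As an illustration of the counting strategy just described, here is a minimal sketch of a dictionary-based quality controller. The minimum count, the scoring function, and the filtering threshold are assumptions, not the configuration reported in the appendix.

```python
from collections import Counter, defaultdict

def build_phrase_table(pairs, min_count=5):
    """Count co-occurrences of question words and logical-form tokens over
    pseudo-parallel (question, logical_form) pairs, and keep the frequent
    mappings as a simple phrase table."""
    cooc = defaultdict(Counter)
    for question, logical_form in pairs:
        q_words = set(question.lower().split())
        lf_tokens = set(logical_form.split())
        for w in q_words:
            for t in lf_tokens:
                cooc[w][t] += 1
    # keep only mappings that occur at least `min_count` times
    return {w: {t for t, c in cnt.items() if c >= min_count}
            for w, cnt in cooc.items()}

def quality_score(question, logical_form, phrase_table):
    """Fraction of logical-form tokens supported by at least one question
    word according to the phrase table (an illustrative metric)."""
    lf_tokens = logical_form.split()
    supported = set()
    for w in question.lower().split():
        supported |= phrase_table.get(w, set())
    hits = sum(1 for t in lf_tokens if t in supported)
    return hits / max(len(lf_tokens), 1)

def filter_pseudo_data(pairs, phrase_table, threshold=0.5):
    """Discard generated pairs whose mappings are too infrequent."""
    return [(q, lf) for q, lf in pairs
            if quality_score(q, lf, phrase_table) >= threshold]
```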
Meta-Learning A simple way to update the semantic parser is to merge all the datapoints in hand and train a one-size-fits-all model BIBREF2. However, this hurts the model's stability on examples covered by rules, and examples of the same task may vary widely BIBREF12. Dealing with different types of examples requires the model to possess different abilities. For example, tackling examples uncovered by rules in WikiSQL requires the model to have the additional ability to map a column name to a totally different utterance, such as “country” to “nation”. Another simple solution is self-training BIBREF13: one can train a model with examples covered by rules, use the model as a teacher to make predictions on examples uncovered by rules, and update the model on these predictions. However, self-training is somewhat tautological, because the model is trained to make the predictions it can already produce.

We learn the semantic parser with meta-learning, regarding learning from examples covered by rules and learning from examples uncovered by rules as two (pseudo) tasks. Compared to the aforementioned strategies, the advantage of exploring meta-learning here is two-fold. First, we learn a specific model for each task, which provides guarantees about its stability on examples covered by rules. In the test phase, we can use the rules to detect which task an example belongs to and use the corresponding task-specific model to make predictions. When dealing with examples covered by rules, we can either directly use the rules to make predictions or use the updated model, depending on the accuracy of the learned model on rule-covered examples on the development set. Second, the latent patterns of examples may vary widely depending on whether or not they are covered by rules. Meta-learning is more desirable in this situation because it learns the model's ability to learn, improving the model's versatility, rather than forcing latent patterns learned from datapoints in one distribution onto datapoints from another distribution. Figure FIGREF1 is an illustration of data combination, self-training, and meta-learning.

Meta-learning includes two optimizations: the learner, which learns new tasks, and the meta-learner, which trains the learner. In this work, the meta-learner is optimized by finding a good initialization that is highly adaptable. Specifically, we use model-agnostic meta-learning, MAML BIBREF7, a powerful meta-learning algorithm with desirable properties, including introducing no additional parameters and making no assumptions about the form of the model. In MAML, the task-specific parameters are initialized from the shared meta-parameters and updated using gradient descent on the loss function of the corresponding task; in this work, the loss functions of the two tasks are the same. The updated task-specific parameters are then used to evaluate the model's performance across tasks, which in turn updates the meta-parameters. Following the practical suggestions given by BIBREF17, we perform the parameter update in the inner loop and regard the outputs of the quality controller as the input of both tasks. If we only have examples covered by rules, such as those used in the initialization phase, meta-learning learns a good initial parameter that is evaluated by its usefulness on examples from the same distribution. In the training phase, datapoints from both tasks are generated, and meta-learning learns an initialization that can be quickly and efficiently adapted to examples from both tasks.
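In the standard MAML formulation, which the description above instantiates, the inner (task-specific) adaptation and the outer (meta) update can be written as follows. The notation is ours and is meant only as a hedged illustration: $\theta$ denotes the shared initialization, $\mathcal{L}_{k}$ the loss of pseudo-task $k$ (rule-covered or rule-uncovered, here with the same loss function), and $\alpha$, $\beta$ the inner and outer step sizes.

$$\theta^{\prime}_{k} = \theta - \alpha \, \nabla_{\theta}\, \mathcal{L}_{k}(f_{\theta}), \qquad k \in \lbrace \text{covered}, \text{uncovered}\rbrace$$

$$\theta \leftarrow \theta - \beta \, \nabla_{\theta} \sum_{k \in \lbrace \text{covered}, \text{uncovered}\rbrace} \mathcal{L}_{k}\big(f_{\theta^{\prime}_{k}}\big)$$

The exact variant used here (e.g., the inner-loop update following the practical suggestions of BIBREF17) may deviate from this textbook form.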
Experiment We conduct experiments on three tasks to test our approach: generating SQL (or SQL-like) queries for both single-turn and multi-turn questions over web tables BIBREF8, BIBREF9, and predicting subject-predicate pairs over a knowledge graph BIBREF10. We describe the task definition, base model, experiment settings, and empirical results for each task in turn.

Table-Based Semantic Parsing Given a natural language question and a table with multiple columns and rows as the input, the task is to output a SQL query, which can be executed on the table to yield the correct answer to the question. We conduct experiments on WikiSQL BIBREF8, which provides 87,726 annotated question-SQL pairs over 26,375 web tables. In this work, we use neither the SQL queries nor the answers in the training process. We use execution accuracy as the evaluation metric, which measures the percentage of generated SQL queries that result in the correct answer.

We describe our rules for WikiSQL here. We first detect WHERE values, which exactly match table cells. After that, if a cell appears in more than one column, we choose the column name with more overlapping words with the question, with the constraint that the number of co-occurring words is larger than 1. By default, a WHERE operator is “=”, except when the words surrounding a value contain keywords for “>” or “<”. Then, we deal with the SELECT column, which is the column with the largest number of co-occurring words and cannot be the same as any WHERE column. By default, the SELECT aggregator is NONE, except when the question matches one of the keywords in Table TABREF8. The coverage of our rules on the training set is 78.4%, with an execution accuracy of 77.9%.
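A minimal sketch of these WikiSQL heuristics is given below. The keyword lists stand in for Table TABREF8 and the comparison triggers are assumed examples; the sketch also simplifies the rules (for instance, it applies one operator to all WHERE clauses), so it should be read as an illustration rather than the exact rule set.

```python
AGG_KEYWORDS = {"average": "AVG", "how many": "COUNT", "total": "SUM",
                "most": "MAX", "least": "MIN"}        # assumed stand-ins for Table TABREF8
GT_KEYWORDS = {"more", "greater", "higher", "over"}   # mapped to ">"
LT_KEYWORDS = {"less", "fewer", "lower", "under"}     # mapped to "<"

def word_overlap(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))

def rule_parse(question, table):
    """table: dict mapping column name -> list of cell strings (one web table)."""
    q = question.lower()

    # 1) WHERE values: cells that appear verbatim in the question. If a cell
    #    occurs in several columns, prefer the column whose name shares more
    #    words with the question (only switching when the overlap exceeds 1).
    where = {}
    for col, cells in table.items():
        for cell in cells:
            if cell.lower() not in q:
                continue
            best = where.get(cell)
            if best is None or (word_overlap(col, question) > word_overlap(best, question)
                                and word_overlap(col, question) > 1):
                where[cell] = col

    # 2) WHERE operator: "=" by default, ">" or "<" if trigger words appear.
    op = "="
    if any(k in q.split() for k in GT_KEYWORDS):
        op = ">"
    elif any(k in q.split() for k in LT_KEYWORDS):
        op = "<"

    # 3) SELECT column: largest question overlap among non-WHERE columns.
    used = set(where.values())
    select_col = max((c for c in table if c not in used),
                     key=lambda c: word_overlap(c, question), default=None)

    # 4) SELECT aggregator: NONE unless an aggregator keyword is present.
    agg = next((a for k, a in AGG_KEYWORDS.items() if k in q), "NONE")

    return {"select": (agg, select_col),
            "where": [(col, op, val) for val, col in where.items()]}
```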
We implement a neural network modular approach as the base model, which includes different modules to predict different SQL constituents. This approach is based on the structure of the SQL grammar in WikiSQL, namely “SELECT $agg $column WHERE $column $op $value (AND $column $op $value)*”, where tokens starting with “$” are the slots to be predicted BIBREF18. In practice, modular approaches typically achieve higher accuracy than end-to-end learning approaches. Specifically, in the first step we implement a sequential labeling module to detect WHERE values and link them to table cells. The advantages of starting from WHERE values are that WHERE values are less ambiguous than other slots and that the number of WHERE clauses can be detected naturally. After that, for each WHERE value, we use the preceding and following contexts in the question to predict its WHERE column and WHERE operator through two unidirectional LSTMs. Column attention BIBREF18 is used for predicting a particular column. Similar LSTM-based classifiers are used to predict the SELECT column and the SELECT aggregator.

According to whether the training data can be processed by our rules, we divide it into two parts: a rule-covered part and a rule-uncovered part. For the rule-covered part, we obtain rule-covered training data using our rules. For the rule-uncovered part, we can also obtain training data using the trained base model; we refer to these data as self-inference training data. Furthermore, we can obtain more training data by back-translation; we refer to these data as question-generation training data. In all settings, the base model is initialized with the rule-covered training data. In Base + Self-Training, we fine-tune the base model with self-inference training data. In Base + Question Generation, we fine-tune the model with question-generation training data. In Base + BT, we fine-tune the model with both self-inference and question-generation data. In Base + BT + QC, we add the quality controller. In Base + BT + QC + MAML, we further add meta-learning.

Results are given in Table TABREF5. We can see that back-translation, quality control, and MAML incrementally improve the accuracy. Question generation is better than self-training here because the logical forms in WikiSQL are relatively simple, so the distribution of the sampled logical forms is similar to the original one. In the back-translation setting, generated examples come from both self-training and the question generation model. The model performs better than the rules on rule-covered examples and improves the accuracy on uncovered examples. Figure FIGREF12 shows the learning curves of the COLUMN prediction model with and without MAML. The model using MAML has a better starting point during training, which reflects the effectiveness of the pre-trained parameters.

Knowledge-Based Question Answering We also test our approach on question answering over another genre of environment: a knowledge graph consisting of subject-relation-object triples. Given a natural language question and a knowledge graph, the task aims to correctly answer the question with evidence from the knowledge graph. We conduct our study on SimpleQuestions BIBREF10, which includes 108,442 simple questions, each accompanied by a subject-relation-object triple. Questions are constructed such that the subject and the relation are mentioned in the question and the object is the answer. The task requires predicting the entity ID and the relation involved in the question.

Our rule for KBQA is simple and does not use a curated mapping dictionary. First, we detect an entity in the question using strict string matching, with the constraints that only one entity in the KB has the same surface string and that the question contains only one entity. After that, we collect the relations connected to the detected entity and assign the relation with the maximum number of co-occurring words. The coverage of our rule on the training set is 16.0%, with an accuracy of 97.3% for relation prediction.
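The entity and relation rule just described might look roughly like the following sketch. The two lookup tables are hypothetical stand-ins for the KB interface, and the tokenization of relation names is an assumption.

```python
def word_overlap(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))

def kbqa_rule(question, surface_to_entities, relations_of):
    """surface_to_entities: dict surface form -> list of entity ids;
    relations_of: dict entity id -> list of relation names.
    Both are hypothetical stand-ins for the KB interface."""
    q = question.lower()

    # 1) Entity: a surface form that appears verbatim in the question and is
    #    unambiguous in the KB; the rule abstains if the question does not
    #    contain exactly one such entity.
    matches = [(s, ents[0]) for s, ents in surface_to_entities.items()
               if s.lower() in q and len(ents) == 1]
    if len(matches) != 1:
        return None
    surface, entity = matches[0]

    # 2) Relation: among relations connected to the entity, pick the one whose
    #    (tokenized) name shares the most words with the rest of the question.
    rest = q.replace(surface.lower(), " ")
    candidates = relations_of.get(entity, [])
    if not candidates:
        return None
    relation = max(candidates,
                   key=lambda r: word_overlap(r.replace("/", " ").replace("_", " "), rest))
    return entity, relation
```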
We follow BIBREF22 and implement a KBQA pipeline consisting of three modules. In the first step, we use a sequence labeling model, i.e., an LSTM-CRF, to detect entity mention words in the question. After that, we use an entity linking model with BM25 built on Elasticsearch, and the top-K ranked similar entities are retrieved as the candidate list. Then, we take all the relations connected to entities in the candidate list as candidate relations and use a relation prediction model based on Match-LSTM BIBREF23 to predict the relation. Finally, from all the entities connected to the predicted relation, we choose the one with the highest BM25 score as the predicted entity. We use FB2M as the KB, which includes about 2 million triples. The settings are the same as those described for table-based semantic parsing. Results are given in Table TABREF10 and are consistent with the numbers on WikiSQL: using back-translation, quality control, and MAML incrementally improves the accuracy, and our approach generalizes well to rule-uncovered examples.

Conversational Table-Based Semantic Parsing We consider the task of conversational table-based semantic parsing in this part. Compared to single-turn table-based semantic parsing as described in subsection SECREF6, the meaning of a natural language question may also depend on questions from past turns, due to the ellipsis and co-reference phenomena that are common in conversational settings. Given a natural language question at the current turn, a web table, and the previous-turn questions in a conversation as the input, the task aims to generate a program (i.e., a logical form) which can be executed on the table to obtain the correct answer to the current-turn question.

We conduct experiments on SequentialQA BIBREF9, which is derived from the WikiTableQuestions dataset BIBREF19. It contains 6,066 question sequences covering 17,553 question-answer pairs, with 2.9 natural language questions per sequence on average. Unlike WikiSQL, which provides the correct logical form for each question, SequentialQA only annotates the correct answer. This dataset is also harder than the previous two, since it requires complex, highly compositional logical forms to get the answer. Existing approaches are evaluated by question answering accuracy, which measures whether the predicted answer is correct.

The rule pipeline for SequentialQA is similar to that of WikiSQL. Compared to the grammar of WikiSQL, the grammar of SequentialQA has additional actions, including copying the previous-turn logical form, no-greater-than, no-more-than, and negation. Table TABREF23 shows the additional word-level mapping table used for SequentialQA. The coverage of our rules on the training set is 75.5%, with an accuracy of 38.5%.

We implement a modular approach on top of a grammar of derivation rules (actions) as the base model. Similar to BIBREF9, our grammar consists of predefined actions used for predicting the SELECT column, WHERE column, WHERE operator, and WHERE value, and for determining whether it is required to copy the entire action sequence of the previous-turn question. After encoding the question and the previous-turn questions into vectors, we first use a controller module to predict an action sequence consisting of slots, and then use specific modules to predict the argument of each slot. Similar to BIBREF9, we use a recurrent structure as the backbone of each module and a softmax layer for making predictions. The settings are the same as those described for table-based semantic parsing. From Table TABREF20, we can see that question generation does not work well on this task. This is because of the difficulty of generating sequential questions and complex target logical forms. Applying MAML to examples not coming from question generation performs best. We leave contextual question generation as future work.

Conclusion and Future Directions We present an approach for learning a neural semantic parser from simple domain-independent rules, instead of annotated logical forms or denotations. Our approach starts from examples covered by rules, which are used to initialize a semantic parser and a question generator in a back-translation paradigm. Generated examples are measured and filtered based on statistical analysis, and then used with model-agnostic meta-learning, which guarantees the model's accuracy and stability on rule-covered examples and acquires the versatility to generalize well to rule-uncovered examples. We conduct experiments on three datasets for table-based and knowledge-based question answering tasks. Results show that incorporating the different strategies incrementally improves the performance. Our best model on WikiSQL achieves accuracy comparable to a system learned from denotations. In the future, we plan to focus on more complex logical forms.
Yes